oss-sec mailing list archives

Re: Heap overflow and DoS in unzip 6.0


From: Stefan Cornelius <scorneli () redhat com>
Date: Mon, 21 Sep 2015 15:52:07 +0200

On Tue, 15 Sep 2015 13:10:17 -0300
Gustavo Grieco <gustavo.grieco () gmail com> wrote:
> AFAIK, upstream is still working on the heap overflow issue (the DoS
> is fixed in the latest unzip beta). Concretely, they said:
>
> "My initial (quick) analysis suggested that the basic problem is
> corrupt (unrealistic) Extra-Field sizes, which UnZip does not check
> well enough. I need to verify that that is the cause, and then figure
> out what to do about it. Currently, if the program sees an Extra-Field
> length of, say, 0x1007, it tends to believe it, even if that's bigger
> than the actual archive member's length (or perhaps even bigger than
> the whole archive). If it actually tries to read data which haven't
> been read, then valgrind gets upset (with good reason). At least
> that's what I believe so far."
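
To make that failure mode concrete: the ZIP extra field is a sequence of
(header ID, data size, data) records, and the declared data size has to
be checked against the bytes actually present. A minimal sketch of such
a check (illustrative only, not unzip's code; all names here are made up):

"""
#include <stdint.h>
#include <stddef.h>

/* Walk the extra-field records and reject any record whose declared
 * data size exceeds the bytes remaining in the buffer. */
static int parse_extra_field(const unsigned char *ef, size_t ef_len)
{
    size_t pos = 0;

    while (ef_len - pos >= 4) {          /* room for ID + size */
        unsigned id   = ef[pos]     | (ef[pos + 1] << 8);
        size_t   dlen = ef[pos + 2] | (ef[pos + 3] << 8);
        pos += 4;

        if (dlen > ef_len - pos)         /* size claims more than we have */
            return -1;                   /* corrupt extra field: reject */

        /* ... process record `id` using at most `dlen` bytes ... */
        (void)id;
        pos += dlen;
    }
    return 0;
}
"""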


I've tried to contact upstream via the contact form to share my
analysis. So far, I've had no response, so I'll share it here:

I believe the root cause for this issue is in extract.c around line
1250:
"""
        if (G.lrec.compression_method == STORED) {
            zusz_t csiz_decrypted = G.lrec.csize;

            if (G.pInfo->encrypted)
                csiz_decrypted -= 12; ##### BOOOOM 
            if (G.lrec.ucsize != csiz_decrypted) {
                Info(slide, 0x401, ((char *)slide,
                  LoadFarStringSmall2(WrnStorUCSizCSizDiff),
                  FnFilter1(G.filename),
                  FmZofft(G.lrec.ucsize, NULL, "u"),

So there can be an integer overflow if csiz_decrypted is smaller than
12: if (G.pInfo->encrypted)
                csiz_decrypted -= 12;
"""

Did anyone come to a similar conclusion? Also, tracing the subsequent
unzip processing after this underflow exposed a lot of other places
where additional hardening would be a very good idea. Is somebody
working on that?

Thanks,
-- 
Stefan Cornelius / Red Hat Product Security

