Bugtraq mailing list archives

Re: countermeasure against attacks through HTML shared files


From: fcorella () pomcor com
Date: Fri, 07 Nov 2008 23:40:24 +0000

Hi Peter,

Thanks for your comments!

> The gist of your suggestion is to use different base URLs
> for the untrusted content, so that "same origin" policies
> act as a sort of firewall. You propose different hostnames;
> back in 2001, the acmemail webmail project did something
> similar, but rather than hostnames, we chose to offer the
> option of using different port numbers. Many of us ran
> acmemail on https URLs, and that meant either using wildcard
> certs for https (which would expose other hosts to any
> flaws in acmemail) or different ports. You can see the source here:
>
> http://acmemail.cvs.sourceforge.net/viewvc/acmemail/acmemail/AcmemailConf.pm?view=log
>
> Revision 1.27 on 18 Aug 2001 introduced the change:
>
> # For better protection against JavaScript attacks in messages
> # and attachments, it is recommended that you configure your
> # Web server to listen to two ports. One of these ports should
> # be designated as the "control" port, where acmemail will display
> # pages it has high confidence have safe content. The other will
> # be designated the "message" port, and will be used to display
> # emails and their attachments.
>
> IIRC, acmemail used querystring/URL arguments to pass authentication
> tokens in requests to the "message" host:port; our hope
> was that all (important?) cookies would only go to the "control"
> URLs.
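A minimal sketch (in Python, with entirely hypothetical names and ports -- acmemail itself is Perl, and its actual token scheme is not documented here) of the kind of token-passing Peter describes: a per-message token derived from the login session secret is placed in the query string of the "message"-port URL, so that no cookie ever needs to be sent to that port.

```python
import hmac
import hashlib
from urllib.parse import urlencode

# Hypothetical two-port setup: cookies go only to the "control" port.
CONTROL_BASE = "https://mail.example.com:443"
MESSAGE_BASE = "https://mail.example.com:8443"

def message_url(session_secret: bytes, msg_id: str) -> str:
    # Derive a per-message authentication token from the login session
    # secret; the "message" port verifies it without needing cookies.
    token = hmac.new(session_secret, msg_id.encode(), hashlib.sha256).hexdigest()
    return MESSAGE_BASE + "/msg?" + urlencode({"id": msg_id, "token": token})
```

Because the token is bound to both the session secret and the message ID, a script injected into one message cannot mint a valid URL for another message.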

Interesting.  I'll mention this in the revised paper.

> Using different ports can be a little tricky; corporate firewall admins
> are very fond of disallowing https to atypical ports, for instance. Your
> hostname suggestion has other benefits if you're able to mitigate other
> risks (e.g., SSO cookies scoped for all RegisteredDomain hostnames) --

Good point, but this should not be a problem if
the application service provider uses a dedicated
RegisteredDomain for the particular application.

> being able to sandbox each document+viewer combo is great. I think you
> should do some usability testing with your suggestion that the file
> retrieval session record be deleted when the document is accessed, though.
> This is very likely to cause problems with user agents like Internet
> Explorer that have aggressive anti-caching stances for https content, and
> I imagine could easily cause trouble for things like chunked partial
> requests.

Very good point!  Plus, this makes me think of
another problem with deleting the record: what if
the user wants to go back to the file using the
back arrow, the browser's history, or a
bookmark?

> I'd tend to treat the retrieval keys more like typical web session
> objects -- in fact, I'd probably stick a hashtable of filename -> hostkey
> values in each user's web session objects, so the keys would remain valid
> as long as the user was still logged in.
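Peter's hashtable idea could be sketched like this (Python, names hypothetical): a per-user session object lazily maps each filename to a random host key, and the keys simply die with the session.

```python
import secrets

def hostkey_for(session: dict, filename: str) -> str:
    # Lazily allocate a random host key per filename. The mapping lives
    # inside the user's session object, so every key stays valid exactly
    # as long as the user is logged in -- no separate deletion step.
    keys = session.setdefault("file_hostkeys", {})
    if filename not in keys:
        keys[filename] = secrets.token_hex(16)
    return keys[filename]
```

Repeated lookups for the same filename return the same key, so back-arrow and bookmark revisits keep working within the login session.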

My motivation for deleting the file retrieval
session record was that the extended hostname is
recorded in the browser history.  So if the user
neglects to log out, and is using a laptop, and
the laptop is stolen (even if turned off), the
thief can access the file from the history until
the login session times out.

But the chunked request problem you brought up
trumps this.  So I think now that the file
retrieval session record should not be deleted
until the login session record is deleted, and the
user will have to be careful about logging out
before leaving the laptop unattended.  Also, the
file retrieval session record should now be
specific to a particular file, so it should have a
field for the filepath, which should be checked
before downloading the file.  (This is equivalent
to your hashtable, but I like to think of sessions
as implemented by normalized relational database
records, in this case by the login session record
plus the collection of file-retrieval session
records that refer to the login session record.)
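The normalized-relational view above could be sketched as follows (Python/SQLite, schema and column names hypothetical): each file-retrieval record carries the filepath to be checked before download, and refers to its login session record so that deleting the login session deletes all of its file-retrieval records with it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this for cascades
conn.executescript("""
CREATE TABLE login_session (
    login_id   TEXT PRIMARY KEY,
    user_id    TEXT NOT NULL,
    expires_at INTEGER NOT NULL
);
CREATE TABLE file_retrieval_session (
    retrieval_id TEXT PRIMARY KEY,        -- appears in the extended hostname
    login_id     TEXT NOT NULL
                     REFERENCES login_session(login_id)
                     ON DELETE CASCADE,   -- dies with the login session
    filepath     TEXT NOT NULL            -- checked before serving the file
);
""")
```

The ON DELETE CASCADE clause captures the lifetime rule in one place: file-retrieval records are never deleted individually, only when the login session record goes away.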

It remains to solve the
back-arrow/history/bookmark problem.  Here is what
I propose for that: if the file retrieval session
ID does not map to a file retrieval session
record, the application redirects the browser to
the standard user file URL.  If the user is logged
in, the redirected request will come in with the
user-file authentication cookie, and the
application will create a file retrieval session
record and redirect to a new extended user-file
URL.  Yes, that's two redirects for each download
from a bookmark, but hopefully that will not cause
a noticeable additional delay, especially if
keepalive is used.
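The two-redirect recovery flow proposed above can be sketched as plain dispatch logic (Python; the handler names, the in-memory stores, and the URL shapes are all illustrative, not from the paper):

```python
def handle_extended_url(retrieval_id, retrievals):
    # A request to an extended user-file URL. If the retrieval session
    # record still exists, serve the file; otherwise the request came
    # from a stale bookmark/history entry, so redirect (#1) to the
    # standard user-file URL.
    rec = retrievals.get(retrieval_id)
    if rec is not None:
        return ("serve", rec["filepath"])
    return ("redirect", "/files")

def handle_standard_url(path, cookie_login_id, sessions, retrievals, new_id):
    # The standard user-file URL. If the user-file authentication cookie
    # maps to a live login session, mint a fresh file-retrieval record
    # and redirect (#2) to a new extended user-file URL.
    if cookie_login_id not in sessions:
        return ("redirect", "/login")
    retrievals[new_id] = {"login_id": cookie_login_id, "filepath": path}
    return ("redirect", "/r/" + new_id)
```

A stale bookmark thus costs exactly two redirects before the file is served again, and an unauthenticated visitor is bounced to login instead.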

Will add all this to the revised paper.

Thanks again,

Francisco



