Secure Coding mailing list archives

Re: [WEB SECURITY] On sandboxes, and why you should care


From: stephen at corsaire.com (Stephen de Vries)
Date: Fri, 26 May 2006 12:43:33 +0700


Hi Dinis,

On 24 May 2006, at 05:34, Dinis Cruz wrote:
<snip>

In the solution that I am envisioning, you will have multiple
Sandboxes, one inside the other, separated by very clearly defined
layers (the input choke points / attack surface), where each sandbox
is allocated privileges according to what it needs to get the job
done (principle of least privilege), the amount of trust that we
have in that code (Code Access Security) and the identity used to
execute it (Role Based Security).

If I understand this correctly, then this implies a _huge_
architecture change for developers.  It's already a difficult task
for developers to map a problem domain onto an Object Oriented
language; what you're suggesting is to throw another constraint into
the mix, which may break a lot of the design.  A very simple MVC
pattern for the web tier could become quite complex with multiple
nested sandboxes.  This could mean that sandboxing becomes central
to the design of the app.  Of course, this may be required in
certain high-risk, high-value environments, but the additional cost
of implementing this for your average web app would be too high IMO.

Unfortunately the Partial Trust Sandboxes that currently exist on
the .Net Framework (namely Medium Trust) are not good examples of
Sandboxes, since they still allow the creation of easily exploitable
Asp.Net code (i.e. the security vulnerabilities that you mention
below would still occur on a web application executed under Medium
Trust).

I can't speak for .NET, but the Java security manager can be defined  
purely as a configuration item.  There doesn't need to be anything  
special in the code to take advantage of the security manager.  This  
means the sandbox doesn't intrude on the code at all, it merely sets  
runtime restrictions.
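
For illustration, a Java policy file along the following lines would
be enough to impose such restrictions; the paths, database host and
main class below are made up for the example:

    // app.policy - hypothetical policy for a web application
    grant codeBase "file:/opt/myapp/lib/-" {
        // let the application read its own configuration files
        permission java.io.FilePermission "/opt/myapp/conf/-", "read";
        // allow outbound connections to the database host only
        permission java.net.SocketPermission "db.example.com:5432", "connect";
    };

The restrictions are then enabled purely at launch time, with no
code changes:

    java -Djava.security.manager -Djava.security.policy=app.policy com.example.Main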

So spending time and effort to strengthen the walls isn't going to
do any real good in preventing an attacker from getting hold of the
assets.  The plan is to put as many walls as possible between the
attacker and the assets.

This is a good analogy, and I agree with you that sandboxes will  
limit the kind of attacks that move from one layer to the next.  But  
they won't be able to stop attacks that don't traverse the walls,  
such as SQL injection and XSS.

I can create a .Net environment which prevents those developers
from directly accessing the database (in the case where malicious
code was uploaded to the server).

CAS allows the creation of custom permissions which could be used
to implement 'Data-Sandboxed environments' that enforce
application logic on the sandboxed code (for example, not allowing
access to private data stored within the database, or to other
users' data)
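
As a rough illustration of the idea (sketched in Java rather than
.Net CAS, and with an invented permission class - a real
implementation would differ), a custom permission guarding access
to other users' data might look like this:

    import java.security.BasicPermission;

    // Hypothetical custom permission guarding access to other users' data.
    class PrivateDataPermission extends BasicPermission {
        public PrivateDataPermission(String name) {
            super(name);
        }
    }

    class UserRepository {
        // When a security manager is installed, only code granted
        // PrivateDataPermission "readOtherUsers" in the policy may call this.
        public String loadRecord(int userId) {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                sm.checkPermission(new PrivateDataPermission("readOtherUsers"));
            }
            // ... fetch the row from the database here ...
            return "record for user " + userId;
        }
    }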

Not allowing access to certain database tables, fine.  And preventing  
access to stored procedures, also fine.  But what about the tables  
that _have_ to be accessed as part of the requirements?  Example:  
SELECT * FROM USERS WHERE USERID=10
How would a sandbox prevent a simple parameter manipulation attack
from gaining access to someone else's data?  So even though you may
prevent attackers from running xp_cmdshell or reading system tables,
you can't prevent them from accessing the USERS table.
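
The check that actually stops that attack is ordinary application
logic, which the sandbox has no view of - something along the lines
of the sketch below (the Session and ProfileService classes are
invented for the example):

    // The query itself is legitimate, so only comparing the requested id
    // against the authenticated session stops parameter manipulation.
    class Session {
        private final int userId;
        Session(int userId) { this.userId = userId; }
        int getUserId() { return userId; }
    }

    class ProfileService {
        String viewProfile(Session session, int requestedUserId) {
            if (session.getUserId() != requestedUserId) {
                throw new SecurityException("attempt to read another user's record");
            }
            // now safe to run: SELECT * FROM USERS WHERE USERID = ?
            return loadRecord(requestedUserId);
        }

        private String loadRecord(int userId) {
            // ... parameterised database query would go here ...
            return "record for user " + userId;
        }
    }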

Sandboxing is not going to make any difference here, but external  
controls such as vetting your developers and auditing the code  
would make a very real contribution to improving the security.
Although this is important, and will have to be done for certain
types of code (namely the ones we will place more trust in (and will
pay more for)), this will not scale up (i.e. work for ALL software
that is executed on your computer).

Just do this simple test: analyze your computer and list every
single application that you have installed (if you have time, try
also listing the writers of the individual components (dlls, static
libraries, etc...) used by those applications).  Once you have that
list of applications, which will have access to ALL your user-land
assets (let's ignore for now the ones that also have (or had)
administrative privileges on your box), ask yourself the question:
"Do I really trust every single developer that worked on these
applications/modules?".

It's a matter of degree of trust.  I store all my sensitive data in a  
Mac OS X keychain, and I trust that that data isn't worth someone  
subverting a developer at Apple (and his colleagues who would have  
spotted the malicious code) and the QA team and the code auditors  
etc.  BUT I would _not_ trust this process if I stored nuclear launch  
codes in the app!

A couple more examples of ways malicious code can be uploaded to the
server: SQL Injection,
if the code 'injected' by an SQL Injection is executed in a  
Sandboxed environment, then the damage potential for that SQL  
Injection is very limited.

Limited yes, mitigated no.  See my example above.

XSS (payload deployed to the admin section),
XSS (being a client-side exploit) is one where the Sandbox
approach would be harder to implement (unless the affected user is
also using a Sandboxed browser where some types of exploits could
be prevented).

To prevent XSS via a Sandbox, one approach would be to use the
Sandbox model to clearly define the 'input chokepoints' and force
(via CAS Demands) data validation to be performed on those
requests.  This way, the developers would have no option but to
validate their data.  Another option would be to encode all inputs
and outputs from the untrusted sandboxes (i.e. only the 'trusted'
sandboxes would have the ability to manipulate Html directly).
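
As a very rough sketch of that second option (purely illustrative,
not an existing framework class), the encoding chokepoint could be
as simple as:

    // A single chokepoint through which every untrusted string must pass
    // before it reaches the page.
    final class HtmlChokepoint {
        private HtmlChokepoint() {}

        static String encode(String untrusted) {
            StringBuilder out = new StringBuilder(untrusted.length());
            for (char c : untrusted.toCharArray()) {
                switch (c) {
                    case '<':  out.append("&lt;");   break;
                    case '>':  out.append("&gt;");   break;
                    case '&':  out.append("&amp;");  break;
                    case '"':  out.append("&quot;"); break;
                    case '\'': out.append("&#39;");  break;
                    default:   out.append(c);
                }
            }
            return out.toString();
        }
    }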

Again, this makes the sandboxes central to the application design.   
And for applications where security is a primary driver this is  
appropriate.  But this is not the case for the vast majority of apps.

Of course, somewhere in one of those Sandboxes there will be code
that is able to access the database directly.  But if we are able
to limit the amount of code that needs these privileges (Sandboxes
B and C in the example above), then the amount of code that needs
to be audited (and, for example, certified by a third-party
security-audit company) will be smaller and more manageable.

Good point, and definitely a benefit of using sandboxes.

To summarise, sandboxing an app is useful in preventing specific  
attacks such as executing OS commands, making unauthorized  
connections and accessing arbitrary system resources but it will  
not do anything to prevent the vast majority of serious security  
issues affecting web apps, because the valuable stuff is inside  
the sandbox.
After my explanations in this email do you still think that this is  
correct? Or can you accept now that it is possible to build a  
Sandboxed environment that is able to protect against the majority  
of the serious security issues that affect web apps today?

I still don't see sandboxes addressing all the issues, as explained
above.  Another important disadvantage is the cost and impact of
implementing sandboxes in the first place.  Creating multiple layered
sandboxes in the code is much more of an obstacle to their
implementation than simply defining constraints at runtime through a
configuration change, because it would make security _the central_
design constraint of the application (it may also break OO
patterns).  And while this is fine for some high-risk apps, it is
not the case for the majority of organisations, who built the app
for other, functional reasons.
Consider the JVM, which provides a full sandbox model that's
reasonably easy to apply to almost any Java app, and then consider
the 1% (using your metrics) of Java applications that enable this
sandboxing.  If a simple configuration change is too much for
projects to manage, an entire new sandbox development framework
stands even less of a chance!
That said, I don't want to cast too much negativity on the idea -
it's a good idea, but for niche markets.


If you do accept that it is possible to build such sandboxes, then  
we need to move to the next interesting discussion, which is the 'HOW'

The 'How' would also give us an idea of how difficult it would be to  
implement these sandboxes and shed some light on exactly which  
security issues they would prevent and which they would not.

regards,

-- 
Stephen de Vries
Corsaire Ltd
E-mail: stephen at corsaire.com
Tel:    +44 1483 226014
Fax:    +44 1483 226068
Web:    http://www.corsaire.com





