Security Basics mailing list archives

Re: Secure host newbie - fun - humm


From: Ranjeet Shetye <ranjeet.shetye2 () zultys com>
Date: Tue, 06 Apr 2004 13:49:45 -0700


Please don't tell me how much I have or have not worked with security.

For security to work correctly, the user should have black and white
choices. For the user to make a conscious choice between secure and
insecure mode, the person deploying it should offer black and white
choices (e.g. the Hotmail login). In order for the people deploying the
solution to make conscious black and white choices, the people who
design the solutions have to MAKE and OFFER black and white choices.
This is where I come in. I do network protocol design, security
solutions, and kernel work for a living.

Now if people in my position were to not view security as a black or
white issue, the deployers would end up with a greyish "solution", and
the users would end up with a smudgy, hazy grey that would be wrongly
passed off as a secure solution.

You are focused on the fact that the "real world" is not black and
white. In my job, I HAVE to view security as black and white, design
stuff accordingly, and offer choices accordingly, because the farther up
this design chain someone compromises the concept of "either completely
secure or else insecure", the more compromised the final deployed
solution that the users actually get. E.g. I can design the most secure
system in the world, but that's meaningless if the users put their
password on a sticky note on their monitor. On the other hand, if I
allow the use of telnet to back up sensitive data like passwords, then
I've just compromised even the most savvy user, and I force them to
work outside the system to achieve their security goals. Therefore,
should telnet be allowed? NO - it's a simple, black and white decision.
You have a vendor who forces telnet on you? Then you have a vendor who
doesn't care about the security of your digital assets, and I suggest
that you vote with your money.
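
To make the telnet point concrete, here is a minimal Python sketch - the
host, credentials, and paths are hypothetical placeholders, and paramiko
is a third-party SSH library - contrasting a telnet backup with an
SSH-based one:

import telnetlib   # stdlib (through Python 3.12); a plaintext protocol
import paramiko    # third-party SSH library: pip install paramiko

HOST = "backup.example.com"   # hypothetical backup server

def backup_over_telnet(user: str, password: str) -> None:
    # Everything below crosses the wire in cleartext: the login,
    # the password, and the sensitive file itself.
    tn = telnetlib.Telnet(HOST)
    tn.read_until(b"login: ")
    tn.write(user.encode("ascii") + b"\n")      # cleartext login
    tn.read_until(b"Password: ")
    tn.write(password.encode("ascii") + b"\n")  # cleartext password
    tn.write(b"cat /etc/shadow\n")              # cleartext secrets
    tn.close()

def backup_over_sftp(user: str, password: str) -> None:
    # The same job, but credentials and data travel inside an
    # encrypted, host-authenticated SSH channel.
    client = paramiko.SSHClient()
    client.load_system_host_keys()              # verify the server's key
    client.connect(HOST, username=user, password=password)
    sftp = client.open_sftp()
    sftp.get("/var/backups/shadow.bak", "shadow.bak")
    sftp.close()
    client.close()

Everything the first function sends, including the password, is readable
by anyone on the path; the second does the identical job inside an
encrypted channel. That is a black and white choice.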

As I said before, if you run systems where there is no known
weakness - that's secure. If you run systems where there is a known
exploit - that's insecure. The fact that you "know" that all boxes in
the world are insecure is not the "know" that I am talking about. I am
talking about tangible "knowing", such as bugs listed on the
SecurityFocus mailing lists or in CERT advisories. Unfortunately, even
open source is not perfect. E.g. the Linux do_brk() vulnerability was
vastly underestimated and took time to fix, allowing a few major
break-ins.
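
At its core, that bug was a missing range check on a user-controlled
request. Here is a toy Python model of that class of bug - NOT the
actual kernel code; the constant is the conventional 32-bit x86
user/kernel split:

TASK_SIZE = 0xC0000000   # user space below, kernel space above (32-bit x86)

def do_brk_broken(addr: int, length: int) -> range:
    # No validation: a hostile request can extend past TASK_SIZE,
    # i.e. into addresses reserved for the kernel.
    return range(addr, addr + length)

def do_brk_fixed(addr: int, length: int) -> range:
    # The fix was essentially one range check (plus a wraparound
    # check, which matters in C) before honoring the request.
    if addr + length > TASK_SIZE or addr + length < addr:
        raise ValueError("EINVAL: request leaves user address space")
    return range(addr, addr + length)

# A request that ends 64 KB past the user/kernel boundary:
evil = (0xBFFF0000, 0x20000)
print(hex(do_brk_broken(*evil).stop))   # 0xc0010000 - silently accepted
try:
    do_brk_fixed(*evil)
except ValueError as err:
    print(err)                          # rejected with EINVAL

One missing check, total compromise of the machine - which is exactly
why I treat these as black and white calls.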

I think we should just agree to disagree.

Ranjeet.

On Tue, 2004-04-06 at 12:50, Barry Fitzgerald wrote:
> Ranjeet Shetye wrote:

> > Just because an admin is helpless because there is NO fix does NOT
> > exonerate the network admin of any blame, IF he or she KNEW that an
> > exploit is available.
 

> Yes, actually, it does.

> Let's say I work for company A.  Company A has a policy that uptime is
> critical, and a server with a vulnerability is found.  It's not my
> decision, as the admin, whether or not to take down the server.  It's an
> executive decision.  And I guarantee you that almost every executive out
> there will say "keep the server up".  The cost of not keeping the server
> up is often extreme loss of business.  As the admin, I'm not responsible
> for that decision, nor should I be held responsible for it if I gave
> people the proper information.


> > In today's 24x7 broadband interconnected world, you have 2 options:
> > 1. Take down the server yourself.
> > 2. Hope that you do not get compromised and continue business as usual.
> > (When you do get taken down, try to put it back together from backups.)
> >
> > Are there any other options? That is what I was pointing out. Find out
> > the cost of each option, and take the path with the lesser cost.
> > Decision-making 101.
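
A back-of-the-envelope Python sketch of that comparison (every number
below is made up purely for illustration):

downtime_cost_per_day = 50_000      # lost business while the server is down
breach_cost           = 5_000_000   # cleanup, legal exposure, lost trust
p_hit_per_day         = 0.02        # chance of compromise while exposed
days_until_patch      = 5

expected_cost_up   = days_until_patch * p_hit_per_day * breach_cost
expected_cost_down = days_until_patch * downtime_cost_per_day

print(f"stay up:   expected loss ${expected_cost_up:,.0f}")    # $500,000
print(f"take down: expected loss ${expected_cost_down:,.0f}")  # $250,000

With these made-up numbers, taking the server down is the cheaper path;
flip the numbers and the answer flips. The point is that it is a
calculation, not a coin toss.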

 

> I'm not disagreeing with that per se.  I'm disagreeing that it's a human
> issue.

> I don't consider the presence of a buffer overflow to ever be a human
> issue.  It's a technological issue - plain and simple - that stems from
> purely technological decisions.  If you get hit with an attack the day
> after the vulnerability is disclosed, that's not the fault of those who
> left the server up -- it's the fault of those who attacked the server
> and, to some extent, a problem caused by poor technical decisions on the
> part of the producer of the software.
>
> If you blame the admin because she chose not to DoS herself, you're
> blaming the wrong person.

> > E.g. let's take a case where there IS a severe price to be paid for the
> > NEGLIGENCE of KNOWINGLY running an insecure solution.
> >
> > If a US-based service is hosting health records on a Linux server, and
> > they KNOW that there is a kernel exploit available, BUT there is no fix
> > available for it, then either they play safe and TAKE DOWN the server
> > themselves, or prepare for a costly legal battle and/or a lengthy
> > prison sentence if it can be proven that the admin was (deliberately?)
> > NEGLIGENT.

 

> Actually, this is a great reason to use Free Software.  If you're
> running a critical application and need to fix something like this, it's
> possible to do so.  That's a case where, if the admin has enough time
> and the entity hires enough admins so that they have time to address
> issues like this, it would at least be possible to fix the problem.
> With proprietary software, that's not the case.

> > The court is surely NOT going to think that running data servers 24x7
> > (the admin's desire) OR the health of the business (the CEO's desire)
> > is more important than the privacy of the health records. By law, EACH
> > leaked health record will cost you $8 million + other civil and
> > criminal proceedings if warranted + other intangibles like loss of
> > customer trust, loss of reputation, etc. If that is worth keeping your
> > servers up and running, you should make the decision accordingly. I
> > wouldn't. I'd try to keep the service secure.

 

> Listen, as a security specialist, I *know* that every single box that I,
> you, and everyone else on this list touch is running some piece of
> software that has a security hole that someone else knows about and has
> an exploit for.  I *know* that that exploit has not been released yet.
> I don't consider that to be hypothetical; I consider it to be a plain
> and simple fact.

> By your logic, since I *know* that these exploits exist, it would be
> irresponsible for me not to unplug all of our systems and stay down
> until these exploits are patched or until we can protect our systems
> against them.  But when/if they are -- that means that there will be
> more unpatched/unmitigated vulnerabilities lying around that I *know*
> are out there but can't defend against -- leaving me in a perpetual
> state of disability.

> At that point, we might as well just close down the internet, because no
> one will ever have a server up.

> If I walk out onto the street, there's a chance that I could get hit by
> a truck.

> Let's say that I know that a certain brand of tire sold on SUVs often
> bursts.  By your logic, if I knew this and chose to walk on the street,
> and I suddenly got hit by an SUV with a blown tire, it would clearly be
> my fault, because I knew that the risk was there.  I disagree with that
> outright, because the problem was inherently technical and staying in my
> house perpetually was not an option.

> Your model here rewards only the ignorant -- because those of us who
> aren't ignorant understand that just being on the internet puts you at
> some level of risk and that there is no "100% I'm secure" level.  You
> are always vulnerable and you will always be vulnerable.  The key is to
> mitigate the risk, and the name of the game is staying up and secure.

> If, in the process of doing that, you spend much of your time down --
> then the crackers and black hats have won against you.  That's the name
> of the game.  If you don't like that answer -- I don't know what to tell
> you; it's the situation we're in right now.

> > This is very different from DoS attacks, because in a DoS you don't
> > get a choice; your server gets taken down for you. It's not a business
> > decision taken on the basis of some calculated risk.

 

> By your logic above, it is, because by running a system that has a known
> DoS-able condition, you're making the choice to be DoS'ed - by your
> logic, that is.

> A DoS condition is simply a denial of service.  The state of a DoS is
> not reliant on the source of the denial.  The effect is the same either
> way.

> > And I DO think that security is a very black and white issue. Either
> > you have it, or you don't.

 

> Then, no offense, but you haven't done much real security work...

> If security were so black and white, there wouldn't be many
> compromises.  Admins would get it, and coders would find writing secure
> programs easy.  Bounds checking and input verification are only part of
> writing secure applications.  And the human factor of security will
> NEVER be black and white.  Security is not black and white.  Simply by
> believing that, you've tricked yourself into insecurity.

>                 -Barry
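
P.S. On the "bounds checking and input verification" point: that
mechanical part is precisely what CAN be made black and white. A tiny
illustrative Python sketch (all names and limits made up):

MAX_NAME = 64

def parse_username(raw: bytes) -> str:
    # Bounds check: reject oversized input outright.
    if len(raw) > MAX_NAME:
        raise ValueError("input too long")
    # Input verification: force ASCII, then allow only alphanumerics.
    name = raw.decode("ascii")
    if not name.isalnum():
        raise ValueError("input contains disallowed characters")
    return name

What no such check can ever enforce is the user who tapes the resulting
password to their monitor - that part, I agree, is human.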
-- 

Ranjeet Shetye
Senior Software Engineer
Zultys Technologies
Ranjeet dot Shetye2 at Zultys dot com
http://www.zultys.com/
 
The views, opinions, and judgements expressed in this message are solely
those of the author. The message contents have not been reviewed or
approved by Zultys.


