Full Disclosure mailing list archives

Re: OpenSSH is a good choice?


From: Ron DuFresne <dufresne () winternet com>
Date: Sat, 25 Dec 2004 16:02:20 -0600 (CST)

On Sat, 25 Dec 2004, Kevin wrote:

On Fri, 24 Dec 2004 16:00:45 -0600 (CST), Ron DuFresne
<dufresne () winternet com> wrote:
It might depend upon how the algorithm is implemented -- say, search
for easy-to-find vulnerable systems with the standard port open until
perhaps 10 or 100 or some given number are found and infected, then go
back through the non-vulnerable hosts and search those specifically for
non-standard ports.  That ensures spread of the worm and a quick
infection rate, and then allows it to retarget 'hidden' systems.  Seems
to me this would merely be a change to the infection code, similar to
those worms that had coded into them a date and time to attack a site.
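
In rough Python, that selection logic might look something like the
sketch below.  The probe()/infect() stubs, the quota, and the port list
are all invented here; it only illustrates the two-phase control flow,
not any actual exploitation:

import random

STANDARD_PORT = 22
PHASE_ONE_QUOTA = 100                 # infect this many easy targets first
ALT_PORTS = [222, 2022, 2222, 8022]   # guesses at 'hidden' ssh ports

def probe(host, port):
    """Stub: pretend to test whether host:port answers like sshd."""
    return random.random() < 0.01

def infect(host, port):
    """Stub: stands in for the actual infection step."""
    print("would target %s:%d" % (host, port))

def spread(candidates):
    infected = 0
    deferred = []                     # hosts with no standard port open
    # Phase 1: hit the easy, standard-port targets for fast spread.
    for host in candidates:
        if probe(host, STANDARD_PORT):
            infect(host, STANDARD_PORT)
            infected += 1
            if infected >= PHASE_ONE_QUOTA:
                break
        else:
            deferred.append(host)
    # Phase 2: revisit the deferred hosts, sweeping non-standard ports.
    for host in deferred:
        for port in ALT_PORTS:
            if probe(host, port):
                infect(host, port)
                break

spread(["198.51.100.%d" % i for i in range(1, 255)])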

One consideration -- sysadmins who are bright enough to configure
services onto non-standard ports are likely also bright enough to
patch their systems and install IDS and HIPS, so such hosts are in
general less likely to be exploitable than default configurations.


upgrading openssh quite often also involves upgrading openssl, and that
can link into web systems and mean redoing apache/mod_ssl <are there any
other known openssl dependencies?>.  Makes for a nasty scenario in
determining what borked when making all these changes in production at
once.  Makes for a nasty phase in change mgmt as well.  Might well be
the reason others have found folks not fixing ssh issues in a timely
manner, as documented in earlier studies on patching efficiency.  Then
we add in the rebuilding of sigs for the IDS/IPS/HIPS, and it gets to be
a total mess that can bog one down out of prod for days trying to
locate the root cause in a large rollout...<sigh>
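
A first pass at scoping that kind of change is just finding what on the
box actually links against openssl.  Something like this rough Python
sketch might do -- it assumes a Linux host with ldd on the path, and
the directory list is only a guess, adjust for your layout:

import os
import subprocess

SEARCH_DIRS = ["/usr/sbin", "/usr/bin", "/usr/lib/apache2"]

def links_openssl(path):
    """Run ldd on a binary and flag libssl/libcrypto linkage."""
    try:
        out = subprocess.run(["ldd", path], capture_output=True,
                             text=True, timeout=5).stdout
    except (OSError, subprocess.TimeoutExpired):
        return False
    return "libssl" in out or "libcrypto" in out

for d in SEARCH_DIRS:
    for root, _dirs, files in os.walk(d):
        for name in files:
            path = os.path.join(root, name)
            if os.access(path, os.X_OK) and links_openssl(path):
                print(path)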



I'm not sure that a routine to find hidden, vulnerable services would
add much value to an automated "flash worm".   This approach makes
sense for a human attacker trying to penetrate a specific site or
class of target, but for a "flash worm", wouldn't it make more sense
to put the work towards finding more easy targets?

What does it benefit a worm or the worm's author to compromise 99% of
vulnerable systems rather than a mere 85% of the vulnerable
population?


The easy targets are likely small sites -- simple DSL/cable/dialup
users; the hosts in that higher-percentage category are the cream,
worth many more points if one's keeping score in such actions...

Additionally, port scanning raises the profile of the source, both on
the network and at the target.   Whether just blasting out the exploit
code or doing banner scanning, the worm will need to do a full TCP
session to each potential target IP:Port.  This is not only slow, but
also very "noisy", causing unusual events to be logged by listening
daemons on the target system.
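
Even a minimal banner grab is a full handshake plus a read per IP:port
-- roughly like this Python sketch, where the hostname and port list
are placeholders:

import socket

def grab_banner(host, port, timeout=5.0):
    # Full TCP connect, then read: both ends of this show up in logs.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(256)    # sshd volunteers its version string
        except socket.timeout:
            return b""            # quiet listener, but we were still seen

for port in (22, 222, 2022, 2222):
    try:
        print(port, grab_banner("target.example.com", port))
    except OSError:
        pass                      # closed or filtered port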

The only time I've ever been reprimanded for running (authorized) nmap
scans against non-hardened Solaris systems was when I used the '-sV'
option and freaked out a (non-security-conscious) sysadmin due to the
large volume of timeout and protocol errors logged by rpcbind and
other default TCP listeners.


Lots of noise can lead to folks missing something in vast amounts of
output, or at least dramatically lengthen the time it takes to analyze
all the traffic...



Seriously, why do folks think sshd should be open for the world to
pound upon, no matter which port it's assigned to run on?

When you cannot know the source IP in advance, *something* must serve
as the gatekeeper for access to network services.


It provides an encrypted channel into the network.  Any channel in,
especially an encrypted channel, should be guarded and allow only those
that require access to get access.

Many systems have a business need to allow customers to connect in
from arbitrary source addresses -- vendor support for maintenance,
customers uploading content, etc.  There is an unavoidable requirement
to have *some* channel into the system, and it's tough enough for web
hosting providers to push customers to migrate off of cleartext
password protocols like telnet and FTP; now we need to convince the
customer to use public keys and strong authentication tokens?

customers uploading content: on exposed staging systems only.
Production then pulls in content on a scheduled or manually driven
timeline.
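
Something along these lines, run from cron on the production box, so
nothing needs an inbound channel to production -- the paths and
hostname here are made up:

import subprocess
import sys

STAGING = "content@staging.example.com:/var/staging/htdocs/"
DOCROOT = "/var/www/htdocs/"

# Pull, never push: production fetches from staging on its own schedule.
result = subprocess.run(["rsync", "-az", "--delete", STAGING, DOCROOT],
                        capture_output=True, text=True)
if result.returncode != 0:
    sys.stderr.write(result.stderr)
    sys.exit(1)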

vendor support: grant it as and when needed, and then close the door.
Too many hands can spoil the pot when making changes to a system, and
allowing such access without supervision will result in one hand not
knowing what the other did at some point in time, borking the whole
setup.
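
One way to make "close the door" automatic is a time-boxed firewall
rule, roughly like this sketch -- it assumes root and iptables, and
every value here is a placeholder:

import subprocess
import time

VENDOR_IP = "192.0.2.10"      # placeholder (documentation range)
WINDOW = 2 * 60 * 60          # two-hour maintenance window, in seconds

rule = ["INPUT", "-s", VENDOR_IP, "-p", "tcp",
        "--dport", "22", "-j", "ACCEPT"]
subprocess.check_call(["iptables", "-I"] + rule)      # open the door
try:
    time.sleep(WINDOW)
finally:
    subprocess.check_call(["iptables", "-D"] + rule)  # and close it again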


IPSEC might make sense for employee inbound sessions, but for
"customers" of web hosting and the like, ssh itself is already the
primary gatekeeper -- there isn't any other (easy) check to implement
before letting an unknown source talk to the ssh listener.


"customers", see above.

Might surprise some, but there are companies that will not allow
encrypted traffic behind the perimeter, as it can't be monitored.
Unrestricted access from outside to inside, from any ole place, is a
problem in the making though.

The present state of VPN madness is not the solution either.  Encrypted
tunnels inside can be more insidious than a worm or virus let loose by
a manager clicking x-mas.exe or some other attachment.


Thanks,


Ron DuFresne
-- 
"Sometimes you get the blues because your baby leaves you. Sometimes you get'em
'cause she comes back." --B.B. King
        ***testing, only testing, and damn good at it too!***

OK, so you're a Ph.D.  Just don't touch anything.


_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html

