nanog mailing list archives

Re: backbone transparent proxy / connection hijacking


From: Hank Nussbacher <hank () ibm net il>
Date: Sat, 27 Jun 1998 22:04:01 +0300 (IDT)

On Fri, 26 Jun 1998, Paul Gauthier wrote:

For the past 4 years, AS378 (the academic network of 8 universities
and many colleges) has blocked port 80 and forced all users through
multiple proxy servers (most recently Squid).  Over the years we had
to build up a list of sites to bypass.  But in addition, one has to
provide the ability to bypass based on source IP address (some users
just don't want it - even if it speeds up their page downloads and
toasts their bread at the same time.)

From what I have seen, the Alteon/Inktomi/NetCache/Cisco solutions do
*not* allow for an unlimited bypass list, whether based on destination
or source IP address.  When that capability exists, the ISP, Digex in
this case, can offer a simple authenticated web page where a customer
can add their CIDR block to a bypass list in the transparent proxy.
Till then, all the bashing will continue. 
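
As a rough sketch of what such a per-flow bypass decision looks like
(hypothetical Python, not any vendor's code; the CIDR blocks are
invented):

    from ipaddress import ip_address, ip_network

    SRC_BYPASS = [ip_network("192.0.2.0/24")]     # customers who opted out
    DST_BYPASS = [ip_network("198.51.100.0/24")]  # sites known to break

    def should_intercept(src, dst):
        src, dst = ip_address(src), ip_address(dst)
        if any(src in net for net in SRC_BYPASS):
            return False   # deliver natively; do not redirect to the cache
        if any(dst in net for net in DST_BYPASS):
            return False
        return True        # safe to redirect port 80 to the cache

    print(should_intercept("192.0.2.7", "203.0.113.9"))  # False: opted out
    print(should_intercept("10.0.0.1", "203.0.113.9"))   # True: intercept

The point is that both lists need to grow without artificial limits,
and the source-address check has to come first.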

Add to the list of things that will break: simplex or asymmetric
routing.  More and more customers are ordering simplex satellite
lines.  Imagine a European company that buys a 512Kbps line from an
ISP but also buys a T1 simplex satellite line to augment bandwidth.
The HTTP request goes out with the sat-link CIDR block as its source
address.  The request, for a USA-based page, hits the transparent
proxy.  The proxy retrieves the page from the USA, using its
expensive transatlantic link.  The page hits the proxy.  Now the
transparent proxy needs to deliver the page.  But the requestor's IP
address is located at some satellite provider in the USA (previously
discussed here), so the transparent proxy routes the page back across
the Atlantic for delivery via the satellite simplex line. 
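
A toy longest-prefix-match lookup (invented prefixes and next hops)
shows the mechanics: the proxy routes the reply purely by the
client's source address, which belongs to the US satellite provider:

    from ipaddress import ip_address, ip_network

    # proxy's routing table: default via the local upstream, plus the
    # US satellite provider's block reached over the transatlantic link
    ROUTES = [
        (ip_network("0.0.0.0/0"),     "local-upstream"),
        (ip_network("198.18.0.0/15"), "transatlantic-link"),
    ]

    def next_hop(dst):
        matches = [(net, hop) for net, hop in ROUTES
                   if ip_address(dst) in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    # the request was sourced from the satellite CIDR block, so the
    # reply crosses the Atlantic again before coming down the sat link
    print(next_hop("198.18.44.9"))   # -> transatlantic-link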

The same problems happen with asymmetric routing.  I believe Vern
Paxson has a study showing that 60% of all routes on the Internet are
asymmetric.

Bottom line: without bypass based on source or destination, the
bashing will continue.

Hank Nussbacher
Israel

I wanted to take a moment to respond to this thread, which has gotten
somewhat inflamed. The problems being highlighted are not
new or unknown and there are standard remedies in use by
Inktomi's Traffic Server customers and other users of transparent
caching. In fact, the two posters reporting concrete problems have
both already had them swiftly remedied.

Transparent caching brings significant benefits not only to
the ISPs and backbones who deploy it, but also to the
dialup users, corporate customers and downstream ISPs who
utilize those links. Cached content is delivered accurately
and quickly, improving the experience of web surfing.
Further, caching helps unload congested pipes, permitting
increased performance for non-HTTP protocols. Many people believe that
large-scale caching is necessary and inevitable in order to scale
the Internet into the future.

I will spend a few paragraphs talking about each of the concerns
which have been expressed in this thread. Roughly, I think they
are the following: disruption of existing services, correctness
of cached content, and confidentiality/legal issues with
transparent caching. We take all of these issues very seriously
and have had dedicated resources in our development and technical
support groups addressing them for some time.

The center of this debate concerns the rare disruption of
existing services which can occur when transparent caching is
deployed. Two concrete examples of this have been cited on this
list: access to a Cybercash web server and access from an old Netscape
proxy server. Both of these incidents were swiftly and easily
corrected by the existing facilities available in Traffic Server.

The Cybercash server performed client authentication based on
the IP address of the TCP connection. Placing a proxy (transparent
or otherwise) in between clients and that server will break
that authentication model. The fix was simply to configure Traffic
Server to pass Cybercash traffic onward without any attempt to
proxy or cache the content.
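
As a minimal sketch of why IP-based authentication breaks behind any
proxy, transparent or otherwise (invented addresses; this is not
Cybercash's or Traffic Server's code), the server simply authorizes
whatever address terminates the TCP connection:

    # the server's view: it authorizes by the TCP peer address
    def server_authorize(peer_ip, allowed=("192.0.2.7",)):
        return peer_ip in allowed

    CLIENT_IP, PROXY_IP = "192.0.2.7", "198.51.100.1"

    print(server_authorize(CLIENT_IP))  # True: direct connection accepted
    print(server_authorize(PROXY_IP))   # False: the proxied connection is
                                        # refused; the server only sees the
                                        # proxy's address

Hence the pass-through remedy: those flows are never proxied, so the
client's own address reaches the server.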

The second example was of a broken keepalive implementation in
an extremely early Netscape proxy cache. The Netscape proxy
falsely propagated proxy-keepalive protocol elements, even
though it could not actually support them. The fix was to configure
Traffic Server to not support keepalive connections from that
client. Afterwards, there were no further problems.
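
A rough sketch of that workaround, assuming an invented client
address (illustrative logic only, not Traffic Server configuration):

    BROKEN_KEEPALIVE_CLIENTS = {"192.0.2.50"}  # the old Netscape proxy

    def connection_header(client_ip, client_asked_keepalive):
        if client_ip in BROKEN_KEEPALIVE_CLIENTS:
            return "close"     # never hold the socket open for this client
        return "keep-alive" if client_asked_keepalive else "close"

    print(connection_header("192.0.2.50", True))  # 'close', despite request
    print(connection_header("10.0.0.1", True))    # 'keep-alive' as usual

Forcing Connection: close for that one client costs a little latency
but sidesteps the broken keepalive implementation entirely.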

These two problems are examples of legacy issues. IP-based
authentication is widely known to be a weak security measure.
The Netscape server in question was years old. As time goes
on, there will be a diminishing list of such anomalies to deal
with. Inktomi works closely with all of our customers to
diagnose any reported anomaly and configure the solution.

Beyond that, to scale this solution, Inktomi serves as a
clearinghouse of these anomaly lists for all of our customers.
A report from any one customer is validated and made available
to other Traffic Server installations to preempt any
further occurrences.

Inktomi also conducts proactive audits both inside live Traffic
Servers and via the extensive "web crawling" we perform as part
of our search engine business. The anomalies discovered by these
mechanisms are similarly made available to our customers.

The second issue being discussed is the correctness of cached
content. Posters have suggested mass boycotting of caching by
content providers concerned with the freshness of their content.
Most content providers have no such concerns, frankly. The problem
of dealing with cached content is well understood by publishers
since caching has been in heavy use for years. Every web browser
has a cache in it. AOL has been caching the web traffic of the lion's
share of US home web surfers for years. For more information on the ways
in which publishers benefit from caching see our white paper
on the subject of caching dynamic and advertising content:
http://www.inktomi.com/products/traffic/tech/ads.html
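
For the curious, a short sketch of the standard freshness controls
publishers already have; the header semantics are standard HTTP, the
values here are invented:

    from email.utils import formatdate
    import time

    def freshness_headers(max_age_seconds):
        if max_age_seconds == 0:
            # dynamic or ad content: ask every cache not to store it
            return {"Pragma": "no-cache", "Cache-Control": "no-cache"}
        return {
            "Expires": formatdate(time.time() + max_age_seconds,
                                  usegmt=True),
            "Cache-Control": "max-age=%d" % max_age_seconds,
        }

    print(freshness_headers(0))      # uncacheable page
    print(freshness_headers(3600))   # static page, fresh for one hour

A publisher who sets these headers correctly gets the same behavior
from a transparent cache as from any browser or proxy cache.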

And finally, there has been confusion concerning the
confidentiality and legal issues of transparent caching.
Transparent caching does not present any new threat to the
confidentiality of data or usage patterns. All of these issues
are already present in abundance in the absence of caching.
Individuals responsible for managing networks will have to weigh
the advantages of caching against these more nebulous
considerations. We, and many others looking towards the future
of a scalable Internet, are confident that caching is becoming an
integral part of the infrastructure, and provides many benefits to
hosters, ISPs, backbones and surfers alike.

Paul Gauthier

-- 
Paul Gauthier,                                   (650)653-2800
CTO, Inktomi Corporation                  gauthier () inktomi com





