nanog mailing list archives

Re: cooling door


From: "vijay gill" <vgill () vijaygill com>
Date: Mon, 31 Mar 2008 07:53:32 -0700

On Sat, Mar 29, 2008 at 3:04 PM, Frank Coluccio <frank () dticonsulting com>
wrote:


Michael Dillon is spot on when he states the following (quotation below),
although he could have gone another step in suggesting how the distance
insensitivity of fiber could be further leveraged:


Dillon is not only not spot on; he is quite a bit away from being spot
on. Read on.



The high speed fibre in Metro Area Networks will tie it all together
with the result that for many applications, it won't matter where
the servers are.

In fact, those same servers, and a host of other storage and network
elements, can be returned to the LAN rooms and closets of most
commercial buildings from whence they originally came prior to the
large-scale data center consolidations of the current millennium, once
organizations decide to free themselves of the 100-meter constraint
imposed by UTP-based LAN hardware and replace those LANs with
collapsed fiber backbone designs that attach to switches (which could
be either in-building or remote), instead of the minimum two switches
on every floor that has become customary today.


Here is a little hint: most distributed applications in traditional
jobsets tend to work best when they are close together. Unless you can
map those jobsets onto truly partitioned algorithms that work on local
copy, this is a _non-starter_.
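
To make that concrete, a toy sketch (the 5 ms round-trip figure and
everything else here is assumed, purely for illustration):

    # Toy contrast: partitioned-on-local-copy vs. per-record round trips.
    # The 5 ms RTT is an assumed metro-area figure, not a measurement.

    def partitioned_sum(local_shard):
        # Works entirely on the local copy; the network is touched
        # once at the end, to combine one number per node.
        return sum(local_shard)

    def chatty_lookup(rows, remote_fn, rtt=0.005):
        # Pays one wide-area round trip per row, no matter how much
        # bandwidth the fiber has.
        waited = 0.0
        out = []
        for row in rows:
            out.append(remote_fn(row))
            waited += rtt
        return out, waited

    # 1,000,000 rows x 5 ms = 5,000 s: about 83 minutes of pure waiting.
    print(1_000_000 * 0.005 / 60.0)

Latency-bound jobsets do not care how fast the fiber is; distance
still costs you per round trip.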



We often discuss the empowerment afforded by optical technology, but
we've barely scratched the surface of its ability to effect meaningful
architectural changes.


No matter how much optical technology you have, it will tend to be more
expensive to run, have higher failure rates, and use more power than
simply running fiber or copper inside your datacenter. There is a reason
most people who are backed up by sober accountants tend to cluster stuff
under one roof.


The earlier prospects of creating consolidated data centers were once
near-universally considered timely and efficient, and they still are
in many respects. However, now that the problems associated with a/c
and power have entered into the calculus, some data center design
strategies are beginning to look more like anachronisms that have been
caught in a whip-lash of rapidly shifting conditions, and in a league
with the constraints that are imposed by the now-seemingly-obligatory
100-meter UTP design.



Frank, let's assume we have abundant dark fiber, and an 800-strand
ribbon fiber cable costs the same as a UTP run. Can you get me some
quotes from a few folks about terminating and patching 800 strands x2?
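
As a strawman, with a purely assumed per-connector price (real quotes
will vary, which is rather the point):

    # Strawman termination math; the $25 figure is purely assumed.
    strands = 800
    ends = 2                      # terminate and patch both ends
    cost_per_termination = 25.0   # assumed USD per connector, parts + labor

    print(strands * ends * cost_per_termination)  # 40000.0, i.e. $40k
    # ...versus a few dollars to punch down a single UTP run.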

/vijay




Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sat Mar 29 13:57, Michael Dillon sent:


Can someone please, pretty please with sugar on top, explain
the point behind high power density?

It allows you to market your operation as a "data center". If
you spread it out to reduce power density, then the logical
conclusion is to use multiple physical locations. At that point
you are no longer centralized.

In any case, a lot of people are now questioning the traditional
data center model from various angles. The time is ripe for a
paradigm change. My theory is that the new paradigm will be centrally
managed, because there is only so much expertise to go around. But
the racks will be physically distributed, in virtually every office
building, because some things need to be close to local users. The
high speed fibre in Metro Area Networks will tie it all together
with the result that for many applications, it won't matter where
the servers are. Note that the Google MapReduce, Amazon EC2, Hadoop
trend will make it much easier to place an application without
worrying about the exact locations of the physical servers.
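
A toy illustration of the pattern, with made-up shards; note that the
code names no hosts, so placement is the framework's problem:

    # Toy MapReduce-style word count; shard contents are invented.
    from collections import defaultdict
    from itertools import chain

    def map_phase(doc):
        # Emit a (word, 1) pair per word; runs wherever the shard lives.
        return [(w, 1) for w in doc.split()]

    def reduce_phase(pairs):
        # Combine the pairs; could run on yet another machine.
        counts = defaultdict(int)
        for w, n in pairs:
            counts[w] += n
        return dict(counts)

    shards = ["fibre ties it together", "servers move to the edge"]
    pairs = chain.from_iterable(map_phase(s) for s in shards)
    print(reduce_phase(pairs))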

Back in the old days, small ISPs set up PoPs by finding a closet
in the back room of a local store to set up modem banks. In the 21st
century folks will be looking for corporate data centers with room
for a rack or two of multicore CPUs running Xen, and OpenSolaris
SANs running ZFS/raidz providing iSCSI targets to the Xen VMs.
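
Roughly along these lines; pool, device, and volume names here are
invented, and the shareiscsi property assumes a pre-COMSTAR
OpenSolaris build:

    # Hypothetical provisioning script for the rack-in-a-closet SAN.
    import subprocess

    def run(cmd):
        print("#", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # raidz pool across three (hypothetical) disks
    run(["zpool", "create", "tank", "raidz", "c1t0d0", "c1t1d0", "c1t2d0"])
    # a 32 GB zvol to back one Xen guest
    run(["zfs", "create", "-V", "32G", "tank/xenvm01"])
    # expose the zvol as an iSCSI target for the Xen box to import
    run(["zfs", "set", "shareiscsi=on", "tank/xenvm01"])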

--Michael Dillon





