nanog mailing list archives

Re: Anyone can share the Network card experience


From: Greg Whynott <Greg.Whynott () oicr on ca>
Date: Tue, 5 Oct 2010 11:23:07 -0400

Hi,

Most of our traffic heads directly into memory, not the local disks, on the HPC end of things. Our file servers feed the network over roughly 24 x 10Gbit links (active/active clusters), and regularly run at over 80 percent on all ports during runs. This is all HPC / file-movement traffic. We have instruments which generate over 6TB of data per run, every 3 days, 7 days a week / 365 days a year, and we have about 20 of these instruments. So most of the data on 10Gbit is indeed static, or moving to/from a file server to/from the HPC clusters.
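
Back-of-the-envelope (my arithmetic, not figures from the post): 20 instruments at 6TB per 3-day run works out to about 40TB/day, only a few Gbit/s sustained, so the 80%+ port utilization during runs reflects bursts well above the long-term average. A quick sketch:

    # Back-of-the-envelope aggregate rate, assuming decimal TB and the
    # figures above (20 instruments, 6 TB per run, one run per 3 days).
    instruments = 20
    tb_per_run = 6
    run_days = 3

    tb_per_day = instruments * tb_per_run / run_days   # 40 TB/day
    bytes_per_sec = tb_per_day * 1e12 / 86400          # ~463 MB/s
    gbit_per_sec = bytes_per_sec * 8 / 1e9             # ~3.7 Gbit/s

    print(f"{tb_per_day:.0f} TB/day ~= {gbit_per_sec:.1f} Gbit/s sustained")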

iSCSI we run on its own network hardware, autonomous from the 'data' network. It's not in wide deployment here; only the file server is connected via 10Gbit, and the hosts using iSCSI (predominantly KVM and VMware clusters) are fed over multiple 1Gbit links for their iSCSI requirements.
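
For a sense of scale (the per-host link count isn't stated above; four paths is purely my assumption for illustration), multipathed 1GbE tops out well below the 10Gbit filer side even in the ideal case:

    # Rough ceiling for multipath iSCSI over N x 1GbE, assuming I/O is
    # spread evenly across paths and ignoring TCP/iSCSI overhead.
    paths = 4          # hypothetical 1GbE links per host, not from the post
    link_gbit = 1.0

    aggregate_gbit = paths * link_gbit         # 4 Gbit/s best case
    mbytes_per_sec = aggregate_gbit * 1e9 / 8 / 1e6

    print(f"{aggregate_gbit:.0f} Gbit/s ~= {mbytes_per_sec:.0f} MB/s vs 10 Gbit/s at the filer")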

Our external internet servers are connected to the internet via 1Gbit links, not 10Gbit, but apparently that is coming next year. The type of traffic they'll see will not be very chatty/interactive; it'll be researchers downloading data sets ranging in size from a few hundred megs to a few TB.
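
To put that upgrade in perspective (again my arithmetic, not from the post), a 1TB data set ties up a saturated 1Gbit link for a couple of hours, versus minutes at 10Gbit:

    # Ideal transfer time at line rate, ignoring protocol overhead and
    # disk limits; the 1 TB size is illustrative.
    def transfer_hours(size_tb, link_gbit):
        bits = size_tb * 1e12 * 8
        return bits / (link_gbit * 1e9) / 3600

    for gbit in (1, 10):
        print(f"1 TB over {gbit} Gbit/s: {transfer_hours(1, gbit):.2f} h")
    # 1 Gbit/s -> ~2.2 h; 10 Gbit/s -> ~13 minutes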

take care,
-g






On Oct 5, 2010, at 10:59 AM, Heath Jones wrote:

> For 10Gbit we use Intel cards for production service machines, and ConnectX/Intel in the HPC cluster.

Greg - I've not been exposed to 10G on the server side.
Does the server handle the traffic load well (even with offloading)?
That's a LOT of web requests / app queries per second!

Or are you using 10G mainly for iSCSI / file serving / static content?

Cheers


