nanog mailing list archives

Re: Raspberry pi - high density


From: Rafael Possamai <rafael () gav ufsc br>
Date: Sat, 9 May 2015 08:29:25 -0500

From the work I've done in the past with clusters, bandwidth is usually
not the biggest issue. When you work with "big data", say 500 million
data points, most mathematicians condense it all down into averages,
standard deviations, probabilities, etc., which are much smaller to
store on disk, to run the analysis against, and to shuttle between
master and nodes. So for one project at a time, your biggest concerns
are CPU clock, RAM, interrupts, etc. If you wanted to run all of the
Big Ten's academic projects on one big cluster, for example, then
networking might become an issue purely because of the volume.
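
Roughly, the idea looks like this in Python (just a sketch with made-up
numbers, assuming NumPy is available):

    # Sketch: each node condenses its slice of the raw samples into a few
    # summary statistics, so only a handful of floats cross the network.
    import numpy as np

    def summarize(samples):
        return {
            "count": int(samples.size),
            "mean": float(samples.mean()),
            "std": float(samples.std()),
            "min": float(samples.min()),
            "max": float(samples.max()),
        }

    # e.g. one node's 5-million-point slice of a 500-million-point dataset
    node_slice = np.random.normal(size=5_000_000)
    stats = summarize(node_slice)  # a few numbers go to the master, not 5M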

The more data you have, the longer it takes to perform any meaningful
analysis on it, so your bottleneck is really TFLOPS rather than packets
per second. With Facebook it's the opposite: it's mostly pictures and
videos of cats coming in and out of the servers, with lots of reads and
writes on their storage. In that case, switching Tbps of traffic is how
they make money.
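
Back-of-the-envelope, with every figure below made up just to show the
shape of the argument:

    # All numbers here are illustrative assumptions, not measurements.
    points     = 500_000_000    # raw data points
    ops_per_pt = 100            # assumed floating-point ops per point analysed
    node_flops = 10e9           # assumed sustained FLOPS on one node (10 GFLOPS)
    link_bps   = 1e9            # 1 Gbps link between master and node

    compute_s  = points * ops_per_pt / node_flops  # ~5 seconds of math
    stats_bits = 5 * 8 * 8                         # five doubles sent back
    transfer_s = stats_bits / link_bps             # a fraction of a millisecond

    print(f"compute ~{compute_s:.0f}s, transfer ~{transfer_s * 1e6:.2f}us")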

A good example is packaging your application in a Docker container and
deploying a cluster with CoreOS. You save all that capex and instead
spend by the hour. I believe Azure and EC2 already support CoreOS.




On Sat, May 9, 2015 at 12:48 AM, Tim Raphael <raphael.timothy () gmail com>
wrote:

The problem is, I can get more processing power and RAM out of two 10RU
blade chassis while only needing 64 10G ports...

32 x 256GB RAM per blade = 8.2TB
32 x 16 cores x 2.4GHz = 1,228.8GHz
(not based on current highest possible, just using reasonable specs)

This needs only 4 QFX5100s, which will cost less than a populated 6513
and give lower latency. Power and cooling would be lower too.

RPi = 900MHz and 1GB RAM. So to equal the two chassis, you'll need:

1,228.8 / 0.9 = ~1,365 Pis for compute (the main performance aspect of a
supercomputer), meaning roughly double the physical space required
compared to the chassis option.
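
Same math as a quick sanity check in Python (using the assumed specs
above, not vendor figures):

    # Sanity check of the figures above, using the same assumed specs.
    blades     = 2 * 16                # two chassis, 16 blades each
    ram_tb     = blades * 256 / 1000   # 8,192 GB, i.e. ~8.2 TB of RAM
    agg_ghz    = blades * 16 * 2.4     # 512 cores -> 1,228.8 GHz aggregate clock
    pis_needed = agg_ghz / 0.9         # treating each Pi as 0.9 GHz of compute
    print(f"{ram_tb:.1f} TB RAM, {agg_ghz:.1f} GHz, ~{pis_needed:.0f} Pis")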

So yes, infeasible indeed.

Regards,

Tim Raphael

On 9 May 2015, at 1:24 pm, charles () thefnf org wrote:



So I just crunched the numbers. How many pies could I cram in a rack?

Check my numbers?

48U rack budget
6513 takes 15U, (48 - 15) = 33U remaining for pie
6513 max of 576 copper ports

Pi dimensions:

3.37" long (5 fit front to back)
2.21" wide (6 fit side by side)
0.83" high
5 x 6 = 30, call it 25 per U (rounding down for Ethernet cable space
etc.) = 825 pi
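
Or the same napkin math in Python:

    # Napkin math; dimensions in inches, figures as assumed above.
    rack_u    = 48
    switch_u  = 15                  # 6513 as budgeted above
    pi_u      = rack_u - switch_u   # 33U left for Pis
    per_u     = 25                  # 5 deep x 6 wide = 30, rounded down for cabling
    total_pis = per_u * pi_u        # = 825
    ports     = 576                 # max copper ports on the 6513
    print(f"{total_pis} pies in {pi_u}U, against {ports} copper ports")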

Cable management and heat would probably kill this before it ever
reached completion, but lol...





