NANOG mailing list archives

Re: Has virtualization become obsolete in 5G?


From: Mark Tinka <mark.tinka () seacom com>
Date: Thu, 6 Aug 2020 17:52:13 +0200



On 6/Aug/20 17:43, Mel Beckman wrote:

> I don’t think you’re going to move those volumes with Intel X86 chips.
> For example, AT&T’s Open Compute Project whitebox architecture is
> based on Broadcom Jericho2 processors, with an aggregate on-chip
> throughput of 9.6 Tbps and support for 24 ports at 400 Gbps each.
> This is where AT&T’s 5G slicing is taking place.

My point exactly.

If much of the cloud-native stack is running on servers with Intel
chips, and part of the micro-services' job is also to provide data
plane functionality at that level, I don't see how it can scale for
legacy mobile operators. It might make sense for niche, start-up
mobile operators with little-to-no traffic serving some unique use
case, but not for the classic operators we have today.

Now, if they are writing their own bits of code on or for white boxes
based on Broadcom silicon et al, I'm not sure that falls in the realm
of "micro-services with Kubernetes". But I could be wrong.


> Intel has developed nothing like this, and has had to resort to
> acquiring multi-chip solutions to get these speeds (e.g. its
> purchase of Barefoot Networks' Tofino2 IP).
>
> The X86 architecture is too complex and carries too much
> non-network-related baggage to be a serious player in 5G slicing.

Which we, as network operators, can all agree on.

But the 5G folk seem to have other ideas, so I just want to see what is
actually true, and what is just noise.

Mark.

