Interesting People mailing list archives

The Iron Laws of Network Cost Scaling


From: David Farber <dave () farber net>
Date: Wed, 26 May 2010 14:37:23 -0400



Begin forwarded message:

From: "W. Craig Trader" <craig () trader name>
Date: May 26, 2010 2:25:20 PM EDT
To: David Farber <dave () farber net>
Subject: For IP: The Iron Laws of Network Cost Scaling

http://esr.ibiblio.org/?p=2024

Eric Raymond on The Iron Laws of Network Cost Scaling
<http://esr.ibiblio.org/?p=2024>


The cost curves that show up as we build and run communications
networks have properties that seem counterintuitive to
many people, but that have been surprisingly consistent across lots of
different technologies since at least the days of the telegraph, and
probably further back than that.

Herewith, the Iron Laws of Network Cost Scaling:

1. Upgrade cost per increment of capacity decreases as capacity rises.

2. Network costs scale primarily with the number of troubleshooters
required to run them, not with capacity.

3. Under market pressure, network pricing evolves from metered to
flat-rate.

When you learn to apply all three of these together, you can make useful
qualitative predictions across a surprisingly broad set of real-world cases.

The easiest way to see why upgrade cost per increment of capacity
decreases as capacity rises is to think about the high capital cost
associated with laying the first cable from A to B. You’re going to have
to pay to dig a trench and lay conduit, or put the functional equivalent
of telephone poles in a right-of-way. If we’re talking wireless, you
need two antenna towers – OK, maybe just one if the starting end is
already on your network. Trenches are expensive; rights-of-way and poles
are expensive;
towers are expensive.

But once you’ve got that physical conduit or poles or towers in place,
pulling replacement wire or upgrading your radio repeaters is much less
expensive. As your tech level rises, you eventually stop having to do even that;
you find cleverer ways to squeeze bandwidth out of fiber, copper, or air
by using denser encodings, better noise cancellation – better
algorithms. The action moves from hardware to software and upgrade costs
drop.
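
To make that concrete, here is a rough sketch (mine, not the post’s) of
how a pure software change raises throughput over unchanged hardware:
switching the same radio or modem to a denser modulation carries more
bits per symbol, up to the Shannon ceiling for the channel. The channel
width, SNR, and modulation choices below are invented for illustration.

    import math

    def shannon_limit_bps(bandwidth_hz, snr_db):
        """Shannon-Hartley upper bound on capacity for one channel."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    def raw_throughput_bps(symbol_rate_hz, bits_per_symbol):
        """Throughput from the modulation alone, ignoring coding overhead."""
        return symbol_rate_hz * bits_per_symbol

    # A made-up 20 MHz channel at 25 dB SNR:
    print(shannon_limit_bps(20e6, 25) / 1e6)   # ~166 Mbit/s theoretical ceiling
    print(raw_throughput_bps(20e6, 2) / 1e6)   # QPSK:    40 Mbit/s
    print(raw_throughput_bps(20e6, 6) / 1e6)   # 64-QAM: 120 Mbit/s; same tower, new firmware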

As a very recent example of how the shift from hardware to software
affects developing communications networks, the differences between the
two major fourth-generation wireless data technologies, WiMAX and LTE,
are so slight that the same hardware, running different software, can
support either. This means that on any timescale longer than that
required to push firmware upgrades to your repeaters, the differences
between the two aren’t of consequence for planning.

The amortized cost of network capacity gets cheaper fast, partly due to
the first Iron Law and partly due to the Moore’s Law cost curve of
hardware. Skilled people don’t. Therefore the dominant cost driver is
salaries for people required to run the network. Furthermore,
hardware/software maintenance costs tend to be low for the links (which
are simple) and high for the switching nodes (which are complex).

The consequence is that cost scales not with network capacity but
roughly with the number of routers and switches in the network, and is
primarily salaries for people to watch and troubleshoot the routers and
switches. This fact is well known to anyone who has ever had to actually
/run/ a data center or a network; it’s a reality that recurs very
forcefully every time you have to pay the monthly bills.

There’s actually more we can say about this. In a roughly scale-free
network <http://en.wikipedia.org/wiki/Scale-free_network> (which
communication networks with smart routing tend to become; it’s an
effective way of maximizing robustness against random failures), the
node count is coupled to the link count by about n log n. This means
that as network reach or coverage (proportional to the number of leaf
nodes, aka customers) rises linearly, the number of interior nodes
(which counts routers and switches) actually rises /sublinearly/.

This is all in stark contrast with most people’s intuitions about
network costs, which heavily overweight capital expenditures, heavily
overweight bandwidth cost, and predict linear or superlinear rises in
administrative costs as coverage increases (the “high-friction” model of
network costs). But with a more correct model in hand, we can approach
the third Iron Law: under market pressure, network pricing evolves from
metered to flat-rate.

This is certainly the way network pricing has moved historically
<http://esr.ibiblio.org/?p=2021&cpage=1#comment-254808>. Can we say
anything generative about why this is so?

Yes, as a matter of fact, we can. We saw before that as customer count
rises linearly, the major cost drivers in the network (router and switch
count, and salaries for people to watch them) rise sublinearly. But to
do per-transaction metering you have to store, manage, and process an
amount of state that rises directly with customer usage – that is,
linearly. This means that, especially on a maturing network, the cost to
meter usage rises faster than the cost to serve new customers!
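
Continuing the same toy model (again my own sketch, with invented cost
constants): if metering cost tracks per-customer transaction volume
(linear in n) while operating cost tracks the interior node count
(roughly n/log n), the metering share of combined cost creeps upward as
the network grows.

    import math

    # All constants are hypothetical; only the trend is the point.
    COST_PER_METERED_RECORD = 0.0001  # cost to store/process one usage record
    COST_PER_INTERIOR_NODE = 50.0     # salaries dominate per-node cost
    RECORDS_PER_CUSTOMER = 300        # transactions per customer per billing cycle

    for n in (1_000, 100_000, 10_000_000):
        interior = n / math.log(n)     # rough inversion of m*log(m) ~ n
        metering = n * RECORDS_PER_CUSTOMER * COST_PER_METERED_RECORD
        operating = interior * COST_PER_INTERIOR_NODE
        share = metering / (metering + operating)
        print(f"{n:>12,} customers: metering is {share:.1%} of combined cost")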

The qualifier “under market pressure” is important. Customers really
don’t like being charged for both usage and maximum capacity
<http://esr.ibiblio.org/?p=2021&cpage=1#comment-254804>. But they
dislike being charged for usage more, because it makes their costs
harder to predict (and usually higher). Comms providers are ruthless
about exploiting the myth of high network friction to justify high
prices and metering, and they generally get away with this for a while
in the early stages of a new communications technology. But at least two
things cooperate to change this over time, both actually effects of the
widening gap between that mythical “high-friction” cost curve and the
actual one.

One is that metering overhead rises as a proportional drag on
per-customer revenue (and thus profits) as the network’s coverage
increases. Again, this is strictly predictable from the fact that the
cost of service rises more slowly than the cost to meter. The other is
that profit margins, as in any other sort of market, tend to get
competed down towards actual cost levels. Telecomms vendors, like all
other producers of non-positional goods, feel constant pressure to price
in a way that actually matches their cost structure more closely.

Usually this means pricing by maximum capacity with no metering.
Eventually, as link capacity reaches a level the average customer isn’t
capable of saturating, flat-rate unlimited starts to make more sense.

Thus, communications-network prices have a very specific trajectory
that’s repeated over and over with new technologies: from metered by
transaction to billed by maximum capacity to flat-rate unlimited. The
providers resist each change as ferociously as possible, because each
one is accompanied by a shift to decreasing profit margins on increasing
volume, but the underlying logic is inexorable.





