nanog mailing list archives

Re: Dual stack IPv6 for IPv4 depletion


From: Owen DeLong <owen () delong com>
Date: Wed, 15 Jul 2015 13:06:37 -0700


Keep in mind, I wasn't intending you as the recipient. However, IS
their premise wrong? Is prudence looking at incomprehensible numbers and
saying "we're so unlikely to run out that it just doesn't matter" or is
prudence "Well, we have no idea what's coming, so let's be a little less
wild-haired in the early periods"? The theory being it's a lot harder to
take away that /48 30 years from now than it is to just assign the rest of
it to go along with the /56 (or /52 or whatever) if it turns out they're
needed. I personally like your idea of reserving the /48 and issuing the
/56.

Yes… Their premise is wrong. We’re not talking about /48s for soda cans.
We’re talking about /48s for end-sites… Households, individual business
buildings, etc. The soda can is probably just one host on a /64 somewhere
within that /48. Every six-pack in your house probably fits in the same /64.




So you asked an interesting question about whether or not we NEED to give
everyone a /48. Based on the math, I think the more interesting question
is: what reason is there NOT to give everyone a /48? You want to
future-proof it to 20 billion people? Ok, that's 1,600+ /48s per person.
You want to future-proof it further to 25% sparse allocation? Ok, that's
400+ /48s per person (at 20 billion people).

At those levels, even if you gave every one of every person's devices a
/48, we still wouldn't run out of the first 1/8 of the available space.
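
(For those who want to check the arithmetic, a quick back-of-the-envelope
sketch in Python. The 20-billion figure and the restriction to the 2^45
/48s inside 2000::/3 come from the discussion above; the rest is plain
arithmetic:)

        # /48s inside 2000::/3: 48 - 3 = 45 free bits.
        slash48s_in_first_third = 2 ** 45          # ~3.52e13 /48s
        people = 20_000_000_000                    # 20 billion, per the above

        per_person = slash48s_in_first_third / people
        print(f"/48s per person: {per_person:,.0f}")        # ~1,759 -> "1,600+"

        # At 25% allocation efficiency (sparse allocation):
        print(f"at 25% efficiency: {per_person / 4:,.0f}")  # ~440 -> "400+"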

Split the difference, go with a /52


That's not splitting the difference. :)  A /56 is halfway between a /48
and a /64. That's 256 /64s, for those keeping score at home.
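
(The subnet counts are easy to verify: a site prefix of length p contains
2**(64 - p) /64 networks. A two-line Python sketch, my arithmetic rather
than anything from the thread:)

        for p in (48, 51, 52, 56):
            print(f"/{p}: {2 ** (64 - p):>6,} /64 networks")
        # /48: 65,536   /51:  8,192   /52:  4,096   /56:    256
        # In prefix bits, /56 is the midpoint of /48 and /64, and /52 is
        # the midpoint of /48 and /56.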


It's splitting the difference between a /56 and a /48. I can't imagine
short of the Nanotech Revolution that anyone really needs eight thousand
separate networks, and even then... Besides, I recall someone at some point
being grumpy about oddly numbered masks, and a /51 is probably going to
trip that. :)

It’s not about the number of networks. It’s about the number of bits available to
provide flexibility in automating topology to create pluggable network components
that just work.

We’ve only begun to explore the new capabilities of DHCPv6-PD within a site.

Clamping the size of a “site” down below the original design criteria considered
when DHCPv6-PD was developed will most likely stifle innovation in this area.
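
(To make that concrete, here is a minimal sketch of the kind of hierarchical
carving DHCPv6-PD makes possible, using Python's ipaddress module and a
prefix from the 2001:db8::/32 documentation range. The tiers and sizes are
my illustration, not anything specified by the protocol or the thread:)

        import ipaddress

        site = ipaddress.IPv6Network("2001:db8:1::/48")   # delegated to the site router

        # Tier 1: hand a /56 to each downstream router -- room for 256 of them.
        tier1 = list(site.subnets(new_prefix=56))

        # Tier 2: each downstream router hands a /60 to devices below it
        # (16 each), and every /60 still holds 16 /64 LANs.
        tier2 = list(tier1[0].subnets(new_prefix=60))

        print(tier1[1])                                    # 2001:db8:1:100::/56
        print(tier2[1])                                    # 2001:db8:1:10::/60
        print(len(list(tier2[1].subnets(new_prefix=64))))  # 16

        # Start the site at /56 instead of /48 and this same two-tier scheme
        # is already out of bits -- exactly the flexibility being argued for.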

I think folks are partly missing the conservationists' point, and all the
math in the world isn't going to change that. While the... let's call them
IPv6 Libertines... are arguing that there's no mathematically foreseeable
way we're going to run out of addresses even at /48s for the proverbial
soda cans, the conservationists are going, "Yes, you do math wonderfully.
Meantime, is it REALLY causing anguish for someone to only get 256 (or
1,024, or 4,096) networks as opposed to 65,536 of them? If not, why not go
with the smaller one? It bulletproofs us against the unforeseen to an
extent."

The problem is that the conservationists are arguing against /48s for soda
cans while the libertines are not arguing for that.

/48 protects against the unforeseen pretty well. Even if it doesn’t, that’s why we
also have the /3 safeguard.

If we turn out to have gotten it wrong… If it turns out that we use up all of the first
/3 before we run out of useful life in the protocol, then I’m all for coming up with
different policy for the remaining /3s. Even if you want to argue that we have to
keep handing out addresses while we develop that policy, I’ll say you can hand
out the second /3 in the same way while we develop the more restrictive policy
for the remaining 74.9999999999% of the total address space.

Short of something incredibly surprising, we’re not going to exhaust that first /3
in less than 50 years even if we give many /48s to every end site on the planet.
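
(A sanity check on that claim, with deliberately aggressive assumptions of
my own: 10 billion end sites, each handed ten /48s, re-issued every single
year with zero reuse:)

        slash48s_in_one_third = 2 ** 45           # /48s inside 2000::/3, ~3.52e13
        burn_per_year = 10_000_000_000 * 10       # 10B end sites x ten /48s, yearly

        print(f"years to exhaust the first /3: "
              f"{slash48s_in_one_third / burn_per_year:,.0f}")   # ~352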

In the case of something really surprising, it would be even more surprising if
we found a way to make IPv6 scale to that on the many levels other than
address space:

        1.      IPv6 developers punted on the routing problem and insisted that
                aggregation was the desired alternative to solving it.

                In reality, I think this will most likely be the first scaling limit we see
                in IPv6, but I could be wrong.

                Aggregation is somewhere between inconvenient and infeasible, as
                we have seen with IPv4: we managed some small gains when
                aggregation (CIDR) was first introduced, taking the routing table
                from ~65,000 entries down to ~45,000. Look where it is now: more
                than 500,000 entries and continuing to grow.

                More multihoming and more individual PI prefixes are going to become
                the norm, not less.

        2.      IP is a relatively high-overhead protocol. Putting an IPv6 stack into a
                molecule-sized device is, in fact, unlikely because of the limits of storage
                for software in a molecule-sized device. Nanobots à la Big Hero 6,
                sure… Do you really think they needed more than a /48 to address
                every nanobot on the screen throughout the entire movie? I bet you
                could give a unique address to every nanobot in every frame of the
                movie and still not burn through a /48 (see the sketch after this list).

                So of the 400 /48s we can give each person on Earth, let’s dedicate
                100 of them to running their nanobots. We’ve still got 300 /48s available,
                of which one will run the rest of their household, leaving 299 in reserve.
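
(The nanobot arithmetic holds up under any plausible guess about the film;
the run time and frame rate below are my assumptions, not from the post:)

        addresses_in_slash48 = 2 ** 80            # 128 - 48 = 80 host bits, ~1.2e24

        frames = 102 * 60 * 24                    # ~102-minute film at 24 fps
        print(f"addressable nanobots per frame: "
              f"{addresses_in_slash48 // frames:.2e}")           # ~8.2e18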

Yes, their premise is wrong. They are optimizing for the wrong thing.

As an aside, someone else has stated that for one reason or another IPv6 is
unlikely to last more than a couple of decades, and so even if something
crazy happened to deplete it, the replacement would be in place before it
could. I would like to ask: what about the last 20 years of IPv6 adoption
in the face of v4 exhaustion inspires someone to believe that people will
be willing to make the changeover just because it's better?

I try to avoid “20 years” and instead say “50 years”. The reality is that under
even the wildest projection of current policies and crazy scientific development,
the first /3 lasts more than 100 years.

If you don’t think IPv4 is past its useful lifetime, you aren’t paying attention. However,
even assuming that IPv4 (RFC 791, September 1981) lasts another 10 years, that
would be a total protocol lifetime of (2025 - 1981 =) 44 years. Yes, there will still be
islands of IPv4 utilization floating around well past that time, just as there are still
islands of NCP today. Oh, wait… Just as there are islands of IPX today.

However, nobody will route IPv4 natively on the global internet 10 years from now.
It will have gone the way of Latin. Scholars will remember it, but it will not have much
in the way of day-to-day utilization except in specialized circumstances.

I don’t think people will make the change just because it’s better. I think that, just as
is the case now, somewhere within the next 50 years people will move away from
IPv6 because we will hit some other limitation baked into the protocol and will have
to move on.

As long as we have enough addresses to ensure that address shortage is not the
cause of protocol end-of-life, any additional conservatism is, in fact, just waste. If it
has a negative impact on development, then it is even worse than waste; it is
actually harmful.

Owen

