RE: Practical numbers for IPv6 allocations


From: "Tony Hain" <alh-ietf () tndh net>
Date: Tue, 6 Oct 2009 07:32:18 -0700

Doug Barton wrote:
[ I normally don't say this, but please reply to the list only, thanks. ]

I've been a member of the "let's not assume the IPv6 space is
infinite" school from day 1, even though I feel like I have a pretty
solid grasp of the math. Others have alluded to some of the reasons
why I have concerns about this, but they mostly revolve around the
concepts that the address space is not actually flat (i.e., it's going
to be carved up and handed out to RIRs, LIRs, companies, individuals,
etc.) and that both the people making the requests and the people
doing the allocations have a WIDE (pardon the pun) variety of
motivations, not all of which are centered around the greater good.

I'm also concerned that the two main pillars of what I semi-jokingly
refer to as the "profligate" school of IPv6 allocation actually
conflict with one another (even if they both had valid major premises,
which I don't think they do). On the one hand people say, "The address
space is so huge, we should allocate and assign with a 50-100 year
time horizon" and on the other they say, "The address space is so
huge, even if we screw up 2000::/3 we have 7 more bites at the apple."
I DO believe that the space is large enough to make allocation
policies with a long time horizon, but relying on "trying again" if we
screw up the first time has a lot of costs that are currently
undefined, and should not be assumed to be trivial. 

I agree with the point about undefined costs, but the biggest one that is
easy to see is that 100-300 years from now, when someone thinks about moving
on to the second /3, this entire discussion will have been lost and there
will be an embedded-for-generations expectation that the model is cast in
stone for all of the /3s.

It also ignores
the fact that if we reduce the pool of /3s by doing something stupid
with the first one we allocate from, we reduce our opportunities to do
"cool things" with the other 7 that we haven't even thought of yet.

www.tndh.net/~tony/ietf/draft-hain-ipv6-geo-addr-00.txt shows a different
way to allocate space, using only 1/16 of the total space to achieve a /48
globally on a 6m grid. Other ideas will emerge, so you are correct that we
can't assume we have 8 shots at this, but if the first pass is really bad
the second will be so draconian in restrictions that you will never get to
the third.
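
A quick back-of-envelope check of that claim, in Python (the Earth-surface
figure below is a round approximation of mine, not a number from the draft):
1/16 of the space is a /4, and a /4 holds slightly more /48s than there are
6m x 6m cells on the planet.

    # /48s available in a /4 (1/16 of the IPv6 space) versus 6m x 6m
    # grid cells covering the Earth's surface.
    EARTH_SURFACE_M2 = 510e12            # ~510 million km^2, approximate
    cells = EARTH_SURFACE_M2 / (6 * 6)   # one /48 per 36 m^2 cell
    slots = 2 ** (48 - 4)                # /48s inside a /4
    print(f"cells: {cells:.2e}  /48s: {slots:.2e}")  # ~1.4e13 vs ~1.8e13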


Regarding the first of the "profligate" arguments, the idea that
we can do anything now that will actually have even a 25-year horizon
is naively optimistic at best. It ignores the day-to-day realities of
corporate mergers and acquisitions, residential customers changing
residences and/or ISPs, the need for PI space, etc. IPv6 is not a "set
it and forget it" tool any more than IPv4 is because a lot of the same
realities apply to it.

Well, mostly. It is not set & forget, but a lot of the day-to-day in IPv4 is
wrapped up in managing subnet sizes to 'avoid waste'. In an IPv6 environment
the only concern is the total number of subnets needed to meet
routing/access policy, avoiding the nonsense of continually shifting the
subnet size to align with the number of endpoints over time.


You also have to keep in mind that even if we could come up with a
theoretically "perfect" address allocation scheme at minimum the
existing space is going to be carved up 5 ways for each of the RIRs to
implement. (When I was at IANA I actually proposed dividing it along
the 8 /6 boundaries, which is sort of what has happened subsequently
if you notice the allocations at 2400::/12 to APNIC, 2800::/12 to
LACNIC and 2c00::/12 to AfriNIC.)

Since it's not germane to NANOG I will avoid rehashing the "why RA and
64-bit host IDs were bad ideas from the start" argument. :)

People need to get over it... the original design was 64 bits for both hosts
and routing, exceeding the design goal by >10^3; then routing wanted more, so
it was given the whole 64 bits. The fact that 64 more bits were added is not
routing's concern, but the IPv4-conservation mindset can't seem to let it go
despite having >10^6 more space to work with than the target. It could have
been 32 bits (resulting in a 96 bit address), but given that 64 bit
processors were expected to be widespread, it makes no sense to use less
than that.


In the following I'm assuming that you're familiar with the fact that
staying on the 4-byte boundaries makes sense because it makes reverse
DNS delegation easier. It also makes the math easier.

I assume you meant 4-bit.   ;)
                     ^^^


As a practical matter we're "stuck" with /64 as the smallest possible
network we can reliably assign. A /60 contains 16 /64s, which
personally I think is more than enough for a residential customer,
even taking a "long view" into consideration. 

Stop looking backward. To achieve the home network of the last millennium a
small number of subnets was appropriate. Constraining the world to that
through allocation is a self-fulfilling way to force people who need more
than that to develop/deploy hacks to work around the braindead practice.


The last time I looked
into this there were several ISPs in Japan who were assigning /60s to
their residential users with good success. OTOH, a /56 contains 256
/64s, which is way WAY more than enough for a residential user. The
idea that a residential user needs a full /48 (65,536 /64s) is absurd.

A consumer is not a professional network operator and can't do 'well
managed' subnet structures the way that this community would. To allow a
plug-n-play consumer network, a fair amount of 'waste' is required.
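
The subnet counts being argued about here are just powers of two; a short
Python check (stdlib ipaddress module, with the 2001:db8::/32 documentation
prefix standing in for a real assignment) reproduces them:

    import ipaddress

    # How many /64s fit in each candidate residential assignment;
    # counting them explicitly matches 2**(64 - plen).
    for plen in (60, 56, 48):
        net = ipaddress.ip_network(f"2001:db8::/{plen}")  # documentation prefix
        n = sum(1 for _ in net.subnets(new_prefix=64))
        print(f"/{plen} -> {n} /64 subnets")
    # /60 -> 16
    # /56 -> 256
    # /48 -> 65536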


OTOH, assigning a /48 to even a fairly large commercial customer is
perfectly reasonable. This would give them 256 /56 networks (which
would in turn have 256 /64 networks) which should be plenty to handle
the problems of multiple campuses with multiple subnets, etc.

So let's assume that we'll take /56 as the standard unit of assignment
to residential customers, and /48 as the standard unit of assignment
to commercial customers. A /32 has 65,536 /48s in it. If your business
was focused mainly on commercial customers that's not a very big pool.

A /32 is a starter-kit for a new ISP with no customers, so this argument is
a continuation of the bogus 'denial' mindset used to avoid deployments. A
real ISP should be documenting the number of customers they have, multiplied
by a /48 each, then adding in normal hierarchy loss to get a 'REAL BLOCK'.

OTOH if your business was focused primarily on residential customers
you'd have 16,777,216 /56s to work with. That's enough for even a very
large regional ISP. One could also easily imagine a model where out of
a /32 you carve out one /34 for /56 assignments (4,194,304) and use
the other 3/4ths of the space for /48s (49,152).

A really large ("national" or even "global") ISP would obviously need
more space if they were going to intelligently divide up addresses on
a regional basis. A /28 would have 16 /32s which should be enough for
even a "very large" ISP, but let's really make sure that we cover the
bases and go /24 (256 /32s). Even if you assume splitting that address
space in half, that's 2,147,483,648 (roughly 2.1 billion) /56s,
and 8,388,608 /48s.

There are roughly 2,097,152 /24s in 2000::/3 (I say "roughly" because
I'm ignoring space that's already been carved out, like 6to4, etc.),
or 262,144 /24s per /6, or 67,108,864 /32s per /6. Which means that
yes, we really do have "a lot" of space to work with. I also think it
means that even with "a lot" of space there is no point in wasting it
with foolish allocation policies that give out way more space than is
realistically necessary just "because we can."

I agree we should not be foolish, but that includes not partitioning the
space into a bunch of discontiguous blocks just 'because that is how we did
IPv4'. From a big picture perspective we have already avoided 'waste' by
agreeing to limitations on how much a new ISP gets and using a justification
process for those that need more. The fact that end customers are not
required to do the same level of justification (until they get really
large), is avoiding waste in terms of process. The lack of tight control at
the edge allows for innovation, which in turn drives the whole system
forward.
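
All of the counts in the last few paragraphs reduce to 2**(longer - shorter)
prefixes; a block of Python asserts makes the arithmetic explicit (the
figures are the ones quoted above):

    # Number of /inner prefixes inside one /outer prefix.
    def count(inner, outer):
        return 2 ** (inner - outer)

    assert count(48, 32) == 65_536         # /48s per /32
    assert count(56, 32) == 16_777_216     # /56s per /32
    assert count(56, 34) == 4_194_304      # /56s in the /34 carve-out
    assert 3 * count(48, 34) == 49_152     # /48s in the other 3/4 of a /32
    assert count(32, 28) == 16             # /32s per /28
    assert count(56, 25) == 2_147_483_648  # /56s in half of a /24
    assert count(48, 25) == 8_388_608      # /48s in half of a /24
    assert count(24, 3) == 2_097_152       # /24s in a /3 such as 2000::/3
    assert count(24, 6) == 262_144         # /24s per /6
    assert count(32, 6) == 67_108_864      # /32s per /6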


I've ignored PI space up till now but I think it's reasonable for
there to be a midpoint for PI somewhere between /48 and /32.
Personally I think that a /40 has a nice sound to it. That's 256 /48
networks. I don't see any reason why the RIRs couldn't also agree to a
/36, which would be 4,096 /48s. Even I don't see any reason why they
should mess around with numbers like /41 or /43.

I personally agree, but we have a structure in place, and it has to have
something to do to justify its continued existence.
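
The PI sizes quoted above follow the same pattern:

    assert 2 ** (48 - 40) == 256    # /48s in a /40
    assert 2 ** (48 - 36) == 4_096  # /48s in a /36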


To get back to the question that started the original thread, if I
were the one who was requesting an IPv6 allocation I would use the
following formula:

1 /56 per # of residential customers expected in 10 years
+
1 /48 per # of business customers expected in 10 years

I would make that /48 & /44
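
To illustrate how either formula translates into a request size, here is a
sketch (the customer counts and the 4-bit hierarchy allowance are made-up
example inputs of mine, not numbers from this thread):

    import math

    def prefix_needed(n_res, n_biz, res_plen=56, biz_plen=48, slack_bits=4):
        """Shortest prefix whose block covers the projected customers,
        with a few extra bits reserved for hierarchy/aggregation loss."""
        addrs = n_res * 2 ** (128 - res_plen) + n_biz * 2 ** (128 - biz_plen)
        return 128 - math.ceil(math.log2(addrs)) - slack_bits

    print(prefix_needed(1_000_000, 10_000))          # Doug's /56 + /48 -> 30
    print(prefix_needed(1_000_000, 10_000, 48, 44))  # Tony's /48 & /44 -> 23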


Then, assuming your current numbers are roughly 1/16th of what you hope
they'll be in 10 years, when actually handing out addresses I'd give
out the first /60 from each /56 to the residential customers. That way,
if you need to, you can go back and chop up those /56s.

BS ... go back and get the larger block that supports the current customer
base. The argument you present here says that an ISP gets one & only one
shot at an allocation. The only way the approach of growing the request to
match the customer base will ever be a problem is if every person on the
planet individually subscribed to every service provider, but even that
nonsense would work up to 32k providers globally. 

I'd also start
off handing out the first /48 out of every /44 to my commercial
customers. That way they will have room to expand painlessly. This is
sort of a bastardized version of the "sparse allocation" model that
the RIRs have promoted. (Obviously the 1/16th number was chosen for
convenience, but hopefully you get the idea of what I'm going for
here.)
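
A minimal sketch of that sparse scheme (stdlib ipaddress only; the
2001:db8::/32 pool and the four printed entries are illustrative, not
a real allocation):

    import ipaddress
    from itertools import islice

    # Give each residential customer the first /60 out of a /56 of its
    # own, leaving the rest of the /56 free in case it must be chopped up
    # later. (The same idea hands commercial customers the first /48 out
    # of each /44.)
    pool = ipaddress.ip_network("2001:db8::/32")  # documentation prefix

    def residential_assignments(pool):
        for block in pool.subnets(new_prefix=56):     # one /56 per customer
            yield next(block.subnets(new_prefix=60))  # hand out its first /60

    for a in islice(residential_assignments(pool), 4):
        print(a)
    # 2001:db8::/60
    # 2001:db8:0:100::/60
    # 2001:db8:0:200::/60
    # 2001:db8:0:300::/60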

I realize that this is quite long, so if you've gotten this far,
congratulations! I hope it was useful.

As people emerge from denial they need this type of lengthy and
well-reasoned line of thought, even though I disagree with some of the
details.

Tony



