nanog mailing list archives

Re: Dual stack IPv6 for IPv4 depletion


From: Owen DeLong <owen () delong com>
Date: Thu, 9 Jul 2015 13:05:03 -0700

In short, much of what you say below has been discussed before and with the general conclusion “geography != topology 
and no, geographic allocation would not improve summarization”.

I’m not saying that assignments need to be static, but I am saying that we need to put the default size somewhere that 
doesn’t inhibit future development or close off options at the application level.

That’s why I’m arguing for a default /48.
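
(Back-of-the-envelope, in Python: a /48 per end site leaves 16 bits between the site prefix and the /64 LAN boundary, 
which is the "64k networks" figure that comes up later in this thread.)

    # Subnet bits between a /48 end-site assignment and the /64 LAN boundary:
    print(2 ** (64 - 48))   # 65536 possible /64 networks per site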

Owen

On Jul 9, 2015, at 12:24 , Naslund, Steve <SNaslund () medline com> wrote:

Seems to me that the problem might be thinking that the allocation toward the customer is a static thing.  I think it 
is limiting to think that way going forward.  Our industry created DHCP so we didn't have to deal with statically 
configured users who did not want to deal with IP addressing.  Seems to me that a natural progression is to hand a 
network block to the CPE (DHCP-PD) and let it deal with it.  No reason a CPE device cannot be created that will 
request more addresses when it needs them and dynamically receive a larger assignment.
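
As a rough sketch of that kind of CPE behaviour (plain Python, standard-library ipaddress module only; the prefix 
sizes and the request_larger_prefix() helper are made up for illustration, this is not a real DHCPv6-PD client):

    import ipaddress

    class ToyCPE:
        # Toy model: carve /64 LANs out of the delegated prefix and ask
        # upstream for a larger delegation once the current one is used up.
        def __init__(self, delegated):
            self.delegated = ipaddress.ip_network(delegated)
            self.lans = set()

        def new_lan(self):
            if len(self.lans) >= self.delegated.num_addresses // 2 ** 64:
                # A real CPE would send a new DHCPv6-PD request here.
                self.delegated = self.request_larger_prefix()
            for candidate in self.delegated.subnets(new_prefix=64):
                if candidate not in self.lans:
                    self.lans.add(candidate)
                    return candidate

        def request_larger_prefix(self):
            # Hypothetical upstream grant of one more bit (e.g. /60 -> /59).
            return self.delegated.supernet(prefixlen_diff=1)

    cpe = ToyCPE("2001:db8:0:10::/60")   # documentation prefix, /60 delegation
    for _ in range(17):                  # the 17th LAN triggers a larger request
        print(cpe.new_lan())

Existing LANs keep their /64s when the delegation grows, so nothing has to renumber.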

When you think about it long term, our network infrastructure is pretty archaic in that we have to do paperwork to get 
a block assignment from the regional numbering authority and then manually chop that up.  I would expect that model 
to die over time and become more of a hierarchy whereby addresses are dynamically assigned top to bottom.  Seems like 
the numbering authority could be a lot more effective if a network could tell them about its utilization and have 
additional address assignments happen automatically.  The converse would be true as well: a network could 
reconfigure to free underutilized blocks on its own.  If a customer CPE needs more addresses, it will request them.  
If you add a PoP to your network, it should automatically get an allocation from an upstream device.
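
A minimal sketch of that top-to-bottom idea (again standard-library Python; the /32, /40 and /48 sizes are just 
illustrative, not a policy proposal):

    import ipaddress

    class PrefixPool:
        # One delegation point in the hierarchy (registry -> ISP -> PoP -> customer).
        # Each level hands equal-sized child prefixes out of its own block on demand.
        def __init__(self, block, child_len):
            net = ipaddress.ip_network(str(block))
            self.free = iter(net.subnets(new_prefix=child_len))

        def allocate(self):
            return next(self.free)       # StopIteration == time to ask the level above

    isp = PrefixPool("2001:db8::/32", 40)   # ISP holds a /32, carves /40s per PoP
    pop = PrefixPool(isp.allocate(), 48)    # a new PoP automatically gets its /40
    print(pop.allocate())                   # first customer /48 at that PoP
    print(pop.allocate())                   # next customer /48

A real system would also have to report utilization back up the chain and return unused blocks, as described above, 
but the allocation itself is just this kind of prefix arithmetic.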

The only reason anyone cares what their address is, is that our name-to-address mapping via DNS 
is so slow to update.  The end user does not care what addresses they get as long as everyone can reach what they 
need to.  Your customers would not care about renumbering pain if there wasn't any.  Today they couldn't care less whether it 
is V4 or V6 as long as everyone can see each other.  My dad gets V6 on his cell phone and he can't even spell IP.

Another inefficient legacy is the assignment of address space on a service provider basis when geographic assignment 
would allow for better summarization.  If that happened you could create a better model where fewer routers need to 
carry a "full table" view of the Internet.  As long as I know how to get around my area and reach regional routers that 
can reach out globally, that is all I need.  Then you would not have the limitation that a wide variety of routers 
need to carry every route, and the /64 routing limitation goes away.  Today our routing is very much all or nothing.  
Using defaults or getting a whole table are probably the two most common options (yeah, I know there are others 
but those are the main two).
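
To make the summarization point concrete (Python, documentation prefixes only): contiguous blocks, which is what a 
geographic scheme would try to produce, collapse into one covering route, while the same number of scattered blocks 
do not.

    import ipaddress

    # Four contiguous /48s collapse into a single /46 covering route.
    contiguous = [ipaddress.ip_network("2001:db8:%x::/48" % i) for i in range(4)]
    print(list(ipaddress.collapse_addresses(contiguous)))
    # [IPv6Network('2001:db8::/46')]

    # Four scattered /48s (what per-provider assignment tends to look like
    # from any one region) cannot be summarized and stay as four routes.
    scattered = [ipaddress.ip_network(p) for p in
                 ("2001:db8:10::/48", "2001:db8:4f::/48",
                  "2001:db8:90::/48", "2001:db8:c3::/48")]
    print(list(ipaddress.collapse_addresses(scattered)))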

The ideas about why we build VLANs are pretty out of date too.  It drives me nuts when I see the usual books 
giving you the usual example that "accounting and their server are on one VLAN and engineering and their server are 
on another VLAN" and that this is for performance and security reasons.  Some of the biggest vendors in the business 
use examples like this (yes, Cisco, I'm looking at you) and it just does not work that way in the real world.  Who 
gets to what server is most often decided by the server itself (AD membership or group policy of some type).  If the 
accounting and engineering departments are both going to a cloud service, VLAN separation is pretty moot.  In a world 
where my refrigerator wants to talk to the power company and send a shopping list to my car, VLAN based security is 
not really a solution.  In the "Internet of things" we keep hearing about, everything is talking to everything. 
In that world, security depends heavily on each device defending itself rather than on a VLAN boundary.  From 
what I am seeing out there today, there are usually far too many VLANs and too much layer three going on in most 
large networks.

In the future it would seem that systems would create their own little networks ad-hoc as needed for the best 
efficiency.  I know this is not all out there today but planning address allocation 10 years down the road might be 
an exercise in futility.  I would suggest planning for today and building it so you can easily change it when your 
predictions invariably prove wrong or short-sighted.

Steven Naslund
Chicago IL



On Jul 9, 2015, at 09:16 , Matthew Huff <mhuff () ox com> wrote:

When I see a car that needs a /56 subnet then I’ll take your use case seriously. Otherwise, it’s just plain 
laughable. Yes, I could theorize a use case for this, but then I could theorize that someday everyone will get to 
work using jetpacks.

When I see a reason not to give out /48s, I might start taking your argument seriously.

We have prefix delegation already via DHCP-PD, but some in the IPv6 world don’t even want to support DHCP, how does 
SLAAC do prefix delegation, or am I missing something else? I assume each car is going to be running as an RA? Given the 
quality of implementations of IPv6 in embedded devices so far, I find that pretty ludicrous.

Clearly the quality of IPv6 in embedded devices needs to improve. There’s work being done on LWIP IPv6, but I 
don’t think it’s ready for prime time yet. (LWIP is one of the most popular embedded IP stacks. You’ll find it in a 
wide range of devices, including, but not limited to, the ESP8266).

Seriously, the IPv6 world needs to get a clue. Creating new protocols and solutions at this point in the game is 
only making IPv6 deployment more difficult, not less. IPv6 needs to stabilize and get going; instead it 
seems everyone is musing about a theoretical world where users need 64k networks. I understand the desire 
to think things through, but IPv6 is how many years old, and we are still arguing about these things? 
Don’t let the perfect be the enemy of the good.

/48s for end sites are NOT new… They have been part of the IPv6 design criteria from about the same time 128-bit 
addresses were decided. It is these silly IPv4-think notions of /56 and /60 that are new changes to the protocol.

The good news is that it’s very easy to deploy /48s and if it turns out we were wrong, virtually everyone currently 
advocating /48s will happily help you get more restrictive allocation policies when 2000::/3 runs out (assuming any 
of us are still alive when that happens).
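
For scale: 2000::/3, the block currently being used for global unicast, holds 2^45 /48s, about 35 trillion end-site 
assignments, before touching the other seven eighths of the address space.

    # /48 end-site assignments available inside 2000::/3:
    print(2 ** (48 - 3))    # 35184372088832, roughly 3.5e13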


Owen


