nanog mailing list archives

Re: IPv6 fc00::/7 — Unique local addresses


From: Mark Smith <nanog () 85d5b20a518b8f6864949bd940457dc124746ddc nosense org>
Date: Thu, 21 Oct 2010 12:06:48 +1030

Hi Owen,

On Wed, 20 Oct 2010 17:51:11 -0700
Owen DeLong <owen () delong com> wrote:


On Oct 20, 2010, at 5:29 PM, Mark Smith wrote:

On Wed, 20 Oct 2010 19:39:19 -0400
Deepak Jain <deepak () ai net> wrote:

Use a pseudo-random number; don't follow bad examples. Where are these
examples? I'd be curious what they say about why they haven't
followed the pseudo-random number requirement.

Use something like fd00::1234, or incorporate something like the
interface's MAC address into the address? It'd make the address quite
unreadable though.

Unique Local IPv6 Unicast Addresses
http://tools.ietf.org/rfc/rfc4193.txt


[snipped a bunch of stuff above]. 

According to the RFC: 

3.2

  The local assignments are self-generated and do not need any central
  coordination or assignment, but have an extremely high probability of
  being unique.

3.2.1.  Locally Assigned Global IDs

  Locally assigned Global IDs MUST be generated with a pseudo-random
  algorithm consistent with [RANDOM].  Section 3.2.2 describes a
  suggested algorithm.  It is important that all sites generating
  Global IDs use a functionally similar algorithm to ensure there is a
  high probability of uniqueness.

  The use of a pseudo-random algorithm to generate Global IDs in the
  locally assigned prefix gives an assurance that any network numbered
  using such a prefix is highly unlikely to have that address space
  clash with any other network that has another locally assigned prefix
  allocated to it.  This is a particularly useful property when
  considering a number of scenarios including networks that merge,
  overlapping VPN address space, or hosts mobile between such networks.

----

Global ID in this case means the 40-bit pseudo-random thing. The point
here is, we are all supposed to pick our own poison and pray that we
are unique.

The chance of collision depends on both the randomness of the 40 bits
*and* how interconnected the ULA domains are. You'll have to sin a lot
to be that unlucky.

Here's the table from the RFC showing the odds of collision based on interconnectedness -

The following table shows the probability of a collision for a range
  of connections using a 40-bit Global ID field.

     Connections      Probability of Collision

         2                1.81*10^-12
        10                4.54*10^-11
       100                4.54*10^-09
      1000                4.54*10^-07
     10000                4.54*10^-05

  Based on this analysis, the uniqueness of locally generated Global
  IDs is adequate for sites planning a small to moderate amount of
  inter-site communication using locally generated Global IDs.
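The figures in that table are just the standard birthday approximation applied to 40-bit identifiers: for N interconnected sites, P(collision) is roughly N^2 / 2^41 (this is the analysis in section 3.2.3 of the RFC). A quick sketch to reproduce them, if you want to check the arithmetic yourself (function name is mine):

```python
def collision_probability(n_connections, bits=40):
    """Birthday-paradox approximation for N sites drawing `bits`-bit
    Global IDs uniformly at random: P ~= N^2 / 2^(bits+1)."""
    return n_connections ** 2 / 2 ** (bits + 1)

# Reproduce the RFC 4193 table (values agree within rounding).
for n in (2, 10, 100, 1000, 10000):
    print(f'{n:>6}  {collision_probability(n):.2e}')
```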


An algorithm is suggested in 3.2.2, though. Perhaps SixXS uses it. Anyway, the SixXS tool seems pretty slick.
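For anyone who'd rather not depend on a web tool, the section 3.2.2 recipe (SHA-1 over a 64-bit NTP-format timestamp concatenated with an EUI-64, keeping the low-order 40 bits as the Global ID) is easy enough to sketch in Python. The helper name and the MAC-to-EUI-64 mapping details below are my own fill-ins, so treat this as illustrative rather than a reference implementation:

```python
import hashlib
import time
import uuid

def ula_prefix():
    """Illustrative sketch of the RFC 4193 section 3.2.2 algorithm:
    derive a 40-bit pseudo-random Global ID and return an fd00::/8
    /48 prefix built from it."""
    # 64-bit NTP-format timestamp: seconds since 1900 in the high
    # 32 bits, fractional seconds in the low 32 bits.
    now = time.time() + 2208988800          # Unix epoch -> NTP epoch
    ntp64 = (int(now) << 32) | int((now % 1) * 2 ** 32)

    # EUI-64 from the machine's MAC (uuid.getnode()), with ff:fe
    # inserted in the middle per the usual MAC-48 -> EUI-64 mapping.
    mac = uuid.getnode()
    eui64 = ((mac >> 24) << 40) | (0xFFFE << 24) | (mac & 0xFFFFFF)

    # SHA-1 over (timestamp || EUI-64); keep the low 40 bits.
    digest = hashlib.sha1(ntp64.to_bytes(8, 'big') +
                          eui64.to_bytes(8, 'big')).digest()
    global_id = int.from_bytes(digest[-5:], 'big')

    # fd00::/8 (L bit set) followed by the 40-bit Global ID is the
    # site's 48-bit prefix.
    prefix = (0xFD << 40) | global_id
    groups = [(prefix >> s) & 0xFFFF for s in (32, 16, 0)]
    return ':'.join(f'{g:x}' for g in groups) + '::/48'
```

Running it yields something like fd3a:9c12:77e0::/48, different on every machine and every run, which is exactly the point.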


One thing I'm not keen on is that SixXS have created a voluntary
registry of the non-central ULAs. By creating a registry, I think some
people who use it will then think that their ULA prefix is now
guaranteed globally unique and is theirs forever. If there ever was a
collision, those people are likely to point to that completely
voluntary registry, say "I had it first", and refuse to accept that
the registry has no status or authority over the random ULA address
space.

Which is part one of the three things that have to happen to make ULA
really bad for the internet.

Part 2 will be when the first provider accepts a large sum of money to
route it within their public network between multiple sites owned by
the same customer.


That same customer is also going to have enough global address
space to be able to reach other global destinations, at least enough
space for all nodes that are permitted to access the Internet, if not
more. Proper global address space ensures that if a global destination
is reachable, then there is a high probability of successfully reaching
it. The scope of external ULA reachability, regardless of how much
money is thrown at the problem, isn't going to be as good as proper
global addresses.

For private site interconnect, I'd think it more likely that the
provider would isolate the customer's traffic and ULA address space via
something like a VPN service, e.g. MPLS or IPsec.

Part 3 will be when that same provider (or some other provider in the
same boat) takes the next step and starts trading routes of ULA space
with other provider(s).

At that point, ULA = GUA without policy = very bad thing (tm).

Since feature creep of this form is kind of a given in internet history,
I have no reason to believe it won't happen eventually with ULA.


So I'm not sure I can see much benefit in paying a huge amount of
money to have ULA address space put into only a limited part/domain of
the global route table. The only way to have ULA = GUA is to pay
everybody on the Internet to carry it, and that assumes everybody
would be willing to accept the money in the first place. That'd be far
more expensive than just using GUA addresses for global reachability.

Regards,
Mark.
