nanog mailing list archives

RE: was bogon filters, now "Brief Segue on 1918"


From: "TJ" <trejrco () gmail com>
Date: Wed, 6 Aug 2008 15:35:49 -0400

I think the problem is that operational reality (ease of use, visual
clarity, etc.) has long since won the war against strictly efficient use
of the numbering space.

Things like assigning /24s per VLAN make the routing table easy to read,
subnets easy to assign, etc.
        Starting from the bottom up, the next easy segregation point is /16s
per site.
        That yields just over 250 sites, each with just over 250 network
segments, each supporting up to 250 or so users.
        Easy aggregation & summarization, easy to "own and operate" ...
grossly inefficient, but common.
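
A quick Python sketch of that hierarchy (assuming the usual 10.0.0.0/8
base, which isn't named above):

    # Sketch of the "/16 per site, /24 per VLAN" plan; 10.0.0.0/8 is assumed.
    import ipaddress

    base = ipaddress.ip_network("10.0.0.0/8")
    sites = list(base.subnets(new_prefix=16))      # one /16 per site
    vlans = list(sites[0].subnets(new_prefix=24))  # one /24 per VLAN in a site
    hosts = vlans[0].num_addresses - 2             # minus network + broadcast

    print(len(sites))  # 256 sites
    print(len(vlans))  # 256 VLANs per site
    print(hosts)       # 254 hosts per VLAN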

So, right or wrong is largely irrelevant - it just is.
Now go into that environment and push for a strictly-speaking efficient
allocation mechanism and let me know what kind of traction you get.


Moving forward, we can try to do things right in our IPv6 networks ...
assuming we don't inherit too much of the cruft from above.
        Use the extra bits for flexible allocation while still maintaining
aggregation / summarization - it can be done.
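
        For instance, a hedged sketch of nibble-aligned carving
(2001:db8::/32 is documentation space; the /48-per-site split is an
assumption, not something stated above):

    # Nibble-aligned IPv6 allocation that keeps aggregation intact.
    # 2001:db8::/32 is documentation space; /48 per site is an assumption.
    import ipaddress

    org = ipaddress.ip_network("2001:db8::/32")
    sites = org.subnets(new_prefix=48)      # 65,536 aggregable site /48s
    first = next(sites)
    lans = first.subnets(new_prefix=64)     # 65,536 /64 segments per site

    print(first)       # 2001:db8::/48
    print(next(lans))  # 2001:db8::/64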



... now let's get some work done.
/TJ


-----Original Message-----
From: Darden, Patrick S. [mailto:darden () armc org]
Sent: Wednesday, August 06, 2008 1:48 PM
To: Joel Jaeggli
Cc: nanog () nanog org
Subject: RE: was bogon filters, now "Brief Segue on 1918"


I'll reply below with //s.  My point is still: most companies do not use
RFC1918 correctly.  Your point seemed to be that it is not a large enough
allocation of IPs for an international enterprise of 80K souls.  My rebuttal
is: nearly 16.8 million IPs isn't enough?
--p

-----Original Message-----
From: Joel Jaeggli [mailto:joelja () bogus com]
Sent: Wednesday, August 06, 2008 1:31 PM
To: Darden, Patrick S.
Cc: nanog () nanog org
Subject: Re: was bogon filters, now "Brief Segue on 1918"


That's comical, thanks.  Come back when you've done it.
//Ok.

Marshall is correct.
//Ok.

If you'd like to avoid constant renumbering you need a sparser allocation
model.  You're still going to have collisions with your suppliers and
acquisitions, and some applications (e.g. labs, factory automation systems,
etc.) have orders of magnitude larger address space requirements than the
number of humans using them implies.
//You used the metric of 80K people.  Now you say it is a bad metric when I
reply using it.  Your fault, and you compound it--you don't provide a better
one.  What are we talking about then?  100 IPs per person--say each person
has 10 PCs, 10 printers, 10 automated factory machines, 10 lab instruments,
59 servers and the soda machine on their network?  80,000*100 == 8 million IP
addresses.  That leaves you with roughly 8.8 million...  And that includes
80,000 networked soda machines.  I don't think you have that many soda
machines.  Even on 5 continents.  Even with your growing Asian market, your
suppliers, and the whole marketing team.
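
A back-of-the-envelope check of those figures (the 100-devices-per-person
budget is the hypothetical from above, not a measured number):

    # Back-of-the-envelope check; 100 devices per person is the hypothetical.
    people = 80_000
    devices_per_person = 100          # PCs, printers, lab gear, soda machines
    needed = people * devices_per_person

    ten_slash_eight = 2 ** 24         # addresses in 10.0.0.0/8
    print(f"needed:    {needed:,}")                    # 8,000,000
    print(f"available: {ten_slash_eight:,}")           # 16,777,216
    print(f"left over: {ten_slash_eight - needed:,}")  # 8,777,216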


In practice individual sites might be assigned between a /22 and a /16, with
sites with exotic requirements having multiple assignments, potentially from
different non-interconnected networks (but still with internal uniqueness
requirements).
//Err.  Doing it wrong does not justify doing it wrong.
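
A hedged sketch of the sparse model described above (the site names and the
/22-within-a-reserved-/16 split are illustrative, not from the thread):

    # Sparse model: reserve a /16 per site, put only the needed /22 in use,
    # so sites can grow without renumbering.  All names are illustrative.
    import ipaddress

    base = ipaddress.ip_network("10.0.0.0/8")
    blocks = base.subnets(new_prefix=16)        # reserve a /16 per site

    for name in ["hq", "lab", "factory"]:
        reserved = next(blocks)
        in_use = next(reserved.subnets(new_prefix=22))  # assign a /22 for now
        print(name, "reserved", reserved, "in use", in_use)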




