nanog mailing list archives

Re: standards for giving out blocks of IP addresses


From: up () 3 am
Date: Sat, 16 Jun 2001 12:46:38 -0400 (EDT)



It's not really a question of what makes sense, it's what you need to do
to keep ARIN happy.  As an ISP, if you only apply the 25% / 50% rule to
your customers, how are you supposed to demonstrate 80% utilization to
ARIN when requesting any kind of allocation?

If you've handed out a whole bunch of /24 - /29 subnets to your customers
and they are compliant with RFC 2050, this could well result in a situation
where you've depleted nearly all of your address space, yet are nowhere
near 80% utilization of your, say, /21 from your upstream.  Is ARIN going
to allocate you a /20?
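Concretely, here's a rough sketch of that arithmetic (the subnet mix and
per-customer host counts below are invented for illustration, not from any
real allocation):

    # Sketch: an ISP hands out its entire /21 (2048 addresses) in
    # customer subnets that each meet RFC 2050's 25%-immediate /
    # 50%-within-a-year guideline, yet the aggregate utilization
    # ARIN measures stays far below 80%.  All numbers are made up.

    subnets = [
        # (prefix_length, number_of_subnets, hosts_in_use_per_subnet)
        (24, 2, 128),   # two /24s, each 50% full
        (25, 4, 40),    # four /25s at ~31%
        (26, 8, 20),    # eight /26s at ~31%
        (27, 8, 10),    # eight /27s at ~31%
        (29, 32, 3),    # thirty-two /29s at ~38%
    ]

    total = 2 ** (32 - 21)   # the /21 from upstream: 2048 addresses
    assigned = sum(2 ** (32 - p) * n for p, n, _ in subnets)
    in_use = sum(n * h for _, n, h in subnets)

    print(f"assigned to customers: {assigned}/{total} "
          f"({100 * assigned / total:.0f}%)")
    print(f"hosts actually in use: {in_use}/{total} "
          f"({100 * in_use / total:.0f}%)")
    # -> 100% of the block is handed out, but utilization is only ~37%.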

On Sat, 16 Jun 2001, Christopher A. Woodfield wrote:

The 80% utilization rule makes sense for ADDITIONAL allocations, where an 
end user already has space but needs more. Of course, exceptions can be 
made for large deployments (customer has 150 hosts on a single /24, but 
needs two more for a 400-host data center, etc.)

For initial allocations, the 50% rule makes the most sense, IMHO.
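A quick sketch of both thresholds as stated here (the helper names and
numbers are illustrative only, not ARIN's actual evaluation process):

    # Illustrative only: the 80%/50% cutoffs follow the text above,
    # but the helpers and example figures are made up.

    def more_space_justified(hosts_now, hosts_projected, current_size):
        """Additional allocation: current block is >=80% used, or the
        projected need simply can't fit in the current block."""
        return (hosts_now / current_size >= 0.80
                or hosts_projected > current_size)

    def initial_request_ok(hosts_projected, requested_size):
        """Initial allocation: expect at least 50% of the requested
        block to be used."""
        return hosts_projected >= 0.50 * requested_size

    # The exception above: 150 hosts on a /24 is only ~59% utilized,
    # but a 400-host data center can't fit there, so two more /24s
    # are reasonable.
    print(more_space_justified(150, 150 + 400, 256))   # True
    print(initial_request_ok(550, 3 * 256))            # True: ~72% projected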

-Chris

On Sat, Jun 16, 2001 at 11:10:08AM -0400, Charles Scott wrote:

On Sat, 16 Jun 2001 up () 3 am wrote:

IIRC, Sprint wanted us to show 80% utilization within 3 months(!), citing
ARIN guidelines...

James:
  That's for allocations to ISPs. The RFC referred to end-user utilization
of the address space (see http://www.arin.net/regserv/ip-assignment.html).
I've seen some ISPs incorrectly quote the 80% utilization figure to their
customers and expect them to reach it before assigning them more IP
address space.

Chuck



-- 
---------------------------
Christopher A. Woodfield              rekoil () semihuman com

PGP Public Key: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xB887618B


James Smallacombe                     PlantageNet, Inc. CEO and Janitor
up () 3 am                                                          http://3.am
=========================================================================


