nanog mailing list archives

Re: [NOC] ARIN contact needed: something bad happens with legacy IPv4 block's reverse delegations


From: Brett Frankenberger <rbf+nanog () panix com>
Date: Mon, 20 Mar 2017 14:27:52 -0500

On Sat, Mar 18, 2017 at 09:27:11PM -0700, Doug Barton wrote:

As to why DNS-native zone operations are not utilized, the challenge
is that reverse DNS zones for IPv4, and the DNS operations on them,
fall on octet boundaries, while IPv4 address blocks may be aligned on
any bit boundary.

Yes, deeply familiar with that problem. Are you dealing with any address
blocks smaller than a /24? If the answer is no (which it almost certainly
is), what challenges are you facing that you haven't figured out how to
overcome yet? (Even < /24 blocks can be dealt with, obviously, but I'd be
interested to learn that there are problems with /24 and up that are too
difficult to solve.)

Hypothetically:

10.11.0.0/16 (11.10.in-addr.arpa) is managed by ARIN
10.11.16.0/20 is ARIN space
10.11.32.0/20 is RIPE space
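
For concreteness, here's a rough Python sketch -- nothing to do with
any registry's actual tooling, and the helper name is mine -- of how
blocks like these land on reverse zones:

    import ipaddress

    def reverse_zones(cidr):
        """in-addr.arpa zones, at /24 granularity, covering an IPv4 block."""
        net = ipaddress.ip_network(cidr)
        if net.prefixlen > 24:
            # a sub-/24 block gets no whole zone of its own; that's the
            # classless-delegation special case alluded to above
            raise ValueError("smaller than a /24: needs classless tricks")
        zones = []
        for sub in net.subnets(new_prefix=24):
            o1, o2, o3, _ = str(sub.network_address).split(".")
            zones.append(f"{o3}.{o2}.{o1}.in-addr.arpa")
        return zones

    reverse_zones("10.11.16.0/20")
    # ['16.11.10.in-addr.arpa', ..., '31.11.10.in-addr.arpa']
    reverse_zones("10.11.32.0/20")
    # ['32.11.10.in-addr.arpa', ..., '47.11.10.in-addr.arpa']

The /16 as a whole lines up with the single zone 11.10.in-addr.arpa,
but each /20 inside it is a run of sixteen separate /24-sized zone
cuts, which is the octet-boundary mismatch mentioned above.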

If ARIN delegated 32.11.10.in-addr.arpa through 47.11.10.in-addr.arpa
to a RIPE nameserver, there's no good way for RIPE to then delegate,
say, 10.11.34.0/24 (34.11.10.in-addr.arpa) to the nameserver of the
entity to which RIPE has allocated 10.11.34.0.  (Sure, it can be done,
using the same techniques as are used for allocations of
longer-than-/24, but recipients of /24 and larger reasonably expect to
have the X.X.X.in-addr.arpa delegated to their nameservers.)
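
To illustrate that "same techniques" aside: the usual trick is RFC
2317-style CNAME indirection, which RIPE could publish inside
34.11.10.in-addr.arpa if it had to.  A rough sketch, with the customer
zone and nameserver names invented:

    # one CNAME per PTR owner name, pointing into a zone the /24
    # holder actually runs (target name is made up for illustration)
    target_zone = "34.11.10.rev.customer.example."
    for last_octet in range(1, 255):
        print(f"{last_octet}.34.11.10.in-addr.arpa. IN CNAME "
              f"{last_octet}.{target_zone}")
    # what the /24 holder actually expects, though, is a plain
    #   34.11.10.in-addr.arpa. IN NS ns1.customer.example.
    # delegation in the parent zone, and the parent here is ARIN's.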

So, instead, RIPE communicates to ARIN the proper delegations for
32.11.10.in-addr.arpa through 47.11.10.in-addr.arpa, and ARIN merges
those into 11.10.in-addr.arpa.
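
Conceptually, the merged 11.10.in-addr.arpa then carries both sets of
delegations side by side.  A toy rendering (every nameserver name here
is invented):

    # third-octet label -> NS targets; ARIN-registered and
    # RIPE-supplied entries become ordinary sibling delegations
    delegations = {
        "16": ["ns1.arin-registrant.example."],              # via ARIN
        "34": ["ns1.customer.example.",
               "ns2.customer.example."],                     # via RIPE
        "35": ["ns.other-registrant.example."],               # via RIPE
    }
    for label, nameservers in delegations.items():
        for ns in nameservers:
            print(f"{label}.11.10.in-addr.arpa. IN NS {ns}")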

One way for RIPE to communicate those delegations to ARIN is to put
them into some other zone, which ARIN could then zone-transfer.  But
ARIN would still need a process to merge the data from that other zone
with the real 11.10.in-addr.arpa zone, and that merge step carries the
same risks as the current process, which apparently communicates those
delegations via something other than zone transfer.
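
A very rough dnspython sketch of that transfer-and-merge idea; the
staging zone name, server address, and layout are all assumptions made
for the example, not a description of anyone's actual process:

    import dns.query
    import dns.rdatatype
    import dns.zone

    # 1. AXFR the staging zone that (hypothetically) RIPE publishes,
    #    with one label per third octet it is responsible for.
    staging = dns.zone.from_xfr(
        dns.query.xfr("192.0.2.53", "erx-staging.example"))

    # 2. Keep only the NS sets for labels 32..47, i.e. the delegations
    #    destined for 11.10.in-addr.arpa.
    wanted = {str(i) for i in range(32, 48)}
    merged = {}
    for name, rdataset in staging.iterate_rdatasets(dns.rdatatype.NS):
        label = str(name).split(".")[0]
        if label in wanted:
            merged[label] = [str(ns) for ns in rdataset]

    # 3. Folding these into the real 11.10.in-addr.arpa zone is the
    #    step that can break, which is the same exposure the current
    #    non-AXFR channel has.
    for label in sorted(merged, key=int):
        for ns in merged[label]:
            print(f"{label}.11.10.in-addr.arpa. IN NS {ns}")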

     -- Brett

