nanog mailing list archives

Re: Theoretical question about cyclic dependency in IRR filtering


From: "J. Hellenthal via NANOG" <nanog () nanog org>
Date: Mon, 29 Nov 2021 19:33:51 -0600

To coin a phrase ... IRR (dedup)

-- 
 J. Hellenthal

The fact that there's a highway to Hell but only a stairway to Heaven says a lot about anticipated traffic volume.

On Nov 29, 2021, at 07:17, Job Snijders via NANOG <nanog () nanog org> wrote:


Hi Anurag,

Circular dependencies definitely are a thing to keep in mind when designing IRR and RPKI pipelines!

In the case of IRR: It is quite rare to query the RIR IRR services directly. Instead, the common practice is that
utilities such as bgpq3, peval, and bgpq4 query “IRRd” (https://IRRd.net) instances at, for example, whois.radb.net
and rr.ntt.net. You can verify this with tcpdump. These IRRd instances serve as intermediate caches, and will
continue to serve old cached data in case the origin is down. This aspect of the global IRR deployment avoids a lot
of the potential for circular dependencies.

Also, some organisations use threshold checks before deploying new IRR-based filters, to reduce the risk of “misfiring”.
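
As a hedged sketch of such a threshold check (the function name and the 10% figure are assumptions, not any
standard):

    # Refuse to deploy a freshly generated prefix list if it shrank too
    # much relative to what is currently deployed; a sudden large shrink
    # usually means the IRR source was unreachable or returned partial
    # data, rather than that the prefixes really went away.
    def safe_to_deploy(old_prefixes, new_prefixes, max_shrink=0.10):
        if not old_prefixes:
            return True  # nothing deployed yet, no baseline to compare
        lost = len(set(old_prefixes) - set(new_prefixes))
        return lost / len(old_prefixes) <= max_shrink

    # Losing 4 of 5 prefixes trips the guard and keeps the old filter.
    assert not safe_to_deploy({"p1", "p2", "p3", "p4", "p5"}, {"p1"})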


The RPKI case is slightly different: the timers are far more aggressive than in IRR, and until “Publish in
Parent” (RFC 8181) becomes commonplace, there are more publication points, and thus more potential for operators to
paint themselves into a corner.

Certainly, in the case of RPKI, all Publication Point (PP) operators need to take special care not to host, inside
the PP, CAs which have the PP’s own INRs listed as subordinate resources.
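
A rough illustration of that self-dependency check (all addresses are documentation values, and the check is a
simplification of real resource certification):

    import ipaddress

    # Address where the PP's rsync/RRDP service is reachable (example).
    pp_host = ipaddress.ip_address("192.0.2.10")

    # INRs of the CAs whose objects are published at this PP (examples).
    subordinate_resources = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    # If the PP's own address is covered by resources it publishes, an
    # outage can invalidate the very routes needed to reach the PP.
    if any(pp_host in net for net in subordinate_resources):
        print("PP sits inside its own certified space: circular risk")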

See RFC 7115, Section 5, for more information: “Operators should be aware that there is a trade-off in placement of
an RPKI repository in address space for which the repository’s content is authoritative. On one hand, an operator
will wish to maximize control over the repository. On the other hand, if there are reachability problems to the
address space, changes in the repository to correct them may not be easily accessed by others.”

Ryan Sleevi once told me: "yes, it strikes me that you should prevent self-compromise from being able to perpetually 
own yourself, by limiting an attacker’s ability to persist beyond remediation."

A possible duct tape approach is outlined at https://bgpfilterguide.nlnog.net/guides/slurm_ta/
However, I can’t really recommend the SLURM file approach. Instead, RPKI repository operators are probably best off 
hosting their repository *outside* their own address space.
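
For reference, a minimal SLURM file per RFC 8416, in the spirit of that guide (the ASN and prefix are documentation
values, and, as said, this is duct tape rather than a recommendation):

    import json

    # Locally assert a ROA so that a publication point outage cannot
    # flip the covering route to RPKI-invalid on *our own* validators.
    slurm = {
        "slurmVersion": 1,
        "validationOutputFilters": {
            "prefixFilters": [],
            "bgpsecFilters": [],
        },
        "locallyAddedAssertions": {
            "prefixAssertions": [
                {
                    "asn": 64496,
                    "prefix": "192.0.2.0/24",
                    "comment": "pin our own PP's covering prefix locally",
                }
            ],
            "bgpsecAssertions": [],
        },
    }

    with open("slurm.json", "w") as f:
        json.dump(slurm, f, indent=2)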

Just like with Authoritative DNS servers, make sure you also can serve your records via a competitor! :-)

For example, if ARIN moved one of their three publication point clusters into address space managed by any of the 
other four RIRs, some risk would be reduced.

Kind regards,

Job

On Mon, 29 Nov 2021 at 13:37, Anurag Bhatia <me () anuragbhatia com> wrote:
Hello everyone, 

While discussing IRR on some groups recently, I was wondering whether there can be (and whether there is) a cyclic
dependency in filtering, where an IRR (run by whoever: APNIC, RIPE, RADB, etc.) uses some upstream, and that
upstream accepts only routes with an existing and valid route object.



So, a hypothetical case (it can apply to any IRR):

APNIC's registry source is whois.apnic.net, which points to 202.12.28.136 / 2001:dc0:1:0:4777::136. The aggregates
covering both of these have valid route objects at the APNIC registry itself.

Their upstreams, say AS X, Y, and Z, have tooling in place to generate and push filters by checking all popular
IRRs. All is well up to this point.

Say APNIC has some server/service issue for a few minutes while X, Y, and Z are updating their filters. They cannot
contact whois.apnic.net and hence miss generating filters for all prefixes whose route objects are hosted in the
APNIC IRR.

X, Y, and Z drop APNIC prefixes, including those of the IRR itself, and the loop goes on from this point onwards.
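
In code terms, the tooling behaviour I'm assuming looks something like this (names made up, just to be concrete
about the failure mode):

    # Fail-closed: an unreachable IRR yields an empty result, which
    # becomes an empty filter, which drops the IRR's own prefixes and
    # sustains the loop. Falling back to last-known-good data instead
    # would break the loop.
    def next_filter(query_result, last_known_good):
        if query_result is None:  # IRR unreachable
            return []             # fail closed: deploy an empty filter
        return query_result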

So my question is: can that actually happen?
If not, do X, Y, and Z (and possibly all upstreams up to the default-free zone) treat these prefixes in a special
manner to avoid such a loop in resolution?




Thanks! 

-- 
Anurag Bhatia
anuragbhatia.com
