nanog mailing list archives

Re: Open Source Network Operating Systems


From: Raymond Burkholder <ray () oneunified net>
Date: Wed, 17 Jan 2018 23:18:21 -0400

On 01/17/2018 07:48 PM, Hugo Slabbert wrote:
On Wed 2018-Jan-17 23:11:14 +0000, Matthew Smee <matthew.smee () sydney edu au> wrote:
Yeah, it'd be silly for organisations to try and standardise their environments for services or infrastructure.

Was this spoken tongue-in-cheek, or in all seriousness?

I would say there is power in standardization. And, yes, there is risk depending upon your attack surface.

When we concern ourselves with the attack surface, how much time do we spend mitigating issues on the edge (the attacked surface), versus on internal infrastructure where we can apply labour- and effort-saving automation and orchestration?


I'm somewhat in two minds there.  Options to tackle operational complexity/expense:

Option 1: Require a homogeneous environment or minimize vendors/platforms as much as possible.

Maybe deal with it on a case-by-case basis? Or implement solutions which are 'easy' to mitigate?


Option 2: Accept vendor/platform diversity as inevitable and build systems/abstractions around that.

And do this in an intelligent manner, depending upon management and risk profiles.

Infrastructure tends to be wide-ranging. When one thinks about the big picture, where do you _really_ need the diversity, and where can you gain the most by standardization? Standard engineering response: it depends.

And I hope that readers are not trying to draw a line in the sand. I'm hoping that we are open to optimization and orchestration based upon the infrastructure at hand.
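
(As a rough sketch of what the abstraction layer in #2 can look like: something like NAPALM lets you ask different vendors' gear the same question through one API. The driver names, hostnames, and credentials below are placeholders, not a statement about anyone's actual setup.)

    # Same calls against two different vendors' boxes; napalm hides the CLI differences.
    from napalm import get_network_driver

    devices = [
        ("eos", "border1.example.net"),    # e.g. an Arista platform
        ("iosxr", "border2.example.net"),  # e.g. a Cisco IOS-XR platform
    ]

    for platform, hostname in devices:
        driver = get_network_driver(platform)
        device = driver(hostname=hostname, username="netops", password="secret")
        device.open()
        facts = device.get_facts()         # same dict layout regardless of vendor
        print(hostname, facts["os_version"], facts["uptime"])
        device.close()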


Is #1 achievable?  If you're expending time/effort/resources achieving #1 and fall short, don't you have to do #2 anyway?

Doesn't this go the other way? ... that if you are spending so much effort building infrastructure and you still can't get a maintainable/upgradeable/orchestratable solution in place, is #2 even relevant?


Much has also been said on monocultures in infrastructure: having a single bug impact all of your gear sucks.  If I can manage a pair of border routers, for instance, from two different vendors in an ...

but when I think about this, I'm not thinking about just border routers; I'm thinking about core routing, virtualization infrastructure, carrying customer private circuits, delivering traffic to individual customers, gear for telemetry, implementing security, ...

When you think about all the devices involved in the various levels and styles of service delivery, there are ways to make that homogeneous, and much of it has various attack surfaces, and, well, vendors have their strengths and weaknesses, so ...

#1 a homogeneous network makes it easier to intimately understand the possible weaknesses and attempt protection mechanisms, but for

#2 with multiple vendors, the effort on platform education increases, depending upon the size of your shop, ...

... abstracted/consistent enough manner that I don't deal with their idiosyncrasies on a daily basis, am I not better off than running a single platform / code train in that function?

or across many functions?
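
(A rough sketch of "abstracted/consistent enough": keep one intent model and render per-vendor configuration from it. The templates and values below are purely illustrative, assuming Jinja2; real templates would live in your config repo.)

    # One BGP-neighbour intent, two vendor-flavoured renderings of it.
    from jinja2 import Template

    intent = {"neighbor": "192.0.2.1", "remote_as": 64512, "local_as": 64500}

    templates = {
        "vendor_a": Template(
            "router bgp {{ local_as }}\n"
            " neighbor {{ neighbor }} remote-as {{ remote_as }}\n"
        ),
        "vendor_b": Template(
            "protocols bgp group peers neighbor {{ neighbor }} peer-as {{ remote_as }}\n"
        ),
    }

    for vendor, template in templates.items():
        print("---", vendor, "---")
        print(template.render(**intent))

The day-to-day workflow then only ever touches the intent, not the vendor syntax.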



--
Raymond Burkholder
ray () oneunified net
https://blog.raymond.burkholder.net


