nanog mailing list archives

Re: The Real AI Threat?


From: Mel Beckman <mel () beckman org>
Date: Thu, 10 Dec 2020 18:38:41 +0000

Jeez... some guys seem to take a joke literally - while ignoring a real and present danger - which was the point.

Miles,

With all due respect, you didn't present this as a joke. You presented "AI self-healing systems gone wild" as a genuine 
risk. Which it isn't. In fact, AI fear mongering is a seriously debilitating factor in technology policy, where 
policymakers and pundits, who also don't get "the joke," lobby for silly laws and make ridiculous predictions, such 
as Elon Musk's claim that, by 2025, "AI will be where AI is conscious and vastly smarter than humans."

That’s the kind of ignorance that will waste billions of dollars. No joke.

 -mel



On Dec 10, 2020, at 8:47 AM, Miles Fidelman <mfidelman () meetinghouse net> wrote:

Ahh.... invasive spambots, running on OpenStack ... "the telephone bell is tolling... "

Miles

adamv0025 () netconsultings com wrote:
Automated resource discovery + automated resource allocation = recipe for disaster
That is literally how OpenStack works.

For now, don't worry about AI taking away your freedom on its own; rather, worry about how people using it might…


adam

From: NANOG <nanog-bounces+adamv0025=netconsultings.com () nanog org> On Behalf Of Miles Fidelman
Sent: Thursday, December 10, 2020 2:44 PM
To: 'NANOG' <nanog () nanog org>
Subject: Re: The Real AI Threat?

adamv0025 () netconsultings com wrote:

Put them together, and the nightmare scenario is:

- machine learning algorithm detects need for more resources

All good so far

- machine learning algorithm makes use of vulnerability analysis library to find other systems with resources to spare, and starts attaching those resources

Right, so a company would have built, trained, and fine-tuned an AI, or would have bought such a product and implemented it as 
part of its NMS/DDoS mitigation suite, to do the above?
What is the probability of anyone thinking that to be a good idea?
To me that sounds like an AI-based virus rather than a tool one would want to develop, or buy from a third party and 
then integrate into day-to-day operations.

You can't take, for instance, AlphaZero or GPT-3 and make it do the above. You'd have to train it to do so over millions of 
examples and trials.
Oh, and also, these won't "wake up" one day and "think" to themselves: oh, I'm fed up with Atari games, I'm going to teach 
myself some chess and then do some reading on wiki about the chess rules.

Jeez... some guys seem to take a joke literally - while ignoring a real and present danger - which was the point.

Meanwhile, yes, I think that a poorly ENGINEERED DDoS mitigation suite might well have failure modes that just keep 
eating up resources until systems start crashing all over the place.  Heck, spinning off processes until all available 
resources have been exhausted has been a failure mode of systems for years.  Automated resource discovery + automated 
resource allocation = recipe for disaster.  (No need for AIs eating the world.)
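That runaway-allocation failure mode can be sketched as a toy control loop. Everything here is illustrative, not any real product's logic: the function names, the growth rule, and especially the pathological assumption that each added instance generates its own unit of load (health checks, replication, discovery chatter) exceeding the per-instance target, so scaling can never catch up:

```python
# Toy sketch: a "self-healing" scaler whose only remedy for overload
# is to add capacity, with no upper bound on instance count.

def scale_step(current_instances: int, observed_load: float,
               target_load_per_instance: float = 0.7) -> int:
    """Return a new instance count; grow by 50% whenever overloaded."""
    if observed_load / current_instances > target_load_per_instance:
        return current_instances + max(1, current_instances // 2)
    return current_instances

# Pathological assumption: each added instance itself contributes one
# full unit of load. Since 1.0 > the 0.7 per-instance target, every
# scale-up leaves the system still "overloaded."
OVERHEAD_PER_INSTANCE = 1.0

instances, load = 4, 5.0
for _ in range(6):
    new = scale_step(instances, load)
    load += (new - instances) * OVERHEAD_PER_INSTANCE
    instances = new

# instances: 4 -> 6 -> 9 -> 13 -> 19 -> 28 -> 42, and still climbing.
```

The point of the sketch: whether the loop diverges has nothing to do with how smart the controller is, only with whether added capacity brings more load than it absorbs. The safety mechanism is a hard cap (or scaling on useful work rather than raw load), not the learning algorithm.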

Miles






--

In theory, there is no difference between theory and practice.
In practice, there is.  .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown


