Interesting People mailing list archives

Re: How Might Artificial Intelligence Affect the Risk of Nuclear War?


From: "Dave Farber" <farber () gmail com>
Date: Sun, 29 Apr 2018 20:24:32 -0400




Begin forwarded message:

From: Bob Hinden <bob.hinden () gmail com>
Date: April 29, 2018 at 8:22:39 PM EDT
To: Dave Farber <dave () farber net>
Cc: Bob Hinden <bob.hinden () gmail com>
Subject: Re: [IP] How Might Artificial Intelligence Affect the Risk of Nuclear War?

Dave,

For IP if you please.

I would think there are parallel articles of the form:

  How Might <FOO> Affect the Risk of Nuclear War?

Where <FOO> is one of Artificial Intelligence, Pollution, Trump, Blockchain, Tariffs, Butterflies, the Internet,
etc., etc.  Pick your own favorite topic.

I suppose one could write an AI program to produce articles like this with only the topic as input.
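
A minimal sketch of such a generator, assuming nothing more sophisticated than string formatting; the template and topic list here are purely illustrative, not anything from the RAND piece:

  # Hypothetical headline generator, in the spirit of the joke above.
  # Template and topic list are illustrative assumptions only.
  TEMPLATE = "How Might {foo} Affect the Risk of Nuclear War?"

  TOPICS = [
      "Artificial Intelligence", "Pollution", "Blockchain",
      "Tariffs", "Butterflies", "the Internet",
  ]

  def headlines(topics):
      """Yield one templated headline per topic."""
      for topic in topics:
          yield TEMPLATE.format(foo=topic)

  if __name__ == "__main__":
      for line in headlines(TOPICS):
          print(line)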

No slight intended to the author of this article.

Bob



On Apr 29, 2018, at 10:36 AM, Dave Farber <dave () farber net> wrote:


---------- Forwarded message ---------
From: José María Mateos <chema () rinzewind org>
Date: Sun, Apr 29, 2018 at 12:42 PM
Subject: How Might Artificial Intelligence Affect the Risk of Nuclear War?
To: Dave Farber <dave () farber net>


For IP, if you want.

https://www.rand.org/pubs/perspectives/PE296.html

Abstract:

Advances in artificial intelligence (AI) are enabling previously
infeasible capabilities, potentially destabilizing the delicate balances
that have forestalled nuclear war since 1945. Will these developments
upset the nuclear strategic balance, and, if so, for better or for
worse? To start to address this question, RAND researchers held a series
of workshops that were attended by prominent experts on AI and nuclear
security. The workshops examined the impact of advanced computing on
nuclear security through 2040. The culmination of those workshops, this
Perspective — one of a series that examines critical security challenges
in 2040 — places the intersection of AI and nuclear war in historical
context and characterizes the range of expert opinions. It then
describes the types of anticipated concerns and benefits through two
illustrative examples: AI for detection and for tracking and targeting
and AI as a trusted adviser in escalation decisions. In view of the
capabilities that AI may be expected to enable and how adversaries may
perceive them, AI has the potential to exacerbate emerging challenges to
nuclear strategic stability by the year 2040 even with only modest rates
of technical progress. Thus, it is important to understand how this
might happen and to assure that it does not.

Cheers,

--
José María (Chema) Mateos
https://rinzewind.org/blog-es || https://rinzewind.org/blog-en




