Interesting People mailing list archives

A bit more on -- Magical thinking about machine learning won't bring the reality of AI any closer


From: "Dave Farber" <farber () gmail com>
Date: Mon, 6 Aug 2018 10:12:45 +0900



Begin forwarded message:

From: Dewayne Hendricks <dewayne () warpspeed com>
Subject: [Dewayne-Net] Magical thinking about machine learning won't bring the reality of AI any closer
Date: August 6, 2018 6:32:19 JST
To: Multiple recipients of Dewayne-Net <dewayne-net () warpspeed com>
Reply-To: dewayne-net () warpspeed com

Magical thinking about machine learning won’t bring the reality of AI any closer
Unchecked flaws in algorithms, and even the technology itself, should put a brake on the escalating use of big data
By John Naughton
Aug 5 2018
<https://www.theguardian.com/commentisfree/2018/aug/05/magical-thinking-about-machine-learning-will-not-bring-artificial-intelligence-any-closer>

“Any sufficiently advanced technology,” wrote the sci-fi éminence grise Arthur C Clarke, “is indistinguishable from 
magic.” This quotation, endlessly recycled by tech boosters, is possibly the most pernicious utterance Clarke ever 
made because it encourages hypnotised wonderment and disables our critical faculties. For if something is “magic” 
then by definition it is inexplicable. There’s no point in asking questions about it; just accept it for what it is, 
lie back and suspend disbelief.

Currently, the technology that most attracts magical thinking is artificial intelligence (AI). Enthusiasts portray it 
as the most important thing since the invention of the wheel. Pessimists view it as an existential threat to 
humanity: the first “superintelligent” machine we build will be the beginning of the end for humankind; the only 
question thereafter will be whether smart machines will keep us as pets.

In both cases there seems to be an inverse correlation between the intensity of people’s convictions about AI and 
their actual knowledge of the technology. The experts seem calmly sanguine, while the boosters seem blissfully 
unaware that the artificial “intelligence” they extol is actually a relatively mundane combination of machine 
learning (ML) plus big data.

ML uses statistical techniques to give computers the ability to “learn” – ie use data to progressively improve 
performance on a specific task, without being explicitly programmed. A machine-learning system is a bundle of 
algorithms that take in torrents of data at one end and spit out inferences, correlations, recommendations and 
possibly even decisions at the other end. And the technology is already ubiquitous: virtually every interaction we 
have with Google, Amazon, Facebook, Netflix, Spotify et al is mediated by machine-learning systems. It’s even got to 
the point where one prominent AI guru, Andrew Ng, likens ML to electricity.

To many corporate executives, a machine that can learn more about their customers than they ever knew seems magical. 
Think, for example, of the moment Walmart discovered that among the things their US customers stocked up on before a 
hurricane warning – apart from the usual stuff – were beer and strawberry Pop-Tarts! Inevitably, corporate enthusiasm 
for the magical technology soon spread beyond supermarket stock-controllers to public authorities. Machine learning 
rapidly found its way into traffic forecasting, “predictive” policing (in which ML highlights areas where crime is 
“more likely”), decisions about prisoner parole, and so on. Among the rationales for this feeding frenzy are 
increased efficiency, better policing, more “objective” decision-making and, of course, providing more responsive 
public services.
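How does a correlation like the Pop-Tarts finding surface in the first place? One common approach is to compare
an item's purchase rate under some condition against its baseline rate, a ratio retailers call "lift". The sketch
below uses invented transaction records purely to illustrate the calculation; it is not Walmart's method.

    # A hypothetical lift calculation: how much more often is an item
    # bought during a hurricane warning than overall? All data invented.
    from collections import Counter

    # (item, bought_during_warning) records
    transactions = [
        ("pop_tarts", True), ("pop_tarts", True), ("beer", True),
        ("pop_tarts", False), ("milk", True), ("milk", False),
        ("beer", False), ("milk", False), ("bread", False),
    ]

    during = Counter(item for item, warning in transactions if warning)
    total = Counter(item for item, _ in transactions)
    n_during = sum(1 for _, warning in transactions if warning)
    n_total = len(transactions)

    for item in total:
        rate_during = during[item] / n_during
        rate_overall = total[item] / n_total
        lift = rate_during / rate_overall
        print(f"{item}: lift {lift:.2f}")  # lift well above 1 suggests association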

This “mission creep” has not gone unnoticed. Critics have pointed out that the old computing adage “garbage in, 
garbage out” also applies to ML. If the data from which a machine “learns” is biased, then the outputs will reflect 
those biases. And this could become generalised: we may have created a technology that – however good it is at 
recommending films you might like – may actually morph into a powerful amplifier of social, economic and cultural 
inequalities.
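The "garbage in, garbage out" point is easy to demonstrate. In the sketch below, the historical labels encode a
bias against one group, and the fitted model faithfully reproduces it for equally qualified individuals. The
data-generating rule is an invented assumption, chosen only to make the mechanism visible.

    # A minimal sketch of biased data producing biased outputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)       # 0 or 1: a protected attribute
    skill = rng.normal(0, 1, n)         # what we actually want to predict

    # Biased historical labels: qualified members of group 1 were
    # approved only half the time, regardless of skill.
    label = ((skill > 0) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

    X = np.column_stack([group, skill])
    model = LogisticRegression().fit(X, label)

    # Two applicants, identical skill, differing only in group:
    print(model.predict_proba([[0, 1.0], [1, 1.0]])[:, 1])
    # The predicted probabilities differ; the model has absorbed the bias.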

In all of this sociopolitical criticism of ML, however, what has gone unchallenged is the idea that the technology 
itself is technically sound – in other words that any problematic outcomes it produces are, ultimately, down to flaws 
in the input data. But now it turns out that this comforting assumption may also be questionable. At the most recent 
NIPS (Neural Information Processing Systems) conference – the huge annual gathering of ML experts – Ali Rahimi, one 
of the field’s acknowledged stars, lobbed an intellectual grenade into the audience. In a remarkable lecture he 
likened ML to medieval alchemy. Both fields worked to a certain extent – alchemists discovered metallurgy and 
glass-making; ML researchers have built machines that can beat human Go champions and identify objects from pictures. 
But just as alchemy lacked a scientific basis, so, argued Rahimi, does ML. Researchers, he claimed, often can’t 
explain the inner workings of their mathematical models: they lack rigorous theoretical understandings of their tools 
and in that sense are currently operating in alchemical rather than scientific mode.

[snip]

Dewayne-Net RSS Feed: http://dewaynenet.wordpress.com/feed/
Twitter: https://twitter.com/wa8dzp





