Interesting People mailing list archives

The Dark Secret at the Heart of AI


From: "Dave Farber" <farber () gmail com>
Date: Sun, 28 May 2017 14:18:25 -0400




Begin forwarded message:

From: Dewayne Hendricks <dewayne () warpspeed com>
Date: May 28, 2017 at 1:41:09 PM EDT
To: Multiple recipients of Dewayne-Net <dewayne-net () warpspeed com>
Subject: [Dewayne-Net] The Dark Secret at the Heart of AI
Reply-To: dewayne-net () warpspeed com

The Dark Secret at the Heart of AI
No one really knows how the most advanced algorithms do what they do. That could be a problem.
By Will Knight
Apr 11 2017
<https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/>

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The 
experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous 
cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of 
artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, 
it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely 
clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of 
artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the 
brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one 
day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be 
difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to 
isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so 
that it could always explain why it did what it did.
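To make the idea concrete, here is a minimal sketch, in Python with PyTorch, of the kind of end-to-end imitation learning the article describes. This is not Nvidia's code: the network size, the 66x200 image shape, and the training data are all stand-ins invented for illustration. The point is only that the model learns to steer by matching recorded human steering angles, with no hand-written driving rules anywhere in the program.

import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # A small convolutional stack; a production system would be larger.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # 48 channels over a 6x23 grid for a 66x200 input image.
        self.head = nn.Sequential(
            nn.Linear(48 * 6 * 23, 100), nn.ReLU(),
            nn.Linear(100, 1),          # single output: a steering angle
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in for a real driving log: random camera frames and steering angles.
images = torch.randn(32, 3, 66, 200)
angles = torch.randn(32, 1)

for step in range(10):
    pred = model(images)               # what the network would steer
    loss = loss_fn(pred, angles)       # penalize deviation from the human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

After training on enough real driving footage, a model like this maps pixels to a steering command in one opaque step: the "why" behind any particular turn is distributed across tens of thousands of learned weights rather than written down anywhere an engineer could point to.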

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI 
technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been 
widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that 
the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless 
other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more 
understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures 
might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who 
gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their 
reasoning. But banks, the military, employers, and others are now turning their attention to more complex 
machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most 
common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is 
already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who 
works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a 
military decision, you don’t want to just rely on a ‘black box’ method.”
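To see the contrast being drawn here, consider a small, hedged illustration (the loan data and feature names below are invented, not taken from any real lender): a simple logistic-regression model's reasoning can be read directly off its coefficients, while a deep network offers no comparable readout.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]   # invented

# Synthetic applicants and a synthetic approval rule for the model to recover.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# The model's "reasoning" is just its weights: each coefficient says how
# strongly, and in which direction, a feature pushes the decision.
for name, coef in zip(feature_names, clf.coef_[0]):
    print(f"{name:15s} weight = {coef:+.2f}")

# A deep network spreads its decision across millions of weights in many
# layers; there is no comparably direct account of why it said yes or no.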

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a 
fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to 
give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that 
seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend 
songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot 
understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which 
using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we 
find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make 
decisions differently from the way a human would? We’ve never before built machines that operate in ways their 
creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could 
be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI 
algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers 
of our time. 

[snip]

Dewayne-Net RSS Feed: <http://dewaynenet.wordpress.com/feed/>





