Interesting People mailing list archives

Re Tesla Driver In Fatal Florida Crash Got Numerous Warnings To Take Control Back From Autopilot


From: "Dave Farber" <farber () gmail com>
Date: Tue, 20 Jun 2017 13:34:30 -0500




Begin forwarded message:

From: Eric Grimm <ecgrimm () me com>
Date: June 20, 2017 at 1:12:09 PM CDT
To: dave () farber net
Cc: ip <ip () listbox com>, synthesis.law.and.technology () gmail com
Subject: Re: [IP] Re Tesla Driver In Fatal Florida Crash Got Numerous Warnings To Take Control Back From Autopilot

Dan Steinberg asks:

Why are we holding autonomous vehicles to a much higher standard than we do for people? 

Not sure who "we" is, or whether a standard of care, in the legal negligence sense, has been decided for autonomous 
vehicles -- in Canada, Florida, or anyplace else in the US.

When "we" do get around to setting a standard (will courts, Terms of Service drafted by Tesla, regulators or 
legislators perform this function? maybe something like RFCs for the Internet would be worth considering), why not 
expect automated systems to outperform humans most or all of the time?   And why not set the safety bar higher?

These seem like entirely fair questions.  The presupposition that human performance (and average performance is not the 
same as the performance of the best professional drivers) ought to be the right benchmark is a premise that 
should be considered with care, and jettisoned if it does not survive reasonable scrutiny.

Agreed that there are foreseeable situations in which a human, when alerted to resume driving, may be entirely unable 
to do so.  Or, as humans have shown through behavioral examples from time to time ("hold my beer, watch 
this"), they may capriciously be unwilling to take the wheel -- just to see what will happen, or even for a 
nefarious purpose like sabotaging the market for autonomous vehicles and attempting to delay their rollout.

A well-designed and roadworthy autonomous autopilot system, upon discovering new data -- namely, that the human does not 
resume operating the vehicle when reminded to do so -- ought to have a "plan B" built in that is (at the least) more 
likely than not to produce a better result than a collision.
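
For concreteness, here is a minimal sketch of what such a "plan B" escalation could look like, written in Python. The mode names, thresholds, and the minimal-risk maneuver are illustrative assumptions on my part, not a description of any actual Tesla (or other vendor's) system.

    # Illustrative sketch only: hypothetical fallback ("plan B") logic for a
    # driver-assist system whose driver ignores take-over warnings.
    # States, thresholds, and maneuvers are assumptions, not any vendor's design.

    from dataclasses import dataclass
    from enum import Enum, auto


    class Mode(Enum):
        AUTOPILOT = auto()     # normal assisted driving
        WARNING = auto()       # visual/audio take-over requests active
        MINIMAL_RISK = auto()  # driver unresponsive: slow down, hazards on, seek shoulder
        STOPPED = auto()       # vehicle brought to a controlled stop


    @dataclass
    class DriverState:
        hands_on_wheel: bool
        seconds_since_last_warning: float
        warnings_ignored: int


    MAX_IGNORED_WARNINGS = 3   # assumed threshold
    WARNING_TIMEOUT_S = 10.0   # assumed grace period per warning


    def next_mode(mode: Mode, driver: DriverState) -> Mode:
        """Decide the next mode given the driver's response to take-over requests."""
        if driver.hands_on_wheel:
            return Mode.AUTOPILOT                      # driver resumed: back to normal

        if mode is Mode.AUTOPILOT:
            return Mode.WARNING                        # start requesting take-over

        if mode is Mode.WARNING:
            ignored_too_long = driver.seconds_since_last_warning > WARNING_TIMEOUT_S
            if driver.warnings_ignored >= MAX_IGNORED_WARNINGS and ignored_too_long:
                return Mode.MINIMAL_RISK               # stop pretending nothing is wrong
            return Mode.WARNING

        if mode is Mode.MINIMAL_RISK:
            return Mode.STOPPED                        # decelerate and pull over, then stop

        return mode


    if __name__ == "__main__":
        # A driver who never responds: the system escalates instead of motoring on.
        mode = Mode.AUTOPILOT
        driver = DriverState(hands_on_wheel=False,
                             seconds_since_last_warning=12.0,
                             warnings_ignored=4)
        for _ in range(3):
            mode = next_mode(mode, driver)
            print(mode)

The point of the sketch is simply that "keep driving as if nothing is wrong" need not be the only state reachable when the human declines to take over; an explicit minimal-risk fallback is a design choice the manufacturer can make, and can be held to.
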

In the 1960s, some legal scholars noted that industrial supply chains and processes (the classic case taught in law 
schools was about an exploding bottle of Coca-Cola) have advantages over ordinary consumers in information collection 
and memory, risk-avoidance, and society-wide risk-spreading.  Like the McDonald's database of 
prior coffee-burn incidents that came to light in the Stella Liebeck case, non-human "people" like McDonald's are more 
capable than individual consumers of quantifying risk, and of collecting data ("data" is the plural of anecdote) about 
risks of death and injury.

They are better-positioned to change processes that are largely hidden from the end-user, in order to avoid harmful 
events in the first place. 

And building social insurance against the risks that nonetheless materialize into the cost of the product is a good way 
to align incentives, so that big artificial "persons" (industrial corporations in the Coca-Cola and McDonald's cases, 
or automated "AI" systems and the industrial purveyors of those systems, in the case of Tesla) are as 
well-incentivised as possible to balance accurately the social cost, and the investment in safety, when designing processes 
that no one human on his or her own can fully understand, let alone replicate or match in performance.

In other words, when considering what the standard of culpability ought to be for AI, think about how we have held 
industrial corporations (or should have, to the extent we have done so) to a "higher standard."  AI promises to make 
mistakes fewer, overall.  But when mistakes still happen, incentives ought to be aligned to promote appropriate 
investment in making adjustments to learn from experience, so as not to repeat the same mistakes over and over again.

IMHO, there is nothing the least bit unfair or unreasonable in holding individual humans to the reasonableness 
standard of negligence, while holding AIs and industrial corporations to a more rigorous standard of safety and 
risk-avoidance.

No doubt many insurance defense attorneys and PR professionals will disagree vehemently.

And that debate is a good one to have.

ECG


Sent from a handheld device.

On Jun 20, 2017, at 12:03, Dave Farber <farber () gmail com> wrote:




Begin forwarded message:

From: "Synthesis:Law and Technology" <synthesis.law.and.technology () gmail com>
Date: June 20, 2017 at 11:19:32 AM CDT
To: David Farber <dave () farber net>
Subject: Re: [IP] Tesla Driver In Fatal Florida Crash Got Numerous Warnings To Take Control Back From Autopilot

Dave, 

I can imagine plenty of scenarios in which a driver will not take back control: unconsciousness, extreme 
nausea, blurred vision, muscle spasm, etc., etc.  The premise that a driver somehow "must" resume control seems very 
flawed.  What should the vehicle do? Slam on the brakes? Drive faster? Drive slower?  The whole point of the alert 
system is that something is happening that the vehicle suspects a driver would handle better. "Please take control." 
"Please take control." "OK, fine, I had a plan anyway..."  This makes no sense.

Why are we holding autonomous vehicles to a much higher standard than we do for people? 

Dan Steinberg

SYNTHESIS:Law & Technology
2-45 Helene-Duval
Gatineau, Quebec
J8X 3C5



On Tue, Jun 20, 2017 at 12:05 PM, Dave Farber <farber () gmail com> wrote:



Begin forwarded message:

From: Lauren Weinstein <lauren () vortex com>
Date: June 20, 2017 at 10:31:28 AM CDT
To: nnsquad () nnsquad org
Subject: [ NNSquad ] Tesla Driver In Fatal Florida Crash Got Numerous Warnings To Take Control Back From Autopilot


Tesla Driver In Fatal Florida Crash Got Numerous Warnings To Take Control Back From Autopilot

http://jalopnik.com/tesla-driver-in-fatal-florida-crash-got-numerous-warnin-1796226021

     Joshua Brown, the Tesla driver killed last year while using
   the semi-autonomous Autopilot mode on his Model S, received
   several visual and audio warnings to take control of the
   vehicle before he was killed, according to a report from the
   National Transportation Safety Board. Despite the warnings,
   Brown kept his hands off the wheel before colliding with a
   truck on a Florida highway.

- - -

The big problem here is that the Tesla continued motoring along
normally even though the driver wasn't complying with the autopilot
warnings to take back control. This is unacceptable, and puts innocent
parties at risk (that is, other drivers of other vehicles). If the
driver isn't complying, the Tesla cannot be permitted to simply
continue as if nothing was wrong.

--Lauren--





