Interesting People mailing list archives

Predicting Crime in SF - a toy WMD


From: "Dave Farber" <dave () farber net>
Date: Mon, 15 Jan 2018 17:04:14 +0000

---------- Forwarded message ---------
From: Dewayne Hendricks <dewayne () warpspeed com>
Date: Mon, Jan 15, 2018 at 10:53 AM
Subject: [Dewayne-Net] Predicting Crime in SF - a toy WMD
To: Multiple recipients of Dewayne-Net <dewayne-net () warpspeed com>


[Note:  This item comes from friend David Rosenthal.  DLH]

Predicting Crime in SF - a toy WMD
Machine Learning 101: from Linear Regression To Deep Learning
By Orlando Torres
<http://www.orlandotorres.org/predictive-policing-sf.html>


When new technologies emerge, our ethics and our laws normally take some
time to adjust. As a social scientist and a philosopher by training, I've
always been interested in this intersection of technology and morality. A
few months ago I read Cathy O'Neil's book Weapons of Math Destruction (link
to my review) and realized its message was important yet largely neglected by
data scientists.

I started this project to show the potential ethical conflicts created by
our new algorithms. In every conceivable field, algorithms are being used
to filter people. In many cases, the algorithms are obscure, unchallenged,
and self-perpetuating. This is what O'Neil refers to as Weapons of Math
Destruction - WMDs. They are unfair by design: they are our biases turned
into code and let loose. Worst of all, they create feedback loops that
reinforce their own predictions.
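That feedback loop is easy to demonstrate with a toy simulation (my own
illustration, not code from the article): give two districts identical true
crime rates, always patrol whichever district has more recorded crime, and
record crime only where officers are present to observe it.

```python
# Toy feedback-loop simulation (illustration only; not from the article).
# Both districts have the SAME true crime rate, but district 0 starts with
# one extra recorded incident.
recorded = [11, 10]          # historical incident counts per district
TRUE_CRIME_RATE = 5          # identical underlying rate in both districts

for _ in range(20):
    # The model's "prediction": patrol the district with more recorded crime.
    hotspot = 0 if recorded[0] >= recorded[1] else 1
    # Crime is only logged where officers are present to see it.
    recorded[hotspot] += TRUE_CRIME_RATE

print(recorded)              # → [111, 10]
```

A single extra early report snowballs until district 0 accounts for over 90%
of the record, even though both districts were equally dangerous. The model's
output looks ever more "confirmed" by data it is itself generating.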

I decided to create a WMD for illustration purposes. This project is meant
to be as simple and straightforward as possible. The goals are twofold:
first, to show how easy it is to create a Weapon of Math Destruction; second,
to help aspiring data scientists see the process of a project from start to
finish. I hope people are inspired to think twice about the ethical
implications of their models.

For this project, I will create a predictive policing model to determine
where crime is more likely to occur. I will show how easy it is to create
such a model, and why it can be so dangerous. Models like these are being
adopted by police agencies all over the United States. Given the pervasive
racism inherent in all human beings, and given how people of color are
already twice as likely to be killed by police, this is a scary trend.
Here's how data science can make the problem worse.
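The excerpt cuts off before the modeling code, but given the subtitle ("from
Linear Regression To Deep Learning") a minimal version of such a predictor
might look like the sketch below. Everything in it -- the three districts,
the features, the target -- is synthetic; this is not the author's code or
data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 200

# Synthetic stand-in for historical incident records: which of 3 hypothetical
# districts each incident fell in, and the hour of day it occurred.
district = rng.integers(0, 3, size=n)
hour = rng.integers(0, 24, size=n)
X = np.column_stack([np.eye(3)[district], hour])   # one-hot district + hour

# Fabricated target: some districts "have" more recorded crime. A real model
# learns exactly this from biased historical records and projects it forward.
y = 5 + 2 * district + 0.1 * hour + rng.normal(0, 1, size=n)

model = LinearRegression().fit(X, y)

# "Where will crime occur at 10 pm?" -- score each district at hour 22.
scores = model.predict(np.column_stack([np.eye(3), np.full(3, 22)]))
```

The danger is not the regression itself but what it is fit to: the model
faithfully reproduces whatever enforcement bias is baked into the recorded
counts, then hands it back as an objective-looking prediction.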

The Data
The data used for this project is found as part of the open data initiative
by the City of San Francisco, a great resource for data scientists
interested in public policy. Hopefully more cities will follow this
initiative and make their data public and machine-readable.
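A first look at such a dataset typically starts with loading the export and
counting incidents per police district. In the sketch below the tiny inline
DataFrame stands in for the real CSV, and the column names ("Category",
"PdDistrict", "Time") are my assumptions about the portal's incident schema,
not verified against the actual export.

```python
import pandas as pd

# Stand-in for the real incident export from the SF open data portal.
# In practice you would use: df = pd.read_csv("sf_incidents.csv")
df = pd.DataFrame({
    "Category":   ["LARCENY/THEFT", "ASSAULT", "LARCENY/THEFT", "VANDALISM"],
    "PdDistrict": ["SOUTHERN", "MISSION", "SOUTHERN", "NORTHERN"],
    "Time":       ["18:30", "02:15", "12:05", "23:40"],
})

# Incidents recorded per district -- the raw counts a naive hotspot model
# would train on, enforcement biases and all.
counts = df.groupby("PdDistrict").size().sort_values(ascending=False)
print(counts.to_dict())
```

Note that these are counts of *recorded* incidents, not of crime: the gap
between the two is precisely where the WMD dynamics of the article live.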

[snip]

Dewayne-Net RSS Feed: http://dewaynenet.wordpress.com/feed/
Twitter: https://twitter.com/wa8dzp



-------------------------------------------
Archives: https://www.listbox.com/member/archive/247/=now
RSS Feed: https://www.listbox.com/member/archive/rss/247/18849915-ae8fa580
Powered by Listbox: http://www.listbox.com
