From: RISKS List Owner <risko () csl sri com>
Date: Sat, 18 Nov 2023 19:57:08 PST

RISKS-LIST: Risks-Forum Digest  Saturday 18 November 2023  Volume 33 : Issue 94

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/33.94>
The current issue can also be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
How the Railroad Industry Intimidates Employees Into Putting Speed Before
 Safety (ProPublica)
Hikers Rescued After Following Nonexistent Trail on Google Maps (NYTimes)
Admission of the state of software (David Lamkin)
500 chatbots read the news and discussed it on social media. Guess
 how that went.  (Business Insider)
The Problem with Regulating AI (Tim Wu)
ChatGPT Created a Fake Dataset With Skewed Results (MedPage Today)
Researchers Discover New Vulnerability in Large Language Models
 (Carnegie Mellon University)
Ten ways AI will change democracy (Bruce Schneier)
Fake Reviews Are Rampant Online. Can a Crackdown End Them? (NYTimes)
OpenAI co-founder & president Greg Brockman quits after firing of
 CEO Altman (TechCrunch)
The AI Pin (Rob Slade)
Ukraine's 'Secret Weapon' Against Russia Is a U.S. Tech Company
 (Vera Bergengruen)
Cryptographic Keys Protecting SSH Connections Exposed (Dan Goodin)
Developers can't seem to stop exposing credentials in publicly
 accessible code (Ars Technica)
Hacking Some More Secure USB Flash Drives -- Part II (SySS Tech Blog)
Social media gets teens hooked while feeding aggression and impulsivity, and
 researchers think they know why (CBC)
X marks the non-spot? (PGN adapted from Lauren Weinstein)
It's Still Easy for Anyone to Become You at Experian
 (Krebs on Security)
Paying ransom for data stolen in cyberattack bankrolls further crime,
 experts caution (CBC)
Toronto Public Library cyber-attack (Mark Brader)
People selling cars via Internet get phished (CBC)
Data breach of Michigan healthcare giant exposes millions of records
 (Engadget)
More on iLeakage (Victor Miller)
Using your iPhone to start your car is about to get a lot easier (The Verge)
Massive cryptomining rig discovered under Polish court's floor, stealing
 power (Ars Technica)
A Coder Considers the Waning Days of the Craft (The New Yorker via
 Steve Bacher)
Re: Industrial Robot Crushes Worker to Death (PGN)
Re: Toyota has built an EV with a fake transmission (Peter Houppermans)
Re: Data on 267,000 Sarnia patients going back 3 decades
 among cyberattack thefts at 5 Ontario hospitals (Mark Brader)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Wed, 15 Nov 2023 23:43:11 -0500
From: Gabe Goldberg <gabe () gabegold com>
Subject: How the Railroad Industry Intimidates Employees Into
 Putting Speed Before Safety (ProPublica)

Railroad companies have penalized workers for taking the time to make needed
repairs and created a culture in which supervisors threaten and fire the
very people hired to keep trains running safely. Regulators say they can’t
stop this intimidation.

Bradley Haynes and his colleagues were the last chance Union Pacific had to
stop an unsafe train from leaving one of its railyards. Skilled in spotting
hidden dangers, the inspectors in Kansas City, Missouri, wrote up so-called
“bad orders” to pull defective cars out of assembled trains and send them
for repairs.

But on Sept. 18, 2019, the area’s director of maintenance, Andrew Letcher,
scolded them for hampering the yard’s ability to move trains on time.

“We're a transportation company, right? We get paid to move freight. We
don't get paid to work on cars,” he said.

https://www.propublica.org/article/railroad-safety-union-pacific-csx-bnsf-trains-freight

------------------------------

Date: Sun, 12 Nov 2023 17:04:52 -0500
From: Jan Wolitzky <jan.wolitzky () gmail com>
Subject: Hikers Rescued After Following Nonexistent Trail on Google Maps
 (NYTimes)

A Canadian search-and-rescue group said it had conducted two missions
recently after hikers may have sought to follow a nonexistent trail on
Google Maps.

A search-and-rescue group in British Columbia advised hikers to use a paper
map and compass instead of street map programs, saying two hikers had been
rescued by helicopter after likely following a trail that did not exist but
that appeared on Google Maps.

The group, North Shore Rescue, said on Facebook that on 6 Nov 2023 Google
Maps had removed the nonexistent trail, which was in a very steep area with
cliffs north of Mount Fromme, which overlooks Vancouver.

https://www.nytimes.com/2023/11/12/world/canada/google-maps-trail-british-columbia.html

  [Fromme here to eternity?  PGN]

------------------------------

Date: Thu, 16 Nov 2023 09:51:56 +0000
From: David Lamkin <drl () shelford org>
Subject: Admission of the state of software

Having put off buying a 'smart car' for as long as possible, I am now the
proud (?) owner of a SEAT Arona. The instruction manual is long and
detailed, but one statement does not inspire confidence:

As with most state-of-the-art computer and electronic equipment, in
certain cases the system may need to be rebooted to make sure that it
operates correctly.

This statement should shame all software engineers!

  [Does the SEAT Arona have the classical new-seat aroma as an inSCENTive?
  PGN]

------------------------------

Date: Thu, 16 Nov 2023 00:29:03 +0900
From: Dave Farber <farber () gmail com>
Subject: 500 chatbots read the news and discussed it on social media. Guess
 how that went.  (Business Insider)

https://www.businessinsider.com/ai-chatbots-less-toxic-social-networks-twitter-simulation-2023-11

------------------------------

Date: Sun, 12 Nov 2023 16:09:15 PST
From: Peter Neumann <neumann () csl sri com>
Subject: The Problem with Regulating AI (Tim Wu)

Tim Wu, *The New York Times*, 12 Nov 2023

If the government acts prematurely on this evolving
  technology, it could fail to prevent concrete harm.

    [... and we certainly don't want AI mixing concrete for bridges
    and other life-critical structures.  PGN]

  Final para: The existence of actual social harm has long been a
  touchstone of legitimate state action.  But that point cuts both
  ways: The state should proceed cautiously in the absence of harm,
  but it also has a duty, given evidence of harm, to take action.  By
  that measure, with AI we are at risk of doing too much and too
  little at the same time.

    [The lesser of weasels?  That is indeed a well-crafted weasel.  Be
    careful of what you ask for.  You might get [stuck with] it.  PGN]

------------------------------

Date: Mon, 13 Nov 2023 20:59:13 +0000
From: Victor Miller <victorsmiller () gmail com>
Subject: ChatGPT Created a Fake Dataset With Skewed Results (MedPage Today)

https://www.medpagetoday.com/special-reports/features/107247

  [What could possibly go wrong?  PGN]

------------------------------

Date: Tue, 14 Nov 2023 16:21:02 +0000
From: Victor Miller <victorsmiller () gmail com>
Subject: Researchers Discover New Vulnerability in Large Language
 Models (Carnegie Mellon University)

https://www.cmu.edu/news/stories/archives/2023/july/researchers-discover-new-vulnerability-in-large-language-models

------------------------------

Date: Wed, 15 Nov 2023 08:48:25 +0000
From: Bruce Schneier <schneier () schneier com>
Subject: Ten ways AI will change democracy

  [PGN-extracted from Bruce's CRYPTO-GRAM, 15 Nov 2023]

  A free monthly newsletter providing summaries, analyses,
  and commentaries on security: computer and otherwise.

        ** TEN WAYS AI WILL CHANGE DEMOCRACY

[2023.11.13]
[https://www.schneier.com/blog/archives/2023/11/ten-ways-ai-will-change-democracy.html]
Artificial intelligence will change so many aspects of society, largely in
ways that we cannot conceive of yet. Democracy, and the systems of
governance that surround it, will be no exception. In this short essay, I
want to move beyond the *AI-generated disinformation* trope and speculate on
some of the ways AI will change how democracy functions -- in both large and
small ways.

When I survey how artificial intelligence might upend different aspects of
modern society, democracy included, I look at four different dimensions of
change: speed, scale, scope, and sophistication. Look for places where
changes in degree result in changes of kind. Those are where the societal
upheavals will happen.

Some items on my list are still speculative, but none require
science-fictional levels of technological advance. And we can see the first
stages of many of them today. When reading about the successes and failures
of AI systems, it's important to differentiate between the fundamental
limitations of AI as a technology, and the practical limitations of AI
systems in the fall of 2023. Advances are happening quickly, and the
impossible is becoming the routine. We don't know how long this will
continue, but my bet is on continued major technological advances in the
coming years. Which means it's going to be a wild ride.

So, here's my list:

1. AI as educator. We are already seeing AI serving the role of
   teacher. It's much more effective for a student to learn a topic from an
   interactive AI chatbot than from a textbook. This has applications for
   democracy. We can imagine chatbots teaching citizens about different
   issues, such as climate change or tax policy. We can imagine candidates
   deploying chatbots
[https://www.theatlantic.com/technology/archive/2023/04/ai-generated-political-ads-election-candidate-voter-interaction-transparency/673893/]
   of themselves, allowing voters to directly engage with them on various
   issues. A more general chatbot could know the positions of all the
   candidates, and help voters decide which best represents their
   position.  There are a lot of possibilities here.

2. AI as sense maker. There are many areas of society where accurate
   summarization is important. Today, when constituents write to their
   legislator, those letters get put into two piles -- one for and another
   against -- and someone compares the height of those piles. AI can do much
   better. It can provide a rich summary
[https://theconversation.com/ai-could-shore-up-democracy-heres-one-way-207278]
   of the comments. It can help figure out which are unique and which are
   form letters. It can highlight unique perspectives. This same system can
   also work for comments to different government agencies on rulemaking
   processes -- and on documents generated during the discovery process in
   lawsuits.

3. AI as moderator, mediator, and consensus builder. Imagine online
   conversations in which AIs serve the role of moderator. This could ensure
   that all voices are heard. It could block hateful -- or even just
   off-topic -- comments. It could highlight areas of agreement and
   disagreement. It could help the group reach a decision. This is nothing
   that a human moderator can't do, but there aren't enough human moderators
   to go around. AI can give this capability
   [https://slate.com/technology/2023/04/ai-public-option.html] to every
   decision-making group. At the extreme, an AI could be an arbiter -- a
   judge -- weighing evidence and making a decision. These capabilities
   don't exist yet, but they are not far off.

4. AI as lawmaker. We have already seen proposed legislation written
   [https://lieu.house.gov/media-center/press-releases/rep-lieu-introduces-first-federal-legislation-ever-written-artificial]
   by AI
   [https://www.politico.com/newsletters/digital-future-daily/2023/07/19/why-chatgpt-wrote-a-bill-for-itself-00107174],
   albeit more as a stunt than anything else. But in the future AIs will
   help craft legislation, dealing with the complex ways laws interact with
   each other. More importantly, AIs will eventually be able to craft
   loopholes
   [https://www.technologyreview.com/2023/03/14/1069717/how-ai-could-write-our-laws/]
   in legislation, ones potentially too complicated for people to easily
   notice. On the other side of that, AIs could be used to find loopholes in
   legislation -- for both existing and pending laws. And more generally,
   AIs could be used to help develop policy positions.

5. AI as political strategist. Right now, you can ask your favorite chatbot
   questions about political strategy: what legislation would further your
   political goals, what positions to publicly take, what campaign slogans
   to use. The answers you get won't be very good, but that'll improve with
   time. In the future we should expect politicians to make use of this AI
   expertise: not to follow blindly, but as another source of ideas. And as
   AIs become more capable at using tools
   [https://www.wired.com/story/does-chatgpt-make-you-nervous-try-chatgpt-with-a-hammer/],
   they can automatically conduct polls and focus groups to test out
   political ideas. There are a lot of possibilities
   [https://www.technologyreview.com/2023/07/28/1076756/six-ways-that-ai-could-change-politics/]
   here: AIs could also engage in fundraising campaigns, directly soliciting
   contributions from people.

6. AI as lawyer. We don't yet know which aspects of the legal profession can
   be done by AIs, but many routine tasks that are now handled by attorneys
   will soon be able to be completed by an AI. Early attempts at having AIs
   write legal briefs haven't worked
   [https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/],
   but this will change as the systems get better at accuracy. Additionally,
   AIs can help people navigate government systems: filling out forms,
   applying for services, contesting bureaucratic actions. And future AIs
   will be much better at writing legalese, reducing the cost of legal
   counsel.

7. AI as cheap reasoning generator. More generally, AI chatbots are really
   good at generating persuasive arguments. Today, writing out a persuasive
   argument takes time and effort, and our systems reflect that. We can
   easily imagine AIs conducting lobbying campaigns
   [https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html],
   generating and submitting comments
   [https://www.belfercenter.org/publication/we-dont-need-reinvent-our-democracy-save-it-ai]
   on legislation and rulemaking. This also has applications for the legal
   system. For example: if it is suddenly easy to file thousands of court
   cases, this will overwhelm the courts. Solutions for this are hard. We
   could increase the cost of filing a court case, but that becomes a burden
   on the poor. The only solution might be another AI working for the court,
   dealing with the deluge of AI-filed cases -- which doesn't sound like a
   great idea.

8. AI as law enforcer. Automated systems already act as law enforcement in
   some areas: speed trap cameras are an obvious example. AI can take this
   kind of thing much further, automatically identifying people who cheat on
   tax returns or when applying for government services. This has the
   obvious problem of false positives, which could be hard to contest if the
   courts believe that *the computer is always right.* Separately, future
   laws might be so complicated
[https://slate.com/technology/2023/07/artificial-intelligence-microdirectives.html]
   that only AIs are able to decide whether or not they are being
   broken. And, like breathalyzers, defendants might not be allowed to know
   how they work.

9. AI as propagandist. AIs can produce and distribute propaganda faster than
   humans can. This is an obvious risk, but we don't know how effective any
   of it will be. It makes disinformation campaigns easier, which means that
   more people will take advantage of them. But people will be more inured
   against the risks. More importantly, AI's ability to summarize and
   understand text can enable much more effective censorship.

10. AI as political proxy. Finally, we can imagine an AI voting on behalf of
   individuals. A voter could feed an AI their social, economic, and
   political preferences; or it can infer them by listening to them talk and
   watching their actions. And then it could be empowered to vote on their
   behalf, either for others who would represent them, or directly on ballot
   initiatives. On the one hand, this would greatly increase voter
   participation. On the other hand, it would further disengage people from
   the act of understanding politics and engaging in democracy.

When I teach AI policy at HKS, I stress the importance of separating the
specific AI chatbot technologies in November of 2023 from AI's technological
possibilities in general. Some of the items on my list will soon be
possible; others will remain fiction for many years. Similarly, our
acceptance of these technologies will change. Items on that list that we
would never accept today might feel routine in a few years. A judgeless
courtroom seems crazy today, but so did a driverless car a few years ago.
Don't underestimate our ability to normalize new technologies. My bet is
that we're in for a wild ride.

This essay previously appeared on the Harvard Kennedy School Ash Center's
website: https://ash.harvard.edu/ten-ways-ai-will-change-democracy

------------------------------

Date: Mon, 13 Nov 2023 17:37:27 -0500
From: Monty Solomon <monty () roscom com>
Subject: Fake Reviews Are Rampant Online. Can a Crackdown End Them?
 (NYTimes)

A wave of regulation and industry action has placed the flourishing fake
review business on notice. But experts say the problem may be
insurmountable.

https://www.nytimes.com/2023/11/13/technology/fake-reviews-crackdown.html

------------------------------

Date: Fri, 17 Nov 2023 16:29:37 -0800
From: Lauren Weinstein <lauren () vortex com>
Subject: OpenAI co-founder & president Greg Brockman quits after firing of
 CEO Altman (TechCrunch)

https://techcrunch.com/2023/11/17/greg-brockman-quits-openai-after-abrupt-firing-of-sam-altman/

------------------------------

Date: Fri, 17 Nov 2023 09:45:44 -0800
From: Rob Slade <rslade () gmail com>
Subject: The AI Pin

The name is obviously intended to capitalize on the recent interest in
generative/large language model artificial intelligence.  Equally
obviously, some AI is involved, as long as you allow your definition of AI
to extend to mere speech-to-text capability.

Humane's AI Pin is a smartphone.  With no screen.  Attaching to your
clothing with a magnet, it can make calls, take pictures, access the
Internet, and even, at need, project text (presumably later it will do
images) onto surfaces using lasers.

In one sense, this is what I always figured that smartphones would become.
It is styled as a "smart assistant."  If you have a human assistant, you
give them orders verbally, you don't type out commands.  (Unless you're
sending them texts ...)

------------------------------

Date: Fri, 17 Nov 2023 10:55:40 -0500 (EST)
From: ACM TechNews <technews-editor () acm org>
Subject: Ukraine's 'Secret Weapon' Against Russia Is a U.S. Tech Company
 (Vera Bergengruen)

Vera Bergengruen, *Time*, 14 Nov 2023

U.S. facial recognition company Clearview AI has become Ukraine's "secret
weapon" in its war against Russia. More than 1,500 officials across 18
Ukrainian government agencies are using its technology, which has helped
them identify more than 230,000 Russian soldiers and officials who have
participated in the Russian invasion. Ukraine also relies on the company to
assist with other tasks, including processing citizens who lost their
identification and locating abducted Ukrainian children. Ukraine has run at
least 350,000 searches of Clearview's database in the 20 months since the
outbreak of the war. Said Clearview AI CEO Hoan Ton-That, "Using facial
recognition in war zones is something that's going to save lives."

------------------------------

Date: Wed, 15 Nov 2023 11:57:51 -0500 (EST)
From: ACM TechNews <technews-editor () acm org>
Subject: Cryptographic Keys Protecting SSH Connections Exposed
 (Dan Goodin)

Dan Goodin, *Ars Technica*, 13 Nov 2023, via ACM Tech News

Researchers at the University of California, San Diego (UCSD) demonstrated
that a large portion of the cryptographic keys used to protect data in
computer-to-server SSH traffic is vulnerable; they were able to calculate
the private portion of almost 200 unique SSH keys they observed in public
Internet scans. The vulnerability occurs when there are errors during the
signature generation that takes place when a client and server are
establishing a connection. It affects only keys using the RSA cryptographic
algorithm, which the researchers found in roughly a third of the SSH
signatures they examined, translating to about 1 billion signatures, about
one in a million of which exposed the private key of the host. Said UCSD's
Keegan Ryan, "Our research reiterates the importance of defense in depth in
cryptographic implementations and illustrates the need for protocol designs
that are more robust against computational errors."
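
The underlying failure mode is the classic RSA-CRT fault attack: a single
corrupted signature half lets an observer factor the modulus with one gcd.
Below is a minimal Python (3.8+) sketch with made-up toy parameters,
assuming the signed message representative is fully known; the lattice
technique in the paper cited below is what appears to handle the harder
case where the observer knows the signed data only partially.

  import math

  # Toy RSA-CRT parameters -- illustration only; real keys are 2048+ bits.
  p, q = 104729, 1299709
  n, e = p * q, 65537
  d = pow(e, -1, (p - 1) * (q - 1))

  m = 123456789 % n               # padded message representative (known)

  # RSA-CRT signing: sign mod p and mod q separately, then recombine.
  dp, dq = d % (p - 1), d % (q - 1)
  sp, sq = pow(m, dp, p), pow(m, dq, q)
  qinv = pow(q, -1, p)

  def recombine(sp_half, sq_half):
      return sq_half + q * ((qinv * (sp_half - sq_half)) % p)

  s_good = recombine(sp, sq)
  assert pow(s_good, e, n) == m   # a correct signature verifies

  # An error during signing corrupts the mod-p half of one signature:
  s_bad = recombine((sp + 1) % p, sq)

  # s_bad is still correct mod q but wrong mod p, so anyone who sees
  # (n, e, m, s_bad) recovers a factor of n with a single gcd:
  factor = math.gcd(pow(s_bad, e, n) - m, n)
  assert factor == q
  print("recovered factor of n:", factor)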

  [Lauren Weinstein suggests looking at this:
    Passive SSH Key Compromise via Lattices
    https://eprint.iacr.org/2023/1711.pdf
  PGN]

------------------------------

Date: Thu, 16 Nov 2023 14:15:59 +0000
From: Victor Miller <victorsmiller () gmail com>
Subject: Developers can't seem to stop exposing credentials in publicly
 accessible code (Ars Technica)

https://arstechnica.com/security/2023/11/developers-cant-seem-to-stop-exposing-credentials-in-publicly-accessible-code/

------------------------------

Date: Mon, 13 Nov 2023 02:06:07 +0000
From: Victor Miller <victorsmiller () gmail com>
Subject: Hacking Some More Secure USB Flash Drives -- Part II
 (SySS Tech Blog)
https://blog.syss.com/posts/hacking-usb-flash-drives-part-2/

------------------------------

Date: Thu, 16 Nov 2023 05:49:16 -0700
From: Matthew Kruk <mkrukg () gmail com>
Subject: Social media gets teens hooked while feeding aggression
 and impulsivity, and researchers think they know why (CBC)

https://www.cbc.ca/news/health/smartphone-brain-nov14-1.7029406

Kids who spend hours on their phones scrolling through social media are
showing more aggression, depression and anxiety, say Canadian researchers.

Emma Duerden holds the Canada Research Chair in neuroscience and learning
disorders at Western University, where she uses brain imaging to study the
impact of social media use on children's brains.

She and others found that screen time has fallen just slightly from the
record 13 hours a day some Canadian parents reported for six- to
12-year-olds in the early months of the COVID-19 pandemic.

"We're seeing lots of these effects. Children are reporting high levels of
depression and anxiety or aggression. It really is a thing."

------------------------------

Date: Fri, 17 Nov 2023 16:37:46 -0800
From: Lauren Weinstein <lauren () vortex com>
Subject: X marks the non-spot? (PGN adapted)

* Warner Bros Discovery "pauses" its ads on X for "the foreseeable future"
* Comcast suspends X ads; OpenAI employees hold all-hands meeting in
  wake of exec turmoil
* Lionsgate Entertainment and Paramount Global suspend ads on X
* Google should stop all participation with Twitter/X or any other Musk
  enterprises as soon as contractually practical, or be branded a supporter
  of his horrific views [LW]

------------------------------

Date: Tue, 14 Nov 2023 14:47:20 +0000 (UTC)
From: Steve Bacher <sebmb1 () verizon net>
Subject: It's Still Easy for Anyone to Become You at Experian
 (Krebs on Security)

https://krebsonsecurity.com/2023/11/its-still-easy-for-anyone-to-become-you-at-experian/

------------------------------

Date: Sat, 18 Nov 2023 13:37:55 -0700
From: Matthew Kruk <mkrukg () gmail com>
Subject: Paying ransom for data stolen in cyberattack bankrolls
 further crime, experts caution (CBC)

https://www.cbc.ca/radio/spark/cyberattacks-ransomware-paying-ransom-crime-1.7030579

When the town of St. Marys, Ont., fell victim to a cyberattack last year,
lawyers advised the municipality to pay a ransom of $290,000 in
cryptocurrency.

The decision was made after an analysis by firms specializing in
cybersecurity. Al Strathdee, mayor of the southwestern Ontario town of
about 7,000 residents, said the potential risk to people's data was too
high not to pay up.

------------------------------

Date: Sat, 18 Nov 2023 01:53:56 -0500 (EST)
From: Mark Brader <msb () Vex Net>
Subject: Toronto Public Library cyber-attack

  [Note: This was previously reported as ransomware.
  Now they just say that no ransom has been paid.]

The Toronto Public Library reported a cyber-attack on October 28, and later
said that "a large number of files" were stolen, including personal
information of library staff.  While they're working on the problem, the
library's web site is down.  (You get forwarded to an information page
currently at:
https://torontopubliclibrary.typepad.com/tpl_maintenance/toronto-public-library-website-maintenance.html)

The public computers and printers at all 100 library branches are also down.
All this means that you (meaning me) can't request a book be held for you,
and you also can't search the electronic catalog that replaced the old card
catalogs.

See also: http://www.cbc.ca/news/any-1.7028982

------------------------------

Date: Sat, 18 Nov 2023 02:00:47 -0500 (EST)
From: Mark Brader <msb () Vex Net>
Subject: People selling cars via Internet get phished (CBC)

It says here
   http://www.cbc.ca/news/any-1.7028730
that people who post car-for-sale ads are being sought by scammers.
The seller gets what appears to be an offer, but it requests the
seller use a specific source to provide the vehicle's history --
a source that's actually phishing for credit-card information.

------------------------------

Date: Tue, 14 Nov 2023 22:27:15 -0500
From: Monty Solomon <monty () roscom com>
Subject: Data breach of Michigan healthcare giant exposes millions of
 records (Engadget)

https://www.engadget.com/data-breach-of-michigan-healthcare-giant-exposes-millions-of-records-153450209.html

------------------------------

Date: Thu, 16 Nov 2023 19:26:39 -0800
From: Victor Miller <victorsmiller () gmail com>
Subject: More on iLeakage

[...] We show how an attacker can induce Safari to render an arbitrary
webpage, subsequently recovering sensitive information present within it
using speculative execution. In particular, we demonstrate how Safari allows
a malicious webpage to recover secrets from popular high-value targets, such
as Gmail inbox content. Finally, we demonstrate the recovery of passwords,
in case these are autofilled by credential managers.

Virtually all modern CPUs use a performance optimization where they predict
if a branch instruction will be taken or not, should the outcome not be
readily available. Once a prediction is made, the CPU will execute
instructions along the prediction, a process called speculative execution.
If the CPU realizes it had mispredicted, it must revert all changes it made
to the machine's state after the prediction. Both desktop and mobile CPUs
exhibit this behavior, regardless of manufacturer (such as Apple, AMD, or
Intel).

A Spectre attack coerces the CPU into speculatively executing the wrong flow
of instructions. If this wrong flow has instructions depending on sensitive
data, their value can be inferred through a side channel even after the CPU
realizes the mistake and reverts its changes.
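
To make that concrete, here is a toy Python simulation of the logic only --
not an exploit. Nothing below speculates or measures real cache timing; the
predictor, "cache", and victim/attacker structure are all invented to
illustrate how a mispredicted bounds check can leave a secret-dependent
footprint that rollback does not erase.

  # Toy model: a 1-bit branch predictor, a "cache" recording which byte
  # values were loaded, and a victim whose bounds check the simulated
  # CPU can speculate past.

  public_array = bytes(16)            # in-bounds data (all zeros)
  SECRET = b"hunter2"                 # sits just past the bounds
  memory = public_array + SECRET      # flat memory: OOB reads hit the secret

  cache = set()                       # which "lines" are hot after a run
  predictor_taken = False             # 1-bit branch-predictor state

  def victim(index):
      global predictor_taken
      in_bounds = index < len(public_array)
      if predictor_taken and not in_bounds:
          # Misprediction: this load is rolled back architecturally, but
          # its cache side effect survives -- the heart of Spectre.
          cache.add(memory[index])
      if in_bounds:
          cache.add(memory[index])    # the normal, architectural load
      predictor_taken = in_bounds     # predictor learns the last outcome

  def leak_byte(oob_index):
      victim(0)                       # train the predictor to say "taken"
      cache.clear()
      victim(oob_index)               # mispredicted speculative OOB load
      # "Probe": in a real attack, each byte value maps to a cache line
      # whose access latency reveals whether it was touched.
      return next(iter(cache))

  leaked = bytes(leak_byte(len(public_array) + i) for i in range(len(SECRET)))
  print(leaked)                       # b'hunter2'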

We disclosed our results to Apple on September 12, 2022 (408 days before
public release).

------------------------------

Date: Thu, 16 Nov 2023 21:12:33 -0500
From: Monty Solomon <monty () roscom com>
Subject: Using your iPhone to start your car is about to get a
 lot easier (The Verge)

https://www.theverge.com/2023/11/16/23964379/apple-iphone-digital-key-uwb-ccc-fira-working-group

  [Except where there is no cell-phone coverage???  And if that has been
  overcome by making your iPhone a key-dongle, then thefts of cell phones
  *and* cars may increase!  PGN]

------------------------------

Date: Thu, 16 Nov 2023 22:41:25 -0500
From: Monty Solomon <monty () roscom com>
Subject: Massive cryptomining rig discovered under Polish court's
 floor, stealing power (Ars Technica)

https://arstechnica.com/?p=1984512

  [What's yours is mine(d).  PGN]

------------------------------

Date: Fri, 17 Nov 2023 10:33:36 -0800
From: Steve Bacher <sebmb1 () verizon net>
Subject: A Coder Considers the Waning Days of the Craft (The New Yorker)

James Somers, a professional coder, writes about the astonishing scripting
skills of A.I. chatbots like GPT-4 and considers the future of a once
exalted craft.

https://www.newyorker.com/magazine/2023/11/20/a-coder-considers-the-waning-days-of-the-craft

I really disagree with some of what the writer says about
programming/coding.

  "What I learned was that programming is not really about knowledge or
  skill but simply about patience, or maybe obsession."

Almost certainly he got that attitude because he started, from no
experience, with the worst possible programming language, Visual C++. 
There's no way anyone should begin learning how to code with any C++
variant.  Those of us who started with Basic (or even FORTRAN, in my case)
ended up doing better.  Not to mention Logo.

------------------------------

Date: Mon, 13 Nov 2023 10:13:49 PST
From: Peter Neumann <neumann () csl sri com>
Subject: Re: Industrial Robot Crushes Worker to Death (R 33 93)

  [Here's the rest of that item.  PGN]

CBS News, 09 Nov 2023

An industrial robot crushed a worker to death at a vegetable packaging
factory in South Korea's southern county of Goseong. According to police,
the victim was grabbed and pressed against a conveyor belt by the machine's
robotic arms. The machine was equipped with sensors designed to identify
boxes. "It wasn't an advanced, artificial intelligence-powered robot, but a
machine that simply picks up boxes and puts them on pallets," said Kang
Jin-gi at Goseong Police Station. According to another police official,
security camera footage showed the man had moved near the robot with a box
in his hands, which could have triggered the machine's reaction. Similar
incidents have happened in South Korea before.

------------------------------

Date: Tue, 14 Nov 2023 21:34:36 +0100
From: Peter Houppermans <peter () houppermans net>
Subject: Re: Toyota has built an EV with a fake transmission
 (RISKS-33.93)

It depends on your perspective -- there is actually a good use case for it.

You may argue that this will eventually be a thing of the past*, but
changing gear manually is very prevalent in Europe.  I would posit that this
may have something to do with the difference in fuel prices as manual cars
are (or used to be) more economical to drive, but a side effect is that this
also results in driving license exemptions -- when you have learned to drive
with an automatic you are not allowed to drive a manual car, in some
countries for a few years, in some you even have to pass a separate exam.

Learning to drive with a manual car qualifies you for both, and this
presently creates a conundrum for driving schools: in order to teach someone
to drive with a manual car, they are effectively legally required to use an
ICE vehicle as EVs tend to be automatic -- until now.

If Toyota's "fake transmission" is realistic enough to mimic ICE behaviour
to be ratified as a viable alternative, it could offer an EV stopgap until
manual vehicles are rare enough for the demand to disappear.

  [From that perspective, it's not a game or gadget, but a useful
  simulator.]

    [Martin Ward responds:

      That's an ingenious example that I hadn't thought of!
      It would make for a pretty expensive learner car.  MW]

------------------------------

Date: Sat, 18 Nov 2023 01:38:59 -0500 (EST)
From: Mark Brader <msb () Vex Net>
Subject: Re: Data on 267,000 Sarnia patients going back 3 decades
 among cyberattack thefts at 5 Ontario hospitals (RISKS-33.93)

New update: https://www.cbc.ca/news/canada/windsor/anykey-1.7031544

------------------------------

Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request () csl sri com
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest.  Its Usenet manifestation is
 comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
 subscribe and unsubscribe:
   http://mls.csl.sri.com/mailman/listinfo/risks

=> SUBMISSIONS: to risks () CSL sri com with meaningful SUBJECT: line that
   includes the string `notsp'.  Otherwise your message may not be read.
 *** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) has moved to the ftp.sri.com site:
   <risksinfo.html>.
 *** Contributors are assumed to have read the full info file for guidelines!

=> OFFICIAL ARCHIVES:  http://www.risks.org takes you to Lindsay Marshall's
    delightfully searchable html archive at newcastle:
  http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
  Also, ftp://ftp.sri.com/risks for the current volume/previous directories
     or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
  If none of those work for you, the most recent issue is always at
     http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-33.00
  ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
 *** NOTE: If a cited URL fails, we do not try to update them.  Try
  browsing on the keywords in the subject line or cited article leads.
  Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 33.94
************************

