Interesting People mailing list archives

IEEE Spectrum Reverse-Engineers Cuban Sonic Weapon


From: "DAVID FARBER" <dfarber () me com>
Date: Mon, 19 Mar 2018 08:34:51 -0400




Begin forwarded message:

From: Joly MacFie <joly.nyc () gmail com>
Date: March 19, 2018 at 12:14:11 AM EDT
To: dave <dave () farber net>, ip <ip () listbox com>
Subject: IEEE Spectrum Reverse-Engineers Cuban Sonic Weapon

[Via Jim Griffin/Pho] tl;dr: "bad engineering may be a more likely culprit than a sonic weapon."


https://spectrum.ieee.org/semiconductors/devices/how-we-reverse-engineered-the-cuban-sonic-weapon-attack

How We Reverse Engineered the Cuban “Sonic Weapon” Attack

Examining overlooked clues reveals how ultrasound could have caused harm in Havana

By Kevin Fu, Wenyuan Xu and Chen Yan
IEEE Spectrum
15 Mar 2018

Throughout last year, mysterious ailments struck dozens of U.S. and Canadian diplomats and their families living in 
Cuba. Symptoms included dizziness, sleeplessness, headache, and hearing loss; many of the afflicted were in their 
homes or in hotel rooms when they heard intense, high-pitched sounds shortly before falling ill. In February, 
neurologists who examined the diplomats concluded that the symptoms were consistent with concussion, but without any 
blunt trauma to the head. Suggested culprits included toxins, viruses, and a sonic weapon, but to date, no cause has 
been confirmed.

We found the last suggestion—a sonic weapon—intriguing, because around the same time that stories about health 
problems in Cuba began appearing, our labs, at the University of Michigan–Ann Arbor, and at Zhejiang University in 
China, were busy writing up our latest research on ultrasonic cybersecurity. We wondered, Could ultrasound be the 
culprit in Cuba?

On the face of it, it seems impossible. For one thing, ultrasonic frequencies—20 kilohertz or higher—are inaudible to 
humans, and yet the sounds heard by the diplomats were obviously audible. What’s more, those frequencies don’t 
propagate well through air and aren’t known to cause direct harm to people except under rarefied conditions. Acoustic 
experts dismissed the idea that ultrasound could be at fault.

Then, about six months ago, an editor from The Conversation sent us a link to a video from the Associated Press, 
reportedly recorded in Cuba during one of the attacks.

The editor asked us for our reaction. In the video, you can hear a piercing, metallic sound—it’s not pleasant. 
Watching the AP video frame by frame, we immediately noticed a few oddities. In one sequence, someone plays a sound 
file from one smartphone while a second smartphone records and plots the acoustic spectrum. So already the data are 
somewhat suspect because every microphone and every speaker introduces some distortion. Moreover, what humans hear 
isn’t necessarily the same as what a microphone picks up. Cleverly crafted sounds can lead to auditory illusions akin 
to optical illusions.

The AP video also includes a spectral plot of the recording—that’s basically a visual representation of the 
intensities of the various acoustic tones present, arranged by frequency. Looking closely, we noticed a spectral peak 
near 7 kilohertz and a dozen other less-intense tones that formed a regular pattern with peaks separated by 
approximately 180 hertz. What could have caused these ripples every 180 Hz? And what kind of mechanism could make an 
ultrasonic source produce audible sound?
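
A minimal sketch, in Python, of this kind of spectral check, assuming a mono WAV recording and standard signal-processing tools (the file name and peak threshold are placeholders, not the actual AP clip or the authors' analysis):

    # Inspect a recording's spectrum for evenly spaced peaks.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import find_peaks

    rate, samples = wavfile.read("havana_clip.wav")     # placeholder file name; mono PCM assumed
    samples = samples.astype(np.float64)

    spectrum = np.abs(np.fft.rfft(samples))              # magnitude spectrum of the whole clip
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

    # Report prominent peaks and the spacing between neighbors
    peaks, _ = find_peaks(spectrum, height=0.1 * spectrum.max())
    print("Peak frequencies (Hz):", np.round(freqs[peaks], 1))
    print("Spacing between adjacent peaks (Hz):", np.round(np.diff(freqs[peaks]), 1))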

As the questions began to mount, it still didn’t make sense to us, and that seemed like an excellent reason to dig 
deeper.

We also felt an obligation to investigate. Our own research had taught us that ultrasound can compromise the security 
of many types of sensors found widely in medical devices, autonomous vehicles, and the Internet of Things. For the 
last decade, two of us (Fu and Xu) have been collaborating on embedded security research, with the goal of 
discovering physics-based engineering principles and practices that will make automated computer systems secure by 
design. For example, Xu’s 2017 paper “DolphinAttack: Inaudible Voice Commands” describes how we used ultrasonic 
signals to inject inaudible voice commands into speech recognition systems such as Siri, Google Now, Samsung S Voice, 
Huawei HiVoice, Cortana, Alexa, and the navigation system of an Audi automobile.

The Cuban ultrasonic mystery was too close to our research to ignore.

One thing we knew going into this investigation is that acoustic interference can occur where you least expect it. 
Several years ago, Fu became annoyed by an ear-piercing sound coming from a lightbulb in his apartment. He took 
spectral measurements and noticed that the lightbulb tended to shriek when the air conditioner turned on. He 
eventually concluded that the compressor was pumping coolant through its pipes at the resonant frequency of the 
filament in the bulb. Normally, this wouldn’t be a problem. But in this case, the coolant pipes ran through the 
ceiling and mechanically coupled to the ceiling joist supporting the lightbulb. The superintendent opened up the 
ceiling and separated the joist from the pipe with a piece of duct tape, to dampen the unwanted coupling. The sound 
stopped.

We also knew that ultrasound isn’t considered harmful to humans—for the most part. Misused, an ultrasonic emitter 
that’s in direct contact with a person’s body can heat tissues and damage organs. And the U.S. Occupational Safety 
and Health Administration (OSHA) warns that audible subharmonics caused by intense airborne ultrasonic tones can be 
harmful. Thus, U.S. standards on ultrasonic emissions build in safety margins to account for those subharmonics. The 
Canadian government, meanwhile, has ruled that humans can be directly harmed by airborne ultrasound at sound 
pressures of 155 decibels or higher—which is louder than a jet taking off at 25 meters. That ruling also notes that 
“a number of ‘subjective’ effects have been reportedly caused by airborne ultrasound, including fatigue, headache, 
nausea, tinnitus and disturbance of neuromuscular coordination.”

Of course, even at 155 dB, ultrasonic tones remain inaudible. Unless they’re not—more on this in a bit.

To make the problem tractable, we began by assuming that the source of the audible sounds in Cuba was indeed 
ultrasonic. Reviewing the OSHA guidance, Fu theorized that the sound came from the audible subharmonics of inaudible 
ultrasound. In contrast to harmonics, which are produced at integer multiples of a sound’s fundamental frequency, 
subharmonics are produced at integer divisors (or submultiples) of the fundamental frequency, such as 1/2 or 1/3. For 
instance, the second subharmonic of an ultrasonic 20-kHz tone is a clearly audible 10 kHz. Subharmonics didn’t quite 
explain the AP video, though: In the video, the spectral plot indicates tones evenly spaced every 180 Hz, whereas 
subharmonics would have appeared at progressively smaller fractions of the original frequency. Such a plot would not 
have the constant 180-Hz spacing.
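
A quick back-of-the-envelope illustration of the difference, assuming a 20-kHz fundamental (the numbers here are illustrative, chosen only to show the shrinking gaps):

    # Subharmonics of a 20-kHz tone fall at f/2, f/3, f/4, ... -- unevenly spaced
    fundamental = 20_000.0  # Hz
    subharmonics = [fundamental / n for n in range(2, 7)]
    print("Subharmonics (Hz):", [round(f) for f in subharmonics])   # 10000, 6667, 5000, 4000, 3333
    print("Gaps (Hz):", [round(a - b) for a, b in zip(subharmonics, subharmonics[1:])])  # 3333, 1667, 1000, 667

    # By contrast, the AP plot shows peaks near 7 kHz at a constant ~180-Hz spacing
    sidebands = [7_000.0 + n * 180.0 for n in range(-3, 4)]
    print("Evenly spaced sidebands (Hz):", [round(f) for f in sidebands])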

Fu explained his theory to Chen Yan, a Ph.D. student in Xu’s lab. Yan wrote back: It’s not subharmonics—it’s 
intermodulation distortion.

Intermodulation distortion (IMD) is a bizarre effect. When multiple tones of different frequencies travel through 
air, IMD can produce several by-products at other frequencies. In particular, second-order IMD by-products will 
appear at the difference or the sum of the two tones’ frequencies. So if you start with a 25-kHz signal and a 32-kHz 
signal, the result could be a 7-kHz tone or a 57-kHz tone. These by-products can be significantly lower in frequency 
while maintaining much of the intensity of the original tones.
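
A standard textbook way to see where the sum and difference frequencies come from is to drive a mildly nonlinear medium, modeled here as y = x + a*x^2, with two tones; the quadratic term does the mixing (this derivation is generic, not taken from the technical report):

    x(t) = \cos(2\pi f_1 t) + \cos(2\pi f_2 t), \qquad y(t) = x(t) + \alpha\,x(t)^2

    \alpha\,x(t)^2 = \alpha\Bigl[1 + \tfrac{1}{2}\cos(2\pi\,2f_1 t) + \tfrac{1}{2}\cos(2\pi\,2f_2 t)
        + \cos\bigl(2\pi(f_2 - f_1)t\bigr) + \cos\bigl(2\pi(f_1 + f_2)t\bigr)\Bigr]

    f_1 = 25\,\mathrm{kHz},\; f_2 = 32\,\mathrm{kHz} \;\Rightarrow\; f_2 - f_1 = 7\,\mathrm{kHz}, \quad f_1 + f_2 = 57\,\mathrm{kHz}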

IMD is well known to radio engineers, who consider it undesirable for radio communication. The sounds don’t have to 
travel through air; any “nonlinear medium” will do. A medium is considered nonlinear if a change in the output signal 
is not proportional to the change in the input. Acoustic devices such as microphones and amplifiers can also exhibit 
nonlinearity. One way to test for it is to send two pure tones into an amplifier or microphone and then measure the 
output. If additional tones appear in the output, then you know the device is nonlinear.
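
A minimal sketch of that two-tone test in software, with a quadratic term standing in for a hypothetical nonlinear device (the 0.1 coefficient and the sample rate are arbitrary choices for illustration):

    # Model a nonlinear "device" as y = x + 0.1*x^2 and look for output tones
    # that were not present in the two-tone input.
    import numpy as np

    rate = 192_000                        # Hz; high enough to represent 57 kHz
    t = np.arange(0, 0.1, 1.0 / rate)     # 100 ms of signal
    f1, f2 = 25_000, 32_000               # the two pure input tones, in Hz
    x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

    y = x + 0.1 * x**2                    # quadratic nonlinearity

    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / rate)
    strong = freqs[spectrum > 0.02 * spectrum.max()]
    print("Tones in the output (Hz):", sorted(set(np.round(strong, -2))))
    # Besides 25 kHz and 32 kHz, expect new components near 7 kHz (f2 - f1),
    # 57 kHz (f1 + f2), the harmonics 50 kHz and 64 kHz, and a DC offset.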

Computer science researchers have explored the physics of IMD. In the DolphinAttack paper, we used ultrasonic signals 
to trick a smartphone’s voice-recognition assistant. Because of nonlinearity in the smartphone’s microphone, the 
ultrasound produced by-products at audible frequencies inside the circuitry of the microphone. Thus, the IMD signal 
remains inaudible to humans, but the smartphone hears voices. In an early 2017 paper, Nirupam Roy, Haitham Hassanieh, 
and Romit Roy Choudhury at the University of Illinois at Urbana-Champaign described their BackDoor system [PDF] for 
using ultrasound and IMD to jam spy microphones, watermark music played at live concerts, and otherwise create 
“shadow” sounds.

Some composers and musicians have also used IMD to create synthetic sounds, combining audible tones to create other 
subliminal, audible tones. For example, in their 1987 book The Musician’s Guide to Acoustics, Murray Campbell and 
Clive Greated note that the last movement of Jean Sibelius’s Symphony No. 1 in E minor contains tones that lead to a 
rumbling IMD. The human ear processes sound in a nonlinear fashion, and so it can be “tricked” into hearing tones 
that weren’t produced by the instruments and that aren’t in the sheet music; those subliminal tones are produced when 
the played tones combine nonlinearly in the inner ear.

Back to our quest: Knowing that intermodulation distortion between multiple ultrasonic signals can cause 
lower-frequency by-products, we next set about simulating the effect in the lab, aiming to replicate what we observed 
in the AP News video. We used two signals: a pure 25-kHz tone and a 32-kHz carrier tone that had its amplitude 
modulated by a 180-Hz tone. (Our technical report, “On Cuba, Diplomats, Ultrasound, and Intermodulation Distortion” 
[PDF], goes into more detail on the math of how we did this.) The result was clear: Strong tones appeared at 7 kHz 
with repeating ripples separated by 180 Hz.
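
A simplified version of that simulation fits in a few lines of Python, using a quadratic nonlinearity as a stand-in for air and the microphone (the modulation depth and nonlinearity coefficient below are illustrative, not the values used in the technical report):

    import numpy as np

    rate = 192_000
    t = np.arange(0, 1.0, 1.0 / rate)                  # one second of signal

    tone_25k = np.cos(2 * np.pi * 25_000 * t)          # pure ultrasonic tone
    carrier_32k = (1 + 0.8 * np.cos(2 * np.pi * 180 * t)) * np.cos(2 * np.pi * 32_000 * t)

    x = tone_25k + carrier_32k
    y = x + 0.2 * x**2                                 # nonlinear medium / microphone

    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / rate)

    # Look only at the audible band around 7 kHz
    band = (freqs > 6_000) & (freqs < 8_000)
    peaks = freqs[band][spectrum[band] > 0.1 * spectrum[band].max()]
    print("Audible-band peaks (Hz):", np.round(peaks, 1))
    # Expect 7000 Hz plus sidebands at 6820 and 7180 Hz -- ripples spaced 180 Hz apart.
    # A stronger or higher-order nonlinearity produces additional sidebands.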

We then followed up with live experiments. As in the simulation, we used two ultrasonic speakers to emit the signals: 
one emitted a 180-Hz sine wave amplitude modulated onto a 32-kHz carrier, and the other a single-tone 25-kHz sine 
wave. We used a smartphone to record the result. IMD caused by the air and the smartphone microphone created the 
telltale 7-kHz signal.

This video shows the experimental setup:
[Video: Chen Yan; available at the link above.]

If you look closely at the spectral plot displayed on the smartphone, you’ll notice some higher-order IMD 
by-products, at 4 kHz and beyond, as well as several other frequencies. Interestingly, although we could hear the 
7-kHz tones during the experiment, we couldn’t hear the 4-kHz tones recorded by the smartphone. We suspect that the 
4-kHz tones partly resulted from secondary IMD within the microphone itself. In other words, the microphone was 
hearing an acoustic illusion that we couldn’t hear.

For fun, we also experimented with using an ultrasonic carrier to eavesdrop on a room. In this kind of setup, a spy 
places a microphone to pick up speech and then uses the relatively low-frequency audio signal to modulate the 
amplitude of the carrier wave. The carrier wave then gets picked up by an ultrasonic-capable sensor located some 
distance away and demodulated to recover the original audio. In our experiments, we selected a song to stand in for 
the audio signal recorded by an eavesdropping microphone: Rick Astley’s 1980s hit “Never Gonna Give You Up.” We 
amplitude modulated the song on a 32-kHz ultrasonic carrier. When we introduced a 25-kHz sine wave to interfere with 
this covert ultrasonic channel, IMD in the air produced a 7-kHz audible tone with ripples associated with the tones 
of the song, which was then picked up by the recording device. The computer played the song after software 
demodulation.
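
A bare-bones sketch of that covert channel, using a 440-Hz test tone in place of the song and simple envelope detection for the software demodulation (the 32-kHz carrier matches the article; everything else is an illustrative choice):

    import numpy as np
    from scipy.signal import butter, filtfilt

    rate = 192_000
    t = np.arange(0, 2.0, 1.0 / rate)

    # Stand-in for the eavesdropped audio: a 440-Hz tone instead of the song
    audio = 0.5 * np.sin(2 * np.pi * 440 * t)

    # Transmit: amplitude modulation on an ultrasonic carrier
    carrier = np.cos(2 * np.pi * 32_000 * t)
    transmitted = (1 + audio) * carrier

    # Receive: envelope detection -- full-wave rectify, then low-pass at 5 kHz
    b, a = butter(4, 5_000, btype="low", fs=rate)
    envelope = filtfilt(b, a, np.abs(transmitted))
    recovered = envelope - envelope.mean()             # remove the DC offset

    # The recovered signal should correlate strongly with the original audio
    corr = np.corrcoef(audio, recovered)[0, 1]
    print(f"Correlation between original and demodulated audio: {corr:.3f}")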

One thing to note in the video is that the metallic sounds near 7 kHz are audible only at the point where the two 
signals cross. When the signals do not intersect, you can’t hear the 7-kHz tone, but the demodulator can still play 
the covert song. That finding is consistent with what some diplomats reported in Cuba: The sounds they heard tended 
to be confined to a part of the room. When they moved just a few steps away, the sound stopped.

So if the sources of the sound in Cuba were ultrasonic, what could they have been? There are many sources of 
ultrasound in the modern world. At Michigan, our offices are bathed in 25-kHz signals coming from ceiling-mounted 
ultrasonic room-occupancy sensors. We’ve removed the devices closest to our lab equipment, but just last month we 
discovered a new one. [To learn more about our travails with these sensors, see “How an Ultrasonic Sensor Nearly 
Derailed a Ph.D. Thesis.”] Another source is ultrasonic pest repellents against rodents and insects. (This blog post 
describes a family’s encounter with such a device in the Havana airport.) And some automobiles contain ultrasonic 
emitters.

While the equipment we used in our Cuban re-creation is relatively bulky, ultrasonic emitters can be quite tiny, no 
larger than a piece of Rolo candy. Online, we found a manufacturer in Russia that sells a fashionable leather clutch 
that conceals an ultrasonic emitter, presumably to jam recording devices at cocktail parties. We also found 
electronics stores that carry high-power ultrasonic jammers that cause microphones to malfunction. One advertised 
jammer emits 120-dB ultrasonic interference at a distance of 1 meter. That’s like standing next to a chainsaw. If a 
signal from that caliber jammer were to combine with a second ultrasonic source, audible by-products could result.

While the math leads us to believe that intermodulation distortion is a likely culprit in the Cuban case, we haven’t 
ruled out other hypotheses that may account for the discomfort that diplomats felt. For example, maybe the tones 
people heard didn’t cause their symptoms but were just another symptom, a clue to the real cause. Or maybe the sounds 
had some sort of nonauditory effect on people’s hearing and physiology, through bone conduction or some other known 
phenomenon. Microwave radiation is another theory. One positive outcome from all this would be if more computer 
scientists were to master embedded security, signal processing, and systems engineering.

Even if our hypothesis is correct, we may never learn the definitive story. The parties responsible for the 
ultrasonic emitters would have already figured out by now that their devices are to blame and would have removed or 
deactivated them. But whether our hypothesis is correct or not, one thing is clear: Ultrasonic emitters can produce 
audible by-products that could have unintentionally harmed diplomats. That is, bad engineering may be a more likely 
culprit than a sonic weapon.

About the Authors

Kevin Fu is a Fellow of the IEEE and an associate professor of computer science and engineering at the University of 
Michigan–Ann Arbor, where he leads the Security and Privacy Research Group. He’s also chief scientist of the 
health-care cybersecurity startup Virta Labs. Wenyuan Xu is professor and chair of the department of systems science 
and engineering at Zhejiang University. Xu’s Ubiquitous System Security Lab (USSLab) has twice been recognized by the 
Tesla Security Researcher Hall of Fame. Chen Yan is a Ph.D. student at Zhejiang University.

To Probe Further

The authors’ technical report “On Cuba, Diplomats, Ultrasound, and Intermodulation Distortion” [PDF] (Technical 
Report CSE-TR-001-18, University of Michigan, Computer Science & Engineering, 1 March 2018) provides additional 
details on their simulation and experiments to reverse engineer the Cuban embassy “sonic weapon.”

AP News’s Josh Lederman and Michael Weissenstein were the first to report the Cuban sound recording, in “Dangerous 
sound? What Americans heard in Cuba attacks,” 13 October 2017.

For more on how sounds can be synthesized using intermodulation distortion, see “Sound Synthesis and Auditory 
Distortion Products,” by Gary S. Kendall, Christopher Haworth, and Rodrigo F. Cádiz, in Computer Music Journal, 
38(4), MIT Press, Winter 2014.

A number of people have suggested that microwaves, rather than ultrasound, may have been at work in Cuba. See, for 
example, James C. Lin’s article “Strange Reports of Weaponized Sound in Cuba,” in IEEE Microwave Magazine, 
January/February 2018, pp. 18-19. A remaining question is whether microwaves could have produced the high-pitched 
sounds recorded by the smartphone in the AP News video.

-- 
---------------------------------------------------------------
Joly MacFie  218 565 9365 Skype:punkcast
---------------------------------------------------------------


