Information Security News mailing list archives

The Myth of Cyberterrorism


From: InfoSec News <isn () c4i org>
Date: Tue, 12 Nov 2002 00:59:13 -0600 (CST)

Forwarded from: William Knowles <wk () c4i org>

http://www.washingtonmonthly.com/features/2001/0211.green.html

By Joshua Green 
November 2002 

Again and again since September 11, President Bush, Vice President
Cheney, and senior administration officials have alerted the public
not only to the dangers of chemical, biological, and nuclear weapons
but also to the further menace of cyberterrorism. "Terrorists can sit
at one computer connected to one network and can create worldwide
havoc," warned Homeland Security Director Tom Ridge in a
representative observation last April. "[They] don't necessarily need
a bomb or explosives to cripple a sector of the economy, or shut down
a power grid."

Even before September 11, Bush was fervently depicting an America
imminently in danger of an attack by cyberterrorists, warning during
his presidential campaign that "American forces are overused and
underfunded precisely when they are confronted by a host of new
threats and challenges--the spread of weapons of mass destruction, the
rise of cyberterrorism, the proliferation of missile technology." In
other words, the country is confronted not just by the specter of
terrorism, but by a menacing new breed of it that is technologically
advanced, little understood, and difficult to defend against. Since
September 11, these concerns have only multiplied. A survey of 725
cities conducted by the National League of Cities for the anniversary
of the attacks shows that cyberterrorism ranks with biological and
chemical weapons atop officials' lists of fears.

Concern over cyberterrorism is particularly acute in Washington. As is
often the case with a new threat, an entire industry has arisen to
grapple with its ramifications--think tanks have launched new projects
and issued white papers, experts have testified to its dangers before
Congress, private companies have hastily deployed security consultants
and software designed to protect public and private targets, and the
media have trumpeted the threat with such front-page headlines as this
one, in The Washington Post last June: "Cyber-Attacks by Al Qaeda
Feared, Terrorists at Threshold of Using Internet as Tool of
Bloodshed, Experts Say."

The federal government has requested $4.5 billion for infrastructure
security next year; the FBI boasts more than 1,000 "cyber
investigators"; President Bush and Vice President Cheney keep the
issue before the public; and in response to September 11, Bush created
the office of "cybersecurity czar" in the White House, naming to this
position Richard Clarke, who has done more than anyone to raise
awareness, including warning that "if an attack comes today with
information warfare . . . it would be much, much worse than Pearl
Harbor."

It's no surprise, then, that cyberterrorism now ranks alongside other
weapons of mass destruction in the public consciousness. Americans
have had a latent fear of catastrophic computer attack ever since a
teenage Matthew Broderick hacked into NORAD's nuclear launch computer
and nearly launched World War III in the 1983 movie WarGames.  
Judging by official alarums and newspaper headlines, such scenarios
are all the more likely in today's wired world.

There's just one problem: There is no such thing as cyberterrorism--no
instance of anyone ever having been killed by a terrorist (or anyone
else) using a computer. Nor is there compelling evidence that al Qaeda
or any other terrorist organization has resorted to computers for any
sort of serious destructive activity. What's more, outside of a Tom
Clancy novel, computer security specialists believe it is virtually
impossible to use the Internet to inflict death on a large scale, and
many scoff at the notion that terrorists would bother trying. "I don't
lie awake at night worrying about cyberattacks ruining my life," says
Dorothy Denning, a computer science professor at Georgetown University
and one of the country's foremost cybersecurity experts. "Not only
does [cyberterrorism] not rank alongside chemical, biological, or
nuclear weapons, but it is not anywhere near as serious as other
potential physical threats like car bombs or suicide bombers."

Which is not to say that cybersecurity isn't a serious problem--it's
just not one that involves terrorists. Interviews with terrorism and
computer security experts, and current and former government and
military officials, yielded near unanimous agreement that the real
danger is from the criminals and other hackers who did $15 billion in
damage to the global economy last year using viruses, worms, and other
readily available tools. That figure is sure to balloon if more isn't
done to protect vulnerable computer systems, the vast majority of
which are in the private sector. Yet when it comes to imposing the
tough measures on business necessary to protect against the real
cyberthreats, the Bush administration has balked.


Crushing BlackBerrys 

When ordinary people imagine cyberterrorism, they tend to think along
Hollywood plot lines, doomsday scenarios in which terrorists hijack
nuclear weapons, airliners, or military computers from halfway around
the world. Given the colorful history of federal
boondoggles--billion-dollar weapons systems that misfire, $600 toilet
seats--that's an understandable concern. But, with few exceptions,
it's not one that applies to preparedness for a cyberattack. "The
government is miles ahead of the private sector when it comes to
cybersecurity," says Michael Cheek, director of intelligence for
iDefense, a Virginia-based computer security company with government
and private-sector clients. "Particularly the most sensitive military
systems."

Serious effort and plain good fortune have combined to bring this
about. Take nuclear weapons. The biggest fallacy about their
vulnerability, promoted in action thrillers like WarGames, is that
they're designed for remote operation. "[The movie] is premised on the
assumption that there's a modem bank hanging on the side of the
computer that controls the missiles," says Martin Libicki, a defense
analyst at the RAND Corporation. "I assure you, there isn't." Rather,
nuclear weapons and other sensitive military systems enjoy the most
basic form of Internet security: they're "air-gapped," meaning that
they're not physically connected to the Internet and are therefore
inaccessible to outside hackers. (Nuclear weapons also contain
"permissive action links," mechanisms to prevent weapons from being
armed without inputting codes carried by the president.) A retired
military official was somewhat indignant at the mere suggestion: "As a
general principle, we've been looking at this thing for 20 years. What
cave have you been living in if you haven't considered this [threat]?"

When it comes to cyberthreats, the Defense Department has been
particularly vigilant to protect key systems by isolating them from
the Net and even from the Pentagon's internal network. All new
software must be submitted to the National Security Agency for
security testing. "Terrorists could not gain control of our
spacecraft, nuclear weapons, or any other type of high-consequence
asset," says Air Force Chief Information Officer John Gilligan. For
more than a year, Pentagon CIO John Stenbit has enforced a moratorium
on new wireless networks, which are often easy to hack into, as well
as common wireless devices such as PDAs, BlackBerrys, and even
wireless or infrared copiers and faxes.

The September 11 hijackings led to an outcry that airliners are
particularly susceptible to cyberterrorism. Earlier this year, for
instance, Sen. Charles Schumer (D-N.Y.) described "the absolute havoc
and devastation that would result if cyberterrorists suddenly shut
down our air traffic control system, with thousands of planes in
mid-flight." In fact, cybersecurity experts give some of their highest
marks to the FAA, which reasonably separates its administrative and
air traffic control systems and strictly air-gaps the latter. And
there's a reason the 9/11 hijackers used box-cutters instead of
keyboards: It's impossible to hijack a plane remotely, which
eliminates the possibility of a high-tech 9/11 scenario in which
planes are used as weapons.

Another source of concern is terrorist infiltration of our
intelligence agencies. But here, too, the risk is slim. The CIA's
classified computers are also air-gapped, as is the FBI's entire
computer system. "They've been paranoid about this forever," says
Libicki, adding that paranoia is a sound governing principle when it
comes to cybersecurity. Such concerns are manifesting themselves in
broader policy terms as well. One notable characteristic of last
year's Quadrennial Defense Review was how strongly it focused on
protecting information systems.

But certain tics in the way government agencies procure technology
have also--entirely by accident--helped to keep them largely free of
hackers. For years, agencies eschewed off-the-shelf products and
insisted instead on developing proprietary systems, unique to their
branch of government--a particularly savvy form of bureaucratic
self-preservation. When, say, the Department of Agriculture succeeded
in convincing Congress that it needed a specially designed system,
both the agency and the contractor benefited. The software company was
assured the agency's long-term business, because the agency became
dependent on its product; in turn, bureaucrats developed an expertise with the software
that made them difficult to replace. This, of course, fostered
colossal inefficiencies--agencies often couldn't communicate with each
other, minor companies developed fiefdoms in certain agencies, and if
a purveyor went bankrupt, the agency was left with no one to manage
its technology. But it did provide a peculiar sort of protection:  
Outside a select few, no one understood these specific systems well
enough to violate them. So in a sense, the famous inability of
agencies like the FBI and INS to share information because of
incompatible computer systems has yielded the inadvertent benefit of
shielding them from attack.


Ankle Biters

That leaves the less-protected secondary targets--power grids, oil
pipelines, dams, and water systems that don't present opportunities as
nightmarish as do nuclear weapons, but nonetheless seem capable, in
the wrong hands, of causing their own mass destruction. Because most
of these systems are in the private sector and are not yet regarded as
national security loopholes, they tend to be less secure than
government and military systems. In addition, companies increasingly
use the Internet to manage such processes as oil-pipeline flow and
water levels in dams by means of "supervisory control and data
acquisition" systems, or SCADA, which confers remote access. Most
experts see possible vulnerability here, and though terrorists have
never attempted to exploit it, media accounts often sensationalize the
likelihood that they will.
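
What "remote access" means in practice varies from one installation to
the next, and the article doesn't spell it out. As a minimal
sketch--with an entirely hypothetical host, port, and register protocol
invented here for illustration, not drawn from any real SCADA product--
polling a remotely reachable controller can amount to little more than
this:

    import socket
    import struct

    # Hypothetical sketch only: the host, port, and one-byte "read register"
    # opcode are invented; no real SCADA product or protocol is described.
    PLC_HOST = "pump-station.example.net"
    PLC_PORT = 5020
    OPCODE_READ = 0x01

    def read_register(register_id):
        """Poll one imaginary control register and return its value."""
        with socket.create_connection((PLC_HOST, PLC_PORT), timeout=5) as sock:
            # Request: 1-byte opcode plus a 2-byte register id, big-endian.
            sock.sendall(struct.pack(">BH", OPCODE_READ, register_id))
            # Response: a single 2-byte unsigned value (say, a flow rate).
            (value,) = struct.unpack(">H", sock.recv(2))
            return value

    if __name__ == "__main__":
        print("register 7 =", read_register(7))

The vulnerability, in other words, lies less in exotic attack tools
than in leaving such endpoints reachable from the public Internet in
the first place--and, as the examples that follow suggest, in knowing
which register matters, which takes insider knowledge.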

To illustrate the supposed ease with which our enemies could subvert a
dam, The Washington Post's June story on al Qaeda cyberterrorism
related an anecdote about a 12-year-old who hacked into the SCADA
system at Arizona's Theodore Roosevelt Dam in 1998, and was, the
article intimated, within mere keystrokes of unleashing millions of
gallons of water upon helpless downstream communities. But a
subsequent investigation by the tech-news site CNet.com revealed the
tale to be largely apocryphal--the incident occurred in 1994, the
hacker was 27, and, most importantly, investigators concluded that he
couldn't have gained control of the dam and that no lives or property
were ever at risk.

Most hackers break in simply for sport. To the extent that these hacks
occur, they're mainly Web site defacements, which are a nuisance, but
leave the intruder no closer to exploiting the system in any deadly
way. Security experts dismiss such hackers as "ankle biters" and roll
their eyes at prognostications of doom.

Of course, it's conceivable that a computer-literate terrorist truly
intent on wreaking havoc could hack into computers at a dam or power
company. But once inside, it would be far more difficult for him to
cause significant damage than most people realize. "It's not the
difficulty of doing it," says RAND's Libicki. "It's the difficulty of
doing it and having any real consequence." "No one explains precisely
the how, whys, and wherefores of these apocalyptic scenarios," says
George Smith, the editor of Crypt Newsletter, which covers computer
security issues. "You always just get the assumption that chemical
plants can be made to explode, that the water supply can be
polluted--things that are even hard to do physically are suddenly
assumed to be elementary because of the prominence of the Internet."

Few besides a company's own employees possess the specific technical
know-how required to run a specialized SCADA system. The most commonly
cited example of SCADA exploitation bears this out. Two years ago, an
Australian man used an Internet connection to release a million
gallons of raw sewage along Queensland's Sunshine Coast after being
turned down for a government job. When police arrested him, they
discovered that he'd worked for the company that designed the sewage
treatment plant's control software. This is true of most serious
cybersecurity breaches--they tend to come from insiders. It was Robert
Hanssen's familiarity with the FBI's computer system that allowed him
to exploit it despite its security. In both cases, the perpetrators
weren't terrorists but rogue employees with specialized knowledge
difficult, if not impossible, for outsiders to acquire--a security
concern, but not one attributable to cyberterrorism.

Terrorists might, in theory, try to recruit insiders. But even if they
succeeded, the degree of damage they could cause would still be
limited. Most worst-case scenarios (particularly those put forth by
government) presuppose that no human beings are keeping watch to
intervene if something goes wrong. But especially in the case of
electrical power grids, oil and gas utilities, and communications
companies, this is simply untrue. Such systems get hit all the time by
hurricanes, floods, or tornadoes, and company employees are well
rehearsed in handling the fallout. This is equally true when the
trouble stems from human action. Two years ago in California, energy
companies like Enron and El Paso Corp. conspired to cause power
shortages that led to brownouts and blackouts--the same effects
cyberterrorists would wreak. As Smith points out, "There were no
newspaper reports of people dying as a result of the blackouts. No one
lost their mind." The state suffered only minor (if demoralizing)  
inconvenience.

But perhaps the best indicator of what is realistic came last July
when the U.S. Naval War College contracted with a research group to
simulate a massive attack on the nation's information infrastructure.  
Government hackers and security analysts gathered in Newport, R.I.,
for a war game dubbed "Digital Pearl Harbor." The result? The hackers
failed to crash the Internet, though they did cause serious sporadic
damage. But, according to a CNet.com report, officials concluded that
terrorists hoping to stage such an attack "would require a syndicate
with significant resources, including $200 million, country-level
intelligence and five years of preparation time."


Hack Attack

Despite all the media alarm about terrorists poised on the verge of
cyberattack, intelligence suggests that they're doing no more than
emailing and surfing for potential targets. When U.S. troops recovered
al Qaeda laptops in Afghanistan, officials were surprised to find its
members more technologically adept than previously believed. They
discovered structural and engineering software, electronic models of a
dam, and information on computerized water systems, nuclear power
plants, and U.S. and European stadiums. But nothing suggested they
were planning cyberattacks, only that they were using the Internet to
communicate and coordinate physical attacks. "There doesn't seem to be
any evidence that the people we know as terrorists like to do
cyberterrorism," says Libicki. Indeed, in a July report to the Senate
Governmental Affairs Committee detailing the threats detected to
critical infrastructure, the General Accounting Office noted "to date
none of the traditional terrorist groups such as al Qaeda have used
the Internet to launch a known assault on the U.S.'s infrastructure."  
It is much easier, and almost certainly much deadlier, to strike the
old-fashioned way.

Government computers have been targeted by politically minded hackers,
but these attacks are hardly life threatening. They're typified by
last October's penetration of a Defense Department Web site dedicated
to "Operation Enduring Freedom" and, somewhat incongruously, a Web
server operated by the National Oceanic and Atmospheric Administration.  
The organization responsible was called the "al Qaeda Alliance Online"  
and was comprised of groups with names like GForce Pakistan and the
Pakistani Hackerz Club--names that connote a certain adolescent
worship of hip-hop--a clue to the participants' relative lack of
menace. None turned out to have actual terrorist ties.

In both cases, the attackers replaced the government sites' home pages
with photos and anti-American text--but that's all they did. Robbed of
this context, as is usually the case with reports of politically
motivated cyberattacks, such manifestations are often presumed to be
much more serious terrorist threats than is warranted. "When somebody
defaces a Web site, it's roughly equivalent to spray painting
something rude on the outside of a building," says James Lewis,
director of technology policy at the Center for Strategic and
International Studies. "It's really just electronic graffiti."


The Gloom Boom

Yet Washington hypes cyberterrorism incessantly. "Cyberterrorism and
cyberattacks are sexy right now. It's novel, original, it captures
people's imagination," says Georgetown's Denning. Indeed, a peculiar
sort of one-upmanship has developed when describing the severity of
the threat. The most popular term, "electronic Pearl Harbor," was
coined in 1991 by an alarmist tech writer named Winn Schwartau to hype
a novel. For a while, in the mid-1990s, "electronic Chernobyl" was in
vogue. Earlier this year, Sen. Charles Schumer (D-N.Y.) warned of a
looming "digital Armageddon." And the Center for Strategic and
International Studies, a Washington think tank, has christened its own
term, "digital Waterloo."

Why all this brooding over so relatively minor a threat? Ignorance is
one reason. Cyberterrorism merges two spheres--terrorism and
technology--that most lawmakers and senior administration officials
don't fully understand and therefore tend to fear, making them
likelier to accede to any measure, if only out of self-preservation.  
Just as tellingly, many are eager to exploit this ignorance. Numerous
technology companies, still reeling from the collapse of the tech
bubble, have recast themselves as innovators crucial to national
security and boosted their Washington presence in an effort to attract
federal dollars. As Ohio State University law professor Peter Swire
explained to Mother Jones, "Many companies that rode the dot-com boom
need to find big new sources of income. One is direct sales to the
federal government; another is federal mandates. If we have a big
federal push for new security spending, that could prop up the sagging
market."

But lately, a third motive has emerged: Stoking fears of
cyberterrorism helps maintain the level of public anxiety about
terrorism generally, which in turn makes it easier for the
administration to pass its agenda.


Profit of Doom

At the center of all this hype is Richard Clarke, special adviser to
the president for cyberspace security, a veteran of four
administrations, and terrorism czar to Bill Clinton. Even though he
was a senior Clinton official, Clarke's legendary bureaucratic skills
saw him through the transition; and when replaced by Gen. Wayne
Downing after September 11, Clarke created for himself the position of
cybersecurity czar and continued heralding the threat of cyberattack.  
Understanding that in Washington attention leads to resources and
power, Clarke quickly raised the issue's profile. "Dick has an ability
to scare the bejesus out of everybody and to make the bureaucracy
jump," says a former colleague. The Bush administration has requested
a 64 percent increase in cybersecurity funds for next year.

Last month, I paid Clarke a visit in his office a few blocks west of
the White House to talk about the threat and discovered that even he
is beginning to wilt under the false pretense of cyberterrorism. As I
was led back to meet him, his assistant made an odd request: "Mr.  
Clarke doesn't like to talk about the source of the threat, he'd
rather focus on the vulnerability." And indeed, the man who figured
most prominently in hyping the issue seemed particularly ill at ease
discussing it.

Clarke is in the curious bind of an expert on terrorism charged to
protect the nation against a form of the disease that has yet to
appear. But he is smart enough to understand that one very real
cybersecurity threat is unfolding: the damage, largely economic, being
done by hackers and criminals. Last year, 52,000 cyberattacks were
reported, up from 21,000 the year before. Yet Clarke's greatest
leverage is misperception about the true source of this threat. In his
careful way, he tried to guide our conversation away from terrorism
and toward cybersecurity.

"To date,"he readily conceded, "we've never seen any of the officially
designated terrorist groups engage in a cyberattack against us." But
he stressed that little noticed in the aftermath of September 11 was a
large-scale cyberattack seven days later--the Nimda virus--that proved
extremely costly to private industry. "Nimda hit a lot of businesses
that thought they had done a good job securing themselves," Clarke
explained. "And a lot of CEOs got really pissed because they thought
they had spent a lot of time and money doing cybersecurity for the
company and--bang!--they got hammered, knocked offline, their records
got destroyed, and it cost millions of dollars per company." The $15
billion in damage caused by cyberattacks last year is derived mostly
from worms, viruses, denial-of-service attacks, and theft, all of
which capitalized on the generally lax cybersecurity in the
private-sector businesses that comprise about 85 percent of the
Internet. Many vulnerabilities are imported through the products of
private software suppliers such as Microsoft. There is no regulatory
mechanism to ensure that those products meet
security standards; and as Clarke notes, "there's no legal liability
if you are the software manufacturer and sell somebody something that
doesn't work."

He also pointed out that a typical company devotes one-quarter of 1
percent of its information technology budget to cybersecurity,
"slightly less than they spend on coffee." By contrast, the Bush
administration's FY 2003 budget would spend 8 percent, or 32 times as
large a proportion. Yet even this considerable outlay doesn't
guarantee that the government's systems are secure. The same poorly
written and configured software that plagues private industry also
hampers government computers--the federal government is, after all,
Microsoft's largest customer. John Gilligan, the Air Force CIO and one
of the fiercest advocates of stronger safety standards in government,
says that 80 percent of successful penetrations of federal computer
systems can be attributed to software full of bugs, trapdoors, and
"Easter eggs"--programming errors and quirks inserted into the code
(see box) that could leave software vulnerable to hackers. What's
more, as federal agencies move away from proprietary systems toward
universal software, this becomes a greater problem not just in terms
of security, but also of cost. "The assessment I make is that we're
fast approaching the point at which we're spending more money to find,
patch, and correct vulnerabilities than we paid for the software,"  
says Gilligan.
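
Clarke's coffee comparison above is easy to check with
back-of-the-envelope arithmetic, taking "one-quarter of 1 percent" and
the 8 percent federal figure at face value:

    # Back-of-the-envelope check of the spending proportions cited above.
    typical_company_share = 0.25   # percent of IT budget spent on cybersecurity
    federal_fy2003_share = 8.0     # percent, per the FY 2003 budget request

    print(federal_fy2003_share / typical_company_share)   # 32.0, i.e. 32 times the share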


Bad for Business

The danger of hyping a threat like cyberterrorism is that once the
exaggeration becomes clear, the public will grow cynical toward
warnings about real threats. The Chicken Little approach might be
excusable were the Bush administration hyping cyberterrorism in order
to build political momentum for dealing with the true problem posed by
hackers and shoddy software. There is a precedent for this sort of
thing. In the midst of all the anxiety about the Y2K bug, the federal
government and the SEC came up with a novel way to ensure that private
companies were ready: They required businesses to disclose their
preparations to shareholders, setting goals and letting market forces
do the rest.

There were high hopes, then, for the Bush administration's National
Strategy to Secure Cyberspace--the culmination of a year's effort to
address the country's post-9/11 cybersecurity problems. Clarke's team
circulated early drafts that contained what most experts considered to
be solid measures for shoring up security in government, business, and
home computers. But the business community got word that the plan
contained tough (read: potentially costly) prescriptions, and
petitioned the White House, which gutted them. When a draft of the
plan was rolled out in mid-September, Bill Conner, president of the
computer security firm Entrust, told The Washington Post, "It looks as
though a Ph.D. wrote the government items, but it reads like someone a
year out of grade school wrote the rest of the plan."

It's hard to imagine a worse outcome for all involved, even private
industry. By knuckling under to the business community's
anti-regulatory impulses, Bush produced a weak plan that ultimately
allows the problem of cybersecurity to persist. It proposes no
regulations, no legislation, and stops well short of even the Y2K
approach, prompting most security experts to dismiss it out of hand.  
What it does do instead is continue the stream of officially
sanctioned scaremongering about cyberattack, much to the delight of
software companies. IT security remains one of the few bright spots in
the depressed tech market and thus that important sector of the market
is perfectly satisfied with the status quo. But as the Nimda virus
proved, even companies that pay for security software (and oppose
government standards) don't realize just how poorly it protects them.  
So in effect, the Bush administration has created the conditions for
what amounts to war profiteering--frightening businesses into
investing in security, but refusing to force the changes necessary to
make software safe and effective.

The way the Bush White House has exaggerated the likelihood of
cyberterrorism is familiar to anyone who's followed its style of
government. This is an administration that will frequently proclaim a
threat (the Saddam/al Qaeda connection, for instance) in order to
forward its broader agenda, only to move on nonchalantly when evidence
proves elusive or nonexistent. But in this case, by moving on, Bush
leaves unaddressed something that really is a problem--just not one
that suits the administration's interests. Forced to choose between
increasing security and pleasing his business base, the president has
chosen the latter. Hyping a threat that doesn't exist while shrinking
from one that does is no way to protect the country.


 
Spy Hunter 

There is, for instance, an entire videogame secretly embedded in 
Microsoft Excel 2000. "Spy Hunter," a shoot-'em-up driving game, in 
which race cars zoom down a highway, maneuvering around oil slicks and 
other obstacles, can be opened with a few keystrokes. 

To find the game: Under File menu, click "Save as Web Page," then 
"Selection: Sheet" and "Publish." Next, choose "Add Interactivity" and 
save to an .htm page on your drive. Load the .htm page with Internet 
Explorer. You should see Excel in the center of the page. Scroll down 
to row 2000, and tab across so that WC is the active column. Hold down 
"Shift" and press the space bar to highlight the WC cell. Then 
simultaneously depress Shift + Ctrl + Alt and click the four-color 
"Office" logo in the upper-left-hand corner. If you have the original 
Excel 2000, the game will appear. Use the arrow keys to drive, 
spacebar to fire, "O" to drop oil slicks, and "H" for your headlights 
when it gets dark. 

 -- Joshua Green 
 
Joshua Green is an editor of The Washington Monthly. 
 

 
*==============================================================*
"Communications without intelligence is noise;  Intelligence
without communications is irrelevant." Gen Alfred. M. Gray, USMC
================================================================
C4I.org - Computer Security, & Intelligence - http://www.c4i.org
*==============================================================*
 


-
ISN is currently hosted by Attrition.org

To unsubscribe email majordomo () attrition org with 'unsubscribe isn'
in the BODY of the mail.

