September 30, 2001
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
Copyright (c) 2001 by Counterpane Internet Security, Inc.
This is a special issue of Crypto-Gram, devoted to the September
11 terrorist attacks and their aftermath.
Please distribute this issue widely.
In this issue:
- [url=#1]The Attacks[/url]
- [url=#2]Airline Security Regulations[/url]
- [url=#3]Biometrics in Airports[/url]
- [url=#4]Diagnosing Intelligence Failures[/url]
- [url=#5]Regulating Cryptography[/url]
- [url=#6]Terrorists and Steganography[/url]
- [url=#7]News[/url]
- [url=#8]Protecting Privacy and Liberty[/url]
- [url=#9]How to Help[/url]
The Attacks Watching the television on September 11, my primary reaction was The attacks were amazing in their diabolicalness and audacity: I was impressed when al-Qaeda simultaneously bombed two American The attacks were amazing in their complexity. Estimates are that The attacks rewrote the hijacking rule book. Responses to hijackings They rewrote the terrorism book, too. Al-Qaeda invented a new It was also a new type of attack. One of the most difficult things Finally, the attacks were amazing in their success. They weren’t Rarely do you see an attack that changes the world’s conception Airline Security Regulations Computer security experts have a lot of expertise that can be All the warning signs are there: new and unproven security measures, Parked cars now must be 300 feet from airport gates. Why? What The rule limiting concourse access to ticketed passengers is another Increased inspections — of luggage, airplanes, airports — seem Positive bag matching — ensuring that a piece of luggage does The worst security measure of them all is the photo ID requirement. The real point of photo ID requirements is to prevent people from Airline security measures are primarily designed to give the appearance This is not to say that all airport security is useless, and that One basic snake-oil warning sign is the use of self-invented security Airline security: FAA on new security rules: A report on the rules’ effectiveness: El Al’s security measures: More thoughts on this topic: Two secret FAA documents on photo ID requirement, in text and Passenger profiling: A CATO Institute report: “The Cost of Antiterrorist Rhetoric,” I don’t know if this is a good idea, but at least someone is thinking Biometrics in Airports You have to admit, it sounds like a good idea. Put cameras throughout Reality is a lot more complicated; it always is. 
Biometrics is I think it would be a great addition to airport security: identifying In the first case (employee identification), the biometric system Setting up the system is different for the two applications. In Getting reference biometrics is different, too. In the first case, But even if all these technical problems were magically solved, Suppose this magically effective face-recognition software is No. The software will generate 1000 false alarms for every one I say mostly useless, because it would have some positive effect. Phil Agre on face-recognition biometrics: My original essay on biometrics: Face recognition useless in airports: A company that is pushing this idea: A version of this article was published here: Diagnosing Intelligence It’s clear that U.S. intelligence failed to provide adequate warning There’s a world of difference between intelligence data and intelligence Armed with the clarity of hindsight, it’s easy to look at all It’s a lot harder to do before the fact. Most data is irrelevant, So much data is collected — the NSA sucks up an almost unimaginable We also don’t have any context to judge the intelligence effort. And it was a failure. Over the past couple of decades, the U.S. But whatever the reason, we failed to prevent this terrorist attack. Intelligence failure is an overreliance on eavesdropping and not Another view: Too much electronic eavesdropping only makes things harder: Israel alerted the U.S. about attacks: Regulating Cryptography In the wake of the devastating attacks on New York’s World Trade I think this is a bad move. It will do little to thwart terrorist One, you can’t limit the spread of cryptography. Cryptography Two, any controls on the spread of cryptography hurt more than Three, key escrow doesn’t work. 
Short refresher: this is the notion Key escrow also makes it harder for the good guys to secure the Stockpiling keys in one place is a huge risk just waiting for Years ago, a group of colleagues and I wrote a paper outlining The events of September 11 have convinced a lot of people that My old “Risks of Key Recovery” paper: Articles on this topic: Al-Qaeda did not use encryption to plan these attacks: Poll indicates that 72 percent of Americans believe that anti-encryption Terrorists and Steganography Guess what? Al-Qaeda may use steganography. According to nameless I’ve written about steganography in the past, and I don’t want It doesn’t surprise me that terrorists are using this trick. The If you read the FBI affidavit against Robert Hanssen, you learn That’s a dead drop. It has many advantages over a face-to-face Using steganography to embed a message in a pornographic image To make it work in practice, the terrorists would need to set The effect is that the sender can transmit a message without ever So, what’s a counter-espionage agency to do? There are the standard Why can’t businesses use this? The primary reason is that legitimate Steganography is good way for terrorist cells to communicate, News articles: My old essay on steganography: Study claims no steganography on eBay: Detecting steganography on the Internet: A version of this essay appeared on ZDnet: News I am not opposed to using force against the terrorists. I am not Written before September 11: A former CIA operative explains why Phil Agre’s comments on these issues: Why technology can’t save us: Hactivism exacts revenge for terrorist attacks: Hackers face life imprisonment under anti-terrorism act: Companies fear cyberterrorism: Upgrading government computers to fight terrorism: Risks of cyberterrorism attacks against our electronic infrastructure: Now the complaint is that Bin Laden is NOT using high-tech communications: Larry Ellison is willing to give away the software to implement
amazement.
to hijack fuel-laden commercial airliners and fly them into buildings, killing
thousands of innocent civilians. We’ll probably never know if the attackers
realized that the heat from the jet fuel would melt the steel supports and collapse
the World Trade Center. It seems probable that they placed advantageous trades
on the world’s stock markets just before the attack. No one planned for an attack
like this. We like to think that human beings don’t make plans like this.
embassies in Africa. I was more impressed when they blew a 40-foot hole in an
American warship. This attack makes those look like minor operations.
the plan required about 50 people, at least 19 of them willing to die. It required
training. It required logistical support. It required coordination. The sheer
scope of the attack seems beyond the capability of a terrorist organization.
are built around this premise: get the plane on the ground so negotiations can
begin. That’s obsolete now.
type of attacker. Historically, suicide bombers are young, single, fanatical,
and have nothing to lose. These people were older and more experienced. They
had marketable job skills. They lived in the U.S.: watched television, ate fast
food, drank in bars. One left a wife and four children.
about a terrorist operation is getting away. This attack neatly solved that
problem. It also solved the technological problem. The United States spends
billions of dollars on remote-controlled precision-guided munitions; al-Qaeda
just finds morons willing to fly planes into skyscrapers.
perfect. We know that 100% of the attempted hijackings were successful, and
75% of the hijacked planes successfully hit their targets. We don’t know how
many planned hijackings were aborted for one reason or another. What’s most
amazing is that the plan wasn’t leaked. No one successfully defected. No one
slipped up and gave the plan away. Al-Qaeda had assets in the U.S. for months,
and managed to keep the plan secret. Often law enforcement has been lucky here;
in this case we weren’t.
of attack, as these terrorist attacks changed the world’s conception of what
a terrorist attack can do. Nothing they did was novel, yet the attack was completely
new. And our conception of defense must change as well.
applied to the real world. First and foremost, we have well-developed senses
of what security looks like. We can tell the difference between real security
and snake oil. And the new airport security rules, put in place after September
11, look and smell a whole lot like snake oil.
no real threat analysis, unsubstantiated security claims. The ban on cutting
instruments is a perfect example. It’s a knee-jerk reaction: the terrorists
used small knives and box cutters, so we must ban them. And nail clippers, nail
files, cigarette lighters, scissors (even small ones), tweezers, etc. But why
isn’t anyone asking the real questions: what is the threat, and how does turning
an airplane into a kindergarten classroom reduce the threat? If the threat is
hijacking, then the countermeasure doesn’t protect against all the myriad of
ways people can subdue the pilot and crew. Hasn’t anyone heard of karate? Or
broken bottles? Think about hiding small blades inside luggage. Or composite
knives that don’t show up on metal detectors.
security problem does this solve? Why doesn’t the same problem imply that passenger
drop-off and pick-up should also be that far away? Curbside check-in has been
eliminated. What’s the threat that this security measure has solved? Why, if
the new threat is hijacking, are we suddenly worried about bombs?
one that confuses me. What exactly is the threat here? Hijackers have to be
on the planes they’re trying to hijack to carry out their attack, so they have
to have tickets. And anyone can call Priceline.com and “name their own price”
for concourse access.
like a good idea, although it’s far from perfect. The biggest problem here is
that the inspectors are poorly paid and, for the most part, poorly educated
and trained. Other problems include the myriad ways to bypass the checkpoints
— numerous studies have found all sorts of violations — and the impossibility
of effectively inspecting everybody while maintaining the required throughput.
Unidentified armed guards on select flights is another mildly effective idea:
it’s a small deterrent, because you never know if one is on the flight you want
to hijack.
not get loaded on the plane unless its owner boards the plane — is actually
a good security measure, but assumes that bombers have self-preservation as
a guiding force. It is completely useless against suicide bombers.
This solves no security problem I can think of. It doesn’t even identify people;
any high school student can tell you how to get a fake ID. The requirement for
this invasive and ineffective security measure is secret; the FAA won’t send
you the written regulations if you ask. Airlines are actually more stringent
about this than the FAA requires, because the “security” measure solves a business
problem for them.
reselling tickets. Nonrefundable tickets used to be regularly advertised in
the newspaper classifieds. Ads would read something like “Round trip, Boston
to Chicago, 11/22 – 11/30, female, $50.” Since the airlines didn’t check ID
but could notice gender, any female could buy the ticket and fly the route.
Now this doesn’t work. The airlines love this; they solved a problem of theirs,
and got to blame the solution on FAA security requirements.
of good security rather than the actuality. This makes sense, once you realize
that the airlines’ goal isn’t so much to make the planes hard to hijack, as
to make the passengers willing to fly. Of course airlines would prefer it if
all their flights were perfectly safe, but actual hijackings and bombings are
rare events and they know it.
we’d be better off doing nothing. All security measures have benefits, and all
have costs: money, inconvenience, etc. I would like to see some rational analysis
of the costs and benefits, so we can get the most security for the resources
we have.
measures, instead of expert-analyzed and time-tested ones. The closest the airlines
have to experienced and expert analysis is El Al. Since 1948 they have been
operating in and out of the most heavily terroristic areas of the planet, with
phenomenal success. They implement some pretty heavy security measures. One
thing they do is have reinforced, locked doors between their airplanes’ cockpit
and the passenger section. (Notice that this security measure is 1) expensive,
and 2) not immediately perceptible to the passenger.) Another thing they do
is place all cargo in decompression chambers before takeoff, to trigger bombs
set to sense altitude. (Again, this is 1) expensive, and 2) imperceptible, so
unattractive to American airlines.) Some of the things El Al does are so intrusive
as to be unconstitutional in the U.S., but they let you take your pocketknife
on board with you.
< [url=http://www.time.com/time/covers/1101010924/bsecurity.html]http://www.time.com/time/covers/1101010924/bsecurity.html[/url]>
< [url=http://www.accessatlanta.com/ajc/terrorism/atlanta/0925gun.html]http://www.accessatlanta.com/ajc/terrorism/atlanta/0925gun.html[/url]>
< [url=http://www.faa.gov/apa/faq/pr_faq.htm]http://www.faa.gov/apa/faq/pr_faq.htm[/url]>
< [url=http://www.boston.com/dailyglobe2/266/nation/Passengers_say_banned_items_have_eluded_airport_monitors+.shtml]http://www.boston.com/dailyglobe2/266/nation/Passengers_say_banned_items_have_eluded_airport_monitors+.shtml[/url]>
< [url=http://news.excite.com/news/ap/010912/18/israel-safe-aviation]http://news.excite.com/news/ap/010912/18/israel-safe-aviation[/url]>
< [url=http://news.excite.com/news/r/010914/07/international-attack-israel-elal-dc]http://news.excite.com/news/r/010914/07/international-attack-israel-elal-dc[/url]>
< [url=http://slate.msn.com/HeyWait/01-09-17/HeyWait.asp]http://slate.msn.com/HeyWait/01-09-17/HeyWait.asp[/url]>
< [url=http://www.tnr.com/100101/easterbrook100101.html]http://www.tnr.com/100101/easterbrook100101.html[/url]>
< [url=http://www.tisc2001.com/newsletters/317.html]http://www.tisc2001.com/newsletters/317.html[/url]>
GIF:
< [url=http://www.cs.berkeley.edu/~daw/faa/guid/guid.txt]http://www.cs.berkeley.edu/~daw/faa/guid/guid.txt[/url]>
< [url=http://www.cs.berkeley.edu/~daw/faa/guid/guid.html]http://www.cs.berkeley.edu/~daw/faa/guid/guid.html[/url]>
< [url=http://www.cs.berkeley.edu/~daw/faa/id/id.txt]http://www.cs.berkeley.edu/~daw/faa/id/id.txt[/url]>
< [url=http://www.cs.berkeley.edu/~daw/faa/id/id.html]http://www.cs.berkeley.edu/~daw/faa/id/id.html[/url]>
< [url=http://www.latimes.com/news/nationworld/nation/la-091501profile.story]http://www.latimes.com/news/nationworld/nation/la-091501profile.story[/url]>
written well before September 11:
< [url=http://www.cato.org/pubs/regulation/reg19n4e.html]http://www.cato.org/pubs/regulation/reg19n4e.html[/url]>
about the problem:
< [url=http://www.zdnet.com/anchordesk/stories/story/0,10738,2812283,00.html]http://www.zdnet.com/anchordesk/stories/story/0,10738,2812283,00.html[/url]>
airports and other public congregation areas, and have automatic face-recognition
software continuously scan the crowd for suspected terrorists. When the software
finds one, it alerts the authorities, who swoop down and arrest the bastards.
Voila, we’re safe once again.
an effective authentication tool, and I’ve written about it before. There are
three basic kinds of authentication: something you know (password, PIN code,
secret handshake), something you have (door key, physical ticket into a concert,
signet ring), and something you are (biometrics). Good security uses at least
two different authentication types: an ATM card and a PIN code, computer access
using both a password and a fingerprint reader, a security badge that includes
a picture that a guard looks at. Implemented properly, biometrics can be an
effective part of an access control system.
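The "at least two different types" rule can be stated as a tiny check. This is a toy sketch; the credential names and their categories below are illustrative, not any real system's.

```python
# Toy check of the multi-factor rule: credentials count as "strong" only
# if they span two or more distinct factor types (know / have / are).
FACTOR_TYPES = {
    "password": "know", "pin": "know",          # something you know
    "badge": "have", "door_key": "have",        # something you have
    "fingerprint": "are", "face": "are",        # something you are
}

def strong_enough(presented: list[str]) -> bool:
    """True if the presented credentials span at least two factor types."""
    return len({FACTOR_TYPES[c] for c in presented}) >= 2

print(strong_enough(["password", "pin"]))       # False: both "know"
print(strong_enough(["badge", "fingerprint"]))  # True: card plus biometric
```

A password plus a PIN is still single-factor, which is exactly why an ATM card plus a PIN is the stronger combination.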
airline and airport personnel such as pilots, maintenance workers, etc. That’s
a problem biometrics can help solve. Using biometrics to pick terrorists out
of crowds is a different kettle of fish.
has a straightforward problem: does this biometric belong to the person it claims
to belong to? In the latter case (picking terrorists out of crowds), the system
needs to solve a much harder problem: does this biometric belong to anyone in
this large database of people? The difficulty of the latter problem increases
the complexity of the identification, and leads to identification failures.
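One way to see why one-to-many matching is so much harder: even a small per-comparison false-match rate compounds with the size of the database being searched. The rate below is purely illustrative, not a measured figure for any real system.

```python
# Verification asks one question (does this face match one template?);
# identification asks N of them (does it match anyone in the database?).
# With an assumed per-comparison false-match rate f, the chance of at
# least one spurious hit grows quickly with N.
f = 1e-4  # illustrative false-match rate for a single comparison

def p_false_hit(n: int) -> float:
    """Probability of at least one false match across n comparisons."""
    return 1 - (1 - f) ** n

for n in (1, 1_000, 100_000):
    print(f"database size {n:>7}: P(some false match) = {p_false_hit(n):.4f}")
```

At a database of 100,000 faces, a spurious hit on an innocent traveler becomes a near-certainty per scan, which is the "identification failures" problem in miniature.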
the first case, you can unambiguously know the reference biometric belongs to
the correct person. In the latter case, you need to continually worry about
the integrity of the biometric database. What happens if someone is wrongfully
included in the database? What kind of right of appeal does he have?
you can initialize the system with a known, good biometric. If the biometric
is face recognition, you can take good pictures of new employees when they are
hired and enter them into the system. Terrorists are unlikely to pose for photo
shoots. You might have a grainy picture of a terrorist, taken five years ago
from 1000 yards away when he had a beard. Not nearly as useful.
it’s still very difficult to make this kind of system work. The hardest problem
is the false alarms. To explain why, I’m going to have to digress into statistics
and explain the base rate fallacy.
99.99 percent accurate. That is, if someone is a terrorist, there is a 99.99
percent chance that the software indicates “terrorist,” and if someone is not
a terrorist, there is a 99.99 percent chance that the software indicates “non-terrorist.”
Assume that one in ten million flyers, on average, is a terrorist. Is the software
any good?
real terrorist. And every false alarm still means that all the security people
go through all of their security procedures. Because the population of non-terrorists
is so much larger than the number of terrorists, the test is useless. This result
is counterintuitive and surprising, but it is correct. The false alarms in this
kind of system render it mostly useless. It’s “The Boy Who Cried Wolf” increased
1000-fold.
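The 1000-to-1 ratio follows directly from the numbers given above; a short script makes the base-rate arithmetic explicit.

```python
# Base rate fallacy, using the essay's figures: a 99.99%-accurate
# classifier scanning a population where 1 in 10 million is a terrorist.
accuracy = 0.9999                 # P(alarm | terrorist) = P(quiet | innocent)
false_positive_rate = 1 - accuracy
population = 10_000_000
terrorists = 1                    # one in ten million flyers

true_alarms = terrorists * accuracy                              # ~1
false_alarms = (population - terrorists) * false_positive_rate   # ~1000

print(f"true alarms per 10M flyers:  {true_alarms:.4f}")
print(f"false alarms per 10M flyers: {false_alarms:.1f}")
print(f"false alarms per real detection: {false_alarms / true_alarms:.0f}")
```

The accuracy figure barely matters next to the base rate: because innocents outnumber terrorists ten million to one, nearly every alarm is false.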
Once in a while, the system would correctly finger a frequent-flyer terrorist.
But it’s a system that has enormous costs: money to install, manpower to run,
inconvenience to the millions of people incorrectly identified, successful lawsuits
by some of those people, and a continued erosion of our civil liberties. And
all the false alarms will inevitably lead those managing the system to distrust
its results, leading to sloppiness and potentially costly mistakes. Ubiquitous
harvesting of biometrics might sound like a good idea, but I just don’t think
it’s worth it.
< [url=http://dlis.gseis.ucla.edu/people/pagre/bar-code.html]http://dlis.gseis.ucla.edu/people/pagre/bar-code.html[/url]>
< [url=http://www.counterpane.com/crypto-gram-9808.html#biometrics]http://www.counterpane.com/crypto-gram-9808.html#biometrics[/url]>
< [url=http://www.theregister.co.uk/content/4/21916.html]http://www.theregister.co.uk/content/4/21916.html[/url]>
According to a DARPA study, to detect 90 per cent of terrorists we’d need to
raise an alarm for one in every three people passing through the airport.
< [url=http://www.theregister.co.uk/content/6/21882.html]http://www.theregister.co.uk/content/6/21882.html[/url]>
< [url=http://www.extremetech.com/article/0,3396,s%253D1024%2526a%253D15070,00.asp]http://www.extremetech.com/article/0,3396,s%253D1024%2526a%253D15070,00.asp[/url]>
Failures
of the September 11 terrorist attacks, and that the FBI failed to prevent the
attacks. It’s also clear that there were all sorts of indications that the attacks
were going to happen, and that there were all sorts of things that we could
have noticed but didn’t. Some have claimed that this was a massive intelligence
failure, and that we should have known about and prevented the attacks. I am
not convinced.
information. In what I am sure is the mother of all investigations, the CIA,
NSA, and FBI have uncovered all sorts of data from their files, data that clearly
indicates that an attack was being planned. Maybe it even clearly indicates
the nature of the attack, or the date. I’m sure lots of information is there,
in files, intercepts, computer memory.
the data and point to what’s important and relevant. It’s even easy to take
all that important and relevant data and turn it into information. And it’s
real easy to take that information and construct a picture of what’s going on.
and most leads are false ones. How does anyone know which is the important one,
that effort should be spent on this specific threat and not the thousands of
others?
quantity of electronic communications, the FBI gets innumerable leads and tips,
and our allies pass all sorts of information to us — that we can’t possibly
analyze it all. Imagine terrorists are hiding plans for attacks in the text
of books in a large university library; you have no idea how many plans there
are or where they are, and the library expands faster than you can possibly
read it. Deciding what to look at is an impossible task, so a lot of good intelligence
goes unlearned.
How many terrorist attempts have been thwarted in the past year? How many groups
are being tracked? If the CIA, NSA, and FBI succeed, no one ever knows. It’s
only in failure that they get any recognition.
has relied more and more on high-tech electronic eavesdropping (SIGINT and COMINT)
and less and less on old fashioned human intelligence (HUMINT). This only makes
the analysis problem worse: too much data to look at, and not enough real-world
context. Look at the intelligence failures of the past few years: failing to
predict India’s nuclear test, or the attack on the USS Cole, or the bombing
of the two American embassies in Africa; concentrating on Wen Ho Lee to the
exclusion of the real spies, like Robert Hanssen.
In the post mortem, I’m sure there will be changes in the way we collect and
(most importantly) analyze anti-terrorist data. But calling this a massive intelligence
failure is a disservice to those who are working to keep our country secure.
enough on human intelligence:
< [url=http://www.sunspot.net/bal-te.intelligence13sep13.story]http://www.sunspot.net/bal-te.intelligence13sep13.story[/url]>
< [url=http://www.newscientist.com/news/news.jsp?id=ns99991297]http://www.newscientist.com/news/news.jsp?id=ns99991297[/url]>
< [url=http://www.wired.com/news/politics/0,1283,46746,00.html]http://www.wired.com/news/politics/0,1283,46746,00.html[/url]>
< [url=http://www.wired.com/news/business/0,1367,46817,00.html]http://www.wired.com/news/business/0,1367,46817,00.html[/url]>
< [url=http://www.latimes.com/news/nationworld/nation/la-092001probe.story]http://www.latimes.com/news/nationworld/nation/la-092001probe.story[/url]>
Mostly retracted:
< [url=http://www.latimes.com/news/nationworld/nation/la-092101mossad.story]http://www.latimes.com/news/nationworld/nation/la-092101mossad.story[/url]>
Center and the Pentagon, Senator Judd Gregg and other high-ranking government
officials quickly seized on the opportunity to resurrect limits on strong encryption
and key escrow systems that ensure government access to encrypted messages.
activities, while at the same time significantly reducing the security of our
own critical infrastructure. We’ve been through these arguments before, but
legislators seem to have short memories. Here’s why trying to limit cryptography
is bad for Internet security.
is mathematics, and you can’t ban mathematics. All you can ban is a set of products
that use that mathematics, but that is something quite different. Years ago,
during the cryptography debates, an international crypto survey was completed;
it listed almost a thousand products with strong cryptography from over a hundred
countries. You might be able to control cryptography products in a handful of
industrial countries, but that won’t prevent criminals from importing them.
You’d have to ban them in every country, and even then it won’t be enough. Any
terrorist organization with a modicum of skill can write its own cryptography
software. And besides, what terrorist is going to pay attention to a legal ban?
they help. Cryptography is one of the best security tools we have to protect
our electronic world from harm: eavesdropping, unauthorized access, meddling,
denial of service. Sure, by controlling the spread of cryptography you might
be able to prevent some terrorist groups from using cryptography, but you’ll
also prevent bankers, hospitals, and air-traffic controllers from using it.
(And, remember, the terrorists can always get the stuff elsewhere: see my first
point.) We’ve got a lot of electronic infrastructure to protect, and we need
all the cryptography we can get our hands on. If anything, we need to make strong
cryptography more prevalent if companies continue to put our planet’s critical
infrastructure online.
that companies should be forced to implement back doors in crypto products such
that law enforcement, and only law enforcement, can peek in and eavesdrop on
encrypted messages. Terrorists and criminals won’t use it. (Again, see my first
point.)
important stuff. All key-escrow systems require the existence of a highly sensitive
and highly available secret key or collection of keys that must be maintained
in a secure manner over an extended time period. These systems must make decryption
information quickly accessible to law enforcement agencies without notice to
the key owners. Does anyone really think that we can build this kind of system
securely? It would be a security engineering task of unbelievable magnitude,
and I don’t think we have a prayer of getting it right. We can’t build a secure
operating system, let alone a secure computer and secure network.
attack or abuse. Whose digital security do you trust absolutely and without
question, to protect every major secret of the nation? Which operating system
would you use? Which firewall? Which applications? As attractive as it may sound,
building a workable key-escrow system is beyond the current capabilities of
computer engineering.
why key escrow is a bad idea. The arguments in the paper still stand, and I
urge everyone to read it. It’s not a particularly technical paper, but it lays
out all the problems with building a secure, effective, scalable key-escrow
infrastructure.
we live in dangerous times, and that we need more security than ever before.
They’re right; security has been dangerously lax in many areas of our society,
including cyberspace. As more and more of our nation’s critical infrastructure
goes digital, we need to recognize cryptography as part of the solution and
not as part of the problem.
< [url=http://www.counterpane.com/key-escrow.html]http://www.counterpane.com/key-escrow.html[/url]>
< [url=http://cgi.zdnet.com/slink?140437:8469234]http://cgi.zdnet.com/slink?140437:8469234[/url]>
< [url=http://www.wired.com/news/politics/0,1283,46816,00.html]http://www.wired.com/news/politics/0,1283,46816,00.html[/url]>
< [url=http://www.pcworld.com/news/article/0,aid,62267,00.asp]http://www.pcworld.com/news/article/0,aid,62267,00.asp[/url]>
< [url=http://www.newscientist.com/news/news.jsp?id=ns99991309]http://www.newscientist.com/news/news.jsp?id=ns99991309[/url]>
< [url=http://www.zdnet.com/zdnn/stories/news/0,4586,2814833,00.html]http://www.zdnet.com/zdnn/stories/news/0,4586,2814833,00.html[/url]>
< [url=http://dailynews.yahoo.com/h/nm/20010918/ts/attack_investigation_dc_23.html]http://dailynews.yahoo.com/h/nm/20010918/ts/attack_investigation_dc_23.html[/url]>
laws would be “somewhat” or “very” helpful in preventing a repeat of last week’s
terrorist attacks on New York’s World Trade Center and the Pentagon in Washington,
D.C. No indication of what percentage actually understood the question.
< [url=http://news.cnet.com/news/0-1005-200-7215723.html?tag=mn_hd]http://news.cnet.com/news/0-1005-200-7215723.html?tag=mn_hd[/url]>
“U.S. officials and experts” and “U.S. and foreign officials,” terrorist groups
are “hiding maps and photographs of terrorist targets and posting instructions
for terrorist activities on sports chat rooms, pornographic bulletin boards
and other Web sites.”
to spend much time retracing old ground. Simply, steganography is the science
of hiding messages in messages. Typically, a message (either plaintext or, more
cleverly, ciphertext) is encoded as tiny changes to the color of the pixels
of a digital photograph. Or in imperceptible noise in an audio file. To the
uninitiated observer, it’s just a picture. But to the sender and receiver, there’s
a message hiding in there.
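The pixel-twiddling itself is simple. Below is a minimal sketch of the least-significant-bit idea described above, operating on a bare byte array standing in for pixel data; real steganography tools work on actual image and audio formats and take far more care to stay undetectable.

```python
# LSB steganography sketch: each message bit replaces the low-order bit
# of one "pixel" byte, changing each pixel value by at most 1.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(out), "cover too small for message"
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit   # overwrite the lowest bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = bytearray(range(256)) * 2        # stand-in for image pixel data
stego = embed(cover, b"attack at dawn")
print(extract(stego, 14))                # b'attack at dawn'
```

To the uninitiated observer the stego bytes are statistically almost identical to the cover, which is the whole point: the message rides in noise nobody looks at.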
very aspects of steganography that make it unsuitable for normal corporate use
make it ideally suited for terrorist use. Most importantly, it can be used in
an electronic dead drop.
how Hanssen communicated with his Russian handlers. They never met, but would
leave messages, money, and documents for one another in plastic bags under a
bridge. Hanssen’s handler would leave a signal in a public place — a chalk
mark on a signpost — to indicate a waiting package. Hanssen would later collect
the package.
meeting. One, the two parties are never seen together. Two, the two parties
don’t have to coordinate a rendezvous. Three, and most importantly, one party
doesn’t even have to know who the other one is (a definite advantage if one
of them is arrested). Dead drops can be used to facilitate completely anonymous,
asynchronous communications.
and posting it to a Usenet newsgroup is the cyberspace equivalent of a dead
drop. To everyone else, it’s just a picture. But to the receiver, there’s a
message in there waiting to be extracted.
up some sort of code. Just as Hanssen knew to collect his package when he saw
the chalk mark, a virtual terrorist will need to know to look for his message.
(He can’t be expected to search every picture.) There are lots of ways to communicate
a signal: timestamp on the message, an uncommon word in the subject line, etc.
Use your imagination here; the possibilities are limitless.
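On the receiving end, such a signal is trivial to automate. This toy sketch assumes a prearranged uncommon marker word in the subject line; the marker itself is invented for illustration.

```python
# Toy version of the chalk-mark signal: scan public post subjects for a
# prearranged marker word, and only then bother downloading the image.
SIGNAL_WORD = "vermilion"   # hypothetical prearranged marker

def flagged(subjects: list[str]) -> list[str]:
    """Return the subjects carrying the agreed signal."""
    return [s for s in subjects if SIGNAL_WORD in s.lower()]

posts = ["beach pics 03", "Vermilion sunset #12", "funny cat"]
print(flagged(posts))       # ['Vermilion sunset #12']
```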
communicating directly with the receiver. There is no e-mail between them, no
remote logins, no instant messages. All that exists is a picture posted to a
public forum, and then downloaded by anyone sufficiently enticed by the subject
line (both third parties and the intended receiver of the secret message).
ways of finding steganographic messages, most of which involve looking for changes
in traffic patterns. If Bin Laden is using pornographic images to embed his
secret messages, it is unlikely these pictures are being taken in Afghanistan.
They’re probably downloaded from the Web. If the NSA can keep a database of
images (wouldn’t that be something?), then they can find ones with subtle changes
in the low-order bits. If Bin Laden uses the same image to transmit multiple
messages, the NSA could notice that. Otherwise, there’s probably nothing the
NSA can do. Dead drops, both real and virtual, can’t be prevented.
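The "database of images" idea reduces to a simple comparison, sketched here under the generous assumption that the agency actually holds a pristine reference copy of the suspect picture.

```python
# Detection sketch: given a known-clean reference copy, count bytes that
# differ from it only in bit 0 -- the telltale signature of LSB embedding.

def lsb_only_changes(reference: bytes, suspect: bytes) -> int:
    """Number of bytes differing from the reference only in the low bit."""
    return sum(1 for a, b in zip(reference, suspect) if a != b and (a ^ b) == 1)

reference = bytes(range(256))
# Simulate an embedded message by flipping a scattering of low bits.
suspect = bytes(b ^ 1 if b % 7 == 0 else b for b in reference)
print(lsb_only_changes(reference, suspect))   # 37
```

Without the original to diff against, the changes are indistinguishable from sensor noise, which is why the essay concludes there is otherwise probably nothing to be done.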
businesses don’t need dead drops. I remember hearing one company talk about
a corporation embedding a steganographic message to its salespeople in a photo
on the corporate Web page. Why not just send an encrypted e-mail? Because someone
might notice the e-mail and know that the salespeople all got an encrypted message.
So send a message every day: a real message when you need to, and a dummy message
otherwise. This is a traffic analysis problem, and there are other techniques
to solve it. Steganography just doesn’t apply here.
allowing communication without any group knowing the identity of the other.
There are other ways to build a dead drop in cyberspace. A spy can sign up for
a free, anonymous e-mail account, for example. Bin Laden probably uses those
too.
< [url=http://www.wired.com/news/print/0,1294,41658,00.html]http://www.wired.com/news/print/0,1294,41658,00.html[/url]>
< [url=http://www.usatoday.com/life/cyber/tech/2001-02-05-binladen.htm]http://www.usatoday.com/life/cyber/tech/2001-02-05-binladen.htm[/url]>
< [url=http://www.sfgate.com/cgi-bin/article.cgi?file=/gate/archive/2001/09/20/sigintell.DTL]http://www.sfgate.com/cgi-bin/article.cgi?file=/gate/archive/2001/09/20/sigintell.DTL[/url]>
< [url=http://www.cnn.com/2001/US/09/20/inv.terrorist.search/]http://www.cnn.com/2001/US/09/20/inv.terrorist.search/[/url]>
< [url=http://www.washingtonpost.com/wp-dyn/articles/A52687-2001Sep18.html]http://www.washingtonpost.com/wp-dyn/articles/A52687-2001Sep18.html[/url]>
< [url=http://www.counterpane.com/crypto-gram-9810.html#steganography]http://www.counterpane.com/crypto-gram-9810.html#steganography[/url]>
< [url=http://www.theregister.co.uk/content/4/21829.html]http://www.theregister.co.uk/content/4/21829.html[/url]>
< [url=http://www.citi.umich.edu/techreports/reports/citi-tr-01-11.pdf]http://www.citi.umich.edu/techreports/reports/citi-tr-01-11.pdf[/url]>
< [url=http://www.zdnet.com/zdnn/stories/comment/0,5859,2814256,00.html]http://www.zdnet.com/zdnn/stories/comment/0,5859,2814256,00.html[/url]>
< [url=http://www.msnbc.com/news/633709.asp?0dm=B12MT]http://www.msnbc.com/news/633709.asp?0dm=B12MT[/url]>
I am not opposed to going to war — for retribution, deterrence, and the restoration
of the social contract — assuming a suitable enemy can be identified. Occasionally,
peace is something you have to fight for. But I think the use of force is far
more complicated than most people realize. Our actions are important; messing
this up will only make things worse.
An article arguing that the terrorist Usama bin Laden has little to fear from American intelligence:
< [url=http://www.theatlantic.com/issues/2001/07/gerecht.htm]http://www.theatlantic.com/issues/2001/07/gerecht.htm[/url]>
And a Russian soldier discusses why war in Afghanistan will be a nightmare.
< [url=http://www.latimes.com/news/printedition/asection/la-000075191sep19.story]http://www.latimes.com/news/printedition/asection/la-000075191sep19.story[/url]>
A British soldier explains the same:
< [url=http://www.sunday-times.co.uk/news/pages/sti/2001/09/23/stiusausa02023.html?]http://www.sunday-times.co.uk/news/pages/sti/2001/09/23/stiusausa02023.html?[/url]>
Lessons from Britain on fighting terrorism:
< [url=http://www.salon.com/news/feature/2001/09/19/fighting_terror/index.html]http://www.salon.com/news/feature/2001/09/19/fighting_terror/index.html[/url]>
1998 Esquire interview with Bin Ladin:
< [url=http://www.esquire.com/features/articles/2001/010913_mfe_binladen_1.html]http://www.esquire.com/features/articles/2001/010913_mfe_binladen_1.html[/url]>
< [url=http://commons.somewhere.com/rre/2001/RRE.War.in.a.World.Witho.html]http://commons.somewhere.com/rre/2001/RRE.War.in.a.World.Witho.html[/url]>
< [url=http://commons.somewhere.com/rre/2001/RRE.Imagining.the.Next.W.html]http://commons.somewhere.com/rre/2001/RRE.Imagining.the.Next.W.html[/url]>
< [url=http://www.osopinion.com/perl/story/13535.html]http://www.osopinion.com/perl/story/13535.html[/url]>
< [url=http://news.cnet.com/news/0-1003-201-7214703-0.html?tag=owv]http://news.cnet.com/news/0-1003-201-7214703-0.html?tag=owv[/url]>
The FBI reminds everyone that vigilante hacking is illegal:
< [url=http://www.nipc.gov/warnings/advisories/2001/01-020.htm]http://www.nipc.gov/warnings/advisories/2001/01-020.htm[/url]>
< [url=http://www.ananova.com/news/story/sm_400565.html?menu=]http://www.ananova.com/news/story/sm_400565.html?menu=[/url]>
< [url=http://www.securityfocus.com/news/257]http://www.securityfocus.com/news/257[/url]>
Especially scary are the “advice or assistance” components. A security consultant
could face life imprisonment, without parole, if he discovered and publicized
a security hole that was later exploited by someone else. After all, without
his “advice” about what the hole was, the attacker never would have accomplished
his hack.
< [url=http://cgi.zdnet.com/slink?140433:8469234]http://cgi.zdnet.com/slink?140433:8469234[/url]>
< [url=http://computerworld.com/nlt/1%2C3590%2CNAV65-663_STO63965_NLTSEC%2C00.html]http://computerworld.com/nlt/1%2C3590%2CNAV65-663_STO63965_NLTSEC%2C00.html[/url]>
Companies are investing in security:
< [url=http://www.washtech.com/news/software/12514-1.html]http://www.washtech.com/news/software/12514-1.html[/url]>
< [url=http://www.theregister.co.uk/content/55/21814.html]http://www.theregister.co.uk/content/55/21814.html[/url]>
< [url=http://www.zdnet.com/zdnn/stories/news/0,4586,5096868,00.html]http://www.zdnet.com/zdnn/stories/news/0,4586,5096868,00.html[/url]>
< [url=http://www.businessweek.com/bwdaily/dnflash/sep2001/nf20010918_8931.htm?&_ref=1732900718]http://www.businessweek.com/bwdaily/dnflash/sep2001/nf20010918_8931.htm?&_ref=1732900718[/url]>
< [url=http://cgi.zdnet.com/slink?143569:8469234]http://cgi.zdnet.com/slink?143569:8469234[/url]>
< [url=http://www.theregister.co.uk/content/57/21790.html]http://www.theregister.co.uk/content/57/21790.html[/url]>
Oracle’s Larry Ellison has offered to supply software for a national ID card.
< [url=http://www.siliconvalley.com/docs/news/svfront/ellsn092301.htm]http://www.siliconvalley.com/docs/news/svfront/ellsn092301.htm[/url]>
Security problems include: inaccurate information, insiders issuing fake cards
(this happens with state drivers’ licenses), vulnerability of the large database,
potential privacy abuses, etc. And, of course, no trans-national terrorists
would be listed in such a system, because they wouldn’t be U.S. citizens. What
do you expect from a company whose origins are intertwined with the CIA?