Interesting People mailing list archives

Some comments on European (and UK) crypto, by Ross Anderson at Cambridge University


From: David Farber <farber@linc.cis.upenn.edu>
Date: Thu, 23 Jun 1994 20:18:16 -0400

NSA did a deal with Britain and Sweden to introduce the Clipper chip. I 
heard this from a US source late last year. The other European countries 
apparently turned them down flat.


Confirmation came last month when a journalist who'd heard of the deal 
from UK sources asked me to comment. I said that classified designs are 
unusable in evidence in British courts, and so it was a crock and I would 
advise clients not to touch it. There still hasn't been a public 
announcement though. 


I take it you've followed the GSM/A5 story - I posted an implementation to 
sci.crypt and uk.telecom. There has been little feedback so far. One of 
the things that does emerge, however, is that GSM phones have odd 
surveillance characteristics. We think that if you buy a phone in the UK, 
then GCHQ could follow you around in Germany even if the German 
government didn't want you to. The only bit of GSM security which seems 
fairly well designed is the part which prevents billing fraud.
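
For anyone who missed the postings, the sketch below (in Python, purely 
for brevity) shows the general structure as it has appeared in open 
descriptions - three short shift registers clocked by majority vote. The 
register lengths, taps and clocking bits here are taken from those 
accounts and may not match the posted code exactly; key loading is 
simplified and frame-number mixing is omitted.

# Toy model of the A5 keystream generator: three LFSRs
# clocked by majority vote.
def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

# (register length, feedback taps, clocking bit) per register
SPEC = [(19, (18, 17, 16, 13), 8),
        (22, (21, 20), 10),
        (23, (22, 21, 20, 7), 10)]

def clock(regs, r, in_bit=0):
    length, taps, _ = SPEC[r]
    fb = in_bit
    for t in taps:
        fb ^= (regs[r] >> t) & 1
    regs[r] = ((regs[r] << 1) | fb) & ((1 << length) - 1)

def keystream(key64, nbits):
    regs = [0, 0, 0]
    for i in range(64):              # clock in the 64-bit session key
        for r in range(3):
            clock(regs, r, (key64 >> i) & 1)
    out = []
    for _ in range(nbits):
        bits = [(regs[r] >> SPEC[r][2]) & 1 for r in range(3)]
        m = majority(*bits)
        for r in range(3):
            if bits[r] == m:         # stop/go: move with the majority
                clock(regs, r)
        out.append(((regs[0] >> 18) ^ (regs[1] >> 21) ^ (regs[2] >> 22)) & 1)
    return out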


So if Clipper is introduced in the UK, it might give more security than 
GSM. But this doesn't mean it will be a commercial success. Most people in 
security greatly overestimate the amount of interest which the real world 
has in the subject, and the market for security products is a lot smaller 
than many businessmen have thought. The whole industry lives 70% off 
government subsidy and 20% off the banks' paranoia; the other 10% of 
genuine demand is scattered over a whole lot of applications, such as 
eftpos systems, prepayment electricity tokens, burglar alarms, pay-TV, 
authenticating document and video images, and software licensing; here the 
requirements tend to have more to do with integrity than confidentiality, 
and so open design is a must. 


I enclose a paper which has been accepted for ESORICS this year, and which 
goes into the subject of evidence a bit more.


Regards


Ross Anderson, Cambridge University




\documentstyle[a4,11pt]{article}
\parskip 7pt plus 2pt minus 2pt
\newtheorem{principle}{Principle}
\begin{document}


\begin{center}
{\Large \bf Liability and Computer Security: Nine Principles} 


\vspace{5ex}


Ross J Anderson\\
Cambridge University Computer Laboratory\\ Email: {\tt rja14@cl.cam.ac.uk}
\end{center}


\vspace{3ex}


\begin{abstract}


Many authors have proposed that security priorities should be set by risk 
analysis. However, reality is subtly different: many (if not most) 
commercial computer security systems are at least as much about shedding 
liability as about minimising risk. Banks use computer security mechanisms 
to transfer liability to their customers; companies use them to transfer 
liability to their insurers, or (via the public prosecutor) to the 
taxpayer; and they are also used within governments and companies to shift 
the blame to other departments (``we did everything that GCHQ/the internal 
auditors told us to''). We derive nine principles which might help 
designers avoid the most common pitfalls. 


\end{abstract}




\section{Introduction}


In the conventional model of technological progress, there is a smooth 
progression from research through development and engineering to a 
product. After this is fielded, the experience gained from its use 
provides feedback to the research team, and helps drive the next 
generation of products: 


\begin{center}
{\sc Research $\rightarrow$ Development $\rightarrow$ Engineering 
$\rightarrow$ Product\\}


\begin{picture}(260,10)(0,0)
\thinlines
\put(260,10){\line(0,-1){10}}
\put(260,0){\line(-1,0){260}}
\put(0,0){\vector(0,1){10}}
\end{picture}
\end{center}


This cycle is well known, and typically takes about ten years. However, 
the product's failure modes may not be immediately apparent, and may even 
be deliberately concealed; in this case it may be several years before 
litigation comes into the cycle. This was what happened with the asbestos 
industry, and many other examples could be given.


\begin{center}
{\sc Research $\rightarrow$ Development $\rightarrow$ Engineering 
$\rightarrow$ Product $\rightarrow$ Litigation\\}


\begin{picture}(320,10)(0,0)
\thinlines
\put(320,10){\line(0,-1){10}}
\put(320,0){\line(-1,0){320}}
\put(0,0){\vector(0,1){10}}
\end{picture}
\end{center}


Now many computer security systems and products are designed to achieve 
some particular legal result. Digital signatures, for example, are often 
recommended on the grounds that they are the only way in which an 
electronic document can in the long term be made acceptable to the courts. 
It may therefore be of interest that some of the first court cases 
involving cryptographic evidence have recently been decided, and in this 
paper we try to distil some of the practical wisdom which can be gleaned 
from them. 




\section{Using Cryptography in Evidence} 


Over the last two years, we have advised in a number of cases involving 
disputed withdrawals from ATMs. These now include five criminal and three 
civil cases in Britain, two civil cases in Norway, and one civil and one 
criminal case in the USA. All these cases had a common theme of reliance 
on claims concerning cryptography and computer security; in many cases the 
bank involved said that since its PINs were generated and verified in 
secure cryptographic hardware, they could not be known to any member of 
its staff, and that any disputed withdrawals must therefore be the 
customer's fault. 
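
To make such claims concrete: in the widely fielded IBM 3624 scheme, the 
PIN is derived by encrypting the account number under a secret PIN 
derivation key inside the security module and decimalising part of the 
result. The following sketch is a simplified illustration (with a hash 
standing in for the block cipher, so that it is self-contained); it shows 
that the banks' claim is really one about key management rather than 
mathematics, since anyone with access to the key, or to an unrestricted 
function that uses it, can recompute any customer's PIN.

{\small\begin{verbatim}
# Illustrative sketch of 3624-style PIN derivation/verification.
# In a real system the cipher is DES inside tamper-resistant
# hardware; here a hash stands in so the example is self-contained.
import hashlib

DECIMALISE = "0123456789012345"   # maps hex digits 0..f to 0..9

def derive_pin(pdk, account_number, length=4):
    # "encrypt" the account number under the PIN derivation key
    block = hashlib.sha256(pdk + account_number.encode()).hexdigest()
    # decimalise the first few digits to get the natural PIN
    return "".join(DECIMALISE[int(h, 16)] for h in block[:length])

def verify_pin(pdk, account_number, entered_pin):
    # verification simply recomputes the PIN
    return derive_pin(pdk, account_number) == entered_pin
\end{verbatim}}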


However, these cases have shown that such sweeping claims do not work, and 
in the process have undermined some of the assumptions made by commercial 
computer security designers for the past fifteen years. 


At the engineering level, they provided us with the first detailed threat 
model for commercial computer security systems; they showed that almost 
all frauds are due to blunders in application design, implementation and 
operation [A1]. The main threat is not the cleverness of the attacker, but 
the stupidity of the system builder. At the technical level, we should be 
much more concerned with robustness [A2], and we have shown how robustness 
properties can be successfully incorporated into fielded systems in [A3]. 


However, there is another lesson to be learned from the ``phantom 
withdrawal'' cases, which will be our concern here. This is that many 
security systems are really about liability rather than risk; and failure 
to understand this has led to many computer security systems being 
essentially useless. 


We will first look at evidence; here it is well established that a 
defendant has the right to examine every link in the chain. 


\begin{itemize}
\item One of the first cases was R v Hendy at Plymouth Crown Court. One of 
Norma Hendy's colleagues had a phantom withdrawal from her bank account, 
and as the staff at this company used to take turns going to the cash 
machine for each other, the victim's PIN was well known. Of the many 
suspects, Norma was arrested and charged for no good reason other than 
that the victim's purse had been in her car all day (even though this 
fact was widely known and the car was unlocked).


She denied the charge vigorously; and the bank said in its evidence that 
the alleged withdrawal could not possibly have been made except with the 
card and PIN issued to the victim. This was untrue, as both theft by bank 
staff using extra cards, and card forgery by outsiders, had been known to 
affect this bank's customers [A1]. We therefore demanded disclosure of the 
bank's security manuals, audit reports and so on; the bank refused, and so 
Norma was acquitted. 


\item Almost exactly the same happened in the case R v De Mott at Great 
Yarmouth. Philip De Mott was a taxi driver, who was accused of stealing 
\pounds 50 from a colleague after she had a phantom withdrawal. His 
employers did not believe that he could be guilty, and applied for his 
bail terms to allow him to keep working for them. Again, the bank claimed 
that its systems were infallible; again, when the evidence was demanded, 
they backed down and the case collapsed.
\end{itemize}


Given that, even on the banks' own admission, ATM systems have an error 
rate of 1 in 34,000 [A2], a country like Britain with $10^9$ ATM 
transactions a year will have 30,000 phantom withdrawals and other 
miscellaneous malfunctions. If 10,000 of these are noticed by the victims, 
and 1,000 referred to the police, then even given the police tendency to 
`file and forget' small matters, it is not surprising that there are maybe 
a dozen wrongful prosecutions each year. 
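
The arithmetic, made explicit on the stated assumptions, is simply
\[ \frac{10^9 \mbox{ transactions/year}}{34\,000} \approx 29\,400 \approx
30\,000 \mbox{ malfunctions/year} \]
with the subsequent figures assuming that roughly one malfunction in 
three is noticed by the victim, one complaint in ten is referred to the 
police, and about one referral in a hundred leads to a prosecution.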


Thankfully, there now exists a solid defence. This is to demand that the 
Crown Prosecution Service provide a full set of the bank's security and 
quality documentation, including security policies and standards, crypto 
key management procedures and logs, audit and insurance inspectors' 
reports, test and bug reports, ATM balancing records and logs, and details 
of all customer complaints in the last seven years. The UK courts have so 
far upheld the rights of both criminal defendants [RS] and civil 
plaintiffs [MB] to this material, despite outraged protest from the banks.


Of course, this defence works whether or not the defendant is actually 
guilty, and the organised crime squad at Scotland Yard has expressed 
concern that the inability of banks to support computer records could 
seriously hinder police operations. In a recent trial in Bristol, two men 
who were accused of conspiring to defraud a bank by card forgery obtained 
a plea bargain by threatening to call a banking industry expert to say 
that the crimes they had planned could not possibly have succeeded [RLN]. 


The first (and probably most important) lesson from the litigation is 
therefore this:


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 1:} Security systems which are to provide 
evidence must be designed and certified on the assumption that they will 
be examined in detail by a hostile expert.}}
\end{center}


This should have been obvious to anybody who stopped to think about the 
matter, yet for many years nobody in the industry (including the author) 
did so. In fact, many banking sector crypto suppliers also sell equipment 
to government bodies. Have their military clients stopped to assess the 
damage which could be done if a mafioso's lawyers, embroiled in some 
dispute over an electronic banking transaction, raid the design lab at six 
in the morning and, armed with a court order, take away all the schematics 
and source code they can find? Pleading a classification mismatch is no 
defence - in a recent case, lawyers staged just such a dawn raid against 
Britain's biggest defence electronics firm, in order to find out how many 
PCs were running unlicensed software. 






\section{Using the Right Threat Model}


Another problem is that many designers fail to realise that most security 
failures occur at the level of application detail [A2] and instead put 
most of their effort into cryptographic algorithms and protocols, or into 
delivery mechanisms such as smartcards.


This is illustrated by current ATM litigation in Norway. Norwegian banks 
spent millions on issuing all their customers with smartcards, and are now 
as certain as British banks (at least in public) that no debit can appear 
on a customer's account without the actual card and PIN issued to the 
customer being used. Yet a number of phantom withdrawals around the 
University of Trondheim have cast serious doubt on their position.


In these cases, cards were stolen from offices on campus and used in ATMs 
and shops in the town; among the victims are highly credible witnesses who 
are quite certain that their PINs could not have been compromised. The 
banks refused to pay up, and have been backed up by the central bank and 
the local banking ombudsman; yet the disputed transactions (about which 
the bank was so certain) violated the card cycle limits. Although only NOK 
5000 should have been available from ATMs and NOK 6000 from eftpos, the 
thief managed somehow to withdraw NOK 18000 (the extra NOK 7000 was 
refunded without any explanation) [BN].
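
Enforcing such a cycle limit is not hard; a minimal sketch of the check 
follows (the limits are those quoted above, and the data structure is a 
hypothetical illustration).

{\small\begin{verbatim}
# Minimal sketch of per-cycle withdrawal limit enforcement.
CYCLE_LIMITS = {"atm": 5000, "eftpos": 6000}   # NOK per card cycle

def authorise(spent_this_cycle, channel, amount):
    # refuse any debit that would breach the channel's cycle limit
    spent = spent_this_cycle.get(channel, 0)
    if spent + amount > CYCLE_LIMITS[channel]:
        return False
    spent_this_cycle[channel] = spent + amount
    return True
\end{verbatim}}

That the disputed debits went through anyway suggests that at least one 
authorisation path failed to apply this check, or lost the running totals.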


Although intelligence agencies may have the resources to carry out 
technical attacks on algorithms or operating systems, most crime is 
basically opportunist, and most criminals are both unskilled and 
undercapitalised; most of their opportunities therefore come from the 
victim's mistakes. 


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 2:} Expect the real problems to come from 
blunders in the application design and in the way the system is operated. 
}}
\end{center}


\section{The Limitations of Legal Process} 


Even if we have a robust system with a well designed and thoroughly tested 
application, we are still not home and dry; and conversely, if we suffer 
as a result of an insecure application built by someone else, we cannot 
rely on prevailing against them in court. This is illustrated by the one 
case `won' recently by the banking industry, in which one of our local 
police constables was prosecuted for attempting to obtain money by 
deception after he complained about six phantom withdrawals on his bank 
account. 


Here, it came out during the trial that the bank's system had been 
implemented and managed in a rather ramshackle way, which is probably not 
untypical of the small data processing departments which service most 
medium sized commercial firms.


\begin{itemize}
\item The bank had no security management or quality assurance function. 
The software development methodology was `code-and-fix', and the 
production code was changed as often as twice a week.


\item No external assessment, whether by auditors or insurance inspectors, 
was produced; the manager who gave technical evidence was the same man who 
had originally designed and written the system twenty years before, and 
still managed it. He claimed that bugs could not cause disputed 
transactions, since his system was written in assembler and so all bugs 
caused abends and were thus detected. He was not aware of the existence of 
TCSEC or ITSEC; but nonetheless claimed that as ACF2 was used to control 
access, it was not possible for any systems programmer to get hold of the 
encryption keys which were embedded in application code (on which see the 
sketch following this list).


\item The disputed transactions were never properly investigated; the 
technical staff had just looked at the mainframe logs and not found 
anything which seemed wrong (and even this was only done once the trial 
was underway, under pressure from defence lawyers). In fact, there were 
another 150-200 transactions under dispute with other clients, none of 
which had been investigated.
\end{itemize}
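
The claim about embedded keys deserves comment. Every byte of a DES key 
contains an odd parity bit, so a key embedded in application code can 
often be found by simply scanning the program image for eight-byte runs 
of odd-parity bytes; access control packages such as ACF2 do nothing to 
stop anyone who can read the binary. The following sketch is an 
illustration of the idea, not evidence from the case.

{\small\begin{verbatim}
# Sketch: hunting for embedded DES keys in a program image.
# Every byte of a DES key has odd parity, so 8-byte runs of
# odd-parity bytes are good key candidates.  (A heuristic:
# text and other data will produce false positives.)
def odd_parity(byte):
    return bin(byte).count("1") % 2 == 1

def candidate_keys(image):
    for i in range(len(image) - 7):
        window = image[i:i + 8]
        if all(odd_parity(b) for b in window):
            yield i, window.hex()
\end{verbatim}}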


It was widely felt to be shocking that, even after all this came to light, 
the constable, Munden, was still convicted [E]; one may hope that the 
conviction is overturned on appeal. The Munden case does however highlight not just our 
second principle that many problems are likely to be found in the 
application, but a fact that (although well known to lawyers) is often 
ignored by the security community: 


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 3:} Judgments handed down in computer cases 
are often surprising.
}}
\end{center}




\section{Legislation}


Strange computer judgments have on occasion alarmed lawmakers, and they 
have tried to rectify matters by legislation. For example, in the famous 
case of R v Gold \& Schifreen, two hackers, who had played havoc with 
British Telecom's electronic mail service by sending electronic mail 
`from' Prince Philip `to' people they didn't like, announcing the award of 
honours, were charged with having stolen the master password by copying it 
from another system. They were acquitted, on the grounds that information 
(unlike material goods) cannot be stolen.


The ensuing panic in parliament led to the Computer Misuse Act. This act 
makes `hacking' a specific criminal offence, and thus tries to transfer 
some of the costs of distributed system access control from the system 
administrator to the Crown Prosecution Service. Whether it actually does 
anything useful is open to dispute: on the one hand firms have to take 
considerable precautions if they want to use it against errant employees 
[A5] [C1]; and on the other hand it has led to surprising convictions, 
such as that of a software writer who used the old established technique 
of timelocking his code to enforce payment.

