Interesting People mailing list archives

Some comments on European crypto (and UK) by Ross Anderson, at Cambridge University


From: David Farber <farber () linc cis upenn edu>
Date: Thu, 23 Jun 1994 20:18:16 -0400

of putting a timelock in his code to enforce payment [C2]. Similar laws 
have been passed in a number of jurisdictions, and similar problems have 
arisen; in a field where the technology changes quickly, and both judges 
and lawmakers lag behind the curve, our next principle is inevitable: 


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 4:} Computer security legislation is highly 
likely to suffer from the law of unintended consequences. }}
\end{center}




\section{Standards}


Another tack taken by some governments is to try to establish a system of 
security standards, and indeed there are a number of initiatives in play 
from various governmental bodies. Often, these are supposed to give a 
legal advantage to systems which follow some particular standard. For 
example, to facilitate CREST (the Bank of England's new share dealing 
system), the Treasury proposes to amend English law so that the existence 
of a digital signature on a stock transfer order will create `an equitable 
interest by way of tenancy in common in the ... securities pending 
registration' [HMT]. 


On a more general note, some people are beginning to see a TCSEC C2 
evaluation as the touchstone for commercial computer security, and this 
might lead in time to a situation where someone who had not used a C2 
product might be considered negligent, and someone who had used one might 
hope that the burden of proof had thereby passed to someone else. However, 
in the Munden case cited above, the bank did indeed use an evaluated 
product - ACF2 was one of the first products to gain the C2 rating - yet 
this evaluation was not only irrelevant to the case, but not even known to 
the bank.


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 5:} Don't try to solve legal problems with 
system engineering standards.
}}
\end{center}


A related point is that although the courts often rely on industry 
practice when determining which of two parties has been negligent, 
existing computer security standards do not help much here. After all, as 
noted above, they mostly have to do with operating system level features, 
while the industry practices themselves tend to be expressed in 
application detail - precisely where the security problems arise. The 
legal authority flows from the industrial practice to the application, not 
the other way around. 


Understanding this could have saved the banks in the UK and Norway a lot of 
legal fees, security expenditure and public embarrassment; in traditional 
banking, the onus is on the bank to show that it made each debit in 
accordance with the customer's mandate.


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 6:} Security goals and assumptions should be 
based on the existing industry practice in the application area, not on 
general `computer' concepts.
}}
\end{center}




\section{Abuses}


Things become even more problematic when one of the parties to a dispute 
has used market power, legal intimidation or political influence to shed 
liability. There are many examples of this:


\begin{enumerate}
\item We recently helped to evaluate the security of an alarm system, 
which is used to protect bank vaults in over a dozen countries. The vendor 
had claimed for years that the alarm signalling was encrypted; this is a 
requirement under the draft CENELEC standards for class 4 risks [B]. On 
examination, it was found that the few manipulations performed to disguise 
the data could not be expected to withstand even an amateur attack (the 
sketch following this list shows why such ad hoc disguising fails).


\item Many companies can be involved in providing components of a secure 
system; as a result, it is often unclear who is to blame. With software 
products, licence agreements usually include very strong disclaimers and 
it may not be practical to sue. Within organisations, it is common that 
managers implement just enough computer security that they will not carry 
the blame for any disaster. They will often ask for guidance from the 
internal audit department, or some other staff function, in order to 
diffuse the liability for an inadequate security specification. 


\item If liability cannot be transferred to the state, to suppliers, to 
insurers, or to another department, then managers may attempt to transfer 
it to customers - especially if the business is a monopoly or cartel. 
Utilities are notorious for refusing to entertain disputes about billing 
system errors; and many banking disputes also fall into this category. 
\end{enumerate}
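
To see how little protection such ad hoc disguising offers, consider a 
purely hypothetical scheme (the vendor's actual manipulations are not 
described here) in which each alarm message is XORed with a fixed mask. A 
single predictable message lets an attacker recover the mask and then read 
or forge all subsequent traffic; the Python sketch below assumes this 
hypothetical scheme only.

\begin{verbatim}
# Hypothetical illustration only: 'disguising' alarm traffic by XORing it
# with a fixed mask.  One known plaintext/ciphertext pair recovers the
# mask, after which the attacker can read or forge every message.
def xor(data: bytes, mask: bytes) -> bytes:
    return bytes(d ^ m for d, m in zip(data, mask))

mask = bytes.fromhex("5a3c9f01e2b74d66")   # the fixed 'key' (invented)

known_plain  = b"HEARTBT1"                 # a predictable status message
known_cipher = xor(known_plain, mask)      # as observed on the wire

recovered = xor(known_plain, known_cipher) # XOR cancels: the mask falls out
assert recovered == mask

forged = xor(b"ALARMCLR", recovered)       # a forged 'all clear' signal
\end{verbatim}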


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 7:} Understand how liability is transferred 
by any system you build or rely on.
}}
\end{center}




\section{Security Goals}


In case the reader is still not convinced that liability is central, we 
shall compare the background to ATM cases in Britain and the United 
States. 


The British approach is for the banks to claim that their systems are 
infallible, in that it is not possible for an ATM debit to appear on 
someone's account unless the card and PIN issued to him had been used in 
that ATM. People who complain are therefore routinely told that they must 
be lying, or mistaken, or the victim of fraud by a friend or relative (in 
which case they must be negligent).


The US is totally different; there, in the landmark court case Judd v 
Citibank [JC], Dorothy Judd claimed that she had not made a number of ATM 
withdrawals which Citibank had debited to her account; Citibank claimed 
that she must have done. The judge ruled that Citibank was wrong in law to 
claim that its systems were infallible, as this placed `an unmeetable 
burden of proof' on the plaintiff. Since then, if a US bank customer 
disputes an electronic debit, the bank must refund the money within 30 
days, unless it can prove that the claim is an attempted fraud.


When tackled in private, British bankers claim they have no alternative; 
if they paid up whenever a customer complained, there would be `an 
avalanche of fraudulent claims of fraud'. US bankers are much more 
relaxed; their practical experience is that the annual loss due to 
customer misrepresentation is only about \$15,000 per bank [W]. This will 
not justify any serious computer security programme; so in areas such as 
New York where risks are higher, banks just use ATM cameras to resolve 
disputes. 


One might expect that as US banks are liable for fraudulent transactions, 
they would invest more in security than British banks do. One of the more 
interesting facts thrown up by the recent ATM cases is that precisely the 
reverse is the case: almost all UK banks and building societies now use 
hardware security modules to manage PINs [VSM], while most US banks do 
not; they just encrypt PINs in software [A1]. 
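
To make the comparison concrete, the sketch below (our own; neither [VSM] 
nor [A1] gives code) shows the ANSI X9.8 / ISO 9564 format-0 clear PIN 
block which both kinds of system construct before encrypting it under a 
PIN key, typically DES in the systems of that era. In a hardware security 
module the clear block never leaves the device; in a software 
implementation it is exposed, at least briefly, in host memory.

\begin{verbatim}
# Sketch (not from the cited references): the ISO 9564 / ANSI X9.8
# format-0 clear PIN block.  A hardware security module encrypts this
# block internally under a PIN key; a software implementation builds and
# encrypts it in host memory.  The encryption step itself is omitted.
def iso0_pin_block(pin: str, pan: str) -> bytes:
    assert 4 <= len(pin) <= 12 and pin.isdigit() and pan.isdigit()
    # PIN field: format code 0, PIN length, the PIN, padded with 'F'.
    pin_field = bytes.fromhex(("0%X%s" % (len(pin), pin)).ljust(16, "F"))
    # PAN field: four zero digits, then the rightmost twelve PAN digits
    # excluding the check digit.
    pan_field = bytes.fromhex("0000" + pan[-13:-1])
    # The clear PIN block is the XOR of the two fields.
    return bytes(a ^ b for a, b in zip(pin_field, pan_field))

print(iso0_pin_block("1234", "4000001234567899").hex().upper())
\end{verbatim}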


Thus we can conclude that the real function of these hardware security 
modules is due diligence rather than security. British bankers want to be 
able to point to their security modules when fighting customer claims, 
while US bankers, for whom these devices offer only their advertised 
security benefit, generally do not see any point in buying them. Given 
that no-one has yet been able to construct systems which bear hostile 
examination, it is in fact unclear that these devices add any real value 
at all. 


One of the principles of good protocol engineering is that one should 
never use encryption without understanding what it is for (keeping a key 
secret, binding two values together, ...) [AN]. This generalises naturally 
to the following: 


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 8:} Before setting out to build a computer 
security system, make sure you understand what its real purpose is 
(especially if this differs from its advertised purpose). }}
\end{center}
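
As a concrete, deliberately simple illustration of the protocol 
engineering point above (a sketch of our own, assuming only Python's 
standard library; the account number and amount shown are invented), the 
fragment below uses a key to bind two values together with a MAC rather 
than to keep either of them secret. Confusing these two purposes is 
precisely the kind of error that [AN] warns against.

\begin{verbatim}
# Minimal sketch: a key used for *binding*, not for secrecy.  The MAC
# ties the account number and the amount together; changing either field
# invalidates the tag, but neither field is hidden from an observer.
import hmac, hashlib, os

key = os.urandom(16)                        # illustrative shared secret

def bind(account: bytes, amount: bytes) -> bytes:
    return hmac.new(key, account + b"|" + amount, hashlib.sha256).digest()

tag = bind(b"00123456", b"GBP 250.00")

# The verifier recomputes the tag; a tampered amount is rejected.
assert hmac.compare_digest(tag, bind(b"00123456", b"GBP 250.00"))
assert not hmac.compare_digest(tag, bind(b"00123456", b"GBP 999.00"))
\end{verbatim}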




\section{National Security Interference} 


In addition to assuming liability for prosecuting some computer disputes 
which are deemed to be criminal offences, governments have often tried to 
rewrite the rules to make life easier for their signals intelligence 
organisations. 


For example, the South African government decreed in 1986 that all users 
of civilian cryptography had to provide copies of their algorithms and keys 
to the military. Bankers approached the authorities and said that this was 
a welcome development; managing keys for automatic teller machines was a 
nuisance and the military were welcome to the job. Of course, whenever a 
machine was out of balance, they would be sent the bill. At this the 
military backed down quickly. 


More recently, the NIST public key initiative [C3] proposes that the US 
government will assume responsibility for certifying all the public keys 
in use by civilian organisations in that country. They seem to have 
learned from the South African experience; they propose a statutory legal 
exemption for key management agencies. It remains to be seen how much 
trust users will place in a key management system whose operators they 
will not be able to sue when things go wrong. 




\section{Liability and Insurance}


The above sections may have given the reader the impression that managing 
the liability aspects of computer security systems is just beyond most 
companies. This does not mean that the problem should be accepted as 
intractable, but rather that it should be passed to a specialist - the 
insurer. 


As insurers become more aware of the computer related element in their 
risks, it is likely that they will acquire much more clout in setting 
security standards. This is already happening at the top end of the 
market: banks who wish to insure against computer fraud usually need to 
have their systems inspected by a firm approved by the insurer. 


The present system could be improved [A4] - in particular the inspections, 
which focus on operational controls, will have to be broadened to include 
application reviews. However, this is a detail; certification is bound to 
spread down to smaller risks, and, under current business conditions, it 
could economically be introduced for risks of the order of \$250,000. It 
is surely only a matter of time before insurance driven computer security 
standards affect not just businesses and wealthy individuals, but most of 
us [N1]. 


Just as my insurance policy may now specify `a five-lever mortice 
deadlock', so the policy I buy in ten years' time is likely to insist that 
I use accounting software from an approved product list, and certify that 
I manage encryption keys and take backups in accordance with the manual, 
if my practice is to be covered against loss of data and various kinds of 
crime. 


Insurance-based certification will not of course mean hardening systems to 
military levels, but rather finding one or more levels of assurance at 
which insurance business can be conducted profitably. The protection must 
be cheap enough that insurance can still be sold, yet good enough to keep 
the level of claims under control.


Insurance-based security will bring many other benefits, such as 
arbitration; any dispute I have with you will be resolved between my 
insurer and your insurer, as with most motor insurance claims, thus saving 
the bills (and the follies) of lawyers. Insurance companies are also 
better placed to deal with government meddling; they can lobby for 
offensive legislation to be repealed, or just decline to cover any system 
whose keys are kept on a government server, unless the government provides 
a full indemnity. 


A liability based approach can also settle a number of intellectual 
disputes, such as the old question of trust. What is `trust'? At present, 
we have the US DoD `functional' definition that a trusted component is one 
which, if it breaks, can compromise system security, and Needham's 
alternative `organisational' definition [N2] that a trusted component is 
one such that, if it breaks and my company's system security is 
compromised as a result, I do not get fired.


From the liability point of view, of course, a component which can be 
trusted 
is one such that, if it breaks and compromises my system security, I do 
not lose an unpredictable amount of money. In other words: 


\begin{center}
\fbox{
\parbox{5.5in}{{\bf Principle 9:} A trusted component or system is one 
which you can insure.
}}
\end{center}


\vspace{4ex}


\small
\begin{thebibliography}{TCPEC}


\bibitem[A1]{A1}
RJ Anderson, ``Why Cryptosystems Fail'', in {\em Proceedings of the 1st 
ACM Conference on Computer and Communications Security} (Fairfax 1993) pp 
215 - 227 


\bibitem[A2]{A2}
RJ Anderson, ``Why Cryptosystems Fail'', to appear in {\em Communications 
of the ACM}


\bibitem[A3]{A3}
RJ Anderson, ``Making Smartcard Systems Robust'', submitted to {\em Cardis 
94} 


\bibitem[A4]{A4}
RJ Anderson, ``Liability, Trust and Security Standards'', in {\em 
Proceedings of the 1994 Cambridge Workshop on Security Protocols} 
(Springer, to appear) 


\bibitem[A5]{A5}
J Austen, ``Computer Crime: ignorance or apathy?'', in {\em The Computer 
Bulletin v 5 no 5} (Oct 93) pp 23 - 24


\bibitem[AN]{AN}
M Abadi, RM Needham, ``Prudent Engineering Practice for Cryptographic 
Protocols'', in {\em Proceedings of the 1994 IEEE Symposium on Security and 
Privacy} (to appear)


\bibitem[B]{B}
KM Banks, Kluwer Security Bulletin, 4 Oct 93 


\bibitem[BN]{BN}
Behne v Den Norske Bank, Bankklagenemnda, Sak nr: 92457/93111 


\bibitem[C1]{C1}
T Corbitt, ``The Computer Misuse Act'', in {\em Computer Fraud and 
Security Bulletin} (Feb 94) pp 13 - 17


\bibitem[C2]{C2}
A Collins, ``Court decides software time-locks are illegal'', in {\em 
Computer Weekly} (19 August 93) p 1


\bibitem[C3]{C3}
S Chokhani,
``Public Key Infrastructure Study (PKI)'', in {\em Proceedings of the 
first ISOC Symposium on Network and Distributed System Security} (1994) p 
45 


\bibitem[DP]{DP}
DW Davies and WL Price,
{\em `Security for Computer Networks'}, John Wiley and Sons 1984. 


\bibitem[E]{E}
B Ellis, ``Prosecuted for complaint over cash machine'', in {\em The 
Sunday Times}, 27th March 1994, section 5 page 1 


\bibitem[ECMA]{ECMA}
European Computer Manufacturers' Association, {\em `Secure Information 
Processing versus the Concept of Product Evaluation'}, Technical Report 64 
(December 1993)


\bibitem[HMT]{HMT}
HM Treasury, {\em `CREST - The Legal Issues'}, March 1994 


\bibitem[ITSEC]{ITSEC}
{\em `Information Technology Security Evaluation Criteria'}, June 1991, EC 
document COM(90) 314


\bibitem[J]{J}
RB Jack (chairman),
{\em `Banking services: law and practice report by the Review Committee'}, 
HMSO, London, 1989


\bibitem[JC]{JC}
Dorothy Judd v Citibank, {\em 435 NYS, 2d series, pp 210 - 212, 107 
Misc.2d 526}


\bibitem[L]{L}
HO Lubbes, ``COMPUSEC: A Personal View'', in {\em Proceedings of Security 
Applications 93} (IEEE) pp x - xviii


\bibitem[MB]{MB}
McConville \& others v Barclays Bank \& others, High Court of Justice 
Queen's Bench Division 1992 ORB no.812


\bibitem[MM]{MM}
CH Meyer and SM Matyas,
{\em `Cryptography: A New Dimension in Computer Data Security'}, John 
Wiley and Sons 1982.


\bibitem[N1]{N1}
RM Needham, ``Insurance and protection of data'', {\em preprint} 


\bibitem[N2]{N2}
RM Needham, comment at 1993 Cambridge formal methods workshop 


\bibitem[P]{P}
WR Price,
``Issues to Consider When Using Evaluated Products to Implement Secure 
Mission Systems'', in {\em Proceedings of the 15th National Computer 
Security Conference}, National Institute of Standards and Technology 
(1992) pp 292 - 299


\bibitem[R]{R}
J Rushby, {\em `Formal methods and digital systems validation for airborne 
systems'}, NASA Contractor Report 4551, NA81-18969 (December 1993) 


\bibitem[RLN]{RLN}
R v Lock and North, Bristol Crown Court, 1993 


\bibitem[RS]{RS}
R v Small, Norwich Crown Court, 1994


\bibitem[T]{T}
``Business Code'', in {\em The Banker} (Dec 93) p 69 


\bibitem[TCSEC]{TCSEC}
{\em `Trusted Computer System Evaluation Criteria'}, US Department of 
Defense, 5200.28-STD, December 1985


\bibitem[TW]{TW}
VP Thompson, FS Wentz, ``A Concept for Certification of an Army MLS 
Management Information System'', in {\em Proceedings of 16th National 
Computer Security Conference, 1993} pp 253 - 259


\bibitem[VSM]{VSM}
{\em `VISA Security Module Operations Manual'}, VISA, 1986 


\bibitem[W]{W}
MA Wright, ``Security Controls in ATM Systems'', in {\em Computer Fraud 
and Security Bulletin}, November 1991, pp 11 - 14
\end{thebibliography}
\end{document}

