Wednesday 15 May 2013

"When it comes to cyber security QinetiQ couldn’t grab their ass with both hands"

So said Bob Slapnik, vice president at HBGary, the security experts "detecting tomorrow's threats today", as reported by Bloomberg, the company that's been using its financial information terminals to spy on its clients. So says the New York Times, the company whose cyberdefences were breached in 2012 by the Chinese, seeking to stop people being rude about Prime Minister Wen Jiabao. Although the Chinese say they didn't.

You can see why Mr Slapnik was cross back in 2010. QinetiQ had just won a contract to advise the Pentagon on how to counter cyberespionage despite QinetiQ's own computer systems having been comprehensively hacked for the previous three years.

But talk about the pot calling the kettle black: one reason QinetiQ's inability to grab its ass with both hands came to light was an examination of the documents hacked out of HBGary in 2011 by Anonymous, the cybervigilantes previously derided as mere "script kiddies", who were so piqued by Aaron Barr, HBGary's CEO, pretending that he had infiltrated them that Anonymous ...
... infiltrated HBGary’s servers, erased data, defaced its website with a letter ridiculing the firm with a download link to a leak of more than 40,000 of its emails to The Pirate Bay, took down the company’s phone system, usurped the CEO’s twitter stream, posted his social security number, and clogged up fax machines ... 'You brought this upon yourself. You’ve tried to bite the Anonymous hand, and now the Anonymous hand is bitch-slapping you in the face', said the letter posted on the firm’s website ...
That's according to Dr Thomas Rid, who finishes his report with: "the attack badly pummeled the security company’s reputation". Yes, you can see how it would, but HBGary (detecting yesterday's threats tomorrow) had been commissioned to sort out QinetiQ's cybersecurity problems so circumspice, Mr Slapnik.

Not to be left out, Bloomberg had been targeted by the same Chinese hackers in pursuit of the same object – keeping Mr Wen's business dealings out of the news. Fail. Everyone who is anyone had been hacked. The Pentagon briefed "about 30" defence contractors like QinetiQ about Chinese hacking in 2007-08, too late to stop the Chinese acquiring so much information on Lockheed Martin's F-22 and F-35 fighter jets that it's doubtful now whether it's worth deploying them. Ditto the designs for the US combat helicopter fleet, drones, satellites and military robotics, all of which were copied from QinetiQ's computers.

Bloomberg's computers weren't hacked straight from China. The Chinese tried to come in via computers they had taken over in various US universities. Same modus operandi: NASA complained to QinetiQ that it was under attack by the Chinese via QinetiQ's computers and would QinetiQ please sort it out. Investigators into that hack found that you could just sit in the car park and connect to QinetiQ's network via an unsecured wifi. They also found that the Russians had been stealing trade secrets from QinetiQ for 2½ years.

Towards the end, the Chinese had access to 13,000 internal passwords at QinetiQ and they could do pretty much whatever they wanted: "by 2009, the hackers had almost complete control over TSG’s computers". TSG is QinetiQ's Technology Solutions Group, whose boss reckoned that investigating all this hacking took too long. "You finally have to reach a point where you say let’s move on" and, indeed, he has now moved on.

HBGary weren't the only security experts trying to sort out QinetiQ. Mandiant were in there (and at the New York Times) and suggested using two-factor authentication to log on to the QinetiQ network, the way those of us with a Lloyds business account do. No, said QinetiQ, and off went all their robotics designs.

HBGary's counter-espionage software was installed on 1,900 QinetiQ computers, but it wouldn't run on a lot of them, and when it did it missed some rogue software, flagged some benign software, and slowed the machines down so much that users did what they always do and deleted it. HBGary accused another consultant, Terremark, part of Verizon, of withholding information, and Terremark said damned if they were telling HBGary anything – HBGary's clunky software was alerting the hackers to the investigation.

Two months after the all-clear, the FBI had to tell QinetiQ they were losing data again and all the consultants came back and tried to clear out the malware they had missed last time round. Meanwhile, the Chinese have got bomb disposal robots on the market that look remarkably like QinetiQ's but they're cheaper.

All of which is just by way of introductory remarks. Setting the scene.

Remember Skyscape? The cloud computing company owned by just one man? The company with contracts from the MOD, HMRC and the Government Digital Service (GDS)?

GDS never did respond to the letter asking them how they had seen fit to entrust GOV.UK to a one-man company. But HMRC did. Twice. Which is very proper of them.

The HMRC response came from Phil Pavitt, HMRC's Director General Change, Security and Information. He said (22 October 2012):
Skyscape’s services are provided through a number of key, or “Alliance”, Partners. These partners are industry leading organisations that provide services in the data centre or “cloud” arena such as EMC (storage and security services), Cisco (networking) and Ark Continuity (UK based high security data centres) ...

... data security remains integral to HMRC and a pre-requisite of any of our data being migrated to Skyscape is for their solution, including all the constituent parts, to be formally accredited by CESG (the Communications-Electronics Security Group) to Impact Level 3 (IL3) ...

This accreditation is expected imminently, at which point HMRC will be in a position to begin securely moving data over to Skyscape and decommissioning our old servers ... will be re-competed to ensure HMRC continues to take advantage of innovative, secure and low cost solutions ...

It should also be noted that for security reasons HMRC does not discuss details of the data that it holds, or where it stores it, however we are able to confirm that by using Skyscape HMRC data will continue to be kept in accordance with existing legislation and HMRC security policies ...

The data, which will be securely stored by Skyscape, currently resides on several hundred servers, across multiple HMRC office locations. This change will consolidate that data and place it into a small number of secure and highly resilient cloud data centres hence improving the security of the data, the efficiency of managing that data ...
and (28 November 2012):
I must reiterate our assurance that using Skyscape HMRC data will continue to be kept in accordance with existing legislation and HMRC security policies.

When fully operational, Skyscape Cloud Services Ltd will securely host all HMRC data currently held on office File and Print Servers (FAPS) ... FAPS do not hold the definitive tax records for the UK and these records remain distributed across a number of secure systems.

HMRC routinely risk assesses and tests the security of our solutions and services. Our secure connection to Skyscape will be delivered in line with HM Government standards to protect our data, with ongoing assurance checks throughout the life of this service ...

Data security remains integral to HMRC and a pre-requisite of any of our data being migrated to Skyscape is for their solution, including all the constituent parts, to be formally accredited by CESG (the Communications-Electronics Security Group) to Impact Level 3 (IL3). All security aspects of the service will have to be proven in line with HM Government security standards. This will include the need to ensure the ‘cloud’ is hosted in a UK domiciled, secure data centre(s) and operated by staff with appropriate security clearance ...
It's not just HMRC. Here's GDS in their Government Digital Strategy:
We know that our users often find it hard to register for our online services, so it is vital that we offer a more straightforward, secure way to allow our users to identify themselves online while preserving their privacy ... (p.34)

Legality, security and resilience

Transactional services will be redesigned to:
  • be robustly protective of the security of sensitive user information
  • maintain the privacy and security of all personal information ... (p.46)
And here's Mydex, one of the UK's eight identity providers, writing about PDSs (personal data stores):
Personal Data Stores create a single, secure, easy-to-access store for such information so that when we need it it’s at our finger tips ... (p.8)

... the PDS can create one single message informing them of the fact that the card has been lost. It can then be sent securely, direct to their systems ... (p.9)

... behind each payment there is a hugely sophisticated system of highly secure data ‘handshakes’ taking place across a complete eco-system of supporting players ... (p.14)

Etc ...
Skyscape is in an alliance with QinetiQ. That doesn't bode well. But it's not just QinetiQ. The Pentagon felt it necessary, remember, to brief about 30 contractors on cybersecurity. They all have problems. Are any of them capable of grabbing their ass with both hands?

Judging by the daily diet of cyberattack stories, no. Cybersecurity looks like a myth. Just bear that in mind whenever a supplier offers you security.

----------

(Hat tip: Anonymous @ 3 May 2013 10:31, see also the excellent 'Chinese' attack sucks secrets from US defence contractor in ElReg®)

----------

Updated 22.5.14

There were bound to be consequences.

With all these allegations of Chinese hacking flying around, the US had to do something. And now they have. 19 May 2014:
America sues China over corporate spying
America's fraught trading relationship with China turned even more hostile on Monday, after Washington filed an unprecedented lawsuit against Beijing for corporate spying.

The US Department of Justice accused members of China’s military, the People’s Liberation Army, of stealing sensitive information from major energy and metal companies, including Alcoa, the aluminium producer, and Westinghouse, which makes nuclear reactors.
The post above was written three weeks before the Edward Snowden revelations. We now know what we didn't in mid-May 2013: that the US is quite capable of a bit of hacking themselves. It's not just China.

Which may be what China had in mind in their initial response to the US suing them. They called the US a "high-level hooligan". Not entirely impolite – it's better than being a low-level hooligan.

Then they raised the stakes, by calling the US a "mincing rascal". It's not clear which international law being a mincing rascal contravenes. But it sounds bad. China wins phase one of the epithet war.

This whole cybersecurity and countersecurity business is fraught with dilemmas. Ethical, legal, diplomatic and trade dilemmas.

Given that you are a rascal, is it better to be a mincing one than not? It's not clear.

And then there's the FBI problem.

Like everyone else, they're trying to recruit infosec/information security experts. These experts are exceptional people. Few and far between, an inordinate number of them lead lives fuelled by drugs. 21 May 2014:
Wacky 'baccy making a hash of FBI infosec recruitment efforts

... FBI Director James Comey ... reportedly told the White Collar Crime Institute that he needs a “great work force” to compete with the black hats, but “some of those kids want to smoke weed on the way to the interview”.
Ethics, the law, diplomacy or trade? Which one will win?

Trade. It often does. Cisco to Obama: get NSA out of our hardware. Etc ...


Updated 19.1.15

China now knows what most people in the west are catching up with: that the F-35 Joint Strike Fighter is a lemon.

The latest round of managed information release by Edward Snowden via Spiegel (one of a series) includes the snippet that Chinese security services copied “terabytes” of data about the aircraft ...
Please see also China calls Snowden's stealth jet hack accusations 'groundless'. "Lockheed Martin is producing the F-35 for the U.S. military and allies in a $399 billion project, the world's most expensive weapons program."

So much for the security of Lockheed Martin's computer systems.

Lockheed Martin must be among the best in the business. The security business. And $399 billion should buy you the best of ... just about everything. And yet "the F-35 Joint Strike Fighter is a lemon".

Charming old stick-in-the-muds that they are, the Government Digital Service may believe that they can offer the public a secure national identity scheme, GOV.UK Verify. But they really can't expect us to believe it. Not now.


Updated 25.5.15

John Bercow mood music

"Read our blog", said the self-proclaimed Digital Leaders on 25 May 2015, and pointed us all at a 12 February 2015 blog post by John Bercow MP, Speaker of the House of Commons, British democracy and the digital revolution.

Mr Speaker established a special Commission in late 2013 to "consider how the digital revolution has changed or might further develop British representative democracy".

The Commission has reported now. It sets five targets. And target #4 is:
By 2020, secure online voting should be an option for all voters.
Feasible?

Just reading over the post above, you can't help noticing that Lockheed Martin of all people couldn't keep the design of the F-35 Joint Strike Fighter secure. Ditto the F-22. Ditto the designs for the US combat helicopter fleet, drones, satellites and military robotics, all of which were copied from QinetiQ's computers. But Mr Speaker thinks that on-line voting could be secure.

Why does he think that? What does he know that Lockheed Martin and QinetiQ don't?

And Sony. What does Mr Speaker know that Sony don't know?

Remember Sony?
For two weeks or so now [we said in December 2014], we have all watched as Sony's private and confidential correspondence has been published by hackers, personal details about the stars of their films have been revealed and the value of the company's intellectual property has been destroyed.
If Mr Speaker can obtain endorsements from Lockheed Martin, QinetiQ and Sony to the effect that they have good reason to believe that he knows how to deliver secure on-line services including electronic voting, maybe we'll believe that his target #4 is feasible. Otherwise, no, his words are just John Bercow mood music.

Tuesday 14 May 2013

The unqualified success of the Government Digital Service

Comment submitted to the UK Constitutional Law Group in response to a post on their blog about the perils of GOV.UK:
When links are broken, a bit of history is lost. This vandalism is always happening on the web. We know that. The web is inimical to scholarship in that way.
The advent of GOV.UK was exceptionally vandalistic. The Government Digital Service (GDS), whose baby it is, left behind a trail of destruction. Or rather, they didn’t. They eradicated it.
They did so under the terms of reference of a project called "the single government domain".
They are prone now to congratulating themselves on completing the transfer of all central government departmental websites to the single government domain, GOV.UK, and several non-departmental sites. Their congratulations are premature. hmrc.gov.uk, for example, lives on, thank goodness. A rare case of GDS’s discretion being the better part of valour.
There was internal dissent to the policy-centric GOV.UK approach identified by Liz Fisher. Jeni Tennison argued that destroying departmental identity involved losing something valuable. Judging by the comments on her thoughtful blog post, her objections were slapped down, rather than refuted, and she left GDS.
Who grants the licence for GDS’s vandalism?
The answer may interest Constitutional lawyers. Martha Lane Fox.
Now a peer of the realm, Lady Lane Fox of Soho, it is she who first called for GOV.UK in a letter dated 14 October 2010 where she wrote:
A new central commissioning team should take responsibility for the overall user experience on the government web estate, and should commission content from departmental experts. This content should then be published to a single Government website with a consistently excellent user experience.
The "new central commissioning team" is GDS. And the departments of state are to be reduced, in Lady Lane Fox’s view, to waiting to be commissioned by GDS to publish their policy.
She didn’t stop there. GDS should be able to countermand the law as well as the expertise of policy-makers wherever "user needs" are adversely affected as judged by GDS:
[GDS] SWAT teams … should be given a remit to support and challenge departments and agencies … We must give these SWAT teams the necessary support to challenge any policy and legal barriers which stop services being designed around user needs.
We all used to get emails from the individual departments bringing their press releases to our attention. Now those emails all come from GDS, GOVUK@public.govdelivery.com.
Unprecedented power is being centralised in GDS, whose qualifications – they are a team of website developers – are questionable. It’s a new world.

midata is an attempt to get us all to embrace PDSs (personal data stores)

Comment submitted to Craig Belsham's midata blog:
Mr Belsham

My objections to midata are set out in my response to last year's BIS consultation and I shan't repeat them all here.

None of midata's claims to empower the consumer and to expand the economy is even remotely convincing. Which leaves me asking, like Paul Clarke, why?

One hypothetical answer is that midata's sole purpose is to encourage people to maintain PDSs (personal data stores).

That hypothesis is consistent with William Heath being a member of the midata strategy board and the chairman of Mydex – a PDS company – which is, in turn, one of the UK's eight appointed identity providers. It makes midata part of the Government Digital Service's Identity Assurance Programme (IDAP).

It doesn't excuse the mendacious marketing. But at least it explains why Whitehall takes the trouble to promote this otherwise fatuous initiative.

What do you think, Mr Belsham?

The historically inevitable triumph of midata

Many of us find the Department for Business Innovation and Skills's initiative, midata, perplexing. With the passing of the Enterprise and Regulatory Reform Act 2013, midata now has the statutory powers needed. But why?

An explanation is available.

It nestles in one of the comments on Craig Belsham's first post on his new midata blog.

The midwife to our understanding is one William, who opens his comment (May 8, 2013 at 9:47 am) with a dazzling and surely incontrovertible exposition of the economic benefits of midata and then adds:
It’s a rational thing for businesses to accept and make a virtue of going along with the inevitable. But inertia tales a lot of overcoming, and it’s understandable to see the element of regulation in Midata ...
Thank goodness, you may say, that there are all-powerful, benevolent scholars out there who understand economics and who will help us to overcome our miserable inertia.

Sunday 12 May 2013

Biometrics: will the Center for Global Development reconsider?

A recently published report on India's identity management scheme says that: "accurate, biometric-based, identification is quite feasible for large countries, including the US".

The suggestion below is that the conclusion should read: "subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed and as long as you're relaxed about the fact that anyone could have any number of entries on the population register and the fact that the discipline of biometrics is out of statistical control".

Will the authors consider issuing a revised version of their report?

-----  o  O  o  -----

On the rare occasions when trials have been conducted, the performance of biometrics technology has been disappointing. For example, when 10,000 of us took part in a UK government-run trial in 2004, about 20% of participants couldn't have their identity verified by their fingerprints.

That's useless. For example, the plan at the time was to use biometrics to confirm people's right to work in the UK. You can't tell 20% of the working population that it's illegal for them to work.

Ever optimistic, the biometrics industry is always announcing that the corner has been turned and that it's safe now to believe their promises. Is that true at last?

Consider Performance Lessons from India’s Universal Identification Program [Updated 13.12.18, change of web address, now here], a 12-page report by Alan Gelb and Julia Clark (Gelb and Clark, G&C).

It's about India's Unique Identification project (UID, also known as "Aadhaar") which relies on biometrics. UID/Aadhaar is the brainchild of UIDAI, the Unique Identification Authority of India. UIDAI are currently trying to register the biometrics of all 1.2 billion Indians.

G&C conclude that:
UID’s performance suggests that accurate, biometric-based, identification is quite feasible for large countries, including the US (p.8).

UID shows that countries with large populations can implement inclusive, precise, high-quality identity systems by using existing technology (p.9).
Those conclusions are electric.

If they're correct.

But are they?

Why do G&C conclude that biometrics is now ready for large-scale deployment?

-----  o  O  o  -----

They have identified "160 [biometrics] programs in 70 countries that together cover over 1 billion people and include a wide range of applications – financial access, public payroll management, social transfers [?], health insurance and tracking and voter rolls – as well as national identification systems" (p.1).

Do they say that biometrics is ready for the big time because UIDAI have successfully implemented financial access systems which depend on biometrics? Or public payroll management systems? Or ...

Certainly not.

In fact G&C are at pains to say that:
UID is still at a relatively early stage, and links to the delivery of public programs are only now getting under way (p.2).

It remains to be seen how robust the system is against active efforts to spoof it by providing faked fingerprints or iris images, to capture biometric data in transmission or to penetrate the database (p.2).

Having a unique Aadhaar number issued by UIDAI itself entitles the holder to no specific privileges or programs (p.3).

UID is still at an early stage. Only one fifth of the population has been enrolled and the linkage to public programs is just beginning (p.8).
Their logic doesn't depend on any practical successes of Aadhaar. There aren't any.

What G&C base their conclusions on is the performance of biometrics in the compilation of the Indian population register so far. If we are to answer the question whether their conclusions are correct, we need to look at the UIDAI statistics which measure the reliability of biometrics.

Before we do that, we need to update G&C's conclusions. There's a rider to add. Their p.2 warnings about spoofing and eavesdropping on telecommunications and burgling the population register need to be incorporated – the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack.

-----  o  O  o  -----

Slide rules ready? G&C say (p.5):
How many people would be denied enrolment because of a wrong determination that they had already enrolled? The False Rejection Rate (FRR) of the identity system is critical, especially with a large population. Since every new enrollment has to be checked against every existing enrollment, the number of comparisons increases with the square of the population ... Extrapolating this to our hypothetical Ughana population ...
Wrong.

Think in terms of ice cream. How many unique combinations of two ice cream flavours can you make from a choice of five flavours (A, B, C, D and E)? G&C suggest that the answer is 25, "the square of the population". It isn't. It's 10 (AB, AC, AD, AE, BC, BD, BE, CD, CE and DE), 5!/((5-2)! x 2!).

G&C have a peculiar habit. They're talking about India, with its population of 1.2 billion, but half the time when they use statistics they apply them to Ughana, a country they have invented. Why?

It confuses the readers. It may also confuse the writers. Forswearing Ughana and sticking to India, how many comparisons would have to be made to compare each one of 1.2 billion sets of biometrics against all the rest? Answer, 719,999,999,400,000,000 and not G&C's implied answer 1,440,000,000,000,000,000, which is out by a smidgeon over 100%.
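Both counts are easy to check – a minimal sketch in Python, using the standard library's `comb`:

```python
from math import comb

# Unique pairs of ice cream flavours from a choice of five: C(5, 2).
print(comb(5, 2))  # 10, not 25

# Pairwise comparisons needed to check each of India's 1.2 billion
# enrolments against all the others: n(n-1)/2, not n squared.
n = 1_200_000_000
pairs = comb(n, 2)   # 719,999,999,400,000,000
square = n * n       # 1,440,000,000,000,000,000
print(square / pairs)  # just over 2 – the "square" figure is out by a smidgeon over 100%
```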

New rider on the conclusions –  the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right.

-----  o  O  o  -----

How did G&C get themselves into this bind?

It was in the midst of a discussion about false accept rates and false reject rates.

Aadhaar is all about comparing the biometrics captured by a fingerprint scanner or an iris scanner with the biometrics stored in the population register. Either they match or they don't.

Say your Aadhaar number is 782474317884, that there's an election on and that you have turned up at a voting centre. The biometrics associated with 782474317884 are retrieved from the population register and checked to see if they match your freshly scanned biometrics. If they do, you can vote. It's a one-to-one comparison, an "authentication" process.

Two ways the process can go wrong (among others):
  • Either the process says the biometrics don't match, you are not who you claim to be, you are not President Lincoln according to Aadhaar, even though you are in reality. That's a false reject. 
  • Or, alternatively, the process can say that, yes, you are who you claim to be, you are President Lincoln, when, in fact, you're not, you're an impostor. That's a false accept.
The False Accept Rate (FAR) and False Reject Rate (FRR) are two measures of the reliability of any biometrics system. They are inversely related. This is the "Detection Error Tradeoff" that G&C talk about on p.4. As one goes up, the other goes down. You can't get them both low at the same time.

Take a look at UIDAI's 27 March 2012 report on authentication (p.4). Using one or two fingers to authenticate yourself, UIDAI expect the Aadhaar system to be between 93.5% and 99% accurate. I.e. FRR will be between 1% and 6.5%. That's with a FAR of 0.01%. FRR is high, FAR is low(ish).

Varying FAR from high to low and FRR the other way is achieved by changing the matching threshold. You can set the system to insist on a very high score before asserting that President Lincoln's freshly scanned fingerprints match the set already stored on the population register. That would give a low FAR and a high FRR. Or you can set a very low threshold and achieve the opposite. And all points in between.
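A toy illustration of that dial-twiddling, with invented match scores – everything below is made up for illustration and has nothing to do with UIDAI's actual matcher:

```python
import random

random.seed(0)

# Invented scores: genuine comparisons tend to score high, impostor
# comparisons low, but the two distributions overlap.
genuine  = [random.gauss(0.75, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.40, 0.10) for _ in range(10_000)]

def rates(threshold):
    frr = sum(s < threshold for s in genuine) / len(genuine)     # genuine users rejected
    far = sum(s >= threshold for s in impostor) / len(impostor)  # impostors accepted
    return far, frr

for t in (0.45, 0.55, 0.65):
    far, frr = rates(t)
    print(f"threshold {t:.2f}: FAR {far:.2%}, FRR {frr:.2%}")
# Raising the threshold pushes FAR down and FRR up; lowering it does the opposite.
```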

This is odd.

In the world we're used to, if you are President Lincoln then you are President Lincoln and that's all there is to it. It doesn't depend on the matching threshold set by some state functionary.

In the world of Aadhaar, depending on the threshold chosen, sometimes you will be President Lincoln (low threshold, easy to achieve a match, low FRR, high FAR) and sometimes you won't (high threshold, hard to achieve a match, high FRR, low FAR). It all depends. At the limit, the functionary could fix it so that no-one was President Lincoln. Or that everyone was.

When we said above that "either they match or they don't", that was a tease. That's the way people imagine biometrics systems to work. All cut and dried. In fact, it's discretionary.

The concept of identity in Aadhaar is different from the concept in the real world. Identity becomes discretionary, something that can be granted or revoked by twiddling the dial on a gizmo.

There's another rider to add to G&C's conclusions – the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed.

-----  o  O  o  -----

That's authentication.

Identification is different.

Identification is the process you go through when you are enrolled into Aadhaar. Before identification, you don't exist as far as Aadhaar is concerned. If public services in India ever start to depend on Aadhaar and you don't have an Aadhaar number, you won't get any public services. Why would the state provide benefits to someone who doesn't exist? At the very least you will look very suspicious.

Identification is a one-to-many process. When you first enrol someone in the register, their biometrics have to be checked for uniqueness. Instead of checking them against just one set of biometrics, they have to be checked against every set already registered.
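The difference between the two processes is easy to sketch. The `match` function below is a placeholder, not a real biometric matcher; the point is simply the number of comparisons each process entails:

```python
# A toy contrast between authentication and identification. "match" stands
# in for a real biometric matcher; here it is just a placeholder function.

def authenticate(probe, gallery, claimed_id, match):
    """One-to-one: a single comparison against the claimed record."""
    return match(probe, gallery[claimed_id])

def identify(probe, gallery, match):
    """One-to-many: compare the probe against every record in the gallery."""
    return [rec_id for rec_id, template in gallery.items()
            if match(probe, template)]
```

With a gallery of n records, `authenticate` costs one comparison and `identify` costs n. At Indian population scale, n is 1.2 billion, which is why the numbers get so big so fast.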

The errors the biometrics system can make are very similar (among others: yes, you are already enrolled when really you're not; or no, you're not already enrolled when really you are) but the process has such existential consequences that it's normal to talk of the false negative identification rate (FNIR) and the false positive identification rate (FPIR), rather than FAR and FRR, to distinguish it from mere quotidian authentication.

UIDAI talk of FRRs between 1% and 6.5% for authentication using fingerprints whereas, when it comes to identification, their FPIR figure is 0.057%. That's two orders of magnitude different. Identification is a strict process and, by comparison, authentication is flabby.

G&C unfortunately use FAR and FRR for both identification and authentication which obscures the important distinctions between the two processes.

-----  o  O  o  -----

FNIR and FPIR are inversely related, like FAR and FRR.

How good are the biometrics UIDAI are using at creating a reliable population register?

It's a problem Professor John Daugman has looked at. Not in connection with Aadhaar in particular. But in general. For any biometrics-based identity management scheme.

Remember, to establish uniqueness for every one of the 1.2 billion sets of biometrics on India's population register, you have to compare each set with every other set – n(n-1)/2 comparisons with n = 1.2 billion, which comes to 719,999,999,400,000,000.

Suppose, says Professor Daugman, that there's a mistake 1 time in a million such that a false positive identification is made. Then Aadhaar will throw up 719,999,999,400 false matches.
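The professor's arithmetic is easy to check:

```python
# Professor Daugman's arithmetic, checked. De-duplicating n records takes
# n*(n-1)/2 pairwise comparisons.
n = 1_200_000_000                       # India's population, roughly
comparisons = n * (n - 1) // 2
print(comparisons)                      # 719999999400000000

# Now apply a one-in-a-million false match rate to that many comparisons:
false_match_rate = 1e-6
false_matches = comparisons * false_match_rate
print(f"{false_matches:,.0f}")          # 719,999,999,400
```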

These can't be resolved by the computer – it's the computer that threw up the false matches in the first place. They have to be resolved by human investigations.

Humans aren't going to complete 719,999,999,400 investigations. It's impractical. The identity management scheme will drown in a sea of false positives, as the professor puts it.

Is there a one-in-a-million chance of a mistake?

Professor Daugman thinks that it's a lot worse than that if you rely on face recognition as a biometric. There's far too little randomness in faces, there are far too few degrees of freedom, for face recognition to support enormous numbers like 719,999,999,400,000,000. (Never mind Ughana, that doesn't stop the UK government wasting money on face recognition.)

Fingerprinting is better in this sense than face recognition, but still not good enough to avoid drowning in a sea of false positives. (That doesn't stop the UK government wasting money on glitzy new fingerprinting systems.)

Irises on the other hand do have enough randomness, he says, there are enough degrees of freedom to stay afloat. Which is good news for UIDAI – Aadhaar uses a combination of both fingerprints and irises.

-----  o  O  o  -----

Is Aadhaar in the clear? Which is it? Sink or swim?

According to UIDAI's report on identification (p.4), on 31 December 2011 when there were 84 million sets of biometrics on the population register, the FPIR was 0.057%, the FNIR was 0.035% and "it is unnecessary and inaccurate to attempt to infer UIDAI system performance from other systems which are ten to thousand times smaller".

It may be unnecessary and it may be inaccurate but it's impossible to resist the temptation – compared to any other biometrics-based scheme known to man, these figures for Aadhaar are astonishing. Certainly no salesman worth his or her salt will ignore them.

It looks as if there would be only 684,000 false positive identifications to investigate by the time the population register is full, and not 719,999,999,400.

684,000 is manageable. As UIDAI say (p.18):
... at a run rate 10 lakhs enrolments a day, only about 570 cases need to be manually reviewed daily to ensure that no resident is erroneously denied an Aadhaar number. Although this number is expected to grow as the database size increases, it is not expected to exceed manageable values even at full enrolment of 120 crores. The UIDAI currently has a manual adjudication team that reviews and resolves such cases.
[1 lakh = 100,000 and 1 crore = 10,000,000]
How do UIDAI know that the FPIR was 0.057% when the register had 84 million entries?

Presumably they had recorded 47,880 cases of false positive identifications to date.

You'd think that. But you'd be wrong. UIDAI tell us that (p.18):
An FPIR of 0.057% was measured when the gallery size was 8.4 crore (84 million) and probe size was 40 lakhs (4 million). The false rejects (legitimate residents who are falsely rejected by the biometric system) were a count of 2309 out of the 40 lakh probes
They did a test. They probed the gallery with 4 million sets of biometrics and they got 2,309 false positive identifications.
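The arithmetic, for anyone checking – all the figures below are UIDAI's own, the Python is just a calculator:

```python
# UIDAI's FPIR figures, reconstructed.
gallery_size = 84_000_000          # 8.4 crore records on 31 December 2011
fpir = 0.057 / 100                 # the rate UIDAI report

# The count one might have expected UIDAI simply to report:
expected_to_date = gallery_size * fpir
print(round(expected_to_date))     # 47880

# What they actually did: probe the gallery with 4 million sets and count.
probe_size = 4_000_000
false_positives_in_probe = 2_309
measured_fpir = false_positives_in_probe / probe_size
print(f"{measured_fpir:.4%}")      # 0.0577%

# And the projection at full enrolment of 1.2 billion (120 crore):
print(round(1_200_000_000 * fpir))  # 684000
```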

Funny way to do it.

Perhaps we shall be told that there's an agreed protocol in the biometrics industry such that this is an acceptable way of determining FPIR. Even so, why not report the actual number of false positive identifications recorded?

That statistic should be available in the case of Aadhaar – G&C tell us that (p.2):
UIDAI places a heavy emphasis on data quality throughout the process. It collects as much operational data as possible, including on the details of each individual enrolment as it is carried out, process by process. This is included, together with biometric and demographic data, in the packet of information sent from the enrollment point to the data center.
Why not tell us how many false positive identifications there were as well as the result of the test probe? Why were there 4 million sets of biometrics in the probe and not 5 million, or 3 million? How were the 4 million chosen?

The questions mount and the answer gradually comes into focus – in order to inspire confidence, UIDAI's figures need to be audited by independent experts and certified like a set of company accounts.

And, like company accounts, they should be audited every year. These figures from 31 December 2011 are getting very long in the tooth.

Another rider – subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed.

-----  o  O  o  -----

UIDAI say that the incidence of false positive identifications is manageable and that they expect it to remain manageable. I.e. they're not drowning in a sea of false positives.

G&C have this footnote, #7, on p.5 of their report:
For a huge population like India’s, even this small level of error would result in some 3.1 million false rejections if continued through the program. UIDAI plans to contain the numbers by eliminating some sources of error unearthed by the initial study, and also by relaxing the [FNIR] if needed to further reduce the [FPIR]. Handling false rejections has reportedly been a manageable problem to date.
"UIDAI plans to contain the numbers by ... relaxing the [FNIR] if needed to further reduce the [FPIR]". What? "Relaxing the [FNIR]"?

What does that mean? In order not to drown in false positives, UIDAI will let false negatives go up? UIDAI have got to get the population register completed and if that means tolerating lots of duplicate entries, too bad, so be it, let uniqueness go hang? If that isn't what it means, then what?

How relaxed? Very relaxed? What level does FNIR have to rise to, to keep FPIR down at 0.057%? Do UIDAI even know? Should they change their name to the Multiple Identification Authority of India?

"It is unnecessary and inaccurate to attempt to infer UIDAI system performance from other systems which are ten to thousand times smaller"? On the contrary, it is only sensible to question UIDAI's performance claims.

The riders are piling up now – subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed and as long as you're relaxed about the fact that anyone could have any number of entries on the population register.

-----  o  O  o  -----

If a supplicant turns up at an Aadhaar registration centre and is the victim of a false positive identification, you're going to know about it. They're going to demand their Aadhaar number and they're going to stay there and jump up and down until they get it. At least they will if they're legitimate and not impostors.

It's different with false negative identification. If an impostor turns up at the centre and his or her earlier registration is not detected by Aadhaar, then they're not going to tell you. You won't know. Impostors don't have the same desire to keep the performance statistics up to date as upright people do.

The upshot is that you can't measure FNIR. Not in the field.

You can submit a batch of sample biometrics and see how well the system performs. How successful it is at finding these deliberately seeded duplicates on the register. And that's what UIDAI did (pp.18-19):
To compute FNIR, 31,399 known duplicates were used as probe against gallery of 8.4 crore (84M). The biometric system correctly caught 31,388 duplicates (in other words, it did not catch 11 duplicates). The computed FNIR rate is 0.0352%. Assuming current 0.5% rate of duplicate submissions continues, there would only be a very small number of duplicate Aadhaars issued when the entire country of 120 crores is enrolled. Aadhaar expects to be able to increase the quality of all collections as the system matures. Consequently, we expect the potential number of false acceptances to decrease further below this already operationally satisfactory number.
That's fine. But if the actual number of "duplicate submissions" is higher than UIDAI assume and the "false acceptances" are more numerous than they expect, no-one will know. All UIDAI can say is, when we did this test, we got this result. Whether that is an accurate measure of FNIR out there in the operational system in the real world, nobody knows.
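For the record, here is the arithmetic of that test, together with the projection on UIDAI's own assumptions – the 0.5% duplicate-submission rate is theirs, not ours. (Note in passing that 11 misses out of 31,399 comes to 0.0350%, a shade under the 0.0352% UIDAI quote.)

```python
# The FNIR test, reconstructed, with UIDAI's own projection.
known_duplicates = 31_399
caught = 31_388
missed = known_duplicates - caught          # 11
fnir = missed / known_duplicates
print(f"{fnir:.4%}")                        # 0.0350% (UIDAI quote 0.0352%)

# On UIDAI's own assumption of a 0.5% duplicate-submission rate at full
# enrolment of 120 crore (1.2 billion):
duplicate_submissions = 1_200_000_000 * 0.5 / 100
undetected_duplicates = duplicate_submissions * fnir
print(round(undetected_duplicates))         # ~2102 duplicate Aadhaars
```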

What we do know – G&C tell us – is that UIDAI have been "relaxing" the FNIR to keep FPIR low. The confidence we can have in UIDAI's figure for FNIR is severely limited.

-----  o  O  o  -----

It's worse than that.

G&C tell us on p.1 of their report that:
Although there has been extensive laboratory testing of different hardware and software for a variety of biometrics, including fingerprints, iris, face and voice, testing under carefully controlled conditions does not provide adequate information on real-world performance, which can be affected by many factors (Wayman et al 2010).
The paper they cite, Fundamental issues in biometric performance testing: A modern statistical and philosophical framework for uncertainty assessment, is written by three world-class experts – James L. Wayman, Antonio Possolo and Anthony J. Mansfield.

As G&C tell us, the experts conclude that technology tests and scenario tests tell us nothing about how well or how badly a biometrics system will perform in the operational environment. As they put it, biometrics is out of "statistical control".

To put it another way, UIDAI's FNIR and FPIR test probes are a waste of time.

Tony Mansfield is the UK's top biometrics authority and Jim Wayman is the US's. And Antonio Possolo is the top man on measurement at the US National Institute of Standards and Technology (NIST). They're practitioners. They have decades of experience. They advise governments. Their own and others. They know what they're talking about.

And what they're talking about is biometrics being out of statistical control.

That implies many things. Among others, consider the following.

Messrs Wayman, Possolo and Mansfield refer to the USA PATRIOT Act in their paper (p.20). By law, NIST have to certify biometrics systems before they are deployed in the national defence.

That may be the law but, if the technology is out of control then NIST have a problem obeying the law. They could refuse to certify any biometrics systems and then none would be deployed. That's one option. They have chosen another option. The certificate they issue says:
For purpose of NIST PATRIOT Act certification this test certifies the accuracy of the participating systems on the datasets used in the test. This evaluation does not certify that any of the systems tested meet the requirements of any specific government application. This would require that factors not included in this test such as image quality, dataset size, cost, and required response time be included.
That's the best they can manage in the circumstances. The result of the test is the result of the test and that's all we know. How the system will perform in the field is anyone's guess. According to three world-class experts, in biometrics, that is the state of the art.

Final rider – subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed and as long as you're relaxed about the fact that anyone could have any number of entries on the population register and the fact that the discipline of biometrics is out of statistical control.

-----  o  O  o  -----

It is premature to conclude that biometrics have proved themselves in Aadhaar:
  • Let's wait and see if any bank is confident enough to authorise payments on the basis of biometrics alone. No password. No PIN. No token. Just biometrics.
  • Let's wait and see if legitimate voter participation is increased by Aadhaar.
  • India's various food and fuel distribution programmes and its temporary employment programmes for the long-term unemployed are plagued by large-scale corruption. Let's wait and see if Aadhaar reduces the level of corruption or simply automates it.
  • And let's wait for an independent audit of UIDAI's results.
G&C have already identified 160 biometrics programmes in 70 countries. This latest report of theirs will be embraced by biometrics salesmen the world over as an unsolicited testimonial from a respected source and will be used to raise funds for more programmes. (G&C driving up the false accept rate?)

G&C work for the Center for Global Development, a Washington-based think-tank and lobbyist which aims to "reduce global poverty and inequality through rigorous research and active engagement with the policy community to make the world a more prosperous, just, and safe place for us all".

It's hard work finding good homes for aid money. There are legitimate doubts about the reliability of biometrics. Aid money isn't necessarily well spent on biometrics systems.

Michela Wrong, a journalist who has covered Africa for two decades, reported on the March 2013 elections in Kenya complete with biometric registration of electors and electronic voting. She had this to say:
I suddenly realised I was watching a fad hitting its stride: the techno-election as democratic panacea ... EU and Commonwealth election monitors hailed the system as a marvel of its kind, an advance certain to be rolled out across the rest of Africa and possibly Europe, too. The enthusiasm was baffling, because almost none of it worked.
The Economist magazine have let their sceptical guard down and become active in the unsolicited testimonials market – please see The Economist magazine sticks its nose into Indian politics, comes away with egg on its face and The Economist magazine's chickens, now on their way home to roost.

That was some time ago. They remain dazzled by technology to this day: "India has registered 275m of its 1.2 billion people in one of the world’s most sophisticated ID schemes (it includes iris scans and fingerprints)". Why do they think that the inclusion of biometrics is ipso facto "sophisticated"?

They should talk to Michela Wrong.

-----  o  O  o  -----

G&C have spotted what the Economist have missed:
  • The Wayman, Possolo and Mansfield paper.
  • UIDAI relaxing the FNIR.
  • The element of smoke and mirrors in biometrics – they talk about the "fiction of infallibility" (p.9) and the "pretense of uniqueness in the ID system" (p.10) and the possibility that "in the longer run, as its mystique evaporates, the identity system will no longer be trusted by anyone, eliminating any value" (p.10).
Above all, quite rightly, G&C call for more countries to release data on the performance of biometrics in the field – "distressingly little data is available on [biometrics] performance, either for identification or for authentication" (p.1) and "there is now no excuse for other countries not to share data—or for donors not to insist on it when financing identification programs" (p.10).

The biometrics salesmen won't like that conclusion of G&C's and they won't mention it, please see UIDAI and the textbook case study of how not to do it, one for the business schools. (Neither will the UK government.)

All that healthy scepticism, and yet G&C conclude that biometrics is ready for large-scale deployment:
  • Did they check with NIST or the FBI before publishing their report? Those organisations know quite a lot about biometrics and might have provided some useful input.
  • Did they contact Messrs Wayman, Possolo and Mansfield? If G&C believe them when they say that biometrics is out of statistical control, then there's not much point filling up their report with useless statistics, is there? If they don't believe them, why not?
  • Would G&C be so generous with their testimonials if Aadhaar was an aeroplane safety system, for example?
  • Would they feel qualified to comment if they were dealing with the pharmaceutical industry rather than the biometrics industry?
  • Would they be more sceptical if they were dealing with research funded by the tobacco industry?
  • Why does biometrics get the kid gloves treatment?
  • And what is this fake distinction G&C make between countries with a large population and a small one? The biometrics tested in the UK failed with a trial population of 10,000 participants. Biometrics is a technology. At least it's supposed to be. Either it works or it doesn't. Cars work in the US. And they work in India. If biometrics isn't good enough for the US, it's not good enough for India. Or Uganda or Ghana. Which are two different countries. Ask Michela Wrong.
All that healthy scepticism, and yet G&C conclude that: "UID shows that countries with large populations can implement inclusive, precise, high-quality identity systems by using existing technology".

No.

It shows nothing of the sort.

Is there any chance of G&C reissuing their report with revised conclusions?

Biometrics: will the Center for Global Development reconsider?

A recently published report on India's identity management scheme says that: "accurate, biometric-based, identification is quite feasible for large countries, including the US".

The suggestion below is that the conclusion should read: "subject to an annual audit, the US could safely deploy an identity management scheme based on biometrics apart from the possibility of cyberattack and as long as we've got our maths right and as long as you realise that it's not identity that's being managed and as long as you're relaxed about the fact that anyone could have any number of entries on the population register and the fact that the discipline of biometrics is out of statistical control".

Will the authors consider issuing a revised version of their report?

-----  o  O  o  -----

On the rare occasions when trials have been conducted, the performance of biometrics technology has been disappointing. For example, when 10,000 of us took part in a UK government-run trial in 2004, about 20% of participants couldn't have their identity verified by their fingerprints.