Several possible angles of discussion here - from the privacy implications to the difficulty of revocation...
But I think I'll start with the algorithmic PointOfView?.
How do you go about turning a lossy piece of information (no scan is ever accurate) into a secret key, whilst maintaining the requirement that it stays unique?
Taking a hash of the significant bits isn't good enough, since several hands (or irises or whatever) will then map to a single key. Ideas? --Vitenka
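(For what it's worth, the literature's rough answer to this is the "fuzzy extractor": error-correct the noisy scan towards the enrolled reading *before* hashing it. A toy Python sketch, with made-up bit-strings, of why a plain hash fails on lossy input where threshold matching doesn't:)

```python
import hashlib

def exact_key(bits: str) -> str:
    """Hashing the raw scan: any flipped bit gives an unrelated key."""
    return hashlib.sha256(bits.encode()).hexdigest()

def fuzzy_match(bits_a: str, bits_b: str, threshold: int = 3) -> bool:
    """Threshold matching: accept if the two scans differ in at most
    `threshold` bit positions (Hamming distance)."""
    assert len(bits_a) == len(bits_b)
    distance = sum(a != b for a, b in zip(bits_a, bits_b))
    return distance <= threshold

enrolled  = "1011001110100101"
rescanned = "1011001010100111"   # two bits flipped by scanner noise

print(exact_key(enrolled) == exact_key(rescanned))  # False: the hash is useless here
print(fuzzy_match(enrolled, rescanned))             # True: within tolerance
```

(Note the trade-off Vitenka is pointing at: the wider the threshold, the more reliably your own hand works, and the more *other* hands work too.)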
Is this actually a problem, particularly if it's being used to add an extra layer of security to something like a credit card? All that's needed is some reasonable evidence that the user is who they claim to be. If you don't know who you share BioMetric? information with, which seems a reasonable assumption, then you can't identify which card to steal. --CH
Well, you still need to make it contain enough bits of unique information so that not every hand on earth activates it, but your hand reliably activates it, despite dodgy scanning. Agreed, it's less important the less important the scanner is, but it must still be an interesting algorithm. --Vitenka
But if it turns out that 1 person in 50 shares your prints at that level, then the bank would say there's a 98% chance that you did spend that money and that you couldn't possibly have been mugged - coming back to the arguments under ChipAndPin, albeit even MORE in favour of the bank. (I do admit it'd be less likely that you'd be successfully mugged, but only until the people doing the mugging catch up technologically - this only raises the bar, and you're stuck in an ArmsRace.) Meanwhile, it's the bank customers who will be footing the bill for that arms race, and frankly, that's my money. Gerroff. --Jumlian
Humm. I was trying to get away from ChipAndPin. Still, the point stands - for an important break-in, unless the chances of a FalsePositive? are one in twelve billion, an attacker has a decent chance of finding a match if they try hard enough. --Vitenka
AlexChurchill disagrees, for what it's worth. Sure, 1 in 50 isn't far enough, but one in 12 000 000 000 is too far. 1 in 5000 would do.
Six billion people on earth, and counting. So double it to get a standard deviation and make it unlikely that two exist. BackOfTheEnvelope? maths, sure... But one in five thousand means that you can find a couple in every town! --Vitenka
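(Illustrative arithmetic for the claim above - the town size is a made-up figure:)

```python
# Expected number of people in a town who match one *given* template,
# if the false-match rate is 1 in 5000.  Town size is illustrative.
town_population = 10_000
false_match_odds = 5_000

expected_matches = town_population / false_match_odds
print(expected_matches)   # 2.0 - literally "a couple in every town"
```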
Could do, sure. If you had access to the biometric hashes of everyone in said town! --AC
You exaggerate. Consider the number of people visiting a petrol station, or a Cambridge cafe, or other popular place of business containing biometric-hashing equipment under a clerk's control, over several weeks. Consider that there does not have to be just one person looking to find a match, and apply the birthday paradox. - MoonShadow
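(A sketch of the birthday-paradox point, reusing the 1-in-5000 figure from above; the sample sizes are illustrative:)

```python
from math import comb

# An attacker who has collected n biometric hashes doesn't need any
# *particular* pair to match - any pair will do.  With pairwise
# false-match rate p, P(some pair matches) = 1 - (1 - p)^C(n, 2).
p = 1 / 5000

for n in (50, 100, 300):
    p_some_pair_matches = 1 - (1 - p) ** comb(n, 2)
    print(n, round(p_some_pair_matches, 3))
```

(Around a hundred collected samples already gives better-than-even odds of finding a matching pair.)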
Just what are the standards of "trying hard enough" that Vitenka thinks we need to defend against? If someone's got access to an entire town's biometric data then they've probably got access to their credit cards also. --AC
Oh, indeed, you're no worse off if someone's collecting biometric hashes than if they're gathering other credit card data - if they're only useful for the purpose of authenticating credit cards. But I thought the point of all that money and all that effort was to make things better? - MoonShadow
Also, once a large base of readers, card factories and other infrastructure is around, how likely is the usage limitation to remain? How likely are people to want to do anything different when something else needs to be authenticated? What happens when we get feature creep, like the situation with SSNs in the US? What if it's also the authentication used in Blunkett's ID cards, say? Suddenly you can do a lot more with a hash collision than just drain someone's credit card. And once a collision is known, you can't revoke your fingerprint the way you can revoke a non-biometric form of ID. - MoonShadow
DNA tests as used by courts are far from perfect, and I'm pretty sure the odds against a false positive aren't twelve billion to one, but you don't see hue and cry about their inaccuracy. --AC
First off - it is very rare to try and spoof DNA testing, since you're only testing those who were (plausibly) at the scene. If it were used in a more widespread way, it would be seen to be inaccurate. Even as it is, it seems to be given too much weight. Secondly, you are right - it depends how badly you need something protected. Biometrics are clearly overkill for something like access through a shared door, and clearly insufficient for something like access to top-secret files. From a bank's point of view, I agree that it's probably 'good enough' (hey, stops 5% more fraud, costs us x billion, do it) - but from the individual's point of view, it is not a good thing. I'd also like to add that fingerprints are a really stupid thing to use on credit cards, which are likely to get the prints of the user all over them already. Which would make spoofing kinda trivial. Which is another thing you forget - you leave your biometric data everywhere. The number of places it could be lifted from (and remember, you can never redact it) is terrifying compared to a secret password only used for access. --Vitenka
Surely this is down to ignorance and a false belief in unquestionable forensics rather than anything else. There was a good article in New Scientist a few years ago on this which mentioned a couple of cases where false positives have been spotted already on DNA tests. Where exactly does the figure one in 12 billion come from anyway? --Edith
Six billion in the world, double that as a rule of thumb to make finding a collision unlikely. --Vitenka (There will be collisions, but you're not likely to find them)
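(Making the rule of thumb explicit - against one *fixed* template the expected number of other matching people is about a half, while matching *pairs* across the whole population are still plentiful, which is the parenthetical point:)

```python
from math import comb

population = 6_000_000_000
false_match_odds = 12_000_000_000   # false-match rate of 1 in 12 billion

# Matches against one given template: unlikely any exist.
print(population / false_match_odds)             # 0.5

# Matching pairs somewhere in the population: plentiful.
print(comb(population, 2) / false_match_odds)    # ~1.5e9
```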
I've just had a thought. I'm presuming here that a ReverseLookup? DataBase? will exist (to allow the attacking country to find the matching individual) - but surely such a DataBase? will exist sooner or later... and doesn't it throw up all sorts of problems? Once a matching pair have been found, they'll be messing up all sorts of different applications. --Vitenka
Oooh! Oooh! IdenticalTwins?! Or is that a fallacy? --Vitenka (I think it is for most bio data - but not for all.)
And what about kids, who haven't finished growing? If you issue them a new ID every few months then it's both expensive and a huge security hole. --Vitenka (And if you don't, well, then there's a lot of stuff that you can't protect with bio data)
What question are you trying to use biometrics to answer?
"Is this iris that is in front the camera right now the same as the one recorded on the card the person holds?" is different to "Is this iris in front of a camera right now one of the ones on this list of 9000-odd people?", is different again to "is this face I caught on a CCTV shot of a crowd the same as one of the ones on this list of 9000-odd people?"..
How reliably can you answer it? Give an expected rate for each of the following cases:
System answers "yes", and it is correct.
System answers "no", and it is correct.
What is the cost incurred by each kind of failure? At what rate do you expect people to attempt to deliberately fool the system to make it give an incorrect answer in each case, and how much can they affect reliability?
What is the value of the thing you are using the biometric to protect?
What ways can you think of attacking the system, how expensive are they, how much do they affect the system, and are they ClassBreaks?
How expensive is the system?
How does the presence of the system affect the people around it, and thus change their behaviour - changing the above answers. --Vitenka (No system is static. And if people introducing new systems would only learn that...)
Taking all the above into account, is the system worth it?
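(One way of putting numbers on the questions above. Every figure below is a made-up placeholder - the point is only the shape of the calculation:)

```python
# Crude expected-cost model for a biometric system.  All numbers are
# hypothetical placeholders, not estimates for any real deployment.
checks_per_year      = 1_000_000   # legitimate authentication attempts
attack_attempts      = 1_000       # deliberate attempts to fool the system
false_reject_rate    = 0.01        # genuine user wrongly refused
false_accept_rate    = 1 / 5000    # impostor wrongly accepted
cost_false_reject    = 5           # support call, annoyed customer
cost_false_accept    = 500         # fraudulent transaction
system_cost_per_year = 200_000

expected_loss = (checks_per_year * false_reject_rate * cost_false_reject
                 + attack_attempts * false_accept_rate * cost_false_accept)
total_cost = expected_loss + system_cost_per_year
print(expected_loss, total_cost)
```

(With these particular placeholders the false *rejects* dominate the failure cost - the system annoying its legitimate users can matter more than the fraud it lets through, which is exactly the "how does it change behaviour" question.)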
Unsure where in the hierarchy to paste this. [The Register] chimes in with another opinion piece which touches on many of these issues. Biased against the introduction of such a system, as always. --Vitenka
Maybe. This is a classic example of the BirthdayParadox?. Making the wild assumption of independence (if the Feds can do it when it suits them...) you get a probability of MoonShadow and me matching at 9 loci of about 1 in 50,000,000. So from a judicial perspective the problem is more a failure to take into account the Bayesian nature of evidence - P(He did it) != 1 - P(The DNA samples match | He didn't do it) - and problems with the nature of DNA replication, etc. DNA evidence is pretty useless for identifying the person who robbed an off licence, but notwithstanding hash collisions it's still useful for matching the blood in the carpet against the missing person. --PT
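(PT's Bayesian point in numbers - assuming a uniform prior over a hypothetical suspect pool; all figures are illustrative:)

```python
# Prosecutor's fallacy, quantified.  A 9-locus match probability of
# 1 in 50,000,000 does NOT mean P(innocent | match) = 1/50,000,000.
match_odds   = 50_000_000
suspect_pool = 60_000_000   # e.g. roughly the population of the UK

# Expected number of *innocent* people in the pool who would also match:
expected_innocent_matches = suspect_pool / match_odds   # 1.2

# With a uniform prior over the pool, Bayes gives approximately:
p_guilty_given_match = 1 / (1 + expected_innocent_matches)
print(round(p_guilty_given_match, 3))   # 0.455 - a coin flip, not a certainty
```

(This is why it's useless for picking the off-licence robber out of the whole population, but fine for confirming an identification made on other grounds, where the effective suspect pool is tiny.)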