Biometrics – Keep Your Fingers Close
Published May 26 2020 11:00 AM

Here’s a common customer question, especially in manufacturing, government-to-citizen, and kiosk scenarios: “I want a system where my user can walk up to any system, then scan a fingerprint (or look at a camera, or speak into a mic, or …) and just be magically logged in.”

 

Walking up to a new-to-you machine and logging in with just a fingerprint or a smile is certainly a very appealing scenario – nothing to buy, manage, or lose; no phishing or re-use problems; no issues with employees having their social-media device (a.k.a. “smartphone”) in a work- or security-sensitive environment. Lots of people will try to sell it to you.

 

To enable the magic scenario, all we need to do is register the user’s biometric signature with a remote credential server, then scan the biometric on the client, and finally match what we scanned against what we stored to magically show that it is the same user. Cool! What’s not to love?
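To make the moving parts concrete, here is a minimal sketch of that naive centralized flow. It assumes a biometric template is just a numeric feature vector and that matching is a simple distance threshold – the function names, the in-memory “server,” and the threshold are all hypothetical illustration, not any real product’s API:

```python
import math

# Hypothetical in-memory "credential server" mapping user -> enrolled template.
# Real template formats and matchers are vendor-specific; this is illustration only.
SERVER_TEMPLATES: dict[str, list[float]] = {}

def enroll(user_id: str, template: list[float]) -> None:
    """Register the user's biometric template with the remote server."""
    SERVER_TEMPLATES[user_id] = template

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(user_id: str, scanned: list[float], threshold: float = 0.5) -> bool:
    """'Log in' if the fresh scan is close enough to the stored template."""
    stored = SERVER_TEMPLATES.get(user_id)
    return stored is not None and distance(stored, scanned) < threshold

# Enrollment happens once; every later scan is compared to the stored template.
enroll("alice", [0.12, 0.87, 0.45, 0.33])
print(authenticate("alice", [0.11, 0.88, 0.44, 0.35]))  # True  - close match
print(authenticate("alice", [0.90, 0.10, 0.70, 0.01]))  # False - different scan
```

Every weakness discussed below lives somewhere in that picture: in the stored template, in the fresh scan, or in the pathway between them.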

 

Alas, there are several major issues with using centralized biometrics to authenticate users. Centralized biometrics can help with identification – guessing who the user might be from a pool of possibilities. But as of early 2020, remote biometrics aren’t suitable for authentication – the task of proving who the user is.

 

The problems with using remotely stored biometrics for authentication stem from four major issues with biometric signatures:

  1. Biometrics aren’t secrets. Even before we consider intentional recording of our faces on social media or our fingerprints in voluntary identification systems used for faster check-in, your physical presence is easily captured. It is, by definition, how you present yourself to the world. This makes capturing the biometric a matter of finding compatible images, videos, or imprints, or getting the user within range of a rogue sensor. You’re kind of famous.

  2. For any given user, the choices for biometrics are very limited: 10 fingers, 2 eyes, one face, one voice, one pulse signature . . . even after thinking up a bunch of new ideas, we’re realistically limited to 20 or so things we can revoke across all systems. Practically speaking, usability is even more limited – a few fingers on your dominant hand, and your face. When we consider the budget constraints that usually restrict an organization to a single sensor class, the options shrink further. Even a four-digit PIN offers 10,000 possibilities before we run out. You can only revoke so many fingers.

  3. Biometric data may be reusable across systems. Most biometric systems store templates, not exact images of the biometric being used – distances between key reference points on your face or in your fingerprints are common. However, depending on the technology, the data used for matching may be similar enough that a fingerprint template extracted from one system can be transformed to work in another – think of this like converting between .jpg, .bmp, and .png formats: you can transform from one format to another but retain the usability of the data. In some modern systems, templates may not even translate between versions of the algorithm for the same system, but realistically, rolling to a new algorithm and re-enrolling every template is often prohibitive or impossible. Lose it anywhere, lose it everywhere.

  4. Biometric templates are easily stolen. Because biometrics rely on an approximate match, hashing – which non-reversibly transforms an exact value – isn’t an option (the sketch after this list illustrates the difference). Templates stored in centralized databases must be reversibly encrypted. Once an attacker figures out the key, they can extract the templates for every user in the database. And once extracted, a template is compromised for all compatible systems, and for any system to which it can be translated. If the system supports multiple registered biometrics, the attacker can add their own template to an already-registered identity without disrupting the correct user’s login experience (for single-value credentials like passwords, a changed value is immediately detectable when the correct user can’t log in).
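To see why approximate matching rules out hashing, here’s a tiny sketch. The numeric “template,” the threshold, and the comparison functions are hypothetical illustration; only the hashing call is real:

```python
import hashlib
import math

# Passwords are exact values, so we can store a one-way hash and compare hashes.
stored_password_hash = hashlib.sha256(b"correct horse battery staple").hexdigest()

def check_password(attempt: str) -> bool:
    return hashlib.sha256(attempt.encode()).hexdigest() == stored_password_hash

# Two scans of the same finger are never bit-identical, so the server must keep
# something it can compare approximately - at best a reversibly encrypted
# template, never a one-way hash.
enrolled_template = [0.12, 0.87, 0.45, 0.33]   # hypothetical feature vector
fresh_scan        = [0.11, 0.88, 0.44, 0.35]   # same finger, slightly different

def templates_match(a: list[float], b: list[float], threshold: float = 0.1) -> bool:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) < threshold

print(check_password("correct horse battery staple"))  # True: exact match
print(templates_match(enrolled_template, fresh_scan))  # True: close, not equal

# Hashing the scans destroys the "closeness" the matcher depends on:
print(hashlib.sha256(str(enrolled_template).encode()).hexdigest()[:16])
print(hashlib.sha256(str(fresh_scan).encode()).hexdigest()[:16])  # completely different
```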

One can make a reasonable argument that while the biometric in each case isn’t a secret, that isn’t important as long as the mechanism guarantees perfect integrity, ensuring:

  • what is registered for the user on the server can’t be manipulated,
  • the biometric reader (e.g. fingerprint scanner) is valid and untampered, and
  • the pathway between the reader and the storage mechanism is perfectly secure.

These things are theoretically possible with lots and lots of crypto. For example, every biometric reader could have an embedded keypair and sign every request with its private key, which every party in the pathway could verify against a manufacturer’s certificate authority, then look up by model number to ensure it was a known valid reader which hadn’t been hacked. One could further use a firmware checksum to attest that the hardware hadn’t been manipulated and include that in the request signature. Use a distributed ledger to validate that databases are untampered. And so on.
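A rough sketch of just the first of those steps, using the Python cryptography package’s Ed25519 primitives. The “reader” and “verifier” roles, the message format, and treating a bare public key as the manufacturer-issued identity are all simplifying assumptions – a real scheme needs a full certificate chain, revocation, firmware attestation, and more:

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- At manufacture time: the reader gets an embedded keypair, and the
# --- manufacturer's CA records the public key (stand-in for a device cert).
reader_key = Ed25519PrivateKey.generate()
manufacturer_registry = {"reader-model-42/serial-0001": reader_key.public_key()}

# --- At scan time: the reader signs the request it sends up the pathway.
request = b"user=alice;template=<opaque-bytes>;nonce=8f2c"
signature = reader_key.sign(request)

# --- On the server: verify the signature against the registered reader key
# --- before trusting the scan at all.
def verify_reader(device_id: str, message: bytes, sig: bytes) -> bool:
    public_key = manufacturer_registry.get(device_id)
    if public_key is None:
        return False  # unknown device: reject
    try:
        public_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

print(verify_reader("reader-model-42/serial-0001", request, signature))   # True
print(verify_reader("reader-model-42/serial-0001", b"tampered", signature))  # False
```

Even in this toy version, everything hinges on key management and on trusting every hop – and that is before firmware attestation or database integrity enter the picture.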

 

But each of these steps has been shown to be challenging in the real world, with a long trail of broken protocols, algorithms, and products to show for it. And every step on the pathway between the sensor and the server creates opportunities for the attacker.

 

Biometric authentication is an important problem to solve for at-risk, impoverished, and displaced populations (to name but a few), but I think the stakes are just too high to get it wrong. For this reason, I am glad to see the problem being approached in a standards forum, inviting participation, inspection, and third-party validation (I am broadly a fan of the standards process for these reasons – standards are a powerful security tool).

 

Until such time as these standards are ratified, implemented, and time tested, we rely on something else – minimizing attack surface. The goal of implementations like Windows Hello and standards like FIDO2 is to minimize the interceptability of the biometric template: by minimizing the path between the sensor and secure storage, by limiting usability of the template to the device where it was acquired, and by storing the templates themselves in secure hardware. The template is used only to unlock cryptographic operations in that secure hardware (e.g. a TPM), such as creating keypairs, releasing public keys, or signing messages with the private key.
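The shape of that flow, very loosely sketched below. This is not the actual Windows Hello or FIDO2 protocol; the LocalAuthenticator class, its matching logic, and the challenge format are stand-ins for what secure hardware and the real protocols do:

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class LocalAuthenticator:
    """Toy stand-in for secure hardware (e.g. a TPM-backed authenticator).

    The biometric template and the private key never leave this object;
    the relying party only ever sees the public key and signatures.
    """

    def __init__(self, enrolled_template: list[float]):
        self._template = enrolled_template          # stays on-device
        self._key = Ed25519PrivateKey.generate()    # device-bound keypair

    def public_key(self):
        return self._key.public_key()               # registered with the server once

    def sign_challenge(self, fresh_scan: list[float], challenge: bytes) -> bytes:
        # The biometric only *unlocks* the key; it is never sent anywhere.
        if not self._matches(fresh_scan):
            raise PermissionError("biometric mismatch: key stays locked")
        return self._key.sign(challenge)

    def _matches(self, scan: list[float], threshold: float = 0.1) -> bool:
        return sum((a - b) ** 2 for a, b in zip(self._template, scan)) ** 0.5 < threshold

# --- Enrollment: the server stores only the public key.
authenticator = LocalAuthenticator([0.12, 0.87, 0.45, 0.33])
server_registered_key = authenticator.public_key()

# --- Sign-in: the server sends a random challenge; the user's scan unlocks
# --- the device key, which signs the challenge; the server verifies it.
challenge = os.urandom(32)
signature = authenticator.sign_challenge([0.11, 0.88, 0.44, 0.35], challenge)
server_registered_key.verify(signature, challenge)  # raises if the signature is invalid
print("signed in: the server never saw a biometric")
```

Note what the relying party ends up holding: a public key and a signature – nothing biometric.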

 

In this world, compromising the biometric pathway requires direct hardware access, and even if someone were to compromise the authenticator, the template retrieved is useless elsewhere. Importantly, there is no need to collect biometric data on any device not in the user’s direct possession. But this means biometrics must be registered to the local authenticator, which means we lose the ability to walk up to a random device and scan a fingerprint. With this approach, biometrics can only be used locally on devices on which they have already been registered.

 

By comparison, centralized biometrics can be compromised in the reader, the device the reader is connected to, the software on that device, the network between that device and the server they’re stored on, and the server itself – and none of that pathway has security mechanisms which have been ratified by someone other than the vendors who make them.

 

Without rigorously validated mechanisms, centralized biometrics are ripe for tampering, interception, extraction, and sharing. What happens if compromise of templates in one badly managed system means your face is compromised for recognition in all systems? What happens if, by adding their template beside yours, an attacker can become you? What happens if the company managing your personal data shares it with others, or if the stolen data is used to automatically identify you in other contexts, without your consent?

 

This is why Microsoft does not currently build in support for authentication using centralized biometrics. The security issues are simply too pervasive, the impacts simply too significant. And yet – for many environments, and especially for at-risk, displaced, and impoverished populations – betting identifiability on a $15 device means denying the benefits of digital identity to many of the people who need them most. I believe that this is an incredibly important problem to solve, and so I believe we will, in time, solve it.

 

But for now, our recommendation for biometric authentication is to limit collection and storage of biometric data to secure hardware environments. Windows Hello requires that the biometric be registered locally on the device, as does FIDO. This is preferable for both security and privacy reasons. The biometric is only used to locally unlock the authenticator and doesn’t act as an authenticator in and of itself.

 

But that means our ideal “walk unencumbered up to a previously unused device and just sign in with a biometric” scenario isn’t ready for prime time quite yet. You must present the authenticator and then unlock it with your biometric. Not quite as magical as the walk-up-and-be-magically-logged-in scenario we started with, but a whole lot more secure, private, and future proof.

 

Interested in helping solve the problem of biometrics-only authentication at scale or think I'm missing something? Hit me up on Twitter at @alex_t_weinert to share your thinking.