If I understand correctly, this means you can't back up the private key, correct? It's in the Secure Enclave, so if you lose your laptop, you also lose the key? Since it looks like export only really exports the public key, not the private one?
Probably not the worst thing, you most likely have another way to get into the remote machine, or an admin who can reset you, but still feels like a hole.
Or am I missing something?
ps. It amuses me that my Mac won't let me type Secure Enclave without automatically capitalizing it.
Edit: I understand good security is having multiple keys, I was simply asking if this one can be backed up. OP answered below and is updating their webpage accordingly.
Check out `man sc_auth`. There's also an exportable variant where the private key is encrypted using the secure enclave as opposed to generated on the secure enclave:
% sc_auth create-ctk-identity -l ssh-exportable -k p-256 -t bio
% sc_auth list-ctk-identities
p-256 A581E5404ED157C4C73FFDBDFC1339E0D873FCAE bio ssh-exportable ssh-exportable 23.11.26, 19:50 YES
% sc_auth export-ctk-identity -h A581E5404ED157C4C73FFDBDFC1339E0D873FCAE -f ssh-exportable.pem
Enter a password which will be used to protect the exported items:
Verify password:
You can then re-import it on another device:
% sc_auth import-ctk-identities -f ssh-exportable.pem.p12 -t bio
Enter PKCS12 file password:
Is there a way to import an existing (compatible) key and still mark it as non-exportable?
That seems more useful for the SSH key scenario: Generate a key in memory and back it up to offline storage once, and otherwise only use it in a way totally non-exportable by any malware.
This sentence
> The exportable private key is encrypted with Elliptic Curve Encryption Standard Variable IVX963 algorithm which is backed by a Secure Enclave key.
makes it sound like exportable keys might inherently not be Secure Enclave resident in Apple's implementation, which would be unfortunate, as anything else can still be accessed by malware with kernel-level privileges.
(GPG, and I believe also PIV, allow importing externally-generated keys without necessarily marking them exportable; they'll just, correctly, lack any attestation statement about having been generated in secure hardware.)
Another option is to generate a key and put it on an offline storage, and have a second key only in the SE. This means you'll need to upload two public keys to places to have a backup instead of one, but I think would otherwise achieve the same thing.
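Roughly, something like this (file names are placeholders, and "se_key.pub" stands for whatever public key your Secure Enclave tooling gives you; only the public halves ever get enrolled on servers):

# One-time: generate the backup keypair straight onto offline storage
ssh-keygen -t ed25519 -f /Volumes/offline-usb/backup_key -C "offline backup"

# On each server: enroll both public keys (the SE key's .pub plus the backup's .pub)
cat backup_key.pub se_key.pub >> ~/.ssh/authorized_keys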
The nice thing with this is you can keep your backup public key easily accessible. I try to keep a primary and backup Yubikey on everything important, but you have to physically get the backup Yubikey in order to add it to a site.
The key is stored encrypted with a unique symmetric key that only your secure enclave knows until the point that you export it. It then re-encrypts it with the password.
Until you export it, it's just as strong as an enclave-generated one.
Obviously, don't keep the exported, password-encrypted key around, and don't use a weak password for export.
>The key is stored encrypted with a unique symmetric key that only your secure enclave knows until the point that you export it. It then re-encrypts it with the password.
But what's the security benefit of this compared to having a keyfile? So far as I can tell from the commands you provided, there's no real difference, aside from a hacker having to modify their stealer script slightly.
Why is it more secure: a key file on disk is decrypted into memory every time you enter your passphrase. It means the key is around in plain text in the memory of ssh or ssh-agent. Which means it's extractable by an attacker.
An exportable key does all the signing inside the secure enclave and never exposes the decrypted key to OS memory.
The exported key you can keep in a safe for disaster recovery. You shouldn't keep it on your computer of course.
>It means the key is around in plain text in the memory of ssh or ssh-agent. Which means it's extractable by an attacker. An exportable key does all the signing inside the secure enclave and never exposes the decrypted key to OS memory.
But malware can just tell the secure enclave to export the key? Yes, they'll have to write new code to do that, but it's not particularly hard (it's one line of code in your example above), and it's security through obscurity.
The export operation is guarded by TouchID. So the malware needs to trick you into performing the TouchID gesture.
But yeah, the malware only needs to trick you into hitting TouchID once, instead of on each sign operation. So if that's in your threat model, don't make the key exportable.
> That's not meaningfully more difficult than tricking you into revealing your key file password.
No, but that's meaningfully more difficult to do without an intervention from the user. Say your computer is infected, the malware won't silently do it: it will have to interact with you.
And an important part is that you apparently don't have to make the key exportable:
> So if that's in your threat model don't make the key exportable.
Which now makes it meaningfully more difficult to extract.
I would personally not export it, just like I don't export (and can't export) the key from a security key. That's a feature.
> Say your computer is infected, the malware won't silently do it: it will have to interact with you.
MacOS is so needy about all kinds of fingerprint/password-related things (and has no concept of a secure desktop) that it is trivial for malware to simulate, and there's no way for the user to tell whether it's genuine, so it's not a real barrier at all.
If the key is marked as exportable the malware will happily export it for you. The only way to defend against that is to make the key non-exportable to begin with.
I hit my TouchID probably 10 times a day; seems pretty easy for me to be tricked into hitting it thinking that Okta forgot my session or something like that.
As a user I prefer a single touch to typing a passphrase every time. A passphrase also has other attack vectors like keylogging, etc., which would allow replays.
But even if security was exactly the same, I'd prefer the touch to the typing.
So it just has to wait until you’re about to do a legitimate operation requiring authentication, intercept that to export the key, and cancel the real one with a bogus error (and you’ll just try again without any second thoughts).
MacOS also has no concept of a secure desktop or similar, where the OS can use some privileged UI to explicitly tell you what you are signing and prompt for PIN/biometrics. It's in fact a well-known problem that legitimate dialogs for the system/Apple ID password have no distinguishing features from fake ones.
Generally dialogs that require sensitive input provide some way for the user to ensure they are issued by the OS and not a random program. Windows historically used the Secure Attention Key (that's why domain-linked machines used to require pressing Ctrl+Alt+Del to login, to train users to only enter credentials in secure contexts) which is a key combo that the OS always intercepts and thus once pressed you can be assured you are typing into a trusted UI and not a piece of malware emulating the trusted UI.
Of course, this was back in the day when computers were primarily a productivity tool and not an ad delivery vehicle, so it's unlikely this problem will ever be solved.
Unlike a TPM and like a YubiKey, you can configure the secure enclave to require presence (via Touch ID) so that a stealer script would be stopped with a prompt.
Until the next time you touch your Touch ID for any other operation. It seems realistic for an attacker script to anticipate that and open its own prompt at the right moment (i.e. with your finger already on the way to the button).
Can you explain which keys in the secure enclave make this work? It has at least two keysets: a public-private keypair locked to the root key Apple instantiated in hardware as fused links in the chip (so in theory this could include private keys common to all devices of this chipset generation), and the locally generated unique keys which are tied to this specific device.
Using the first pair, or products of the first pair, means that in principle your private key is protected by the goodwill of Apple only: if you allow it to exist at rest in a form only this shroud protects, then Apple can read the private key unless the symmetric algorithm used to "unlock this private key with your password" is a good one, and you chose a password wisely. I haven't used the function, so I can't comment on how they constrain what you put in as a personal lock on these blobs.
You're not really supposed to 'export' keys. Any time you move a key you risk exposing it. The idea of PKI is that only public keys move, the private key stays in one place, ideally never seen.
I've been in the security space for 25 years, and understand the theory of PKI. But I've also been in the ops space for 30 years, and understand that if you don't balance security theory with operational practice, critical business functions can fail.
Ideally yes, the private key is never seen. In reality, it needs to be backed up in a secure place so it can be restored in the event of a failure.
Keep the private key you actively use in the secure enclave. The system you actively use is most at risk.
Keep a secondary offline private key as backup. You can generate and store it in a secure location, and never move it around. Airgapped even if you want. You could even use a yubikey or other hardware for the secondary key giving you two hard to export keys.
It’s important to remember that over time, systems develop complexities that can be hard to recover from scratch, because by definition air-gapped data isn't something you are regularly exercising. Here’s an example of this in action from Google’s history:
>It took an additional hour for the team to realize that the green light on the smart card reader did not, in fact, indicate that the card had been inserted correctly. When the engineers flipped the card over, the service restarted and the outage ended.
Yeah but if you get a new device, you have to go add its pubkey to every server you ever use. I wish there were an easier way, otherwise it's understandable that people copy privkeys.
> if you get a new device, you have to go add its pubkey to every server you ever use
It’s not too bad, if the number of servers is not too high.
I have different client pub keys on my phone, multiple laptops and desktop computers and manage my authorized keys to be able to ssh into my servers from the devices, as well as from one laptop to another or from my phone to one of the laptops, etc.
Because I already have several client devices, I don’t really need any backup ssh keys. The fact that each device has a different key means that if one laptop breaks or my phone is stolen, I can still ssh into everything from one of the remaining devices, remove the pub key of the broken or stolen device from authorized keys, generate new keys on new devices, and then use one of the existing devices to add the pub key of the new device to the authorized keys of the servers and other devices.
For me it’s manageable to do it manually. But if you have very many servers you’d probably want to use a configuration management tool like Chef, Ansible, Puppet or Saltstack. Presumably if you have a very high number of servers you’d already be using a configuration management tool like one of those for other configs and setup anyways.
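For the manual case, a small loop already goes a long way (hostnames and the key path are placeholders):

# Push the new device's public key to every host listed in a plain inventory file
while read -r host; do
  ssh-copy-id -i ~/.ssh/id_ed25519_newlaptop.pub "$host"
done < ~/hosts.txt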
There is an easier way: TLS certificates. It's just that SSH decided not to use them, for some reason.
Other systems of this nature have figured out long ago that you should be able to have one personal certificate (stored securely in an airgapped environment), from which you'd generate leaf certificates for your devices every year.
If you’re operating at a scale where this is too cumbersome to do manually, surely you already have a configuration management system in place to automate this, no?
I have keys tied to several random things, including home servers, GitHub, and AWS. Wouldn't call this scale exactly, but when I got a new laptop, it was way easier to just copy .ssh onto it rather than hunting everything down.
That can work really well for systems where you don't need to share your key material very often, or where sharing is optimized for n-key scenarios.
SSH isn't always that. For example, ssh-copy-id by default does not copy over multiple identities.
For that reason, I'd personally prefer to import my (otherwise airgapped) key into my secure hardware exactly once and mark it as non-exportable in the SSH scenario.
Today I make a private/public keypair, and the private key is on my laptop in my encrypted home folder. It also gets backed up to my encrypted offsite backup. That way if my laptop breaks or is stolen, I can restore from backup and be up and running as before.
I was simply asking if that is still possible with this method, nothing more.
And not every service that uses ssh auth allows multiple keys.
It’s possible but would not bring you any extra security.
The advantage of non-exportable, HSM-backed keys is that you are guaranteed that the only way to use that key is to have online access to the HSM, and you can recover from HSM access compromise without having to replace the keys.
If you make the key exportable it is no better than if it was stored on your disk to begin with.
Yes, you can export keys using this method, and they will be similarly secure to password-encrypted keys you generate without the secure enclave with OpenSSH, but with the convenience that you can decrypt the key using TouchID on macOS.
Such a setup is marginally more secure than just typing in the password, since it is much harder to intercept the TouchID chain from touch to decrypting the SSH key than it is to intercept your keyboard to the terminal.
All that said, here are the priorities of a few security technologies:
TouchID:
#1 environment integrity, that is to say, protecting Apple's services monopolies and fees, such as eliminating password sharing of service accounts; #2 convenience, as an alternative to passwords, reducing friction when you buy stuff; #3 security.
1password:
#1 convenience, #2 security
I cannot really tell you what is "#1" in security among packaged, ready-to-buy commercial products. Practically everyone makes affordances for convenience ahead of security. I suppose there isn't really a great product for normal people that puts security first. Of course, there is an ad hoc collection of practices that amounts to "#1 security". But a product? No. Even Apple Lockdown mode... well, they can still just push an update that makes it pretend it is enabled when it is not, so...
As others mention, there is no point to using the Secure Enclave if you have your key stored on disk or in your backup. It’s like putting impressive locks on the front door, while leaving the window open.
Beyond that, you can do that just fine right now by making TWO keys. If you lose the laptop, oh well. Recover with your backup key (which is hopefully kept more securely than you describe - it can be inconvenient to access since it is only needed for recovery).
This also lets you go further in locking things down or providing you notifications, as you can distinguish server side between your usual key and the backup key.
The point of the enclave is to be noncloneable and access limited. Extracting the key for the backup would negate the benefits derived from that.
No, but that's the point. If there's a copy of the private key out there, then it can be copied. The whole point is that the jedberg-laptop-1 key only ever exists as jedberg-laptop-1. When that laptop gets lost/stolen/destroyed/aged out, there's 100% certainty that it can't be recreated. The other side of that equation is that you need a tree of keys and a whole IT department to manage them so you don't get locked out of your servers.
This particular bit of software is about ssh keys and exists within a larger conversation about PKI, which you know more about than I do. But operationally, you have this, and then you have a root login private key file locked with Shamir secret sharing (ssss on Debian) that you distribute to a select few key bearers. And then those key bearers don't all get on the same plane together, ever.
This was a thing with Google Authenticator. People kept asking how to back up or transfer keys; the official answer was that you can't, and that it was working as intended. Eventually they conceded and added a backup option, but it was still confusing. I think this ruined the entire reputation of TOTP.
> if you don't balance security theory with operational practice, critical business functions can fail
i.e. people will circumvent the secure-but-onerous path. (I don't think they can be faulted for trying to get their work done either, I'm agreeing with you)
In this case you can maintain an offline SSH CA and trust that on the remote machines, and then sign yourself leaf certificates against a non-exportable HSM-backed key. In case of loss you just make a new key and sign a new certificate.
Of course this just moves the key management problem somewhere else: now you need to protect the CA key, but that might be easier since you would only need access to it in a disaster recovery scenario if you replaced the laptop or otherwise lost access to your HSM-backed key.
Keeping the certificate’s key as non-exportable in the HSM means you do not need to revoke it as it cannot be compromised (not permanently at least), once you’ve regained access to the HSM you can assume the bad guys are out.
Of course the CA key itself is another story, which is why this merely moves the problem elsewhere (however since you only need access to the CA during initial provisioning of a new certificate key, you can better control access to it).
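Very roughly, the flow looks like this with ssh-keygen (identities, principals, and validity are placeholders, not a recommendation):

# One-time, on the offline machine: create the CA keypair
ssh-keygen -t ed25519 -f ssh_user_ca -C "offline SSH user CA"

# On each server, trust the CA instead of individual user keys:
#   /etc/ssh/sshd_config:  TrustedUserCAKeys /etc/ssh/ssh_user_ca.pub

# When provisioning a device: sign its (HSM-backed) public key, valid 52 weeks
ssh-keygen -s ssh_user_ca -I "alice-laptop" -n alice -V +52w id_ecdsa_se.pub
# -> writes id_ecdsa_se-cert.pub, which the client presents alongside the key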
> Keeping the certificate’s key as non-exportable in the HSM means you do not need to revoke it as it cannot be compromised (not permanently at least), once you’ve regained access to the HSM you can assume the bad guys are out.
How so? I can still lose my Yubikey, and even if the attacker can't export the private key corresponding to a CA-signed SSH certificate, they can still use it, no? How would I "regain access" in this scenario?
I was thinking the HSM in this case is your Macbook and its TPM/Secure Enclave, in which case you'd either recover it or assume the attacker is unable to use it due to biometrics/PIN. I guess the Yubikey has a PIN too with a limited number of tries.
Either way, you either recover the HSM and then don't need to rotate the keys, or you don't, in which case you either use OpenSSH's key revocation mechanism (which I believe involves distributing some sort of CRL to every server), use time-limited SSH certificates and wait out the expiry of the compromised key, or scrap the whole CA and start fresh.
Again this depends on your threat model. The somewhat uncommon requirement where you can't manage your own `authorized_keys` on the remote host complicates things a lot; if you could, then you'd use your existing access (sign yourself a new certificate using your SSH CA) to rotate the whole CA... or just keep two keys in there (primary and backup) and skip the whole CA dance, since it's purely a workaround for the hard requirement of only being able to put one key in authorized_keys.
It's much safer to export a key one time and import it into a new machine, or store it in a secure backup, than to keep it just hanging out on disk for eternity, and potentially get scooped up by whatever malware happens to run on your machine.
Any malware capable of exfiltrating a file from your home folder is also capable of calling the export command and tricking you into providing biometrics.
Strictly speaking people should be using multiple keys so if a device is lost/stolen, you're not left high and dry. Ideally one per device, especially if they don't support some kind of secure enclave.
I keep one in a Yubikey protected by a PIN that sits in a safety deposit box, too. This way, if my laptop, phone, and day-to-day Yubikey are in a house that suddenly burns down, I'm still OK.
Yeah, that is why you should not always (it depends on your use case) generate it on a YubiKey.
You need to have:
- an offline master private key backup (air-gapped)
- primary YubiKey (daily use)
- backup YubiKey (locked away)
- revocation certificate (separate storage) (it is your kill-switch)
Having a second YubiKey enrolled is the standard practice.
What people do wrong is:
- They generate directly on YubiKey
- They only use one device
- They do not create a revocation certificate
- They have no offline backups
You generate your GPG keys on a secured system, load the subkeys (not the master because it is not used for daily cryptography) into the YubiKeys, and then remove the secret keys from this system where you generated the keys.
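Roughly (the key ID is a placeholder; all of this happens on the offline/secured system):

# Generate the certify (master) key plus signing/encryption/auth subkeys
gpg --expert --full-generate-key

# Create the revocation certificate and a full offline backup FIRST
gpg --output revoke.asc --gen-revoke KEYID
gpg --armor --export-secret-keys KEYID > offline-master-backup.asc

# Move only the subkeys onto the YubiKey (keytocard removes the local copy),
# then verify the secret material is gone from the generating system
gpg --edit-key KEYID      # select a subkey with "key 1", then run "keytocard"
gpg --list-secret-keys    # subkeys moved to the card should show as stubs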
A lot of absolutes are being thrown around in the comments here, unfortunately. It really depends on your scenario.
Generating keys exclusively in (non-backup-able) secure hardware is great if your scenario readily supports multiple keys per server/domain you're authenticating in.
Creating an airgapped backup and loading that into a "daily driver" Yubikey marked as non-exportable can be perfectly fine if that's not the case and you don't want to notify the world every time you're adding or retiring a new Yubikey (for reasons other than key compromise).
Depends on your use case, and you will still have to generate your master key offline even if you want the subkeys generated directly on each YubiKey, which then you sign with the master key.
It is only slightly less secure to pre-generate subkeys on an offline machine when you want identical subkeys on multiple devices (and exact backups). Sometimes this is exactly what people want.
Ultimately it really depends on your use case.
BTW, please check the parent comments to which I responded.
PS. I think it would be useful for others if you elaborated on your statements (for educational purposes).
I can understand revocation for GPG, but is revocation ever used for SSH? I could understand it if SSH certificates are used, but honestly I've never encountered an org using SSH's cert system.
Well, OpenSSH has a built-in key revocation mechanism (the KRL, or Key Revocation List), there are SSH certificates (with a CA) and certificate revocation, and there is ad-hoc "revocation" by removing keys from the "authorized_keys" file.
If you use your GPG key for SSH, the servers that have your public key do not automatically know that your GPG key was revoked, and SSH authentication will proceed unless you remove the public key from the server OR the server uses an SSH CA/KRL model.
All in all, SSH supports real revocation, but it must be enforced by the server. It is different from GPG where revocation follows the key, not the server.
I have not used KRLs myself, but I sort of know how they work. You can generate a new empty KRL, then add keys to revoke, and then distribute the KRL to servers by configuring OpenSSH to use the KRL file, adding "RevokedKeys /etc/ssh/revoked_keys.krl" to "/etc/ssh/sshd_config".
The pros of KRLs are that they scale better than manual removal across multiple servers, and you can revoke entire CA ranges instead of individual keys if you are using SSH certificates, which is recommended for large setups.
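For reference, a minimal sketch of that workflow (paths and key names are placeholders):

# Create a new, empty KRL
ssh-keygen -k -f /etc/ssh/revoked_keys.krl

# Add a compromised key (or certificate) to the existing KRL
ssh-keygen -k -u -f /etc/ssh/revoked_keys.krl stolen_key.pub

# Test whether a given key is revoked by the KRL
ssh-keygen -Q -f /etc/ssh/revoked_keys.krl some_key.pub

# Then point sshd at it:  RevokedKeys /etc/ssh/revoked_keys.krl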
I hope I could clear some things up. Let me know if you have any questions though!
Does OpenSSH's `sshd` even support GPG key revocation? (Assuming you're talking about using the GnuPG card application of Yubikeys, since the newer "native" FIDO security key implementation of OpenSSH does not support importing existing keys to my knowledge.)
These are the most secure options (correct me if I am wrong). The only drawbacks you may encounter are that you need GnuPG 2.3+, and some compatibility tradeoffs.
On second thought, you may want to remove this line:
compliance de-vs
Because DE-VS only recognizes AES/3DES for ciphers and SHA-2 for digests; it conflicts with CHACHA20 and BLAKE2B and will reject operations using these algorithms.
Which makes yubikey impossible to use with geographically distributed backups. You need the backup available at all times for when you want to register with any new service.
This is why you should use a device which allows exporting the seed, like e.g. multi purpose hardware crypto wallets.
This is true for passkeys/webauthn/u2f, which is why it’s trash and a completely flawed and not fit for purpose standard (of course the primary purpose is vendor lock-in, not reliable and disaster-proof authentication).
But SSH allows you to export the public key and then you can enroll it on as many hosts as you want without needing access to the private key, so the backup key can remain in a safe, ideally forever as you should never need it.
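Enrolling that backup never needs the private half at all, only the .pub you keep handy (filenames are illustrative):

# On each server, append the backup key's public half
cat backup_key.pub >> ~/.ssh/authorized_keys
# or, from a machine that already has access (no private key needed thanks to -f):
ssh-copy-id -f -i backup_key.pub user@server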
I agree that it's inconvenient in many cases, but what vendor am I being locked into, exactly? My primary hardware key can be from a completely different vendor than the backup one, so I don't quite buy the conspiracy angle.
There's also no technical obstacle preventing anyone from creating "paired" hardware authenticators that share the same internal root derivation key and can as such authenticate to all services (at least if they don't demand resident credentials) that were registered to any of the keys in the set.
The fact that these keys don't exist on the market (I believe Yubikey looked into them a while ago) more is evidence for the lack of demand, and less for the existence of a cabal, in my view.
Being locked into a set of a handful of vendors who offer "secure" sync (of course, this is not a true PKI and actual key material is being synced, meaning it's only as secure as the syncing protocol and your authentication to it).
> My primary hardware key can be from a completely different vendor than the backup one, so I don't quite buy the conspiracy angle.
The fundamental flaw is that enrolling an authenticator requires it to be present, making a backup strategy much less resilient as it means your secondary device needs to be constantly present and thus exposed to the same local/environmental risks the primary one is (theft/fire/faulty USB port that fries everything plugged in and you only realize after nuking both your keys). It makes an offline backup scenario like with SSH (where you copy a public key and otherwise leave the authenticator out of reach in a safe place) impossible.
Making it hard/impractical to maintain a reliable backup yourself sure makes those proprietary sync-based services attractive, which also doubles as reducing security since key material is being synced and can potentially be extracted (impossible with a true HSM + PKI implementation).
> preventing anyone from creating "paired" hardware authenticators
Don't certain types of keys involve writing something to the authenticator, fundamentally preventing this (as the backup authenticator won't get this written value)?
> cabal
It doesn't have to be explicit coordinated action like intentionally wanting to prevent people from self-managing passkeys (in fact, any hint of it being intentional would be a liability in a potential anti-trust situation, so that's a big no-no); it can be done by simply omitting this scenario, by accident or for "security" purposes, or deprioritizing it to hell. In fact, the Credential Migration spec is still a draft and appears quite recent, despite passkeys being heavily promoted for a while: https://fidoalliance.org/specs/cx/cxp-v1.0-wd-20241003.html - you'd think that such a basic feature would be sorted out before the push to switch to passkeys, no?
> For the initial delivery of Credential Exchange, we focused on the most wide use case [emphasis mine]
"Initial" delivery focuses on the most widespread use-case (how convenient it also happens to be the most corporation-friendly use-case), with everything else coming "later", meaning never. I'm sure it'll rot in some Jira backlog as a liability shield so they can promise they did plan for it and just never got around to it, but everyone understands it will never actually get implemented.
How can the "cartel" "blacklist" anyone? The only thing the FIDO alliance can do is not include a vendor's attestation key as trusted in their vendor database, and software solutions aren't on that list to begin with.
> The fundamental flaw is that enrolling an authenticator requires it to be present [...]
Yes, but that doesn't mean you can't back up the full authenticator state.
Here's a toy WebAuthN implementation that is backed by a passphrase that you remember or write on a piece of paper which works on many websites supporting passkeys and not enforcing attestation (which is the vast majority, since Apple, Google, 1Password, and Bitwarden all don't support attestation for synchronized credentials a.k.a. passkeys): https://github.com/lxgr/brainchain
> Making it hard/impractical to maintain a reliable backup yourself sure makes those proprietary sync-based services attractive
It's also completely open source and can be backed up :) (But again, it's a toy demo – don't use it for anything sensitive!)
All they have to do is publish a "best practices" statement or some RP certification program mandating attestation to be used (and some PR around how only "certified" RPs are secure) and job done. The only reason they didn't do that yet is that Apple is refusing to play ball and support attestation (but this may change).
The threat was clearly there in the original GitHub issue; the fact that they can't currently follow through on it is just a temporary inconvenience.
> Yes, but that doesn't mean you can't backup the full authenticator state.
Having the secondary authenticator present in the same vicinity as the primary one exposes it to risks. Having to dump authenticator state at regular intervals now means your backup authenticator must be reachable for writing online, so it can't be a simple "cold storage" backup like a Yubikey in a safe anymore. This also opens up security concerns since you're now dumping and syncing private keys left and right over a network and you lose the peace of mind of using an HSM-backed non-exportable private key where the HSM being unplugged guarantees nobody is currently using your keys.
Seems like a shit ton of complexity and effort to work around a problem OpenSSH elegantly solved 30 years ago.
> Here's a toy WebAuthN implementation
Thanks, I will check it out and read up on it. I'd be genuinely happy to move to WebAuthn if I could build my own hardware authenticators that allow the backup one to remain fully offline in a safe, and not have private keys flying around (if I'm doing that, it's not much of an improvement over syncing passwords - except those I can at least type or tell over the phone in an emergency when I need someone else to act on my behalf).
Edit: so it seems like I am mostly right? Only discoverable credentials count as "passkeys", and those generate per-site private keys, meaning offline, cold-storage backups are impossible. I guess I'm sticking to my password manager then since passkeys would provide no improvement in this case.
> Having to dump authenticator state at regular intervals [...]
Again, you don't inherently have to do this if you only use non-resident keys (which many sites allow; my hardware authenticator does not even support resident keys).
Synchronized resident keys are not the only possible WebAuthN implementation, even though they are getting currently heavily pushed by big stakeholders. The big advantage they come with, though, is that they lost hardware attestation in the process, so everybody is free to use their own implementation instead.
Thinking about it some more: I'm pretty sure that there are crypto wallets that support FIDO (or maybe just U2F, i.e. the predecessor of CTAP2?) as a secondary application, and they are almost always based on a passphrase you can back up and replicate across authenticators as you wish.
> Seems like a shit ton of complexity and effort to work around a problem OpenSSH elegantly solved 30 years ago.
There are very good reasons for requiring the private key at registration time and for mandatory per-site keys in WebAuthN/FIDO, which are arguably the two main differences between WebAuthN and SSH at a protocol level:
Global keys would be a privacy nightmare (as they would become global identifiers), and being able to register a public key without a private key risks both users accidentally registering a key they don't have access to (i.e. availability), and getting social engineered into registering somebody else's key that is not even physically present with them.
But again, per-site keys can absolutely be implemented without having to keep state on the authenticator, since they can be deterministically derived from a root secret.
AFAIK you do, because the hardware key must keep internal state which is also tracked by the server (a monotonically increasing counter). Offering U2F without this is AFAIK not compliant, and the only way to achieve it would be a central server which keeps state somehow. It's really fundamentally unsolvable.
Not true. If you use YubiKeys to store your GPG key, it's not a problem. You can have multiple YubiKeys with the same private key, or you can encrypt to multiple recipients.
Nonetheless I'm glad to hear about it. I don't yet use YubiKeys for FIDO, because I was concerned a bit about this enrollment process, and hadn't bothered to figure out what others do.
Yes, that's the point, indeed. One key per device, impossible to extract, so you need to break into the device to use the key.
If you want to maintain backup access, you can use an SSH CA to sign your public SSH keys, then keep the private keys on your device. If you keep the CA keys safe (i.e. physically safe on a flash drive), this means you can even add new keys after you lose all your devices.
This way, you only need to trust your one CA on your servers (so you don't need to copy 20 public keys around for every server).
Plus, if you're setting up a (separate) SSH CA, you can also sign servers' host keys, so you don't need to rely on TOFU to prevent MITM attacks, if that's something you care about.
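The host side is a sketch along these lines (hostnames and paths are placeholders):

# Sign a server's host key with a dedicated host CA
ssh-keygen -s host_ca -h -I "server1" -n server1.example.com /etc/ssh/ssh_host_ed25519_key.pub

# On the server:
#   /etc/ssh/sshd_config:  HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

# On clients, trust the CA in known_hosts instead of per-host TOFU entries:
#   @cert-authority *.example.com ssh-ed25519 AAAA... (contents of host_ca.pub)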
This is the fundamental paradox of hardware secured keys. Basic ones generate the private key inside and never let it be exported. This allows you to be very sure it won't ever leak but also doesn't let you back it up. Higher end Hardware Security Modules allow the private key to be exported but only when encrypted with the valid public key of a destination HSM.
> If I understand correctly, this means you can't back up the private key, correct? It's in the Secure Enclave, so if you lose your laptop, you also lose the key?
In a business environment, that's what you want. The key is then burned, and you ask your coworkers (who still have access) to remove the old key and store your new one on the servers.
I had been using Krypton, with the private key on my iPhone, and am now using Secretive. I've never had much of an issue with not having access to my private key. We made rolling out public keys to the servers very easy by using the GitLab key file. So when I get a new MacBook, I'd just need to create a new key and upload it to GitLab. We have multiple devops people who can run the playbook to roll it out to the servers. And if they have a new MacBook, I roll it out for them. And we don't have that many MacBook upgrades anyway.