You need to distinguish between "physical qubits" and "logical qubits." This paper creates a single "first-of-a-kind" logical qubit with about 100 physical qubits (using Surface Code quantum error correction). A paper from Google in 2019 estimates needing ~20 million physical qubits ("How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits" - https://arxiv.org/abs/1905.09749), though recent advances probably brought this number down a bit. That's because to run Shor's algorithm at a useful scale, you need a few thousand very high quality logical qubits.
So despite this significant progress, it's probably still a while until RSA is put out of a job. That being said, quantum computers would be able to retroactively break any public keys that were recorded, so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based cryptography) sooner rather than later.
Thank you for the explanation. It's still an update towards faster qubit timelines than https://arxiv.org/pdf/2009.05045 (see Fig. 7), but not by an insane amount. We've realized their 95th-percentile expectation of qubit progress (1 logical qubit), forecast for 2026, in late 2024 (2024.92) instead.
Which to be clear is quite a bit faster than expected in 2020, but still within the realm of plausible stuff.
Nice, thanks for linking that paper. I also did below.
The authors argue (e.g. in the first comment here https://scottaaronson.blog/?p=8310#comments) that by their definition, Google still only has a fraction of one logical qubit. Their logical error rate is of order 1e-3, whereas this paper considers a logical qubit to have error of order 1e-18. Google's breakthrough here is to show that the logical error rate can be reduced exponentially as they make the system larger, but there is still a lot of scaling work to do to reach 1e-18.
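For a rough sense of how much scaling that is, here's a back-of-envelope sketch (my own assumptions, not figures from the paper): take the error-suppression factor per distance step to be about 2, in the ballpark of what was reported, and assume a distance-d surface-code patch uses roughly 2d^2 - 1 physical qubits.

```python
# Back-of-envelope (assumed numbers, not the paper's exact figures):
# the logical error rate is divided by LAMBDA each time the code
# distance d grows by 2, and a distance-d patch uses ~2*d**2 - 1
# physical qubits.
LAMBDA = 2.0        # assumed error-suppression factor per distance step of 2
p_logical = 1e-3    # roughly where today's logical qubits sit
d = 7               # starting code distance (~100 physical qubits)

while p_logical > 1e-18:
    d += 2
    p_logical /= LAMBDA

print(f"d ~ {d}, logical error ~ {p_logical:.1e}, "
      f"physical qubits per logical qubit ~ {2 * d * d - 1}")
```

A better physical error rate raises the suppression factor and shrinks both numbers quickly, which is why "more qubits" and "better qubits" have to improve together.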
So according to this paper, we are still on roughly the same track that they laid out, and therefore might expect to break RSA between 2040 and 2060. Note that there are likely a lot of interesting things one can do before breaking RSA, which is among the hardest quantum algorithms to run.
How should I understand the claim that the logical error rate can be reduced exponentially as the system becomes larger? Does it mean each physical qubit generates more logical qubits as the total number of physical qubits increases?
A simple classical example where this is true is a repetition code. If you represent 1 as 11...1 and 0 as 00...0, randomly flip some bits, and then recover the logical state with majority voting, the probability of a logical error occurring shrinks exponentially as you add more bits to it. This is because a logical error requires flipping at least half the bits, and the probability of that happening decreases exponentially.
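Here's a minimal Monte Carlo sketch of that classical repetition code (a toy example of my own, using odd code sizes so majority voting is unambiguous):

```python
import random

def logical_error_rate(n_bits, p_flip, trials=100_000):
    """Estimate how often majority voting fails for an n-bit repetition code."""
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p_flip for _ in range(n_bits))
        if flips > n_bits // 2:   # majority of bits flipped -> logical error
            errors += 1
    return errors / trials

# With a 10% physical flip rate, each extra pair of bits suppresses the
# logical error rate by roughly another constant factor.
for n in (1, 3, 5, 7, 9, 11):
    print(n, logical_error_rate(n, p_flip=0.10))
```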
The error-correcting code used in this work also has a nice intuitive explanation. Imagine a 2D grid of bits, visualized as pixels that can be black or white. Now imagine drawing a bunch of lines of black pixels, and enforcing that lines can only end on the top or bottom boundary (loops are allowed). If there is an even number of lines connecting the top to the bottom, we call that logical 0, and if there is an odd number of lines, we call that logical 1. This again has the property that as you add more bits, the probability of flipping between logical 1 and 0 gets exponentially smaller, because the lines connecting top to bottom get longer (just like in the repetition code).
This code also has the nice property that if you measure the value of a small patch of (qu)bits, there's no way to tell what the logical state is. This is important for quantum error correction, because measurement destroys quantum information. So the fact that local measurements don't reveal the logical state means that the logical state is protected. This isn't true for the repetition code, where measuring a single bit tells you the logical state.
> so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based cryptography) sooner rather than later.
This.
People seem to think that because something is end-to-end encrypted it is secure. They don't seem to grasp that traffic and communication dumped/recorded now in encrypted form could be used against them decades later.
But ... AES is believed to be quantum-safe-ish, so with perfect forward secrecy this exact threat can be managed quite well.
The best currently known quantum attack on AES is Grover's algorithm, which requires a serial computation of roughly 2^(key length / 2) steps, so if the key is 128 bits long it takes about 2^64 sequential steps.
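Roughly speaking (ignoring constants and the cost of running AES as a quantum circuit), the effective quantum security of an ideal k-bit symmetric key is about k/2 bits:

```python
# Grover's search gives at best a quadratic speedup over brute force, and the
# ~2^(k/2) iterations are inherently sequential, so doubling the key length
# restores the classical security margin.
for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~2^{key_bits // 2} sequential Grover iterations")
```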
I read about Signal's double-trouble tactics, but I haven't heard about Apple's.
Ah, okay, for iMessage it's something called PQ3 [1], which uses Kyber. It's also a hybrid scheme, combined with ECC, and there's been a lot of peer review.
And there's also some formal verification for Signal's PQXDH [2].
Oh, wow, not bad. Thanks!
Now let's hope a good, reliable, sane implementation emerges so others can also try this scheme. (I'm also very curious about the added complexity/maintenance burden and the computational cost. Though I guess this mostly runs on end users' devices, right?)
I'm very sorry to admit it, but I'm too lazy to read the entire discussion in this thread. Could you please tell me, a mere mortal, at what point humanity should start worrying about the security of asymmetric encryption in the brave new world of quantum computing?
Funny turn of phrase, but you probably need 20 million physical qubits (or at least you did a few years ago, this number may have dropped somewhat) to do anything significant. I don't think we can be confident we'll get there any time soon.
> quantum computers would be able to retroactively break any public keys that were stored
Use a key exchange that offers perfect forward secrecy (e.g. Diffie-Hellman) and you don't need to worry about your RSA private key eventually being discovered.
> Forward secrecy is designed to prevent the compromise of a long-term secret key from affecting the confidentiality of past conversations. However, forward secrecy cannot defend against a successful cryptanalysis of the underlying ciphers being used, since a cryptanalysis consists of finding a way to decrypt an encrypted message without the key, and forward secrecy only protects keys, not the ciphers themselves.[8] A patient attacker can capture a conversation whose confidentiality is protected through the use of public-key cryptography and wait until the underlying cipher is broken (e.g. large quantum computers could be created which allow the discrete logarithm problem to be computed quickly). This would allow the recovery of old plaintexts even in a system employing forward secrecy.
The relevant RSA break is sufficiently powerful quantum computers, which also break ECDH (actually, ECDH is easier than classically equivalent-strength RSA for quantum computers[1]), so no, you’re not fine.
I would actually expect RSA to see a resurgence because of this, especially since you can technically scale RSA to very large key sizes, potentially pushing the crack date decades later than any ECC construction, and such a large quantum computer may never even arrive.
There are several ways to scale RSA, too: you can push the prime size higher, which slows key generation considerably, or, the more reasonable approach, settle on a prime size and use multiple primes (multi-prime RSA, MP-RSA). The second approach scales indefinitely, though it only serves a purpose if you are determined to hedge against the accepted PQC algorithms (Kyber/ML-KEM, McEliece) being broken at some point.
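A toy sketch of the multi-prime idea (a hypothetical helper with tiny parameters, using sympy for prime generation; nothing here is a secure implementation):

```python
from math import prod
from sympy import randprime

def multiprime_modulus(prime_bits, n_primes):
    """Build an RSA-style modulus from many primes of a fixed size."""
    primes = [randprime(2**(prime_bits - 1), 2**prime_bits)
              for _ in range(n_primes)]
    return prod(primes), primes

# Keep the per-prime size fixed and grow the key by adding primes.
n, primes = multiprime_modulus(prime_bits=512, n_primes=8)
print(f"{len(primes)} primes -> {n.bit_length()}-bit modulus")
```

Key generation cost then grows roughly linearly in the number of primes rather than blowing up with the prime size, which is the appeal of the MP-RSA approach.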
Also, that paper is (IMO) ridiculously conservative. Just using 1 GB keys is plenty, since decrypting them would require a quantum computer with billions of qubits.
How long does it take to generate a key that big? What probabilities do you need to put on generating a composite number and not a prime? Does the prime need extra properties?
Based on https://eprint.iacr.org/2017/351.pdf it would be about 1,900 core-hours (but I'm pretty sure optimized implementations could bring that down a bunch). No extra properties are needed, and a moderate error probability is sufficient.
Perfect forward secrecy doesn't work that well when the NSA's motto is "store everything now, decrypt later." If they intercept the ephemeral key exchange now, they can decrypt the message 10 or 50 years later.
Diffie-Hellman never sends the key over the wire; that's the point. There is nothing in the captured packets that tells you the key both sides derived.
Unless they break ECDHE, it doesn’t matter if RSA gets popped.
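A minimal sketch of what that looks like with an ephemeral X25519 exchange (using the Python `cryptography` package):

```python
# Ephemeral (forward-secret) key agreement with X25519. Only the public keys
# would ever be sent over the wire; the shared secret is derived locally on
# each side, and the private keys can be discarded after the handshake.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each side transmits only its public key.
alice_pub = alice_priv.public_key()
bob_pub = bob_priv.public_key()

# Both sides derive the same secret without it ever being transmitted.
alice_secret = alice_priv.exchange(bob_pub)
bob_secret = bob_priv.exchange(alice_pub)
assert alice_secret == bob_secret
```

The catch raised in the replies below still applies, though: the public keys remain in the captured traffic, and a quantum computer that can solve the elliptic-curve discrete log can recover the private keys from them.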
Diffie-Hellman, to the best of my understanding, relies on the same family of hard problems that make public-key cryptography possible. Classic DHE rests on the discrete logarithm rather than factoring, but Shor's algorithm makes both easy, so a machine that breaks RSA breaks the original DHE too. I'm not sure how it works out for elliptic curves, but my instinct tells me that if you make the fundamental ECC problem easy, the exchange also goes down.
Perfect forward secrecy requires the exchange of ephemeral keys. If you use either ECC or RSA for this and the traffic is captured, a quantum computer will break it.
All perfect forward secrecy means is that you delete your own ephemeral private keys; the public keys stay in the record, and a quantum computer will recover the deleted private keys.
Also, none of the currently accepted post-quantum cryptographic algorithms offer a Diffie-Hellman construction. They use KEM (Key Encapsulation Mechanism).
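To illustrate the interface difference, here's a toy "KEM" wrapped around X25519 (again using the Python `cryptography` package). It only shows the keygen/encapsulate/decapsulate call pattern that ML-KEM uses; it is of course not post-quantum itself.

```python
# Toy KEM built on X25519, purely to show the call pattern. In this toy, the
# "ciphertext" is just an ephemeral public key; a real KEM like ML-KEM produces
# an actual ciphertext and derives the shared secret through hashing.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def kem_keygen():
    sk = X25519PrivateKey.generate()
    return sk, sk.public_key()

def kem_encapsulate(receiver_pub):
    eph = X25519PrivateKey.generate()
    shared_secret = eph.exchange(receiver_pub)
    return eph.public_key(), shared_secret   # (ciphertext, shared secret)

def kem_decapsulate(receiver_sk, ciphertext_pub):
    return receiver_sk.exchange(ciphertext_pub)

sk, pk = kem_keygen()
ciphertext, sender_secret = kem_encapsulate(pk)
receiver_secret = kem_decapsulate(sk, ciphertext)
assert sender_secret == receiver_secret
```

The three-function shape is the same as in the real post-quantum KEMs, which is why hybrid protocols like PQXDH and PQ3 bolt a KEM alongside classical ECDH rather than swapping in a drop-in "post-quantum Diffie-Hellman".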