C Appendix C: Verifying public keys

“Key verification is a weak point in public-key cryptography”45

At present, there’s no robust method for a user to verify that the public key they’re encrypting data to belongs to their intended recipient.46 As a consequence, a malicious or compromised 1Password server could provide dishonest public keys to the user and run a successful Man in the Middle (MITM) attack. In a MITM attack, Alice believes she’s encrypting data to Bob, while she’s actually encrypting her data to a malicious actor who then re-encrypts the data to Bob. The typical defense against such an attack is for Alice and Bob to manually verify that they’re using the correct public keys for each other; the other approach is to rely on a trusted third party who independently verifies and signs Bob’s public key. Under such an attack, the 1Password server could acquire vault encryption keys, with little ability for users to detect or prevent this.

Story 12 illustrates what might happen in the case of such an attack during vault sharing.

Story 12: Mr. Talk is the cat in the middle

Molly (a dog) joins a team, and as she does, generates a public key pair. Let’s say the public key has exponent 17 and modulus 4171: \(\pk_M = (17; 4171)\). (Of course, in the actual system that modulus would be hundreds of digits long.) Only Molly has access to the corresponding private part, \(d_M = 593\). When Patty (another dog) encrypts something using (17; 4171), only Molly, with her knowledge that \(d_M\) is 593, can decrypt it.

Now suppose Mr. Talk (the neighbor’s cat) has taken control of the 1Password server and database. Mr. Talk creates another public key, \(\pk_T = (17; 4183)\). Because Mr. Talk created that key, he knows the corresponding private part of the key, \(d_T\), is 1905.

Patty wants to share a vault with Molly. Suppose that vault key is 1729. (In real life that key would be a much bigger number.) So she asks the server for Molly’s public key. But Mr. Talk, now in control of the server, doesn’t send her Molly’s real public key — he sends the fake public key he created. Patty will encrypt the vault key, 1729, using the fake public key that Mr. Talk created. Encrypting 1729 with (17; 4183) yields 2016. Patty sends that up to the server for delivery to Molly.

Mr. Talk uses his knowledge of \(d_T\) to decrypt the message. So he learns the vault key is 1729. He then encrypts that with Molly’s real public key, (17; 4171), and gets 2826. When Molly next signs in, she gets that encrypted vault key and is able to decrypt it using her own secret, \(d_M\). The message she receives is correctly encrypted with her public key, so she has no reason to suspect anything went wrong.

Mr. Talk was able to learn the secrets Patty sent to Molly, but he wasn’t able to learn the private parts of their key pairs.

Dangerous bend

The use of plain RSA (and small numbers) in Story 12 was to simplify the presentation. The underlying math of the RSA algorithm must be wrapped in a construction that addresses the numerous and serious dangers of plain RSA.

For those who wish to check the math of the story, recall that:

\(d = e^{-1} \mod \lambda(N)\);47
that encrypting a message \(m\) yields \(c = m^e \mod N\);
and that decrypting a ciphertext \(c\) yields \(m = c^d \mod N\).
In our example \(\lambda(4171) = \lcm(43 - 1, 97 - 1) = 672\),
and \(\lambda(4183) = \lcm(47 - 1, 89 - 1) = 2024\).

For simplicity, Story 12 only works through adding someone to a vault, but the potential attack applies to any situation in which secrets are encrypted to another’s public key. Thus, this applies during the final stages of recovery or when a vault is added to any group as well as when a vault is shared with an individual. This threat is probably most significant with respect to the automatic addition of vaults to the Recovery Group as described in “Restoring a user’s access to a vault.”

C.1 Types of defenses

The kind of problem we describe here is notoriously difficult to address, and it’s fair to say there are no good solutions to it in general. There are, however, two categories of (poor) solution that go some way toward addressing it in other systems.

C.1.1 Trust hierarchy

The first defense requires everyone with a public key to prove the key really is theirs to a trusted third party. That trusted third party would then sign or certify the public key as belonging to who it says it belongs to. The user of the public key would check the certification before encrypting anything with that key.
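In code, the certify-then-check step might look like the following sketch. It reuses the toy textbook-RSA scheme from Story 12 and invents a hypothetical certificate authority key pair (the primes 59 and 71 are illustrative, not from the paper); real systems use standardized certificate formats and padded signature schemes, not raw RSA over a hash.

```python
import hashlib
from math import lcm

def toy_rsa_keypair(p, q, e=17):
    N = p * q
    d = pow(e, -1, lcm(p - 1, q - 1))
    return (e, N), d

# Hypothetical trusted third party ("CA") with its own toy key pair.
ca_pk, ca_d = toy_rsa_keypair(59, 71)

def digest(identity, pk, modulus):
    """Assumed canonical serialization of (identity, public key), hashed."""
    h = hashlib.sha256(f"{identity}:{pk}".encode()).digest()
    return int.from_bytes(h, "big") % modulus

def certify(identity, pk, ca_d, ca_pk):
    """CA signs the digest: s = h^d mod N_ca."""
    return pow(digest(identity, pk, ca_pk[1]), ca_d, ca_pk[1])

def check_cert(identity, pk, sig, ca_pk):
    """Anyone holding the CA's public key checks: sig^e mod N_ca == h."""
    return pow(sig, ca_pk[0], ca_pk[1]) == digest(identity, pk, ca_pk[1])

molly_pk, _ = toy_rsa_keypair(43, 97)  # Molly's real key (17, 4171)
cert = certify("molly", molly_pk, ca_d, ca_pk)

assert check_cert("molly", molly_pk, cert, ca_pk)   # genuine key passes
# A substituted key, e.g. Mr. Talk's (17, 4183), produces a different digest,
# so the same certificate fails to verify for it.
fake_ok = check_cert("molly", (17, 4183), cert, ca_pk)
```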

Creating or using a trust hierarchy isn’t particularly feasible within 1Password, as each individual user would need to prove to a third party that their key is theirs. That third party cannot be AgileBits or the 1Password server – the goal is to defend against a MITM attack launched from within the 1Password server. Although the 1Password clients could assist in some of the procedure, it would place a costly burden on each user to prove their ownership of a public key and publish it.

C.1.2 User-to-user verification

The second approach is to enable users to verify keys themselves. They need to perform that verification over a communication channel that’s not controlled by 1Password. Patty needs to talk directly to Molly, asking Molly to describe \(\pk_{M_a}\) in a manner that will allow Patty to distinguish it from a maliciously crafted \(\pk_{M_f}\).

In the case of RSA keys, the crucial values include a number that would be hundreds of digits long if written out in decimal notation. Thus a cryptographic hash of the crucial values – a fingerprint – is computed and presented in some form or other. Personal keysets also contain an Elliptic Curve Digital Signature Algorithm (ECDSA) key pair that’s used for signing; ECDSA is a digital signature algorithm based on elliptic curve cryptography, described in FIPS PUB 186-4. These keys are far shorter than RSA keys, but may still be too large to be directly compared by humans. Recent research48 has confirmed the long-suspected belief that the form of fingerprints makes comparisons difficult and subject to security-sensitive errors. Such research does point to ways in which the form of fingerprints can be improved, and it’s research we’re closely following.
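As a concrete illustration, a fingerprint can be derived by hashing a canonical serialization of the key’s public values and grouping the digits for easier human comparison. The serialization and display format below are assumptions for the sketch, not 1Password’s actual scheme:

```python
import hashlib

def fingerprint(e: int, N: int) -> str:
    """Hash the public key values and format the digest for human comparison."""
    material = f"rsa:{e}:{N}".encode()   # assumed canonical serialization
    digest = hashlib.sha256(material).hexdigest()
    # Group into 4-hex-digit blocks, as many fingerprint formats do.
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4)).upper()

# Even a small change in the modulus yields an unrelated fingerprint, so
# Patty comparing fingerprints with Molly out of band would catch Mr. Talk:
real = fingerprint(17, 4171)   # Molly's real key
fake = fingerprint(17, 4183)   # Mr. Talk's substitute
print(real)
print(fake)
assert real != fake
```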

The difficulty users face in verifying keys via fingerprints isn’t just in the technicalities of the fingerprint itself, but in understanding what fingerprints are for and how to make use of them. As Vaziripour, J. Wu, O’Neill, et al. point out, “The common conclusion of [prior research] is that users are vulnerable to attacks and cannot locate or perform the authentication ceremony without sufficient instruction. This is largely due to users’ incomplete mental model of threats and usability problems within secure messaging applications.”49

Users may need to understand:

  • Fingerprints aren’t secret.
  • Fingerprints should be verified before using the key to which they are bound.
  • Fingerprints must be verified over an authentic and tamper-proof channel.
  • That communication channel must be different from the communication system the user is trying to establish.

The developers of Signal, a well-respected secure messaging system, summarized some difficulties with fingerprints:50

“Publishing fingerprints requires users to have some rough conceptual knowledge of what a key is, its relationship to a fingerprint, and how that maps to the privacy of communication.

“The practice of publishing fingerprints is based in part on the original idea that users would be able to manage those keys over a long period of time. This has not proved true, and has become even less true with the rise of mobile devices.”

Although their remediation within Signal has a great deal of merit, only a small portion of Signal users attempt the process of key verification. When they’re instructed to do so (in a laboratory setting) they often don’t complete the process successfully.51

C.2 The problem remains

We’re aware of the threats posed by MITM attacks, and users should be aware of them, too. We’ll continue to look for solutions, but we’re unlikely to adopt an approach that places a significant additional burden on the user unless we can have some confidence in the efficacy of such a solution.


  45. Free Software Foundation (1999)↩︎

  46. The role of public key encryption in 1Password is described in “How items are shared with anyone” and “Restoring a user’s access to a vault.”↩︎

  47. \(e\) is the public exponent and \(\lambda(N)\) is the Carmichael totient, which can be calculated from \(p\) and \(q\), the factors of \(N\), as \(\lcm(p - 1, q - 1)\).↩︎

  48. Dechand et al. (2016)↩︎

  49. Vaziripour et al. (2018)↩︎

  50. Marlinspike (2016)↩︎

  51. Vaziripour et al. (2017) and Vaziripour et al. (2018)↩︎