The Heartbleed bug in OpenSSL was major news this week. While you are waiting for sudo apt-get dist-upgrade to finish on all your servers, let’s take a minute to reflect on how the SSL trust system works and on the kinds of system relationships that depend on it.
Just to recap: SSL-based trust derives from a “blessed” certificate authority, which can sign other certificates that are in turn “trusted” as long as their lineage can be traced back to the original authority. In this system the answer to “Should I trust you?” is binary; the answer is either “yes” or “no”, depending on whether the certificate chain validates.
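The binary nature of chain validation can be sketched as follows. This is an illustrative toy, not real X.509: the certificate names, the issuer table, and the `is_trusted` helper are all invented for the example, and real validation also checks signatures, expiry dates, and revocation.

```python
# Toy model of chain-of-trust validation. All names are hypothetical.
TRUSTED_ROOTS = {"RootCA"}

# Map each certificate to its issuer; a self-signed root issues itself.
ISSUED_BY = {
    "shop.example.com": "IntermediateCA",
    "IntermediateCA": "RootCA",
    "RootCA": "RootCA",
}

def is_trusted(cert: str) -> bool:
    """Walk the issuer chain upward; trust is strictly all-or-nothing."""
    seen = set()
    while cert not in seen:
        if cert in TRUSTED_ROOTS:
            return True  # lineage reaches a blessed root
        seen.add(cert)
        cert = ISSUED_BY.get(cert, cert)  # unknown issuer ends the walk
    return False  # no path to a trusted root

print(is_trusted("shop.example.com"))  # True: chains up to RootCA
print(is_trusted("unknown.example"))   # False: no lineage to a root
```

Note there is no middle ground in the return value: the function hands back `True` or `False`, never “trust this certificate 70%”.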
If the certificate authority is compromised, all of the derivative trusted systems become suspect as well. Real certificates become indistinguishable from fake ones, and it is no longer possible to know who or what to trust. This absolute, all-or-nothing trust is an inherent weakness of certificate-based trust and of the X.509 trust system.
This situation got me thinking about the differences between human trust systems and computing trust systems. It’s interesting that people take a very different approach to trusting computers than we do to trusting each other. We can rarely answer the question “Do I trust you?” about another person with a simple “yes” or “no” that covers all scenarios. Most of the time the answer is “it depends” on what we are trusting the person to do, which is better captured by the concept of trustworthiness. In human networks you establish trustworthiness based on what you know about a person, their honesty and their reputation, and on information from other people whose experiences and interactions are similar to the one you are trying to assess.
SSL doesn’t let us assess the trustworthiness of computers. It presents a “definitive” trust answer, in which we place too much confidence. On the strength of a website’s SSL attestation, we readily hand over our credit cards to strange computers in a way we would never do with people.
There are alternatives to SSL for establishing trust in computing. Pretty Good Privacy (PGP), for example, is a trust system that operates a bit closer to how humans think about trusting one another, although it carries extra overhead because trust relationships are established and managed on a case-by-case basis. Moreover, PGP doesn’t necessarily incorporate historical experience akin to the “reliability” component of trustworthiness.
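A PGP-style “web of trust” can be sketched in the same toy style. Here, instead of one blessed root, I assign graded trust to the people who have signed a key and sum their weights. The scoring rule (full = 1.0, marginal = 0.5) loosely mirrors GnuPG’s defaults, but the names, the threshold, and the data structures are all illustrative assumptions, not PGP’s actual implementation.

```python
# Toy web-of-trust scoring. Keys, signers, and weights are hypothetical.
SIGNERS = {
    "alice_key": ["bob", "carol"],   # who has signed each key
    "mallory_key": ["dave"],
}
MY_TRUST = {"bob": "full", "carol": "marginal"}  # dave is unknown to me

WEIGHT = {"full": 1.0, "marginal": 0.5}  # unknown signers score 0

def key_validity(key: str) -> float:
    """Sum the trust I personally place in a key's signers."""
    return sum(WEIGHT.get(MY_TRUST.get(s, "unknown"), 0.0)
               for s in SIGNERS.get(key, []))

print(key_validity("alice_key"))    # 1.5: vouched for by people I trust
print(key_validity("mallory_key"))  # 0.0: signed only by strangers
```

The interesting contrast with the SSL sketch is the return type: a graded score rather than a yes/no, so each user can pick their own threshold for “trusted enough”.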
This week’s news highlights the need for accessible, manageable trust systems that provide estimates of trustworthiness, somewhere between the all-or-nothing trust of SSL and case-by-case managed trust systems like PGP.