My last post detailed some of the early privacy community's general hesitance to consider technology as a solution for preserving privacy. That hesitance was particularly frustrating because, by then, I had already developed a suite of technologies I knew could help solve the problems being discussed.
These were technologies like electronic money, digital blind signatures, a distributed trustless vault system, and mix networks. Together, these could preserve privacy across messaging, payments, and credentials as our world moved online.
So for this post, and several upcoming ones, I thought it would be interesting to step back and take a look at where the road to these technologies began.
For mix networks, the road starts with voting. Voting, and believe it or not, a hot tub.
I came to Berkeley interested in learning more about cryptography. Having already done a few things in the field prior to enrolling, I had started to recognize a huge potential privacy problem that others focused on end-to-end encryption had not yet realized: the metadata, the "who talks to whom and when," could be at least as revealing as the contents of the encrypted messages themselves. I felt that allowing third parties to access and aggregate our metadata would have dramatic implications for our digital privacy in the future. So I was interested in finding new applications for cryptography to protect that metadata, which at the time I called "tracking information." I approached achieving this level of privacy from two different angles.
The first of these was something I called the "card computer." I actually presented it in my "Security Without Identification" paper, but the idea predates that paper by a long time, from before I began attending Berkeley. My concept for the device was exactly as it sounds: a little pocket computer that would help you protect your data and privacy, about the size and thickness of a credit card, with a keyboard, a basic display like on a pocket calculator, and a small transmitter like on a garage-door opener. The card would be able to exchange encrypted information with point-of-sale and other computers, with no need for anyone but the owner to handle it. I filed some patents for it, but the idea never really gained much traction. By the time I published the paper, Visa and Toshiba were jointly developing what they termed a smart card: a credit or debit card with a chip in it, like the ones universal today. Unfortunately, those cards would end up being proprietary to the issuing corporation and not transparent to the user.
The second angle, which has a more colorful origin story, began at Berkeley. One of my first advisors was a professor named Bob Fabry. He was an expert in computer security at the time, who would later go on to lead the development of BSD (Berkeley Software Distribution) Unix. He was mainly focused on operating-system-level security, so he and I approached the issue of security from different perspectives, but we spent a great deal of time collaborating on several papers. He also happened to have a magnificent back porch facing the redwood-lined north side of the UC Berkeley campus, complete with a hot tub and what looked like a military-grade radio tower (an expression of his deep affection for ham radio). So one night he and I were sitting in this hot tub, against a backdrop of Northern California redwoods, brainstorming about my interest in metadata, privacy, and cryptography, when it dawned on me that voting could be an interesting starting point for using cryptography to solve a simplified privacy problem.
Good scientific practice is to identify a really simple toy problem on which to test your theory, and voting, in the context of a computer network, seemed like the ideal one. See, voting is simple: even young children intuitively figure out how to do it. So I thought that testing how to let computers vote privately on a network, while still ensuring the total count was correct, was the simplest way forward.
After a bit of thought, mixing stood out to me as the best way to cryptographically secure voting. And because the ballots one would send through mix networks for voting were effectively just packets of messages, it quickly dawned on me that mixing, and mix networks, could easily be applied to protect privacy for messaging more broadly. After connecting those dots, I realized this idea had the potential to make a real impact, and I set out to make mix networks the subject of my Master's thesis, "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms," which would later be published in Communications of the ACM.
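The core mixing idea can be sketched in a few lines of code. The toy simulation below is an illustration only, not the thesis construction: real mixes use public-key encryption, whereas here each layer is a throwaway XOR "cipher" built from iterated SHA-256 (not secure). The shape is the point: each sender wraps a ballot in one encryption layer per mix, and each mix strips its layer from a whole batch and emits the items in shuffled order, so the tally survives while the link between arrival order and departure order is broken.

```python
import hashlib
import random


def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from iterated SHA-256. Illustration only, NOT secure."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]


def xor_layer(key: bytes, msg: bytes) -> bytes:
    """Apply one XOR-stream layer; XOR is its own inverse, so this also strips it."""
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))


def wrap(ballot: bytes, mix_keys: list[bytes]) -> bytes:
    """Sender adds one layer per mix, outermost layer for the first mix."""
    for key in reversed(mix_keys):
        ballot = xor_layer(key, ballot)
    return ballot


def mix_node(key: bytes, batch: list[bytes]) -> list[bytes]:
    """A mix strips its layer from every item, then outputs the batch shuffled."""
    stripped = [xor_layer(key, item) for item in batch]
    random.shuffle(stripped)
    return stripped


# Hypothetical two-mix cascade carrying three ballots.
keys = [b"mix-1-secret", b"mix-2-secret"]
ballots = [b"candidate A", b"candidate B", b"candidate A"]

batch = [wrap(b, keys) for b in ballots]
for key in keys:
    batch = mix_node(key, batch)

# The multiset of ballots is intact, but their order no longer reveals senders.
print(sorted(batch))
```

Note that no single mix can trace a ballot from input to output on its own; an observer would have to corrupt every mix in the cascade to link senders to votes, which is exactly the property that made the design attractive for messaging as well.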
That paper, and the idea of mixing, set me off on the course that would lead me to blind signatures, ecash, and more. In a future post, I’ll take a bit of a deeper dive into mixnets, how they work, some examples of where they’ve been applied, and why they needed updating to be more effective at scale.