Deps
OpenSSL: The Lock on Every Door
S2 E14 · 1h 5m · Mar 17, 2026
On April 7, 2014, a bleeding heart logo revealed that 17% of the internet's secure servers had been silently leaking passwords, encryption keys, and credit card numbers through a bug in OpenSSL—and anyone could steal them with just a few lines of code.

OpenSSL: The Lock on Every Door

The Heartbeat That Bled

This is episode fourteen of What Did I Just Install.

On the seventh of April, two thousand fourteen, a Monday morning, system administrators around the world opened their email to find something that had never existed before. A security vulnerability with a name. A logo. A website. And the kind of message that makes your stomach drop.

The vulnerability was called Heartbleed. The logo was a bleeding heart, drawn in red against white, the kind of clean design you would normally associate with a startup launch, not a catastrophe. The website, heartbleed dot com, explained in simple language that a bug in a piece of software called OpenSSL had been silently exposing the private memory of approximately seventeen percent of the internet's secure web servers. Passwords, session tokens, credit card numbers, private encryption keys. All of it, leaking, up to sixty-four kilobytes at a time, to anyone who knew how to ask.

And anyone could ask. The exploit was trivial. A few lines of code. No authentication required. No logs left behind. You could reach into a server's memory the way you might reach into someone's pocket on a crowded train, and they would never know you had been there.

The scramble that followed was unprecedented. Yahoo, the Canada Revenue Agency, the United Kingdom's National Health Service, community forums, banks, governments, all of them vulnerable. Community Health Systems, one of the largest hospital chains in the United States, eventually disclosed that four and a half million patient records had been stolen through the bug. The Canada Revenue Agency shut down its online tax filing system for days during peak tax season. Mumsnet, one of Britain's largest parenting forums, had to force a password reset for every one of its one and a half million users after discovering that accounts had been accessed.

But the part that shocked people most was not the bug itself. Bugs happen. Critical vulnerabilities are found in software every week. What shocked people was what they learned when they looked behind the curtain at who had been maintaining the software that protected their passwords, their banking, their medical records, their email, their everything.

Two people. Essentially two people, working out of their homes, in two different countries, who had never met in person.

Their names were both Steve.

The Invisible Lock

To understand why two Steves were holding up the internet, you first have to understand what OpenSSL actually does. And to understand that, you need to understand the problem it solves. Every time you type a password into a website, every time your browser shows that little padlock icon, every time you send a credit card number to an online store, something has to protect that information as it travels across the open internet. Without encryption, every piece of data you send is a postcard. Anyone who handles it along the way can read it. Your internet service provider, the operators of every router between you and the server, anyone sitting on the same Wi-Fi network at the coffee shop. A postcard.

TLS, which stands for Transport Layer Security, is the protocol that turns those postcards into sealed envelopes. It is the successor to SSL, Secure Sockets Layer, though people still use both names interchangeably. When your browser connects to a website over HTTPS, it performs a handshake. The server presents a certificate that says, in effect, I am who I claim to be, and here is my public encryption key. Your browser verifies that certificate against a list of trusted authorities. If everything checks out, the two sides negotiate a shared secret key, and from that point forward, every byte that passes between them is encrypted. Fast enough that you never notice. Secure enough that intercepting the traffic gives you nothing but noise.
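For listeners who want to see the shape of that handshake in code, here is a minimal client-side sketch against OpenSSL's C API. It assumes OpenSSL one point one or later, a TCP socket that is already connected, and a hostname passed in by the caller; error handling and cleanup are compressed, and it is an illustration of the steps just described, not the library's own example code.

```c
/* Minimal sketch of a TLS client handshake via OpenSSL's C API.
 * Assumes `sock` is a TCP socket already connected to the server.
 * Error handling and cleanup are compressed for brevity. */
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int tls_client_handshake(int sock, const char *hostname) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());   /* the protocol rules */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);    /* refuse unverifiable certificates */
    SSL_CTX_set_default_verify_paths(ctx);             /* trust the system's root store */

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sock);                             /* wrap the existing TCP connection */
    SSL_set_tlsext_host_name(ssl, hostname);           /* SNI: tell the server which site we want */
    SSL_set1_host(ssl, hostname);                      /* require the certificate to match the name */

    if (SSL_connect(ssl) != 1) {                       /* the handshake itself */
        ERR_print_errors_fp(stderr);
        return -1;
    }

    /* From here on, SSL_read() and SSL_write() carry encrypted bytes. */
    printf("negotiated %s with cipher %s\n",
           SSL_get_version(ssl), SSL_get_cipher(ssl));
    return 0;
}
```

Three of those calls carry the weight of the paragraph above: loading the trusted roots, binding the expected hostname to the certificate check, and SSL_connect, which runs the negotiation. Everything after that is encrypted reads and writes.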

That is the theory. TLS is a protocol, a set of rules. OpenSSL is the software that actually implements those rules. The code that does the math. The functions that negotiate the handshake, verify the certificates, encrypt the data, decrypt the data. It is a library, a collection of tools that other software calls upon when it needs to do anything involving cryptography.

And it is everywhere. Or rather, it was everywhere in two thousand fourteen. Apache, the web server that at its peak ran more than half the websites on the internet, used OpenSSL. Nginx, the other dominant web server, used OpenSSL. Every time you used SSH to connect to a remote server, the underlying cryptographic primitives came from OpenSSL's libcrypto library. VPN software, email servers, instant messaging systems, database connections, API calls, package managers downloading dependencies over HTTPS. If data was being encrypted in transit on the internet, there was a very good chance OpenSSL was the thing doing the encrypting.

Two thirds of the internet's secure web servers ran on it. Not merely ran software that happened to pull it in somewhere deep in a dependency tree. Two thirds of encrypted web connections depended on it directly, the way a house depends on its foundation. Strip it away, and nothing above it works.

And this was the software being maintained by two people in their homes.

The Australians Who Started It

The story begins in Australia in nineteen ninety-five, with a developer named Eric Andrew Young. The internet was barely commercial. Netscape had introduced SSL, Secure Sockets Layer, the year before as a way to enable secure web commerce, the ability to type a credit card number into a browser and have it arrive at a server without anyone in between reading it. But Netscape's SSL implementation was proprietary. If you wanted to build a web server or a secure application, you either licensed Netscape's code or you wrote your own.

Young wrote his own. He was working at the University of Queensland in Brisbane, and his implementation was called SSLeay. The name was not poetic. S-S-L-eay. SSL by Eric A. Young, where eay were his initials. It was the kind of name a developer gives something when they expect it to be used by approximately nobody.

Tim Hudson, another Australian developer, joined Young as a collaborator. Together, through the mid-nineties, they built SSLeay into a surprisingly complete and functional SSL library. They released it under a permissive BSD-style license that allowed anyone to use it, modify it, and distribute it freely. This mattered. In an era when cryptography was classified as a munition by the United States government, when the export of strong encryption was illegal under ITAR regulations, an Australian-written crypto library licensed under terms that let anyone use it was not just convenient. It was geopolitically significant. SSLeay was not subject to US export controls.

By nineteen ninety-seven, SSLeay was the most popular free SSL implementation in the world. Apache, the web server that was rapidly becoming the dominant server on the internet, used SSLeay for its HTTPS support, first through the Apache-SSL patches and later through the mod_ssl module. The library was being downloaded thousands of times, compiled into servers, embedded in applications, depended upon by a growing ecosystem of internet services. And it was maintained by two people in Australia who were not being paid for it.

In nineteen ninety-eight, both Young and Hudson left to join RSA Security in Australia. RSA was the company built around the RSA encryption algorithm, one of the foundational patents in public-key cryptography. It was a good job. A real salary. The kind of offer that open source developers in the nineties rarely received. They took it, and their work on SSLeay effectively stopped.

But the code remained. Free, open, BSD-licensed. And other people picked it up.

On December twenty-third, nineteen ninety-eight, a group of developers including Mark Cox from Red Hat and Ralf Engelschall from the Apache project forked SSLeay, renamed it OpenSSL, and set about turning it into the standard open source cryptography toolkit. The name followed the convention of the era. SSL was the protocol. Open meant open source. OpenSSL. Straightforward, descriptive, and destined to become one of the most critical pieces of software in the history of the internet without anyone outside the security world ever learning its name.

The timing was pivotal. The web was exploding. E-commerce was becoming real. Amazon was selling books. eBay was hosting auctions. PayPal was processing payments. Every single one of these services needed encryption, and every single one of them needed an SSL implementation they could afford. OpenSSL was free. OpenSSL was open. OpenSSL was good enough. Within a few years, it was everywhere.

Two Guys Named Steve

For years, OpenSSL existed in the strange twilight zone that we have seen throughout this series. Critical infrastructure, universally depended upon, maintained by essentially nobody. Volunteers came and went. Bug fixes trickled in. The OpenSSL development team, such as it was, fluctuated between one and three active contributors at any given time.

By the two thousands, the project had settled into what would become its long-term pattern, anchored by two people.

Stephen Henson was a British mathematician. Quiet, methodical, deeply knowledgeable about cryptography. He was the primary coder, the person who understood the full OpenSSL codebase more completely than anyone else alive. He worked from his home in the United Kingdom, reviewing patches, fixing bugs, implementing new protocol versions. For years, he was the closest thing OpenSSL had to a full-time developer, though calling it full-time is generous. The pay, when there was pay, came not from OpenSSL's donations or the goodwill of the community, but from an unlikely source.

Steve Marquess was the other Steve. And Marquess was not a developer at all. His background was in the United States military and defense contracting. He had spent years working on government and military technology projects, the kind of work where security clearances and acronyms matter more than GitHub profiles. At some point, Marquess had gotten involved with OpenSSL's FIPS validation work.

FIPS stands for Federal Information Processing Standards. If you want to sell encryption software to the United States government, it has to be FIPS one forty dash two validated. The validation process is brutal, expensive, and ongoing. It requires detailed documentation, formal testing by accredited laboratories, and compliance with standards that are measured in hundreds of pages. Most commercial companies spend millions on FIPS validation. For OpenSSL, it fell to Marquess.

Marquess set up the OpenSSL Software Foundation, the OSF, as a mechanism to fund the project. The model was straightforward in theory. Marquess would do FIPS compliance consulting, companies that needed FIPS-validated OpenSSL would pay the OSF for that work, and the revenue would fund actual OpenSSL development. In practice, it meant that Steve Marquess, a man who was not a cryptographer and not a software developer, was running the financial and administrative side of one of the internet's most critical security projects, mostly by himself, out of a house in suburban Maryland.

A few million a year would do grandly. That would let us pay for several full-time SSL developers.

But a few million a year never came. The OpenSSL Software Foundation never received more than one million dollars in income in any given year. And most of that went right back out the door to cover FIPS validation contracts, not to pay developers. The actual donations that the project received, the money from the broader community that depended on OpenSSL for their security, averaged around two thousand dollars per year.

Two thousand dollars. For the software protecting two thirds of the internet's encrypted connections.

The irony of the FIPS funding model was cruel. The reason OpenSSL had any money at all was because the United States government required FIPS validation. Government agencies could not use unvalidated cryptographic software. So the funding loop was this: the government needed OpenSSL to be FIPS-validated, Marquess did the validation consulting, the consulting revenue funded Henson's salary, and Henson used that time to maintain OpenSSL for the entire internet. The government's compliance requirements were effectively subsidizing the internet's encryption, not intentionally, not generously, but as an accidental side effect.

And the FIPS work itself poisoned the codebase. Maintaining FIPS compliance required conditional compilation paths, parallel code branches, and compatibility layers that added thousands of lines of complexity. The very work that funded OpenSSL development was also making OpenSSL harder to maintain, harder to audit, and more likely to contain bugs. Every hour Henson spent on FIPS compliance was an hour not spent on the code that everyone else depended on.

The BuzzFeed article that would eventually tell their story was titled "The Internet Is Being Protected By Two Guys Named Steve." When it ran, in April two thousand fourteen, it noted a detail that made the absurdity concrete. Marquess and Henson, the two people most responsible for the security of the internet's encryption layer, had never met in person. They communicated by email and phone. They lived on different continents. They had been doing this for years.

This is the funding pattern we have seen before. But OpenSSL was the extreme case. Daniel Stenberg maintained curl on a shoestring, but curl had one primary purpose and one primary maintainer by design. The ffmpeg team was underfunded, but at least it had a team. OpenSSL was the padlock on almost every door on the internet, and it was held together by two men who split the work between code and paperwork, neither of them paid what even a junior developer at any of the companies depending on their work would earn.

And unlike most of the packages in this series, the stakes were not about inconvenience. If left-pad disappears, your build fails and you fix it in ten minutes. If OpenSSL has a critical bug, passwords are stolen, private keys are extracted, encrypted communications are silently readable by attackers, and there is no way to know it happened. The failure mode for a cryptographic library is not a crash. It is a silent compromise. Everything appears to work perfectly while your secrets are bleeding into the hands of whoever knows where to look.

The Heartbeat on New Year's Eve

On the thirty-first of December, two thousand eleven, a Saturday, somewhere around an hour before midnight, Stephen Henson committed a code change to the OpenSSL repository. The change implemented a new feature, the TLS heartbeat extension, defined in RFC six five two zero. The code had been written by Robin Seggelmann, a German PhD student at the University of Duisburg-Essen, and it had been submitted as a contribution to the project.

The heartbeat extension was a simple idea. TLS connections can be long-lived, but the underlying TCP connection might time out if no data is flowing. The heartbeat was a keep-alive mechanism. One side sends a small message, a heartbeat request, to the other side. The message contains a payload, some arbitrary data, and a length field that says how big the payload is. The other side is supposed to read the payload, then send it back. A ping. Are you still there? Yes, I am still here. Here is your data back.

Seggelmann's implementation had a bug. A missing bounds check. The code read the length field from the incoming heartbeat request and used that length to determine how many bytes to copy into the response. But it did not verify that the length field actually matched the size of the payload. If you sent a heartbeat request with a payload of one byte but claimed the length was sixty-four thousand bytes, the server would obediently copy sixty-four thousand bytes from its memory into the response. Your one byte, plus sixty-three thousand nine hundred and ninety-nine bytes of whatever happened to be sitting in memory next to it.
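In code, the flaw looked roughly like this. What follows is a simplified sketch of the shape of the bug, condensed from the real tls1_process_heartbeat function rather than a line-for-line copy; the record variable is a stand-in for OpenSSL's internal structures, and n2s and s2n are its internal macros for reading and writing two-byte lengths.

```c
/* Simplified sketch of the Heartbleed flaw. Not a line-for-line copy of
 * OpenSSL's tls1_process_heartbeat(); `record` is a stand-in for the
 * internal structures holding the received heartbeat record. */
unsigned char *p = record->data;       /* start of the attacker-controlled request */
unsigned short payload_len;

p++;                                   /* skip the one-byte message type */
n2s(p, payload_len);                   /* read the CLAIMED payload length, two bytes */
unsigned char *pl = p;                 /* the payload itself starts here */

/* Build the response: allocate room, then echo payload_len bytes back. */
unsigned char *buffer = OPENSSL_malloc(1 + 2 + payload_len + 16);
unsigned char *bp = buffer;
*bp++ = TLS1_HB_RESPONSE;
s2n(payload_len, bp);
memcpy(bp, pl, payload_len);           /* BUG: nothing ever checked that payload_len
                                          matches how many bytes actually arrived, so
                                          the copy keeps going into adjacent memory */
```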

I was working on improving OpenSSL and submitted numerous bug fixes and added new features. In one of the new features, unfortunately, I missed validating a variable containing a length. I failed to check that one particular variable, a unit of length, contained a realistic value. This is what caused the bug.

Unfortunately, the OpenSSL developer who reviewed the code also did not notice that a mistake had been made when carrying out the check. As a result, the faulty code was incorporated into the development version, which was later officially released.

Two people. One wrote the code and missed the bug. The other reviewed the code and missed the bug. That was the entire quality assurance process for a change to the most widely deployed cryptographic library on the planet. No second reviewer. No automated bounds-checking tools. No fuzzing. No formal verification. One PhD student and one overworked volunteer, on New Year's Eve, in different time zones, and a single missing line of code that would not be found for over two years.

The vulnerable code shipped in OpenSSL version one point oh point one, released on March fourteenth, two thousand twelve. It sat there, in production, on hundreds of millions of servers, for twenty-six months before anyone noticed.

The Discovery

In early two thousand fourteen, a Google security researcher named Neel Mehta was auditing OpenSSL code. Mehta was part of Google's security team, the group that spent its days probing the internet's infrastructure for weaknesses before attackers could find them. What Mehta found in the heartbeat implementation was the kind of vulnerability that security researchers describe with words like critical, trivial to exploit, and catastrophic in scope.

Independently, around the same time, a team at a Finnish security company called Codenomicon also discovered the same bug. Codenomicon was a small firm that specialized in security testing tools. They had been running their own analysis of OpenSSL and stumbled on the same missing bounds check.

This vulnerability allows an attacker to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users, and the actual content.

Mehta reported the vulnerability to the OpenSSL team on April first, two thousand fourteen. Google, having found it through their own researcher, had already quietly patched their own servers. This is standard practice in responsible disclosure. You fix your own house first, then you tell the vendor, then you wait for a patch, then you go public. The gap between telling the vendor and going public is supposed to give everyone time to prepare.

But the scale of OpenSSL's deployment made coordinated disclosure almost impossible. You could not quietly patch two thirds of the internet. Some organizations were given advance notice. CloudFlare, the content delivery network that sits in front of millions of websites, was notified before the public announcement and managed to deploy a fix. Others were not.

When the public disclosure came on April seventh, it came with something nobody had ever seen before for a security vulnerability. A brand.

Codenomicon had created a website, heartbleed dot com, with a clean design, a bleeding heart logo, and a name that was immediately memorable. Heartbleed. Not C-V-E two thousand fourteen dash oh one six oh. Not a dry advisory buried in a mailing list. A name that normal people could remember, a logo that journalists could put on their front pages, and a website that explained the problem in language that non-technical readers could understand.

The name itself was perfect. The heartbeat extension, designed to keep connections alive, was now causing them to bleed information. The heart was bleeding. It was the kind of name that writes its own headlines, and it did. The New York Times, the BBC, CNN, every major newspaper and television network in the world ran the story. This was not a bug report. It was a global security crisis with a memorable name and a recognizable logo, and it reached an audience that had never heard the term buffer over-read and never would.

The branding of Heartbleed was controversial. Some security researchers argued it was self-promotion by Codenomicon, that naming vulnerabilities like products trivialized the work of actual remediation. Others said it was the most effective piece of security communication in history, that it reached board rooms and living rooms that a CVE number never would have.

Codenomicon was later acquired by Synopsys in two thousand fifteen. But the practice they started stuck. After Heartbleed came Shellshock, POODLE, DROWN, Spectre, Meltdown. Each with a name, a logo, sometimes a website. But Heartbleed was the first, and it set the template. The branded vulnerability. A product launch for a catastrophe.

The disclosure itself raised questions about fairness. Google had patched their own servers before telling anyone. CloudFlare had been given advance notice and deployed a fix before the public knew. But most companies, most system administrators, most of the five hundred thousand vulnerable websites, learned about Heartbleed at the same time as the attackers. The coordination, or lack of it, became its own controversy. Who decides who gets advance warning when a vulnerability affects two thirds of the internet? There is no good answer. Telling too many people risks leaking the vulnerability before a fix is ready. Telling too few people means the internet is undefended while the privileged few protect themselves.

The Bleeding

The fallout was immediate and enormous. Security researchers around the world began scanning the internet to determine the scope. The initial estimates were staggering. Roughly five hundred thousand websites, about seventeen percent of all SSL-enabled servers on the internet, were running vulnerable versions of OpenSSL.

The bug was not just theoretically exploitable. It was trivially exploitable. Within hours of the disclosure, proof-of-concept code was circulating. Within days, real attacks were being observed in the wild.

The Canada Revenue Agency was one of the first high-profile casualties. In the middle of tax season, the agency discovered that approximately nine hundred social insurance numbers had been stolen through the Heartbleed vulnerability. They shut down their online tax filing system for days. The timing could not have been worse.

Community Health Systems, which operated over two hundred hospitals across the United States, disclosed months later that attackers had used Heartbleed to steal personal data for approximately four and a half million patients. Names, addresses, birth dates, social security numbers, phone numbers. The kind of data that makes identity theft trivial.

Mumsnet, the British parenting forum, disclosed that user credentials had been compromised. The site's founder, Justine Roberts, revealed that her own account had been hijacked. One and a half million users were forced to change their passwords.

CloudFlare, the company that had received advance notice of Heartbleed, issued a public challenge after the disclosure. They set up a server running the vulnerable OpenSSL version and challenged researchers to use Heartbleed to extract the server's private encryption key. The point was to determine whether the most catastrophic theoretical impact, full private key extraction, was actually practical or just theoretical. Within hours, multiple researchers had succeeded. The private key was extractable. The theoretical worst case was the actual worst case.

The implications were dizzying. If an attacker had been quietly exploiting Heartbleed before it was publicly discovered, and there was no way to know whether anyone had, they could have captured private keys for any vulnerable server. With those keys, they could decrypt not just future traffic but past traffic that they had previously recorded but could not read. Retroactive decryption. Every encrypted session on every vulnerable server for the past twenty-six months was potentially compromised.

Across the internet, the response was a mass password reset. Major services, Yahoo, Tumblr, Dropbox, GitHub, sent emails to hundreds of millions of users telling them to change their passwords. The global scale of forced password resets had no precedent. And even changing passwords was not enough if the server had not been patched and its certificates had not been reissued. You had to patch the software, revoke the old certificate, issue a new certificate with a new key pair, and only then was it safe for users to change their passwords. Any password change before those steps was potentially compromised by the same bug.

The Scale Nobody Could Grasp

In the weeks that followed, the full scope of Heartbleed became clearer, and it was worse than the initial estimates suggested. The bug did not just affect web servers. OpenSSL was embedded in network equipment, in routers, in firewalls, in load balancers, in VPN concentrators. Cisco disclosed that dozens of its products were affected. Juniper Networks issued advisories. Fortinet, F5 Networks, Check Point, all vulnerable. The hardware devices that formed the backbone of corporate networks, the devices that many system administrators did not even think of as running software, had OpenSSL compiled into their firmware.

And you could not just reboot a router the way you could restart a web server. Firmware updates for network equipment required downtime, testing, change management approvals. Some of these devices were in data centers with strict maintenance windows. Some were in remote locations with no on-site staff. Some were running firmware so old that the vendor no longer supported them. The patch was available, but deploying it to every vulnerable device on the internet was a project that would take not days or weeks but months and years.

The United States Department of Homeland Security issued a formal advisory. The Canadian Centre for Cyber Security issued guidance. Australia, Germany, the United Kingdom, Japan, all issued national advisories. For the first time, a software vulnerability was being treated not as a technical problem but as a matter of national security across multiple countries simultaneously.

And then there was the question that nobody could answer. Had anyone been exploiting Heartbleed before Neel Mehta found it? The vulnerability had been in production for twenty-six months. It left no traces. The heartbeat request was a normal part of the TLS protocol. No logs recorded which heartbeat requests were legitimate keep-alives and which were memory-stealing exploits. Twenty-six months of potential exploitation, and no forensic evidence either way. The security community could only say that exploitation was possible, trivial, and undetectable. Whether it had actually happened was, and remains, unknowable.

The Man Who Did Not Mean To

Robin Seggelmann did not disappear after Heartbleed. He talked to reporters, openly and without evasion, which was braver than many people in his position would have been. He was a PhD student who had been contributing to OpenSSL in good faith. He had written the code. He had made a mistake. He did not try to hide from it.

I was not writing the code for any malicious purpose. The actual bug was introduced by mistake. It was not intentional at all. The developers and I would have caught it if we had been using automated testing tools to test for that kind of issue.

The question of whether the NSA had known about Heartbleed and exploited it before the public discovery was raised almost immediately. Bloomberg reported, citing unnamed sources, that the NSA had been aware of the vulnerability for at least two years and had been using it to gather intelligence. The NSA issued an unusual public denial, stating that they had no prior knowledge of Heartbleed. The denial did not settle the question. It never does with intelligence agencies. But no concrete evidence of NSA exploitation was ever produced.

What was concrete was the code itself. One missing bounds check. One if statement. The fix, when it came, was a few lines long. Check that the payload length field actually matches the payload size. If it does not, silently discard the heartbeat request. The fix was trivially simple, which made the original oversight feel even more gut-wrenching. Not a complex algorithmic flaw. Not a subtle race condition. A missing validation of user input, the kind of bug that a first-year computer science student is taught to avoid.
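What follows is a close paraphrase of the check the patch added, with variable names simplified from the real function; it is not the literal diff.

```c
/* Close paraphrase of the April 2014 fix, names simplified from the real
 * tls1_process_heartbeat(). record_length is how many bytes actually
 * arrived in the heartbeat record; 16 is the minimum padding the RFC requires. */
if (1 + 2 + 16 > record_length)
    return 0;   /* silently discard: too short to hold even an empty payload */
p++;            /* message type */
n2s(p, payload_len);
if (1 + 2 + payload_len + 16 > record_length)
    return 0;   /* silently discard: the claimed length exceeds what was actually sent */
```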

But context matters. Seggelmann was contributing to a project maintained by exhausted volunteers, reviewed by a single person, on New Year's Eve, with no automated testing infrastructure, no fuzzing, no static analysis tools, no code review process beyond one human being looking at a patch and deciding whether it seemed right. The bug was not a failure of talent. It was a failure of infrastructure. The infrastructure of the project itself, the human and institutional infrastructure that was supposed to catch exactly this kind of mistake, simply did not exist.

The Morning After

In the days after the Heartbleed disclosure, as the scale of the vulnerability became clear, a different kind of shock set in. The technical community had always known, in an abstract way, that important open source projects were undermaintained. But the specifics of OpenSSL's situation were a new level of alarming.

The BuzzFeed article ran on April twenty-fifth, eighteen days after the disclosure. It told the story of the two Steves in detail for the first time, and the reaction in the technology world was something close to collective guilt.

We had all been using this software. Every company, every government, every bank. And none of us had ever asked who was maintaining it, or whether they had enough resources to do the job. We just assumed someone was taking care of it.

The OSF typically receives about two thousand dollars a year in donations. We are in the process of applying for five oh one C three status, at which point things might improve, but even on the most optimistic projections it is still not enough for even one developer salary.

Two thousand dollars a year. For context, a single engineer at Google or Facebook or Microsoft, the companies whose products depended on OpenSSL, earned between two hundred thousand and four hundred thousand dollars per year in total compensation. The annual donations to the software protecting the internet's encrypted connections would not have covered one week of one engineer at any of those companies.

The Linux Foundation moved fast. On April twenty-fourth, Jim Zemlin, the same Linux Foundation executive director who would later announce the Valkey fork of Redis, announced the creation of the Core Infrastructure Initiative, CII. The pitch was simple. Major technology companies would pool money to fund critical open source projects that the internet depended on but nobody was paying for.

Thirteen companies signed up as founding members, each pledging one hundred thousand dollars per year for three years. Amazon, Cisco, Dell, Facebook, Fujitsu, Google, IBM, Intel, Microsoft, NetApp, Rackspace, Qualcomm, and VMware. A pool of roughly three point nine million dollars.

OpenSSL was the first project to receive funding. The CII sponsored two full-time OpenSSL core developers, allocated one hundred and twenty thousand dollars for education in open source development practices, another one hundred and twenty thousand for analysis of critical open source projects, and ninety-five thousand specifically for auditing OpenSSL's code.

It was the most significant organized response to an open source funding crisis in the history of the movement. It was also, in a sense, the minimum. Three point nine million dollars split across three years, across multiple projects, for software that protected trillions of dollars in commerce. The companies that pledged it earned that amount in revenue approximately every few minutes.

The Fork and the Fury

Not everyone believed the right response was to fix OpenSSL. Some believed the right response was to replace it.

Theo de Raadt, the founder of the OpenBSD project, was famously blunt, opinionated, and undiplomatic. OpenBSD had built its entire identity on security. The project's motto was "Only two remote holes in the default install, in a heck of a long time." De Raadt looked at the OpenSSL codebase that Heartbleed had just made infamous, and what he saw disgusted him.

The code is a mess. It contains thousands of lines of VMS support, thousands of lines of Windows support, thousands of lines of FIPS support, none of which we need. There are thousands of lines of code the OpenSSL team intended to deprecate twelve years ago but never got around to removing. Discarded leftovers everywhere.

In April two thousand fourteen, the OpenBSD team forked OpenSSL and announced LibreSSL. The name echoed LibreOffice, the fork of OpenOffice. The message was the same. We can do better.

The cleanup was savage. In the first week alone, the LibreSSL team removed over ninety thousand lines of C code. By the time they had finished their initial pass, over one hundred and fifty thousand lines of code had been stripped out. Support for Classic Mac OS, gone. NetWare, gone. OS/2, gone. Sixteen-bit Windows, gone. OpenVMS, gone. The FIPS compliance code that had consumed so much of Steve Marquess's time and the OSF's money, gone.

De Raadt's team did not just remove dead code. They replaced OpenSSL's custom memory allocator with the standard operating system allocator. This was significant. OpenSSL had been using its own memory management system that allocated large pools of memory and reused them internally, bypassing the operating system's memory protections. The intention was performance. The effect was that bugs like Heartbleed became more dangerous. Modern operating systems use address space layout randomization and guard pages to make memory bugs harder to exploit. OpenSSL's custom allocator defeated those protections. When Heartbleed read past the bounds of a buffer, it was reading from OpenSSL's own recycled memory pool, which was far more likely to contain sensitive data like passwords and private keys than random operating system memory would have been.

The LibreSSL team also replaced OpenSSL's custom random number generator with the operating system's. They ripped out layer after layer of abstraction that had accumulated over fifteen years of development by a rotating cast of volunteers with no consistent coding standards. They found dead code that referenced operating systems that had not been in use since the nineteen nineties. They found workarounds for compiler bugs that had been fixed a decade earlier. They found code that appeared to serve no purpose at all, that nobody alive could explain, that had simply accreted like sediment.

The message was blunt. The problem was not just the Heartbleed bug. The problem was the codebase. The codebase was structurally unsound. It had been built over years by too few people with too little review, accumulating complexity without anyone ever going back to clean it up. Heartbleed was the symptom. The disease was the code itself.

Two months later, in June two thousand fourteen, Google announced its own fork. BoringSSL, created by Adam Langley, a Google security engineer. The name was deliberately prosaic. Security software should be boring. Boring meant predictable. Boring meant no surprises. Boring meant nobody would ever have to learn the name of your cryptographic library from a newspaper headline.

BoringSSL was not intended as a drop-in replacement for OpenSSL. It was Google's internal fork, tailored to Google's needs, used in Chrome, Android, and other Google products. Google explicitly said they would not support outside users. If you wanted to use BoringSSL, you were on your own. Langley was characteristically direct about the reasoning. Google had been maintaining an internal patch set against OpenSSL for years, hundreds of patches that never went upstream because the OpenSSL project did not want them or could not process them fast enough. At some point, maintaining a fork became less work than maintaining a patch set.

So by the summer of two thousand fourteen, the monolithic OpenSSL that had been the internet's sole widely-deployed open source TLS implementation had fractured into three. OpenSSL itself, now receiving actual funding for the first time. LibreSSL, the security-purist fork, stripped down and aggressive. And BoringSSL, Google's quiet internal project, pragmatic and deliberately unfriendly to outsiders.

The parallel to the ffmpeg story we told in episode eight is hard to miss. A critical codebase, undermaintained, accumulating technical debt, eventually shattering into forks after a crisis. The ffmpeg fork happened because of a governance dispute. The OpenSSL forks happened because of a catastrophic bug. But the underlying cause was the same. Too few maintainers, too little funding, too much complexity, too much importance placed on software that nobody was willing to pay for.

The Chain of Trust

The Heartbleed crisis threw a spotlight on OpenSSL, but the deeper story is about the system that OpenSSL implements. The TLS trust chain. Because even if the code is perfect, even if every bounds check is present and every buffer is properly sized, the entire system rests on a question that is ultimately social, not technical. Who do you trust?

When you visit a website over HTTPS, your browser does not just verify that the connection is encrypted. It verifies that the server on the other end is who it claims to be. It does this through certificates. A certificate is, in essence, a signed statement. A trusted third party, called a certificate authority, vouches for the identity of the website. Your browser trusts the certificate authority, the certificate authority vouches for the website, and therefore your browser trusts the website. That is the chain. Root certificate authority, intermediate certificate authority, website certificate. A chain of trust.
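Mechanically, that walk up the chain is something OpenSSL will do for you. Here is a minimal sketch using its X509 verification APIs; root, intermediate, and leaf are hypothetical names for certificates already loaded from disk, and real code would add error handling around every call.

```c
/* Minimal sketch of verifying a chain of trust with OpenSSL's X509 APIs.
 * `root`, `intermediate`, and `leaf` are certificates already loaded elsewhere. */
#include <stdio.h>
#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

int verify_chain(X509 *root, X509 *intermediate, X509 *leaf) {
    X509_STORE *store = X509_STORE_new();            /* the trust anchors: the root store */
    X509_STORE_add_cert(store, root);

    STACK_OF(X509) *untrusted = sk_X509_new_null();  /* extra certs the server sent along */
    sk_X509_push(untrusted, intermediate);

    X509_STORE_CTX *ctx = X509_STORE_CTX_new();
    X509_STORE_CTX_init(ctx, store, leaf, untrusted);

    int ok = X509_verify_cert(ctx);                  /* 1 = a path to a trusted root exists */
    if (ok != 1)
        fprintf(stderr, "verify failed: %s\n",
                X509_verify_cert_error_string(X509_STORE_CTX_get_error(ctx)));

    X509_STORE_CTX_free(ctx);
    sk_X509_free(untrusted);
    X509_STORE_free(store);
    return ok == 1;
}
```

X509_verify_cert succeeds only if it can build a path from the leaf, through whatever untrusted certificates the server supplied, to something already in the store. And that store, in practice, is the root store your operating system and browser ship with.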

Your operating system and your browser ship with a list of trusted root certificate authorities. About one hundred and fifty organizations, worldwide, that are deemed trustworthy enough to sign certificates that your browser will accept without question. Mozilla maintains one such list for Firefox. Apple maintains one for Safari and macOS. Microsoft maintains one for Windows and Edge. Google maintains one for Chrome. These root stores are the bedrock. Everything above them is only as trustworthy as the weakest root.

And the roots have been broken before.

In the summer of two thousand eleven, three years before Heartbleed, a Dutch certificate authority called DigiNotar was hacked. DigiNotar was a small company based in Beverwijk, a quiet town in the Netherlands. They issued certificates for Dutch government websites among other clients. Small, respectable, trusted. Their root certificate was in every browser in the world.

The attacker gained access to DigiNotar's certificate-issuing infrastructure and generated fraudulent certificates for some of the most important domains on the internet. Google dot com. Yahoo dot com. Mozilla dot org. Skype dot com. The CIA. MI6. Mossad. The Tor Project. Over five hundred domains in total. The attacker later identified himself online using the alias Comodohacker, the same person who had previously breached another certificate authority called Comodo. He claimed to be a twenty-one-year-old Iranian acting alone, motivated by Dutch complicity in the Srebrenica massacre. Whether that story was true, whether he was acting alone, whether he had state backing, was never definitively established.

What was established was what the fraudulent certificates were used for. Man-in-the-middle attacks against Iranian citizens. When an Iranian user tried to access Gmail, their connection was intercepted by someone, almost certainly the Iranian government or an entity working with it, presenting a legitimate-looking certificate for Google, signed by a real certificate authority. The user's browser showed the padlock. The connection appeared secure. The URL bar showed HTTPS. But the attacker was sitting in the middle, reading every email, every password, every search query, every attachment. Iranian dissidents, activists, journalists, ordinary citizens checking their email, all of them surveilled through a forged certificate from a small Dutch company that had been compromised.

The first sign that something was wrong came from Iran itself. An Iranian user posted on a Google support forum on August twenty-eighth, two thousand eleven, reporting that Chrome was showing a certificate warning for Google dot com. The warning was caused by certificate pinning, a mechanism where Chrome had been hardcoded to only accept specific certificates for Google's own domains. When a certificate signed by DigiNotar showed up for a Google domain, Chrome rejected it. No other browser at the time had this protection. Firefox, Safari, Internet Explorer, all of them would have accepted the fraudulent certificate without complaint.

The investigation that followed revealed the full scope of the breach. DigiNotar had been compromised for weeks, possibly months, before anyone noticed. Their security practices were described by the subsequent independent investigation as severely deficient. DigiNotar was removed from every browser's trust store within days. The Dutch government, which relied on DigiNotar for its official certificates, had to scramble to find alternatives. The company filed for bankruptcy in September two thousand eleven, less than a month after the public disclosure.

The DigiNotar incident proved that the certificate authority system could be weaponized. A single compromised CA could issue certificates for any domain, and users would have no way of knowing. The padlock would still appear. The URL bar would still show HTTPS. The trust chain would look intact, but the trust would be a lie. And the victims would be the people who most needed encrypted communication, the dissidents and activists whose lives depended on it.

It happened again, on a much larger scale, between two thousand fifteen and two thousand eighteen. This time the problem was not an external hacker but the largest certificate authority in the world. Symantec. Through its various brands, including VeriSign, Thawte, and GeoTrust, Symantec controlled roughly thirty percent of all web certificates. And Google's Chrome security team discovered that Symantec had been issuing certificates improperly.

The details were technical but the summary was damning. Certificates issued without proper validation. Test certificates for live domains. Certificates issued by unauthorized parties operating under Symantec's trust. Over thirty thousand improperly issued certificates in total.

Google responded with a graduated nuclear option. Chrome sixty-six, released in April two thousand eighteen, would stop trusting Symantec certificates issued before June two thousand sixteen. Chrome seventy, released in October two thousand eighteen, would stop trusting all Symantec certificates entirely. Symantec, the world's largest certificate authority, was being ejected from the web's trust system by the world's most popular browser.

The standoff between Google and Symantec was one of the most consequential power struggles in the history of the web. Symantec argued that the response was disproportionate, that thirty thousand certificates out of millions issued was a tiny fraction. Google argued that the point of a certificate authority was trust, that any number of improperly issued certificates was too many, and that a certificate authority that could not be trusted had no reason to exist.

The impact was enormous. Websites that had purchased expensive Symantec certificates, sometimes paying hundreds of dollars per year, suddenly had to replace them on Google's timeline. Major banks, hospitals, government agencies, universities, all scrambling to reissue certificates before Chrome dropped support. The world's largest provider of SSL certificates had been found untrustworthy by the world's largest browser maker. Trust, once broken at the root, cannot be patched.

Symantec sold its certificate authority business to DigiCert before the final deadline. The migration affected millions of websites. And the message was clear. Being a trusted root is not a permanent status. It is a privilege that can be revoked. The approximately one hundred and fifty organizations in the world's browser root stores hold their position on sufferance, not by right. If they fail, they are removed, and the chain of trust that depends on them collapses overnight.

Let Us Encrypt

In the aftermath of Heartbleed, and against the backdrop of an increasingly fragile certificate authority system, a different kind of response was taking shape. One that would not just fix the code or fund the developers, but change the economics of internet encryption entirely.

The idea had been germinating since two thousand twelve, when two groups of people, working independently, arrived at the same conclusion. If HTTPS was supposed to protect everyone, then the barrier to entry was too high. SSL certificates cost money, sometimes hundreds of dollars per year. They required technical expertise to install. They expired and had to be renewed manually. For small websites, personal blogs, nonprofits, organizations in the developing world, the cost and complexity of HTTPS was effectively a gate. And as long as that gate existed, a huge portion of the web would remain unencrypted.

At Mozilla, Josh Aas and Eric Rescorla were leading a team to design a free, automated certificate authority. Separately, Peter Eckersley at the Electronic Frontier Foundation and J. Alex Halderman at the University of Michigan were developing a protocol for automatically issuing and renewing certificates. In May two thousand thirteen, the two groups learned of each other and joined forces.

The Internet Security Research Group, ISRG, was incorporated in May two thousand thirteen. Josh Aas and Eric Rescorla were its founding directors. The founding sponsors included Mozilla, the EFF, the University of Michigan, Cisco, and Akamai. The project they were building was called Let's Encrypt.

If HTTPS is supposed to protect everyone, then certificates should not cost money or require expertise. The entire process, from application to issuance to renewal, should be automated. Zero cost, zero human intervention, zero excuses for not encrypting.

Let's Encrypt was publicly announced on November eighteenth, two thousand fourteen, seven months after Heartbleed. The timing was not coincidental. Heartbleed had demonstrated that the code implementing TLS could fail catastrophically. The DigiNotar breach had demonstrated that the certificate authority system could be compromised. Let's Encrypt addressed a third failure mode, the economic one. If certificates were free and automated, there was no reason for any website on the internet to be unencrypted.

The first browser-trusted Let's Encrypt certificate was issued on September fourteenth, two thousand fifteen. Public service began on December third, two thousand fifteen. The protocol they used, ACME, standing for Automatic Certificate Management Environment, allowed a web server to prove it controlled a domain, request a certificate, and install it, all without human intervention. Certificates were valid for ninety days and renewed automatically. The entire interaction took seconds.

The growth was explosive. Within its first year, Let's Encrypt had issued over ten million certificates. By two thousand nineteen, it had crossed one hundred million. By two thousand twenty-three, it had issued over three hundred million certificates covering more than three hundred and sixty million domain names. The share of web traffic using HTTPS, which had hovered around forty percent in two thousand fourteen, crossed ninety percent by two thousand twenty-two.

Let's Encrypt did not do this alone. Google's decision to flag HTTP sites as "not secure" in Chrome, starting in two thousand eighteen, was a massive push. Firefox followed. Safari followed. Suddenly, if your website did not have HTTPS, browsers displayed a warning that scared away visitors. The combination of free certificates from Let's Encrypt and browser shaming from Google and Mozilla created a pincer movement. The unencrypted web was being squeezed out of existence from both sides.

What Let's Encrypt achieved was not just a technical accomplishment. It was an economic one. They took the cost of basic internet security from somewhere between fifty and several hundred dollars per year, down to zero. They automated a process that used to require command-line expertise. They made encryption the default instead of the exception. A small nonprofit, funded by donations and sponsorships, had done what the entire commercial certificate authority industry had failed to do for two decades. They made HTTPS ubiquitous.

The ninety-day certificate lifetime was controversial at first. Traditional certificates lasted one or two years. System administrators complained that ninety days meant more renewals, more chances for something to break. But the short lifetime was intentional. It forced automation. If you had to renew every ninety days, you could not do it manually. You had to set up auto-renewal. And once auto-renewal was set up, the certificate became invisible. It renewed itself, silently, indefinitely. The inconvenience was a feature. It pushed the ecosystem toward a world where certificates were managed by software, not by humans with calendars and reminder emails.

The parallel to the OpenSSL story is striking. OpenSSL was a piece of critical security infrastructure maintained by volunteers. Let's Encrypt was a piece of critical security infrastructure maintained by a nonprofit. But Let's Encrypt learned from OpenSSL's mistakes. It had institutional backing from day one. It had corporate sponsors. It had a governance structure. It had a budget. It was designed to be sustainable in a way that OpenSSL never was.

The Connection to Trust

Four episodes ago, in the story of the install command, we talked about trust. Ken Thompson's nineteen eighty-four Turing Award lecture. You cannot trust code that you did not totally create yourself. The xz-utils backdoor, discovered a decade after Heartbleed, where a patient social engineer spent three years infiltrating a project and planting a backdoor in a compression library.

OpenSSL sits at a strange intersection of those trust problems. The Heartbleed bug was not a supply chain attack. Nobody planted it deliberately. Nobody social-engineered their way into the project. Robin Seggelmann was exactly who he appeared to be, a graduate student contributing to open source in good faith. Stephen Henson was exactly who he appeared to be, an overworked volunteer doing his best. The bug was just a bug. A human error caught by no process, because no process existed.

But the damage was indistinguishable from an attack. Five hundred thousand vulnerable servers. Millions of compromised credentials. The private keys to encrypted connections, extractable by anyone who knew the trick. If a nation-state intelligence agency had deliberately planted this vulnerability, it could not have designed a more effective one. A trivially exploitable bug in the most widely deployed encryption library on the planet, sitting undetected for over two years, with no logging and no evidence trail.

And that is the deeper lesson of the OpenSSL story. The distinction between a bug and a backdoor, between a mistake and an attack, between negligence and malice, the distinction does not matter to the person whose password was stolen. The outcome is the same. The lock was broken. Whether it was broken by a locksmith who made an error or a thief who picked it, the door was open.

The code that implements trust, the actual executable instructions that encrypt your credit card number and verify your bank's identity and protect your medical records, that code was maintained on a budget of two thousand dollars a year by two people who had never met. And nobody noticed until the lock broke.

This is what episode ten was really about. Not just the install command, not just the registries, not just the dependency trees. The entire system rests on a foundation of trust in strangers. Trust that the code was written correctly. Trust that it was reviewed. Trust that the people maintaining it are who they say they are. Trust that the certificate authority vouching for your bank is itself trustworthy. Trust that the root store in your operating system has not been compromised. Trust, all the way down, resting on infrastructure maintained by volunteers and funded by donations.

And the OpenSSL story adds a layer to that trust equation that the xz-utils story did not. Heartbleed was not malicious. It was not a supply chain attack. It was not an infiltration. It was the most ordinary kind of software bug, a missing bounds check, in the most critical possible location, caught by no process because no process existed. The xz-utils attack was sophisticated, patient, and deliberately evil. Heartbleed was mundane, accidental, and just as devastating. The lesson of xz-utils is that your dependencies can be compromised by enemies. The lesson of Heartbleed is that they can be just as broken by friends who are overworked and underpaid. You do not need a villain. Exhaustion will do.

The State of the Lock

So what has changed?

OpenSSL three point oh was released in September two thousand twenty-one. It was a major rewrite. New architecture, new governance, new license. The old SSLeay plus OpenSSL dual license, a relic from nineteen ninety-eight, was replaced with the Apache two point oh license. The codebase was restructured around a provider architecture, making it modular in ways the old code never was. An OpenSSL Management Committee now governs the project with formal processes for decision-making.

The funding is better. The Core Infrastructure Initiative evolved into the Open Source Security Foundation, OpenSSF, in two thousand twenty, with broader scope and deeper pockets. The founding members of OpenSSF included Google, Microsoft, IBM, Intel, GitHub, and JPMorgan Chase. The budget was measured in millions per year, not thousands. OpenSSL itself has more contributors now than at any point in its history. The two-guys-named-Steve era is over.

But has the underlying problem been solved? The question of whether critical open source infrastructure is adequately funded, adequately reviewed, and adequately protected against both bugs and attacks?

The answer came ten years later.

In March two thousand twenty-four, almost exactly a decade after Heartbleed, a backdoor was discovered in xz-utils, a compression library used by essentially every Linux distribution. We told the full story in episode ten. A social engineer using the name Jia Tan had patiently infiltrated the project over three years, gained the sole maintainer's trust, received commit access, and planted a sophisticated backdoor that could have allowed attackers to bypass authentication on SSH connections across every major Linux distribution. It was discovered by accident, by a single developer who noticed that SSH connections were five hundred milliseconds slower than expected.

The xz-utils maintainer, Lasse Collin, had publicly disclosed his mental health struggles. The sockpuppet accounts that pressured him into accepting help had exploited that vulnerability. The project had one maintainer, no funding, and no review process. Ten years after Heartbleed. Ten years after the Core Infrastructure Initiative. Ten years after the world learned what happens when critical infrastructure is maintained by exhausted volunteers.

The CII was supposed to fix this. The OpenSSF was supposed to fix this. The hundreds of millions of dollars that major technology companies now spend on open source security were supposed to fix this. And yet xz-utils, a project that sat in the critical path of every Linux server's SSH authentication, was maintained by one person for free. The CII and OpenSSF fund high-profile projects, the ones that make headlines. But the dependency tree is deep, and the projects at the bottom of it, the small, obscure, critical ones that nobody has heard of until something goes wrong, those projects are as vulnerable as OpenSSL was in two thousand thirteen.

The lock on every door was strengthened after Heartbleed. OpenSSL itself is in better shape. Let's Encrypt made encryption the default. LibreSSL and BoringSSL provided alternatives. The certificate authority ecosystem has more oversight. Google's Certificate Transparency project, which requires all certificate authorities to publicly log every certificate they issue, makes it much harder for a compromised CA to issue fraudulent certificates undetected. The DigiNotar attack would be caught in minutes today.

But the pattern underneath, the pattern where volunteers build and maintain the infrastructure that trillion-dollar companies depend on, the pattern that has echoed through every episode of this series, that pattern has not fundamentally changed. It has been acknowledged. It has been funded, partially, selectively, reactively. It has not been solved. The internet did not learn the lesson of Heartbleed. It learned the name.

Where It Connects

Open a terminal on this machine. Type pip install anything. Watch what happens. The terminal shows package names scrolling past, progress bars filling, version numbers resolving. What you do not see is the TLS handshake happening before every single download. Your machine connects to PyPI over HTTPS. It verifies PyPI's certificate against a chain of trust rooted in a certificate authority. It negotiates an encryption key. It downloads the package through an encrypted tunnel. All of this uses OpenSSL, or one of its descendants. The certificate that PyPI presents was likely issued by Let's Encrypt. The root certificate that validates it lives in this operating system's trust store.

Every pip install is an act of trust. Trust that the code is what it claims to be. Trust that the connection is encrypted. Trust that the server is PyPI and not an impostor. Trust that the certificate authority that vouched for PyPI is itself trustworthy. And underneath all of that trust, at the bottom of the stack, cryptographic code that traces its lineage back to an Australian developer in Brisbane in nineteen ninety-five.

SSH into the server. The connection uses OpenSSH, which uses OpenSSL's libcrypto for its cryptographic operations. Every deploy alias, every rsync, every file transfer to the VPS, every time the podcast pipeline pushes a new episode to the server, all of it flows through code that Eric Young started writing three decades ago. The SSH protocol negotiates a key exchange, authenticates the server, authenticates the user, encrypts every byte in both directions. libcrypto handles the math. OpenSSL handles the protocol.

This podcast, the episode you are listening to right now, was assembled on this machine using ffmpeg, which links against OpenSSL for its HTTPS support. The TTS audio was generated by calling APIs over HTTPS. The jingle music was fetched from fal dot ai over HTTPS. The generated episode was synced to the server using rsync over SSH. The server runs nginx, which uses OpenSSL to terminate HTTPS connections from the internet. The RSS feed you are subscribed to is served over HTTPS. When your podcast app checks for new episodes, it performs a TLS handshake. When it downloads this MP3, the transfer is encrypted. OpenSSL is involved in every single step of that chain, from content generation to your ears.

Curl, the tool we covered in episode five, uses OpenSSL. Daniel Stenberg has spent decades navigating the thicket of TLS backends, but on most Linux systems, curl links against OpenSSL or one of its derivatives. Every Homebrew install, every npm install, every git clone over HTTPS, every Docker pull, every API call, all of it touches this code. Python's ssl module, the one that the requests library uses, the one that httpx uses, the one that every Python program that makes an HTTPS connection uses, wraps OpenSSL. Node.js uses OpenSSL. Ruby uses OpenSSL. Even software that claims to use a different TLS library often links against OpenSSL's libcrypto for the underlying cryptographic primitives.

The lock is on every door. It was built by an Australian developer who gave it away. It was maintained by two Steves who never met. It was broken by a graduate student who missed one line of code. And it was rebuilt, not by any single company or government, but by the same messy, underfunded, brilliant, fragile process that built it in the first place. Open source. Volunteers and donations. Trust, all the way down.

A few million a year would do grandly.

The internet never paid it. The internet just used it. And then the internet was surprised when the lock was broken. The surprise is the part that should have surprised nobody.

OpenSSL is already on your machine. Open a terminal and type openssl version. You will see the build, the version number, the date it was compiled. Now type openssl s_client dash connect example dot com colon four four three. Watch the output. You are seeing a live TLS handshake. The certificate chain, from the server's certificate up through the intermediate to the root. The cipher suite that was negotiated. The protocol version. That is the lock on every door, and you just watched it turn.

That was episode fourteen.