This is episode sixteen of What Did I Just Install.
In March of two thousand thirteen, at a Python conference in Santa Clara, California, a twenty-seven-year-old French-American developer named Solomon Hykes walked onto a stage and gave a five-minute lightning talk that would reshape how software gets deployed across the entire industry. He was not presenting the main product of his company. He was showing off an internal tool, a side project that his team had built to solve their own infrastructure headaches. The demo was simple. He typed a few commands. A container appeared. He installed some software inside it. He saved the state. He could now ship that exact environment to any machine running Linux, and it would behave identically. No configuration drift. No missing libraries. No mysterious failures that only happen in production.
The audience was roughly eight hundred people. By the end of the week, the video was circulating through every corner of the software development world. Within months, the side project had eclipsed the company that built it. Within two years, it had attracted over one hundred and fifty million dollars in venture capital. Within five years, its creator would be gone, the company would nearly collapse, and the tool itself would become so fundamental to modern software that most developers cannot imagine working without it. The side project was called Docker. And before we can understand what it did, we need to understand the forty-year history of the idea it made famous.
The concept that Docker rides on, isolating one piece of software from everything else running on the same machine, is older than most of the people who use it. In nineteen seventy-nine, during the development of Version Seven Unix at Bell Labs, a system call named chroot was added to the operating system. Short for "change root." Bill Joy would later carry it into BSD. What it did was deceptively simple. It told a process that its entire universe, every file, every directory, everything it could see, started at a particular folder. The process thought it was looking at the whole filesystem. It was actually locked inside a subtree. A jail cell made of directories.
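If you have a Linux machine handy, you can still feel what nineteen seventy-nine felt like. The sketch below builds a bare-bones root filesystem and locks a shell inside it. It needs root privileges, and the paths are illustrative.

```sh
# Build a tiny root filesystem containing just a shell and its libraries.
mkdir -p /tmp/jail/bin
cp /bin/sh /tmp/jail/bin/

# Copy in the shared libraries the shell needs (ldd lists them).
for lib in $(ldd /bin/sh | grep -o '/[^ )]*'); do
  mkdir -p "/tmp/jail$(dirname "$lib")"
  cp "$lib" "/tmp/jail$lib"
done

# The shell now believes /tmp/jail is the root of the filesystem.
chroot /tmp/jail /bin/sh
```

Inside that shell, listing the root directory shows only what you copied in. The rest of the machine has vanished.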
Chroot was not designed for security, and it was not really a container. A determined process could escape. But the idea was planted. What if you could give a piece of software its own little world, its own view of the filesystem, its own network stack, its own process tree, completely isolated from everything else on the machine?
In the year two thousand, FreeBSD took this idea much further with a feature called jails, developed by Poul-Henning Kamp at the request of Derrick T. Woolworth of the hosting company R and D Associates. A FreeBSD jail was a proper isolation boundary. Each jail got its own IP address, its own filesystem, its own set of running processes. The administrator could run multiple independent systems on a single physical server, each one believing it was the only thing there. Hosting companies loved this. You could sell ten customers their own "server" while actually running everything on one machine. The customer saw a complete system. The hosting company saw efficient hardware utilization. This was virtualization before virtualization had a marketing department.
Four years later, Sun Microsystems built something similar for Solaris called Zones, and it was arguably the most elegant implementation of the idea anyone had produced. Solaris Zones could run at near-native speed because they shared the host kernel rather than emulating hardware. They were real, production-grade isolation technologies used by enterprises and hosting companies throughout the two thousands. They worked. They were stable. And almost nobody outside the server administration world had ever heard of them.
The piece that changed everything came from an unlikely source. In two thousand six, two Google engineers named Paul Menage and Rohit Seth submitted a feature to the Linux kernel called "process containers," later renamed cgroups, short for control groups. Cgroups let you limit how much CPU, memory, and disk bandwidth a group of processes could consume. Combine cgroups with Linux namespaces, which had been slowly accumulating in the kernel since two thousand two, and you had the two fundamental building blocks of containerization. Namespaces controlled what a process could see. Cgroups controlled how much it could use.
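You can poke at both building blocks from a shell on a modern Linux box. A minimal sketch, assuming root privileges and cgroup v2 mounted at the usual path; the limits are illustrative.

```sh
# Namespaces control what a process can SEE. This shell gets its own
# PID namespace with a fresh /proc, so ps shows it as PID 1, alone.
sudo unshare --pid --fork --mount-proc /bin/sh

# Cgroups control how much a process can USE. Cap a group at half a
# CPU and 256 MB of memory, then move the current shell into it.
sudo mkdir /sys/fs/cgroup/demo
echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max   # 50 ms of every 100 ms
echo 268435456 | sudo tee /sys/fs/cgroup/demo/memory.max     # 256 MB ceiling
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs
```

Sight and appetite, controlled separately. Stack the two together and you have a container in everything but name.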
In two thousand eight, a project called LXC, Linux Containers, stitched these kernel features together into the first complete container management system for Linux. You could create containers, run applications inside them, and manage their lifecycle. It worked. It was functional. And it required you to understand Linux kernel internals, write configuration files by hand, and essentially be a systems administrator with years of experience. The technology existed. The usability did not. That gap between "the technology works" and "a developer can actually use it" is where Docker would park itself five years later.
Solomon Hykes grew up in France, the son of a French mother and an American father. He was a programmer and an entrepreneur who thought in systems, the kind of person who sees a problem not as a thing to fix but as a pattern to eliminate. In two thousand eight, he started a company in Paris called dotCloud with his co-founder Sebastien Pahl. The idea was a platform-as-a-service, a way for developers to deploy their applications without worrying about servers. Think Heroku, which had launched around the same time and was rapidly becoming the gold standard for painless deployment.
Hykes and Pahl applied to Y Combinator. They were rejected. They applied again, scraping together plane tickets to San Francisco with help from Pahl's father. They were rejected again. Then a YC alumnus named James Lindenbaum, the founder of Heroku itself, vouched for them. The irony is almost too neat. The founder of the platform that dotCloud was trying to compete with was the person who got them into the incubator. Hykes and Pahl were accepted into the Summer twenty ten batch, and they moved to San Francisco.
When we joined Y Combinator, we packaged container technology into dotCloud. We used containers under the hood as our differentiator. But the platform itself was the product. The containers were just how we made it work.
DotCloud launched in two thousand eleven and gained users, but it was fighting in a crowded market. Heroku had a massive head start and the backing of Salesforce, which had acquired it in two thousand ten for two hundred and twelve million dollars. Google had App Engine. Microsoft was building Azure. For a small startup with limited funding, competing on platform-as-a-service against companies with billions in resources was a slow grind toward irrelevance.
But something interesting was happening inside dotCloud. The container engine they had built to power their platform, the thing that packaged applications and their dependencies into isolated environments, was attracting more attention from the developer community than the platform itself. People who tried dotCloud kept asking about the underlying technology. How does that container thing work? Can I use it separately? Can I use it on my own servers?
This is a pattern that recurs in the history of software. Sometimes the scaffolding you build to support the actual product turns out to be more valuable than the product itself. jQuery started as a utility for another project. Rails emerged from building Basecamp. And Docker emerged from building dotCloud. The difference was that Hykes recognized the pivot point and had the courage to take it.
In March of two thousand thirteen, dotCloud open-sourced their container engine under the name Docker and presented it at PyCon. The name was chosen because of the shipping metaphor. A docker is a person who loads and unloads cargo from ships. Shipping containers revolutionized global trade in the nineteen fifties by standardizing how goods were packaged and transported. It did not matter what was inside the container, toys or televisions or tractor parts, the container itself was always the same shape, always fit the same crane, always stacked the same way on every ship, train, and truck. Docker proposed to do the same thing for software.
The logo was a whale carrying a stack of containers on its back. Friendly. Memorable. The kind of thing you put on a laptop sticker. The naming was brilliant in a way that most developer tools fail at. It was a metaphor that actually held up. When you build a Docker image, you are packing your application into a standardized container. When you ship that image to another machine, it arrives intact, with everything it needs already inside. The machine on the other end does not need to have the right version of Python installed, or the right C libraries, or the right configuration files. Everything is in the box.
We had this internal tool to manage containers, and we started getting more attention for the tool than for the platform itself. That is when we realized the real product was Docker.
The company formally pivoted. DotCloud the platform was sold. DotCloud the company renamed itself Docker Inc. Ben Golub, a veteran executive who had previously led Gluster before its acquisition by Red Hat, joined as CEO in two thousand thirteen. The plan was straightforward. Build the developer community around the open-source tool. Monetize later through enterprise products. The same playbook that had worked for Red Hat with Linux, for Cloudera with Hadoop, for MongoDB with their database. Except Docker had something those companies did not have at the same stage: momentum that looked like a rocket launch.
To understand why Docker spread so fast, you need to understand the problem it solved. And to understand that problem, you need to have deployed software to a server at least once in your life and watched it fail.
Every piece of software depends on other software. A Python application needs a specific version of Python. It needs specific libraries, each of which might need their own C extensions compiled against specific system libraries, which might need specific versions of the operating system's shared objects. This is the dependency problem, and we covered the history of how package managers try to solve it back in episode ten. Lockfiles pin your dependencies. Virtual environments isolate your Python packages. But none of these solve the deeper problem. They manage the application's own dependencies, but they do not manage the operating system underneath them.
The phrase every developer has said at least once in their career is "it works on my machine." Your laptop runs macOS. The server runs Ubuntu. Your laptop has OpenSSL one point one. The server has OpenSSL three point zero. Your laptop has the JPEG library installed because you ran brew install at some point last year. The server does not. Your application works perfectly at your desk and crashes the moment it touches production. This is not a package management problem. This is an environment problem. Docker solved it by shipping the entire environment.
A Dockerfile is a recipe. It says: start with this base operating system image. Install these system packages. Copy my application code. Install my Python dependencies. Set this environment variable. Expose this port. Run this command. When you build a Docker image from that recipe, you get a single artifact that contains everything. The operating system, the system libraries, the application code, the dependencies, the configuration. You can run that artifact on any machine that has Docker installed, and it will behave exactly the same way. Your laptop. A test server. A production cluster in a data center on the other side of the world. The box is the same everywhere.
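In written form, the recipe looks something like this. A minimal sketch, not a production Dockerfile; the application name, port, and packages are all illustrative.

```sh
# Write the recipe to a file called Dockerfile...
cat > Dockerfile <<'EOF'
# Start with a base operating system image.
FROM python:3.12-slim

# Install system packages the application needs.
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install the Python dependencies, then copy the application code.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Set an environment variable, expose a port, declare the command.
ENV APP_ENV=production
EXPOSE 8000
CMD ["python", "main.py"]
EOF

# ...and bake it into a single shippable artifact.
docker build -t myapp .
```

That myapp image is the box. Push it anywhere, run it anywhere, and the contents never shift in transit.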
It is not about what is in the box. It is about the box being the same shape everywhere it goes. That is what Docker gives you.
This was fundamentally different from what lockfiles and package managers do. Episode ten traced how Yehuda Katz invented the lockfile, how pip and npm and Cargo manage dependency trees. Those tools answer the question "which versions of my libraries should I use?" Docker answers a different question. "What should the entire world around my application look like?" It operates at a lower layer. Not the application's dependencies, but the application's universe.
Docker's adoption curve was unlike anything the infrastructure world had seen. By late two thousand thirteen, just months after the PyCon talk, thousands of developers were using it. By two thousand fourteen, the company had raised forty million dollars in a Series C from Sequoia Capital, the same firm that had backed Apple, Google, and Cisco. By two thousand fifteen, Docker had raised over one hundred and fifty million dollars at a valuation of roughly one point three billion dollars, had more GitHub stars than jQuery, more conference talks than any infrastructure tool in a generation, and the kind of buzz that made it the most talked-about startup since Heroku.
Docker Hub launched in two thousand fourteen, a year after Docker itself, and it quickly became the central registry for container images. The concept was simple and powerful. A developer could build an image, push it to Docker Hub, and anyone in the world could pull it with a single command. Official images for popular software, maintained by Docker in collaboration with the upstream projects, meant you could have a running PostgreSQL database or a Redis server in thirty seconds. Docker Hub did for containers what npm did for JavaScript packages and what PyPI did for Python. It created a shared commons, a place where the work of packaging software could be done once and reused millions of times. But like every commons in this series, it came with questions about who pays for the infrastructure and who controls the gates.
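The thirty-second claim is not an exaggeration. With Docker installed, the commons looks like this; the images below are real official ones, though the names and password are placeholders.

```sh
# A PostgreSQL server, pulled from Docker Hub and running in seconds.
docker run -d --name db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16

# A Redis server, same story.
docker run -d --name cache -p 6379:6379 redis:7

# Both are up, and nothing was installed on the host itself.
docker ps
```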
In June of two thousand fifteen, Docker made a move that would prove critical for the ecosystem but also for its own future. The company donated its container format and runtime specifications to a new organization called the Open Container Initiative, under the Linux Foundation. This was the right thing to do for the industry. Standardizing the container format meant other tools could work with Docker images without depending on Docker itself. But it also meant Docker was giving away the very foundation it was built on. This is the paradox that haunts every open-source company. The more you give away, the more the ecosystem grows, but the more the ecosystem can survive without you.
Then came the Moby incident. In April of two thousand seventeen, at DockerCon in Austin, Texas, Docker Inc. announced that the open-source Docker project was being renamed to Moby. The name was a reference to Moby Dick, the whale from Herman Melville's novel, which fit with Docker's whale mascot. But the execution was catastrophic. The GitHub repository that millions of developers had bookmarked, the one at docker slash docker, was silently renamed to moby slash moby. Links broke. Automation broke. References in documentation, blog posts, and Stack Overflow answers all pointed to a repository that no longer existed under that name. And nobody had been warned.
The community reaction was swift and furious. Developers opened issues on GitHub. Blog posts dissected the confusion. The fundamental complaint was not about the name itself but about the communication. Docker Inc. had taken a project that millions of people used and renamed it without consultation, without warning, without a migration guide. The Moby project was supposed to be the open-source upstream, while "Docker" remained the commercial brand. But in practice, nobody could explain clearly which thing was Moby and which thing was Docker and where one ended and the other began.
This is extremely confusing. Are we using Docker? Are we using Moby? Do I file my bug reports against Moby now? Why was this not discussed with the community before it happened?
Two and a half years later, in November of two thousand nineteen, a GitHub issue titled "Rename moby to docker" would accumulate hundreds of comments from developers asking for the change to be reversed. It never was. The Moby rename stands as one of the most poorly executed open-source rebranding efforts in history. Not because the idea was wrong, separating the open-source project from the commercial product is a reasonable strategy, but because the execution treated the community as an afterthought.
This was not a fork, like the ffmpeg split we covered in episode eight, where a group of developers tried to seize control and the community had to choose sides. And it was not a license change, like Redis in episode nine, where a company pulled the rug on open-source users. It was something more mundane and in some ways more damaging. It was a company that had grown so fast it forgot the community was not just a user base but a partner.
On March twenty-eighth, two thousand eighteen, Solomon Hykes published a blog post titled "Au revoir." The title was fitting. The French farewell. The man who had moved from Paris to San Francisco, who had been rejected by Y Combinator twice before getting in, who had bet his entire company on a side project, was leaving Docker.
The departure had been a slow unwinding. Hykes had stepped down as CTO the previous September, taking the title of Chief Architect and Vice Chairman of the board. His last public contribution to the Moby repository was in October of two thousand seventeen. By the time the "Au revoir" post appeared, he had been drifting away from the company for months. The blog post was gracious, diplomatic, the kind of thing a founder writes when they have been eased out rather than pushed.
Docker has been my life for the last five years. The future of Docker is in the best of hands. I am very proud of what we built together, and I am excited about what comes next, for Docker and for me.
The pattern is one this series knows well. A person builds something extraordinary. A company forms around it. The company grows, takes venture capital, hires executives, pivots toward the enterprise. And the founder, the person who saw the original vision, finds themselves increasingly irrelevant in the organization they created. TJ Holowaychuk left Node and Express for Go in episode three. Salvatore Sanfilippo left Redis in episode nine because maintenance was killing his creativity. Ryan Dahl gave Joyent everything and walked away. Solomon Hykes left Docker Inc. and eventually started a new company called Dagger, building a CI/CD pipeline tool that used, naturally, container-based workflows. The creator of Docker went back to building with containers, just not the containers everyone else was using. When asked about Docker in later interviews, Hykes was reflective rather than bitter. He spoke about the lessons of building an open-source company, about the tension between community governance and corporate control, about the difficulty of being both the technical visionary and the person the board expects to drive enterprise revenue.
Ben Golub, the CEO who had joined in two thousand thirteen and guided the company through its hypergrowth phase, had also departed. Steve Singh replaced him as CEO. The leadership team that had built Docker from a five-minute demo into a billion-dollar company was gone. What remained was a company with more than a quarter of a billion dollars in total fundraising, hundreds of employees, enterprise customers, and an open-source project that was already outgrowing its creator.
Docker solved the problem of running one container on one machine. But modern applications do not run on one machine. A real production system might need dozens or hundreds of containers, spread across multiple servers, automatically restarting when they crash, scaling up when traffic spikes, scaling down when it drops. This is the orchestration problem. And Docker tried to solve it with a product called Docker Swarm.
Swarm was integrated directly into Docker itself starting in two thousand fifteen. If you already knew Docker, you could use Swarm with minimal additional learning. It was simple, elegant, and practical for small to medium deployments. Docker bet heavily on Swarm as the natural next step for Docker users.
Google had a different idea. In June of two thousand fourteen, Google open-sourced a container orchestration system called Kubernetes, based on over a decade of internal experience running containers at Google scale. The internal system was called Borg, and it managed millions of containers across Google's global data centers. Kubernetes was Borg's philosophy distilled into an open-source project that anyone could use.
The orchestration war lasted roughly three years, from two thousand fifteen to two thousand eighteen, and it was never really a fair fight. Kubernetes was more complex than Swarm. It had a steeper learning curve, more configuration, more concepts to learn. Pods, services, deployments, ingress controllers, config maps, secrets, persistent volume claims. The vocabulary alone was intimidating. Docker Swarm, by contrast, felt like a natural extension of Docker itself. If you could run docker run, you could run docker swarm init and be orchestrating containers in minutes.
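Swarm's whole pitch fits in three commands. A sketch, assuming any machine with Docker installed; the service name is illustrative.

```sh
# Turn this Docker host into a one-node swarm manager.
docker swarm init

# Run three load-balanced nginx replicas across the swarm.
docker service create --name web --replicas 3 -p 80:80 nginx

# See the service and its replica count.
docker service ls
```

That was the entire learning curve. Kubernetes asked for a textbook; Swarm asked for a minute.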
But Kubernetes was also more powerful, more flexible, and backed by the combined engineering might of Google, Red Hat, CoreOS, and eventually the entire cloud industry. Amazon Web Services launched EKS. Microsoft Azure launched AKS. Google Cloud had GKE from the beginning. The Cloud Native Computing Foundation, formed in two thousand fifteen with Kubernetes as its flagship project, became the center of gravity for the entire container ecosystem. Every major infrastructure vendor rallied behind Kubernetes. The conference circuit was wall-to-wall Kubernetes talks. The job postings all said "Kubernetes experience required." Docker Swarm, for all its elegance, did not have an army behind it.
By two thousand eighteen, Kubernetes had won. Docker Inc. itself acknowledged this by adding native Kubernetes support to Docker Desktop. The company that had made containers mainstream was now shipping its competitor's orchestration tool inside its own product. The numbers are stark. Kubernetes holds roughly ninety-two percent of the container orchestration market. Docker Swarm holds about two and a half percent.
Docker Swarm continues to serve its users well for simpler deployments. We believe developers should have the choice of orchestration tool that best fits their needs. That is why Docker Desktop now supports Kubernetes natively.
There is an irony here that cuts deep. The container runtime that Kubernetes actually uses under the hood is called containerd. And containerd was originally built by Docker. In two thousand seventeen, Docker extracted it from its own codebase and donated it to the Cloud Native Computing Foundation. Docker built the engine. Docker gave the engine away. Kubernetes took the engine and used it to win the war. Docker's own technology powers the system that replaced Docker's own orchestration product. The open-source paradox, again.
By two thousand nineteen, Docker Inc. was in trouble. The enterprise business was not growing fast enough to justify the burn rate. The string of executive departures, first Hykes, then Golub, then a series of leadership changes, had created organizational instability. The open-source community was increasingly indifferent to Docker Inc.'s commercial ambitions. Kubernetes had captured the enterprise container market. The company had raised roughly two hundred and seventy million dollars and was running out of runway.
On November thirteenth, two thousand nineteen, Docker Inc. announced that it was selling its Enterprise Platform business to Mirantis, a company specializing in Kubernetes and OpenStack infrastructure. The deal included Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane, and the enterprise CLI. The financial terms were not disclosed. What Docker kept was Docker Desktop, Docker Hub, and the open-source project. What Docker lost was its entire enterprise revenue stream.
This is a new chapter for Docker. We are refocusing the company on the developer experience. Docker Desktop and Docker Hub are where the future is.
The aftermath was brutal. Docker raised thirty-five million dollars in emergency funding from Benchmark Capital and Insight Partners. The company shrank to roughly one hundred employees, down from a peak of over five hundred. Mirantis laid off forty percent of the Docker Enterprise team it had just acquired. Scott Johnston became CEO of the new, much smaller Docker Inc. The company that had been valued at over a billion dollars, that had raised more than a quarter of a billion dollars from some of the most prestigious venture capital firms in Silicon Valley, was now a fraction of its former self, desperately pivoting to survive. This was not a graceful transition. This was a company selling its organs to stay alive.
But Docker did not die. The company that remained was smaller, leaner, and ruthlessly focused on a single bet. Developers need containers. Developers use Docker. If Docker can be the tool that developers reach for every single day, there is a business in that. The question was whether "free and beloved" could become "paid and still beloved," or whether charging money would drive developers to the alternatives that were already circling.
In August of two thousand twenty-one, the company announced a new license for Docker Desktop, taking effect at the end of January two thousand twenty-two. Docker Desktop remained free for personal use, education, and small businesses, but companies with more than two hundred and fifty employees or more than ten million dollars in annual revenue now had to pay. Five dollars per user per month for the Pro tier. Seven for Team. Twenty-one for Business. This was the move that saved Docker Inc. Not because the pricing was aggressive, but because Docker Desktop had become so deeply embedded in developer workflows that most companies simply paid. The alternative was migrating every developer to a different container tool, and the switching cost was higher than the subscription.
In two thousand twenty-two, Docker raised another hundred and five million dollars in a round led by Bain Capital Ventures, at a valuation north of two billion dollars that suggested the market believed in the pivot. The company had gone from building a container platform to building a developer tools company. Docker Hub reportedly processes over thirteen billion image pulls per month. Docker Desktop sits on the machines of an estimated twenty million developers worldwide. The Docker Hub registry hosts over fourteen million repositories. If you have ever pulled an nginx image, a postgres image, a node image, or a python image to spin up a development environment, you have used Docker Hub. It is the npm registry of the container world, and like npm, its sheer gravitational pull makes it nearly impossible to displace.
Let us do what this series always does and look at what Docker actually depends on. When you pull and run a Docker image, you are interacting with a stack of components that Docker originally built and then systematically gave away.
At the bottom sits the Linux kernel, providing the cgroups and namespaces that make containers possible. These are not Docker's creation. They are Linux features that predate Docker by years. Docker's initial contribution was LXC integration, wrapping those kernel features in a usable interface. In two thousand fourteen, Docker replaced LXC with its own container runtime library called libcontainer, written in Go. This was the first time Docker fully controlled the runtime layer.
Libcontainer was then extracted, cleaned up, and donated to the Open Container Initiative as runc, a standardized low-level container runtime. On top of runc sits containerd, Docker's higher-level container runtime that manages the full container lifecycle, image transfers, storage, and container execution. Containerd was donated to the Cloud Native Computing Foundation in two thousand seventeen. Above containerd sits the Docker daemon and CLI, the parts that developers actually interact with. And wrapping everything together is Docker Compose, which started life as a separate tool called Fig, built by a small company called Orchard Laboratories that Docker acquired in two thousand fourteen.
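On a Linux host you can see these layers sitting on top of each other. A quick sketch; the exact output varies by Docker version.

```sh
# docker info reports runc as the default runtime and lists the
# containerd version it is riding on.
docker info | grep -iE 'runtime|containerd'

# The daemon and containerd run as separate processes.
ps -e | grep -E 'dockerd|containerd'
```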
The dependency tree, in other words, is largely Docker's own children that it has emancipated. runc is Docker's code, running under the OCI's roof. containerd is Docker's code, running under the CNCF's roof. The only external dependency that truly matters is the Linux kernel itself. On macOS, where there is no Linux kernel, Docker Desktop runs a lightweight Linux virtual machine under the hood, in recent versions built on Apple's Virtualization framework. This is why Docker on a Mac has always felt slightly different from Docker on Linux. There is an entire operating system running invisibly between your terminal and your containers.
The macOS situation has spawned an entire ecosystem of alternatives. When Docker announced the Desktop licensing change in two thousand twenty-one, developers who did not want to pay went looking for options. Colima wraps Lima, a lightweight Linux VM manager, and provides Docker-compatible commands without Docker Desktop. OrbStack is a commercial alternative that is significantly faster than Docker Desktop on Apple Silicon, because it uses a custom-built virtual machine engine optimized specifically for macOS rather than the general-purpose approach Docker Desktop takes. Podman, Red Hat's container tool, runs containers without a central daemon at all, which means no background process eating memory when you are not actively running containers. Rancher Desktop, backed by SUSE, offers another open-source path. The licensing change, which was intended to generate revenue, inadvertently created an entire market of competitors. On countless VPS instances around the world, tools like Uptime Kuma, the monitoring dashboard, run inside Docker containers orchestrated by Docker Compose, which is the way most small self-hosted services get deployed in the two thousand twenties.
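To make one of those alternative paths concrete, here is roughly what the Colima route looks like on a Mac. A sketch assuming Homebrew is installed; these are the documented steps, though versions and defaults drift.

```sh
# Colima manages the Linux VM; the docker formula is just the CLI.
brew install colima docker

# Boot a lightweight Linux VM via Lima and point the CLI at it.
colima start

# The familiar commands now work, with no Docker Desktop in sight.
docker run --rm hello-world
```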
Docker did not just change how software gets deployed. It changed how software gets designed. The microservices architecture movement, the idea of breaking a monolithic application into dozens of small, independently deployable services, exploded in popularity at exactly the same time Docker did, and this was not a coincidence. Microservices had been theoretically possible before Docker. But practically, deploying and managing fifty separate services, each with its own dependencies and configuration, was a nightmare without containers. Docker made it feasible. You could put each service in its own container, give it its own dependencies, wire them together with Docker Compose, and suddenly the microservices architecture that Netflix and Amazon had pioneered with massive internal infrastructure was accessible to a team of five.
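Here is the shape of that five-person version, sketched as a Compose file. The service names, images, and credentials are all illustrative.

```sh
cat > docker-compose.yml <<'EOF'
services:
  api:
    build: ./api
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
EOF

# Each service gets its own container, its own dependencies, and a
# hostname other services can reach it by.
docker compose up -d
```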
Whether microservices were actually a good idea for most teams is a debate that continues to rage. Many companies that adopted microservices in the Docker era spent years dealing with the distributed systems complexity they introduced, the network latency, the debugging difficulty, the deployment coordination. Some have since moved back toward monoliths, or to a middle ground they call "modular monoliths." But the architectural shift that Docker enabled, or perhaps encouraged, reshaped how an entire generation of software engineers thinks about building systems.
Here is the meta-narrative underneath Docker's story. Every technology in this series exists on a spectrum between two poles. On one end, tools that solve a narrow technical problem elegantly, like left-pad padding a string or Beautiful Soup parsing HTML. On the other end, tools that reshape how the entire industry thinks about building software. Docker is firmly at the second pole, but not because of any technical innovation it invented.
Docker did not create containers. Chroot was nineteen seventy-nine. FreeBSD jails were two thousand. LXC was two thousand eight. Docker did not create container orchestration. Kubernetes won that war. Docker did not create the container image format. It donated that to the OCI. What Docker created was a user experience. It made containers accessible to a normal developer. Before Docker, using containers meant understanding kernel namespaces, writing LXC configuration files, and being comfortable with low-level Linux internals. After Docker, using containers meant writing a Dockerfile and typing docker build.
This is the same pattern we saw with curl in episode five. Daniel Stenberg did not invent HTTP or FTP or any of the protocols curl supports. He built the interface that made them usable. Kenneth Reitz did not invent HTTP requests in episode two. He made them human. Docker did not invent containers. It made them accessible. And accessibility, it turns out, is what separates a technology that exists from a technology that changes the world.
The shipping container metaphor is not just marketing. It is historically accurate in a way that illuminates Docker's real contribution. Before the standardized shipping container was introduced in the nineteen fifties by a trucking magnate named Malcolm McLean, global trade existed but was brutally inefficient. Loading and unloading cargo ships was done by hand, crate by crate, by dockworkers. Different ports had different equipment. Different ships had different hold dimensions. The standardized container did not change what was being shipped. It changed how it was shipped. The container itself was boring. A steel box. But that boring steel box eliminated an entire category of friction from the global economy.
Docker is a boring steel box for software. And boring steel boxes change the world.
Docker probably sits on your machine right now. Maybe not Docker Desktop, which costs money for commercial use, but perhaps Colima, OrbStack, or Podman, one of the open-source alternatives that provide Docker-compatible container management. On a typical VPS, Docker runs natively on Linux. A monitoring dashboard like Uptime Kuma runs inside a Docker container managed by Docker Compose, watching whether all your other services are still alive. It might be the only service on the server that runs in a container rather than directly on the host, and that is precisely the point. Uptime Kuma is a Node.js application with its own dependencies and its own expectations about the system. Docker means you never have to think about whether those expectations conflict with anything else on the server.
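That Uptime Kuma setup is typically a few lines of Compose. A minimal sketch; the image tag, port, and data path follow the project's documented defaults, but check them before relying on this.

```sh
cat > docker-compose.yml <<'EOF'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - kuma-data:/app/data
volumes:
  kuma-data:
EOF

docker compose up -d
```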
The irony is that many Python applications do not run in Docker at all. They run directly on a VPS as systemd services. The code syncs to the server via rsync. The APIs are served by FastAPI, which we covered in episode twelve, running behind nginx with TLS certificates from Let's Encrypt, which we covered in episode fourteen. Every layer of a modern deployment stack has its own story, its own creator, its own history of near-collapse and unexpected survival.
Docker's story is the story of Solomon Hykes, a French-American kid who got rejected from Y Combinator twice, built a platform nobody remembers, pivoted to a tool everyone uses, renamed it badly, left the company, watched it nearly die, and then watched it survive by charging for the thing it used to give away. It is also the story of a forty-year-old idea, from Bill Joy's chroot in nineteen seventy-nine to Google's cgroups in two thousand six, that needed a whale-shaped logo and a five-minute demo to finally reach the people who needed it most.
The box is the same shape everywhere it goes. That is what Docker gives you. And that sentence, as simple as it sounds, changed how software gets built, shipped, and run across the entire planet.
If Docker is installed on your machine, open a terminal and type docker run dash it alpine sh. In a few seconds you will be inside a running Linux container. Type cat slash etc slash os-release and you will see that you are inside Alpine Linux, a tiny distribution that exists only in this container. Type exit and it is gone. The entire world you were just standing in has vanished. That is the container concept in its purest form, an isolated world that appears and disappears on command. Try it once and the forty years of history in this episode will make visceral sense.
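In written form, that experiment is three lines:

```sh
docker run -it alpine sh   # a few seconds later: a shell inside Alpine Linux
cat /etc/os-release        # proof you are in a tiny, separate distribution
exit                       # and the whole world vanishes
```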
That was episode sixteen.