Compare the clock on your phone to the clock on your wall right now. Not the one on your microwave or your oven, the real clock, the one with hands or a digital face that is not connected to the internet. If you have not set it recently, it is probably several minutes off. Maybe more. Your phone, on the other hand, agrees with every other internet-connected device you own down to a few thousandths of a second. Your laptop. Your tablet. The server that just delivered this podcast to your ears. They all agree. They all tick in lockstep, quietly, invisibly, all day, every day, without anyone thinking about it.
This is not an accident. This is the result of one of the most extraordinary and least appreciated pieces of infrastructure ever built. It involves atomic clocks in underground bunkers, military satellites that were opened to the world because of a Cold War disaster, a nearly blind professor who spent his entire career obsessed with keeping machines in sync, and a vote at a palace in Versailles that decided the future of the second itself. Every financial transaction, every cell tower handoff, every encrypted connection, every database log depends on computers agreeing about what time it is. If they drift apart by more than about fifty milliseconds, things start to break. If they drift further, the damage cascades. Stock trades execute in the wrong order. Security certificates expire prematurely. Routing tables corrupt. The internet, in a very real sense, loses its mind.
This is the story of how we taught machines to agree on the time, and what happens when they cannot.
David Lennox Mills was born in Oakland, California in nineteen thirty-eight. He had glaucoma from birth. A surgeon saved some of the vision in his left eye when he was a child, but his sight was limited for his entire life. He attended a school for the visually impaired in San Mateo. His mother was a pianist. His father was an engineer. Young David inherited something from both of them, a love of rhythm and a love of systems, and these two passions fused into a lifelong obsession with clocks.
He was also a ham radio enthusiast, which might seem like a trivial detail but turns out to be central to the story. Ham radio operators are deeply familiar with the problem of synchronization. You need to know the exact time to coordinate frequencies, to schedule contacts across time zones, to decode signals that are time-stamped at their source. Mills understood, from his teenage years onward, that accurate time was not an abstract luxury. It was the foundation of reliable communication.
After earning five degrees from the University of Michigan, including a PhD in Computer and Communication Sciences in nineteen seventy-one, Mills went to work at COMSAT, a company that operated communications satellites. This was the late nineteen seventies, and the ARPANET, the military research network that would become the internet, was growing. More and more computers were connecting to each other across the country and across the Atlantic. And Mills noticed something that was driving him crazy. Their clocks did not agree.
Every computer has an internal oscillator, a quartz crystal that vibrates at a specific frequency to count off seconds. The problem is that these crystals are not perfect. They drift. Temperature changes make them speed up or slow down. Manufacturing tolerances mean no two crystals are exactly alike. Left alone, a computer's clock will wander by seconds, then minutes, then hours, diverging from every other computer around it. For most purposes in the late seventies, this did not matter much. But Mills could see where things were heading. If networks were going to carry transactions, if they were going to coordinate distributed systems, if they were going to do anything that required agreeing on the order in which events happened, the clocks had to agree.
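To put a number on that drift: crystal accuracy is quoted in parts per million, and a typical consumer quartz crystal is specified at around twenty parts per million. That sounds negligible until you multiply it out. A back-of-the-envelope sketch, using that typical spec rather than any particular device:

```python
# How fast a quartz clock can wander, from its frequency tolerance.
# 20 ppm is a typical spec for a consumer 32.768 kHz watch crystal
# (an illustrative figure, not a universal one).
PPM = 20e-6
SECONDS_PER_DAY = 86_400

drift_per_day = PPM * SECONDS_PER_DAY
print(drift_per_day)        # about 1.7 seconds per day, worst case
print(drift_per_day * 30)   # roughly 52 seconds a month
```

At that rate, a clock left alone can be nearly a minute off within a month, which is exactly the wandering wall-clock behavior Mills was looking at.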
He began working on the problem. By nineteen eighty-five, he had a protocol. He called it the Network Time Protocol, or NTP. The idea was deceptively simple. A computer sends a time-stamped message to a server, the server responds with its own timestamp, and the computer calculates both the time difference and the network delay, then adjusts its clock accordingly. But the elegance was in the details. Mills designed NTP to be fault-tolerant. A computer does not trust a single time source. It queries multiple servers, compares the results, throws out the outliers, and arrives at a best estimate. The system is layered in strata, like geological formations. Stratum zero is the actual time source, an atomic clock or a GPS receiver. Stratum one is a server directly connected to that source. Stratum two gets its time from stratum one, and so on. Each layer adds a tiny bit of uncertainty, but the hierarchy ensures that every machine on the network is anchored, ultimately, to something real.
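The heart of that exchange fits in a few lines. Here is a minimal sketch of the offset and delay arithmetic with made-up timestamps, resting on the classic NTP assumption that the network delay is roughly symmetric:

```python
# A minimal sketch of NTP's core arithmetic, not the full protocol.
# t1: client send time, t2: server receive time,
# t3: server send time,  t4: client receive time.

def ntp_offset_delay(t1, t2, t3, t4):
    """Return (clock offset, round-trip delay) in seconds.

    Assumes the one-way delay is roughly the same in each
    direction, which is the classic NTP approximation.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Scenario: the client clock is half a second behind the server,
# and each direction of the network takes forty milliseconds.
t1 = 100.000   # client sends (client clock)
t2 = 100.540   # server receives (server clock)
t3 = 100.541   # server replies (server clock)
t4 = 100.081   # client receives (client clock)

offset, delay = ntp_offset_delay(t1, t2, t3, t4)
print(round(offset, 3), round(delay, 3))   # → 0.5 0.08
```

Real NTP then layers on the stratum hierarchy, repeated sampling, and the outlier-rejection filtering described above, but every exchange bottoms out in this calculation.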
This narrative is decidedly personal, since the job description for an Internet timekeeper is highly individualized and invites very few applicants.
Mills also built the hardware. In the late seventies at the University of Maryland, he had invented something called the Fuzzball, one of the first modern routers. It was a DEC PDP-eleven computer loaded with his custom software, and he kept one under the desk in his home office in Maryland. Six Fuzzballs were used in nineteen eighty-six to build the first backbone of the NSFNET, the network that would grow into the modern internet. And NTP was running on all of them, quietly ticking away, keeping the machines in sync.
By nineteen eighty-eight, Mills had refined NTP to the point where computers that had been telling wildly different times could be synchronized to within tens of milliseconds. By the late nineties, he had pushed the accuracy to microseconds on local networks. The protocol he built, essentially alone, became one of the oldest continuously running services on the internet. It is now implemented in every major operating system. Windows. MacOS. Linux. Android. iOS. Every device you own is running NTP or a protocol descended from it. A colleague at the University of Delaware named Charles Boncelet said it simply.
Dave had a love, perhaps even an obsession, with timekeeping.
David Mills retired from the University of Delaware in two thousand eight after twenty-two years of teaching. He continued working on NTP as an adjunct professor, refining the algorithms, writing RFCs, maintaining the reference implementation. In two thousand thirteen, IEEE gave him its Internet Award for sustained contributions to quality time synchronization. On January seventeenth, twenty twenty-four, Mills died at his home in Newark, Delaware. He was eighty-five years old. Vint Cerf, the co-inventor of TCP/IP, announced the news on the Internet Society mailing list, calling Mills an iconic element of the early internet. The protocol Mills built still runs. The clocks still tick. Most people have never heard of him.
To understand why synchronizing time is so difficult, you have to understand what a second actually is. For most of human history, a second was defined astronomically. It was one eighty-six thousand four hundredth of a mean solar day. The Earth rotates, the sun crosses the sky, and we divided that cycle into hours, minutes, and seconds. This worked fine for millennia, as long as you did not need to be terribly precise.
The problem is that the Earth is not a reliable clock. Its rotation is slowing down, dragged by the gravitational tug of the Moon on the oceans. Tidal friction is converting rotational energy into heat, and the effect is measurable. The mean solar day is now roughly two milliseconds longer than it was in nineteen hundred. That does not sound like much, but it accumulates. Over decades, astronomical time and precise laboratory time drift apart by seconds. There are also short-term irregularities. Earthquakes can redistribute the Earth's mass and change the rotation speed. The twenty eleven earthquake in Japan shortened the day by about one point eight microseconds. Melting ice caps move water toward the equator, slowing the spin like a figure skater extending their arms. Even the wind exerts enough force on mountain ranges to cause measurable wobbles.
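The accumulation is simple arithmetic. If a solar day runs a couple of milliseconds longer than eighty-six thousand four hundred atomic seconds, the excess piles up day after day. A rough sketch, using two milliseconds as an illustrative figure, since the real excess fluctuates from year to year:

```python
# Rough illustration: how a small daily excess in day length
# accumulates into whole seconds. The 2 ms/day excess is an
# illustrative figure; the real value varies year to year.
EXCESS_PER_DAY = 0.002   # extra seconds of rotation per solar day

days_until_one_second = 1.0 / EXCESS_PER_DAY
print(days_until_one_second)   # → 500.0
```

Five hundred days, give or take, to drift a full second, which is why leap seconds have historically arrived every year or two rather than on any regular schedule.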
In nineteen sixty-seven, the scientific community decided they had had enough of the Earth's unreliable spinning and redefined the second. The new definition had nothing to do with astronomy. A second was now nine billion one hundred and ninety-two million six hundred and thirty-one thousand seven hundred and seventy oscillations of the radiation emitted by a cesium-133 atom transitioning between two specific energy states. This was not a number chosen for elegance. It was chosen to match the astronomical second as closely as possible at the moment of the definition, and then let atomic precision take over from there.
The devices that count those oscillations are called cesium beam clocks, and they are extraordinarily accurate. The best modern atomic clocks, which use cesium fountains and optical lattices, are precise to roughly one hundred picoseconds per day. To put that in perspective, a picosecond is to a second what a second is to about thirty-two thousand years. These clocks would not gain or lose a second for three hundred million years. There are about four hundred of them in laboratories around the world, maintained by national metrology institutes, and their outputs are averaged together by the International Bureau of Weights and Measures, known by its French initials BIPM, in a suburb of Paris. The resulting time scale is called International Atomic Time, or TAI.
But TAI has a problem. It does not care about the Earth. It just counts perfect seconds, one after another, without reference to whether the sun is overhead or not. Left alone, TAI would gradually drift away from solar time. Noon would creep later and later, by about a minute per century. For astronomers, sailors, and anyone who cares about the relationship between clocks and the sky, this is unacceptable.
The solution, introduced in nineteen seventy-two, was Coordinated Universal Time, or UTC. UTC ticks in atomic seconds, just like TAI, but every now and then an extra second is inserted, a leap second, to pull the clock back into alignment with the Earth's rotation. The decision of when to insert a leap second is made by an organization with one of the most magnificent names in all of bureaucracy: the International Earth Rotation and Reference Systems Service, or IERS, based in Paris. When the difference between atomic time and astronomical time approaches zero point nine seconds, IERS announces that a leap second will be added at the end of either June thirtieth or December thirty-first, with six months' notice.
Since nineteen seventy-two, twenty-seven leap seconds have been added. The last one was on December thirty-first, twenty sixteen. All of them have been positive, adding a second rather than removing one. In theory, a negative leap second is possible, skipping a beat if the Earth speeds up enough, but it has never happened. And every single time a leap second has been inserted, something has broken.
The leap second is supposed to be simple. At twenty-three fifty-nine and fifty-nine seconds UTC, the clock ticks to twenty-three fifty-nine and sixty, a time that ordinarily does not exist, and then rolls over to midnight. One extra tick. Sixty-one seconds in the final minute of the day. What could go wrong?
Everything, as it turns out. No commonly used operating system was originally designed to handle a minute with sixty-one seconds. The way most Unix systems dealt with it was to step the clock backward by one second at midnight, repeating the last second of the day. But this means the same second occurs twice, which creates a logical impossibility for any software that assumes time always moves forward.
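Both failure modes are easy to demonstrate. Here is a small simulation, not the actual kernel code: first, the leap second cannot even be represented in many standard time libraries; second, a clock that steps backward produces a negative duration, the logical impossibility just described:

```python
from datetime import datetime

# Part one: 23:59:60 is a legal UTC timestamp on a leap-second
# day, but Python's datetime only allows seconds 0 through 59.
try:
    datetime(2016, 12, 31, 23, 59, 60)
    representable = True
except ValueError:
    representable = False
print(representable)   # → False

# Part two: simulate a wall clock that steps back one second at
# midnight, the way many Unix systems replayed the leap second.
wall_readings = [86399.0, 86399.5, 86399.0, 86399.5]  # second repeats

durations = [b - a for a, b in zip(wall_readings, wall_readings[1:])]
print(durations)   # → [0.5, -0.5, 0.5], one "impossible" negative duration

# The defensive fix: measure elapsed time with a monotonic clock,
# which never steps backward even when the wall clock does.
import time
start = time.monotonic()
elapsed = time.monotonic() - start
assert elapsed >= 0.0
```

The broken systems were, in one way or another, all running part two: software written against the assumption that time only moves forward.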
In two thousand five, Google noticed that a number of its internal systems simply stopped working during the leap second. Services that depended on precise timestamps became confused, logs contradicted each other, and coordination between data centers stumbled. It was a contained incident, but it sent a shiver through the engineering team.
In two thousand twelve, it was everybody's turn. On June thirtieth, when the leap second was inserted, Reddit went down. So did LinkedIn, Gawker, FourSquare, Mozilla, and Yelp. Qantas Airways' entire check-in system failed, grounding planes and stranding passengers. The problem was a bug in the Linux kernel that caused the system to spin endlessly, consuming all available processing power, when the clock repeated a second. Thousands of servers around the world locked up simultaneously. The outage lasted hours for some companies. And the truly maddening part was that it had been predicted. People had warned that the kernel's leap second handling was fragile. But leap seconds happen so rarely, once every few years, and so irregularly, that fixing the bug was never prioritized.
In two thousand fifteen, it was Twitter and parts of the Android operating system. In two thousand sixteen, the last time a leap second was inserted, Cloudflare had an outage. The website security company's DNS resolver produced negative time values because its software computed a duration that spanned the leap second and arrived at an impossible negative number. Time literally went backward, and the system panicked.
These are not minor services. Cloudflare protects a significant fraction of all websites. Reddit is one of the highest-traffic sites in the world. Qantas is a major international airline. Each time, the cause was the same. A single extra second, announced months in advance, inserted at a predictable time, broke systems that were not designed to handle the discontinuity. And this kept happening because the developers who wrote the software assumed, reasonably enough, that time is continuous. That a second always follows the previous second. That midnight comes once per day.
The most precise time most of us encounter in daily life does not come from a server running NTP in a data center. It comes from space. Specifically, it comes from a constellation of roughly thirty satellites orbiting about twenty thousand two hundred kilometers above the Earth, each carrying multiple atomic clocks, and each broadcasting a continuous signal that says, in effect, I am here and the time is now. This is the Global Positioning System, and it is simultaneously a navigation system, a timing system, and the direct result of a Cold War catastrophe.
On August thirty-first, nineteen eighty-three, Korean Air Lines Flight 007 departed from Anchorage, Alaska, bound for Seoul. It was a Boeing 747 carrying two hundred and sixty-nine passengers and crew. Shortly after takeoff, the aircraft's autopilot system, which should have been set to navigate via inertial guidance waypoints along its assigned route, instead locked onto a magnetic heading and held it. The plane began to drift. Not dramatically. Not in a way the crew noticed. Just a slow, steady veering to the north and west, away from the planned track and toward the Soviet Union.
The crew did not realize they were off course. Air traffic control did not catch the error. There was no satellite-based navigation system available to civilian aircraft in nineteen eighty-three that could have shown them their exact position. The plane crossed into Soviet airspace over the Kamchatka Peninsula, home to sensitive military installations and missile test sites. Soviet radar tracked the intruder. Fighter jets scrambled. Attempts at contact, if any were made, failed. The 747 flew on, its crew relaxed and unaware, chatting in the cockpit about the flight. When the plane crossed over Sakhalin Island, still deep inside Soviet airspace, Major Gennadiy Osipovich, piloting an Su-fifteen interceptor, fired two air-to-air missiles.
The damaged plane spiraled into the Sea of Japan near Moneron Island. There were no survivors. The Soviet Union initially denied everything. President Ronald Reagan called it a massacre and a crime against humanity. It became one of the defining crises of the late Cold War, occurring just weeks before the Soviet early warning system produced a false alarm that nearly triggered a nuclear exchange. The world came closer to annihilation that autumn than most people realize.
Two weeks after the shoot-down, Reagan made an announcement that would reshape the world in ways nobody fully anticipated at the time. The United States military's Global Positioning System, then still under development and classified, would be made freely available for civilian use once it was completed.
I have determined that the GPS system will be made available for civilian use to prevent tragedies like the one that claimed two hundred and sixty-nine lives.
The full GPS constellation was not operational until nineteen ninety-five. And even then, the military deliberately degraded the civilian signal through a policy called Selective Availability, introducing intentional errors that limited civilian accuracy to about a hundred meters. It was not until May second, two thousand, when President Clinton ordered Selective Availability turned off, that civilians got the full precision of GPS. Overnight, accuracy jumped from a hundred meters to about ten. The world of smartphones, ride-hailing apps, precision agriculture, and turn-by-turn navigation followed.
But here is the part of the GPS story that most people do not know. GPS is primarily a timing system. Navigation is a byproduct. Each satellite carries multiple cesium and rubidium atomic clocks, and each one broadcasts an extremely precise time signal along with its orbital position. Your phone receives signals from at least four satellites, compares their timestamps to its own clock, and uses the differences to calculate its position through trilateration. The math only works because the time signals are accurate to billionths of a second.
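The reason the precision requirement is so brutal comes straight from the speed of light: every nanosecond of clock error becomes about thirty centimeters of position error. A two-line illustration of that scaling:

```python
# Why GPS lives or dies on nanoseconds: a ranging error equals
# the clock error multiplied by the speed of light.
C = 299_792_458.0   # speed of light in meters per second

for clock_error in (1e-9, 1e-6, 1e-3):   # 1 ns, 1 µs, 1 ms
    # ~0.3 m per nanosecond, ~300 m per microsecond, ~300 km per millisecond
    print(clock_error, C * clock_error)
```

A quartz clock drifting by milliseconds would put you in the wrong city, which is why each satellite carries atomic clocks and why the ground segment continuously monitors and corrects them.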
This means GPS satellites are also the world's most widely accessible atomic clock references. NTP servers around the world use GPS receivers as their stratum-zero time source. The satellites sync the servers, the servers sync your computer, your computer syncs its applications, and the whole chain reaches all the way back to the cesium atoms in orbit. Every time your phone shows the correct time, it is because a set of military satellites, freed by a Cold War tragedy, are raining down atomic time from space.
After the two thousand five leap second debacle, Google's engineers sat down and had a conversation that would eventually change how the entire tech industry thinks about time. The problem was clear. Inserting a sharp discontinuity into the time stream, even a single second, caused cascading failures in distributed systems. Google's infrastructure spans millions of servers across dozens of data centers worldwide, and every one of them needs to agree on the time. A one-second jump backward was intolerable.
Their solution was elegantly weird. Instead of inserting the leap second as a sudden step, they would smear it. Starting ten hours before the leap second and continuing ten hours after, Google's NTP servers would run their clocks very slightly slower than real time, stretching each second by about fourteen microseconds. Over the twenty-hour window, those tiny stretches added up to exactly one extra second. The clocks never jumped. Time never went backward. Nothing broke.
We thought about various possible approaches to the problem, and the leap smear seemed to be by far the most practical and reasonable. What we would have the computers do is very slowly smear out that one second rather than do it all of a sudden.
Google used this approach for the leap seconds in two thousand eight, two thousand twelve, two thousand fifteen, and two thousand sixteen. They later refined it to a twenty-four-hour linear smear, from noon to noon UTC, which Amazon also adopted for AWS. The frequency change during the smear is about eleven point six parts per million, which is within the manufacturing and thermal errors of most quartz oscillators. The machines cannot even tell the difference.
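The noon-to-noon linear smear is simple enough to sketch as a piecewise-linear ramp. What follows is an illustration of the published scheme, not Google's actual implementation:

```python
# Sketch of a 24-hour linear leap smear centered on a positive
# leap second at time LEAP (all times in seconds on one timeline).
SMEAR_WINDOW = 86_400.0   # 24 hours, noon to noon UTC

def smeared_offset(t, leap):
    """Extra time (0 to 1 s) already inserted by the smear at true time t."""
    start = leap - SMEAR_WINDOW / 2   # the smear begins 12 hours early
    if t <= start:
        return 0.0
    if t >= start + SMEAR_WINDOW:
        return 1.0
    return (t - start) / SMEAR_WINDOW   # linear ramp, no discontinuity

LEAP = 1_000_000.0   # an arbitrary example instant
print(smeared_offset(LEAP - 86_400, LEAP))   # → 0.0  (before the window)
print(smeared_offset(LEAP, LEAP))            # → 0.5  (halfway through)
print(smeared_offset(LEAP + 86_400, LEAP))   # → 1.0  (fully absorbed)
# Each smeared second is stretched by 1/86400, about 11.6 microseconds.
```

During the ramp the clock never jumps and never runs backward; it is simply wrong relative to UTC, by up to half a second at the midpoint.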
But the smear introduced a new problem. During those twenty-four hours, Google's clocks disagree with UTC. Not by much. At worst, by half a second. But they disagree. And different companies smear differently. Google uses a twenty-four-hour window. Meta smears over seventeen hours. Bloomberg smears over two thousand seconds after the leap. If your system talks to servers running different smear schedules, the timestamps will not match. The solution to one discontinuity created a patchwork of incompatible micro-discontinuities.
The tech industry's frustration boiled over. In twenty twenty-two, Meta published a blog post calling leap seconds a risky practice that does more harm than good. Google, Amazon, Microsoft, and others lobbied for the system to change. The argument was not just about convenience. It was about the growing dependency of critical infrastructure on precise time. High-frequency trading operates on microsecond margins. Autonomous vehicles need sub-millisecond positioning. The power grid synchronizes generators using GPS-derived time. Telecommunications networks depend on clock agreement for everything from cell tower handoffs to the encoding of 5G signals. A single mishandled leap second in the wrong system at the wrong moment could cascade far beyond a few websites going offline.
On November eighteenth, twenty twenty-two, representatives from fifty-nine nations gathered at the Palace of Versailles, west of Paris, for the twenty-seventh General Conference on Weights and Measures, the same body that defines the kilogram, the meter, and the second itself. Resolution four on the agenda concerned the future of the leap second.
The debate had been brewing for decades. Scientists and engineers had argued about it since the late nineteen nineties. Votes in two thousand twelve and two thousand fifteen had ended in deadlock, the can kicked down the road each time. But the mounting evidence of real-world harm, combined with aggressive lobbying from the technology sector, had shifted the balance.
Patrizia Tavella, the head of the time department at BIPM, had shepherded the resolution through years of negotiation. The proposal was to stop inserting leap seconds by twenty thirty-five and to allow atomic time and astronomical time to drift apart by more than one second for the first time in half a century. The discrepancy would be allowed to grow, perhaps to as much as a minute over the next fifty to a hundred years. When it eventually became large enough to matter, the solution might be a leap minute, smeared over an extended period, with no discontinuity at all.
This historic decision will allow a continuous flow of seconds without the discontinuities currently caused by irregular leap seconds.
The resolution passed. Russia voted against it, not on principle, but because Moscow wanted to push the date to twenty forty. Russia's GLONASS satellite navigation system, unlike GPS, incorporates leap seconds into its timekeeping, and updating the constellation would require new satellites and ground stations. The compromise date of twenty thirty-five accommodated most concerns while giving the industry a clear deadline.
There was one more twist. In the years leading up to the vote, the Earth had started behaving strangely. After decades of steadily slowing rotation, it began to speed up. On June twenty-ninth, twenty twenty-two, the Earth completed its shortest day on record, finishing its rotation one point five nine milliseconds ahead of schedule. If this acceleration continued, the world might face something unprecedented. A negative leap second. Removing a tick instead of adding one. No system on Earth has ever been tested against a negative leap second. The prospect terrified engineers even more than the positive ones that had already caused so much damage.
The vote at Versailles did not abolish the connection between civil time and the heavens. The resolution explicitly preserved the link between UTC and the Earth's rotation. It simply loosened the leash, allowing a larger drift before any correction is needed. The twenty-eighth General Conference, scheduled for twenty twenty-six, is expected to finalize the details, including the exact maximum divergence and the method for eventual correction.
Step back and consider the absurdity of the situation. Every computer on Earth needs to agree about what time it is, and they do this through a chain of trust that starts with atoms vibrating in sealed chambers in laboratories in Paris and Boulder and Tokyo, passes through military satellites that were freed because a plane was shot down over the wrong ocean, travels through a protocol designed by a nearly blind professor who kept a router under his desk, and ends at the quartz crystal in your pocket that drifts by milliseconds every hour and has to be constantly corrected.
The system is astonishingly fragile. NTP has no built-in authentication in most deployments. A determined attacker can feed false time to servers and cause them to drift, a technique that has been used to bypass security certificates and disrupt cryptocurrency networks. GPS signals can be spoofed. Atomic clocks can fail. The bureaucratic infrastructure that coordinates it all, BIPM, IERS, the ITU, is funded on a shoestring and staffed by researchers who could earn far more in the private sector.
And yet it works. Billions of devices, synchronized to within milliseconds, all day, every day. Financial markets execute trillions of dollars in trades with microsecond timestamps that hold up in court. Power grids spanning entire continents coordinate their alternating current in phase, because the generators all know what time it is. Cell towers hand off your phone call in mid-sentence without a glitch, because both towers agree on the timing of the signal.
The reason it works is the same reason the internet works, and the web works, and Unicode works. Small groups of people, often working for very little money and even less recognition, built systems that were more robust than they had any right to be. David Mills did not set out to synchronize the world. He just noticed that the clocks did not agree, and it bothered him. The engineers at Google did not set out to redefine the second. They just noticed that their servers kept crashing every few years. Reagan did not set out to put atomic clocks in everyone's pocket. He was trying to score a political point against the Soviet Union. The best infrastructure is almost always accidental. Built by people solving the problem in front of them, not the one history would remember.
There is a room at the National Institute of Standards and Technology in Boulder, Colorado, that contains some of the most accurate timekeeping devices ever built. They are called cesium fountain clocks, and their design builds on work honored with five Nobel Prizes in physics. They cool cesium atoms to near absolute zero using lasers, toss them upward in a fountain, and measure the frequency of their quantum transitions as they fall. These clocks are precise to about one hundred picoseconds per day. They would not drift by a second if they ran for hundreds of millions of years.
There is something both humbling and terrifying about that precision. We have built machines that measure time more accurately than the Earth keeps it. The planet we live on, with its sloshing oceans and shifting tectonic plates and slowly receding Moon, is no longer a reliable enough clock for our purposes. We have outgrown our own world's rhythm. And so we have replaced it, piece by piece, with something artificial. Something that runs on the vibrations of atoms in sealed chambers, distributed by satellites and protocols and undersea cables, maintained by committees that meet in palaces and professors who keep routers under their desks.
David Mills, the man they called the internet's Father Time, spent his final years in his home in Delaware, his vision almost entirely gone. He had watched the protocol he invented in nineteen eighty-five become so deeply embedded in the world's infrastructure that removing it would be like removing oxygen from the atmosphere. Nobody thinks about it. Nobody thanks it. It just runs. And every second, on every device, on every network, on every continent, it asks the same question it has been asking for forty years: What time is it? What time is it? What time is it?
The answer, give or take fifty milliseconds, is always the same. And that tiny margin, that invisible agreement, is the foundation of everything.