Somewhere in the Atlantic Ocean right now, roughly four thousand meters below the surface, in water so dark and cold and pressurized that most life cannot survive, there is a cable about as thick as a garden hose. It is sitting on the ocean floor, or buried a meter or two beneath the sediment, running in a more or less straight line from a nondescript building on the coast of Virginia Beach to another nondescript building on the coast of Bilbao, Spain. Inside that cable are strands of glass thinner than a human hair, and through those strands, pulses of light are carrying hundreds of terabits of data per second. Your email. Your video call. Your stock trade. Your streaming movie. The signal crosses the Atlantic in about sixty milliseconds, less time than it takes you to blink.
This cable is one of roughly five hundred and fifty active submarine cables currently lying on the ocean floor worldwide, collectively spanning over one point four million kilometers. They carry approximately ninety-five percent of all intercontinental data. Not satellites. Not radio. Glass fibers in polyethylene-sheathed tubes on the ocean floor, connected at each end to concrete buildings that look like electrical substations and are often located in unremarkable coastal neighborhoods where the residents may not even know what happens inside.
The internet feels like it is in the air. It feels wireless, weightless, somewhere up in the cloud. This is a magnificent illusion. The internet is one of the most physical things human beings have ever built. It is glass and copper and concrete and steel. It is ships and shovels and trenches and vaults. It has a shape. It has chokepoints. It has vulnerabilities that would terrify you if you understood them. And the map of how it all connects, from the cables on the ocean floor to the routers in the exchange points to the root servers that translate every name you type into a number a machine can find, is one of the most remarkable and fragile engineering achievements in history.
There are fewer than seventy ships in the world capable of laying or repairing submarine cables. To put that in perspective, there are more aircraft carriers in the world's navies than there are cable ships in the world's oceans. These are specialized vessels, purpose-built or heavily modified, equipped with massive spools of cable, dynamic positioning systems that let them hold station in rough seas, and remotely operated vehicles that can work at depths of several thousand meters. They are crewed by engineers and technicians with skills so specialized that the labor market for cable ship crew is one of the tightest in the maritime industry.
Most of these ships were built around the year two thousand, during the dot-com boom, when investors were pouring money into new transatlantic and transpacific cables. When the boom collapsed, the orders for new ships dried up. Between two thousand four and two thousand ten, not a single new cable ship was delivered. Only five were built in the following decade. The fleet is aging. Roughly two-thirds of all cable maintenance vessels will reach the end of their service lives within the next fifteen years. One ship, the Finnish Telepaatti, was built in nineteen seventy-eight and is still technically in service.
You cannot just walk in and purchase ship time. Slots in the repair fleet's schedules are arranged well in advance, and when something breaks unexpectedly, there is very little flexibility.
When a cable breaks, and roughly two hundred breaks happen worldwide every year, someone has to find the fault, sail a repair ship to the location, haul the cable up from the ocean floor, splice in a new section, test it, and lower it back. In shallow water, divers can handle the work. In the deep ocean, everything is done by robot. The average repair takes ten to twenty days. But that assumes a ship is available. During the March twenty twenty-four cable outages off West Africa, the Léon Thévenin, the only dedicated repair vessel serving the African coast, was already committed to another job, and repairs were delayed by weeks. Four cables were damaged simultaneously by an underwater landslide in a canyon off Côte d'Ivoire, and there simply were not enough ships in the region to fix them all at once.
The cables themselves are astonishing objects. Without its armor, a deep-water cable is about twenty millimeters in diameter, roughly the size of a garden hose. With protective armor, which is added in shallow water where anchors and fishing trawlers pose the greatest risk, the cable might be fifty millimeters across. Inside are optical fiber pairs, surrounded by layers of steel wire, polyethylene, copper for power delivery to the inline amplifiers called repeaters, and waterproofing. A modern cable like MAREA, which runs from Virginia to Spain and was completed in twenty eighteen, can carry two hundred and twenty-four terabits per second. The cables are designed for a twenty-five-year lifespan, though many are pushed beyond that.
Fishing trawlers and ship anchors cause about two-thirds of all cable faults. Not malice. Not natural disasters. Fishermen dragging heavy gear across the ocean floor, snagging a cable, and either breaking it outright or damaging the armor enough that saltwater seeps in and corrupts the signal. This happens more than a hundred and fifty times a year. In the Atlantic alone, there are more than fifty repairs annually. Earthquakes, underwater landslides, and volcanic activity account for another ten percent. Sharks once had a reputation for biting cables, attracted by the electromagnetic fields, though there is no documented evidence of shark-related damage in modern cables. The real threats are far more mundane, and far more constant.
The cables come ashore at buildings called cable landing stations. There are hundreds of them around the world, typically located in coastal towns chosen for their proximity to both the ocean and the terrestrial fiber networks that carry traffic inland. From the outside, they look like small industrial buildings. Some have no signs at all. Inside, the submarine cables terminate, the optical signals are amplified and regenerated, and the traffic is handed off to the land-based networks that carry it to data centers and exchange points.
The real magic happens at the Internet Exchange Points, or IXPs. These are the places where different networks physically meet and exchange traffic. Think of them as town squares for the internet. If your data needs to travel from one network to another, from your mobile carrier to the server farm hosting your streaming service, for example, it very likely passes through an IXP along the way.
The largest IXP in the world is DE-CIX in Frankfurt, Germany, which regularly handles more than fourteen terabits per second of peak traffic. The second largest is AMS-IX in Amsterdam. London has LINX. São Paulo has IX.br. Nairobi has KIXP. There are now over nine hundred IXPs operating in more than a hundred countries. They are run by nonprofits, by commercial operators, by consortiums of internet service providers, and occasionally by governments. Some occupy entire floors of data centers. Others fit in a single rack of equipment.
The principle is simple but powerful. Without an IXP, if your data needs to get from Network A to Network B, it might have to travel through Network C and Network D first, potentially crossing oceans and continents before doubling back. This adds latency, costs money, and wastes bandwidth. At an IXP, Network A and Network B can connect directly, handing traffic across a switch in the same room. This is called peering, and it is the reason the internet is fast enough to feel instant even though it is made of physical wires stretched across the planet.
The location of IXPs is not random. They cluster in cities with dense fiber connectivity, reliable power, political stability, and a critical mass of networks willing to peer. Frankfurt became a hub partly because of Germany's central location in Europe and partly because of the legacy of the Cold War, when West Germany invested heavily in telecommunications infrastructure. Amsterdam benefited from the Netherlands' tradition of open networking and its position as a transatlantic cable landing point. In Africa and South Asia, the growth of IXPs over the last decade has been transformative. Before Kenya's KIXP, most African internet traffic had to leave the continent entirely, routing through Europe or the United States before coming back. An email from Nairobi to Lagos might travel through London. The establishment of local exchange points kept that traffic on the continent, slashing latency and costs.
Every time you type a web address into your browser, a system called the Domain Name System, or DNS, translates that human-readable name into the numeric IP address that the network actually uses. This translation happens billions of times per day, and it starts at the top of a hierarchy so fundamental to the internet's operation that it is sometimes called the phone book of the internet, though that metaphor barely does it justice.
At the top of the DNS hierarchy are the root servers. There are exactly thirteen root server addresses, labeled A through M. This number is not arbitrary. It comes from the size of a single DNS response packet, which historically had to fit within five hundred and twelve bytes, the original limit for a response carried over UDP, to work reliably across all networks. Thirteen was the maximum number of server addresses that could be listed in one response.
But thirteen addresses does not mean thirteen physical machines. Through a technique called anycast, where the same IP address is advertised from multiple locations simultaneously and traffic is routed to the nearest one, those thirteen addresses are served by well over a thousand actual servers distributed across more than a hundred and thirty countries. When your browser queries a root server, the response comes from whichever physical instance is closest to you on the network, not necessarily closest geographically.
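You can see this for yourself. Many root server operators answer a conventional identification query, a TXT record for the name hostname.bind in the CHAOS class, with the name of the particular anycast instance that handled your request. The sketch below is one way to ask, assuming the third-party dnspython library and K-root's published address; run it from networks in different places and the same address will usually answer with different instance names.

```python
# A minimal sketch, assuming the third-party dnspython package (pip install dnspython).
# Many root operators answer a "hostname.bind" TXT query in the CHAOS class with the
# name of the specific anycast instance that handled your request.
import dns.message
import dns.query
import dns.rdataclass
import dns.rdatatype

K_ROOT = "193.0.14.129"  # K-root's published IPv4 address, operated by RIPE NCC

# Build the identification query and send it over UDP.
query = dns.message.make_query("hostname.bind", dns.rdatatype.TXT,
                               rdclass=dns.rdataclass.CH)
response = dns.query.udp(query, K_ROOT, timeout=3)

# Print whatever the instance says about itself. Run this from different
# networks and the same IP address will usually name different instances.
for rrset in response.answer:
    for rdata in rrset:
        print(rdata.to_text())
```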
The root servers are operated by twelve different organizations, a mixture of government agencies, universities, military research labs, nonprofits, and private companies. The A root is run by Verisign. The B root by the University of Southern California's Information Sciences Institute, the same institution where Jon Postel once kept the internet's address book on scraps of paper. The D root by the University of Maryland. The F root by Internet Systems Consortium. The K root by RIPE NCC, the European internet registry, based in Amsterdam. And so on. No single organization controls all thirteen. That distribution of control was deliberate, ensuring that no single failure, no single government order, no single corporate decision could take down the entire naming system.
The root servers do not actually know the IP address of every website. What they know is where to find the authoritative servers for each top-level domain, dot-com, dot-org, dot-uk, dot-se, and so on. When your browser asks a root server for the address of a website, the root says I do not know, but here is who does, and points you to the next level down. That server points you further down, and so on, until you reach the server that has the actual answer. The entire lookup, which may involve several round trips across the network, typically completes in under a hundred milliseconds.
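If you want to watch that conversation happen, the sketch below performs the same walk by hand, assuming the third-party dnspython library. It starts at one root server address and follows each referral until it reaches an answer; a real resolver adds caching, retries, fallback to TCP, and DNSSEC validation, none of which appears here.

```python
# A stripped-down iterative lookup, assuming the third-party dnspython package.
# It starts at a root server and follows referrals downward, exactly the
# conversation described above, minus caching, retries, and DNSSEC.
import dns.message
import dns.query
import dns.rdatatype

A_ROOT = "198.41.0.4"  # a.root-servers.net

def resolve(name: str, server: str = A_ROOT) -> str:
    while True:
        response = dns.query.udp(
            dns.message.make_query(name, dns.rdatatype.A), server, timeout=3)

        # If this server has the answer, we are done.
        for rrset in response.answer:
            if rrset.rdtype == dns.rdatatype.A:
                return rrset[0].address

        # Otherwise it is a referral: "I do not know, but here is who does."
        # Prefer glue addresses from the additional section when present.
        glue = [rr[0].address for rr in response.additional
                if rr.rdtype == dns.rdatatype.A]
        if glue:
            server = glue[0]
            continue

        # No glue: look up one of the referred nameservers by name, from the top.
        ns_names = [ns.target.to_text() for rr in response.authority
                    if rr.rdtype == dns.rdatatype.NS for ns in rr]
        server = resolve(ns_names[0])

print(resolve("example.com"))
```

Run it and the answer comes back after a couple of referrals, the same walk your resolver performs constantly and caches aggressively.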
This system works so well that most people have no idea it exists. It just looks like you typed a name and a page appeared. Behind that appearance is a distributed database that is queried trillions of times per day, anchored by thirteen addresses served by a thousand machines in a hundred and thirty countries, operated by a dozen organizations with no common boss. And it was originally maintained by one man in sandals who kept the records on pieces of paper.
If the DNS is the internet's phone book, the Border Gateway Protocol, or BGP, is its postal service. BGP is the routing protocol that tells networks how to reach each other. Every time your data packet leaves your home network and travels across the internet, BGP is what determines the path it takes, hop by hop, from your network to the destination network, across all the intermediate networks in between.
The internet is not one network. It is a network of networks, roughly seventy-five thousand autonomous systems, each independently operated. Your home ISP is one autonomous system. Google is another. Amazon is another. A university in Sweden is another. BGP is the protocol through which these autonomous systems announce their existence and their connectivity to each other. Each system says, in effect, I can reach these addresses, and I am connected to these neighbors. The routers at the borders of each system listen to these announcements, compare them, choose the best path to each destination, and forward traffic accordingly.
The astonishing thing about BGP is that it is built entirely on trust. When an autonomous system announces that it can reach a particular block of IP addresses, the rest of the internet believes it. There is no built-in verification. No cryptographic proof. No central authority that checks whether the announcement is legitimate. If a small ISP in Florida announces that it can reach every address on the internet, and its neighbors do not filter that announcement, the entire internet will start sending traffic to Florida. This is not a theoretical vulnerability. It has happened.
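Reduced to a toy, and ignoring almost everything real routers actually weigh, local preference, origin type, tie-breakers and more, the decision looks something like the sketch below. The AS numbers and the prefix are made up from reserved example ranges. The point is what is missing: nothing in the selection asks whether the network at the end of the path has any right to the addresses it claims.

```python
# A toy model of route selection, using made-up AS numbers and a prefix from a
# reserved documentation range. Real BGP weighs local preference, origin type,
# MED and more before path length; this sketch keeps only the shortest AS path.
from dataclasses import dataclass

@dataclass
class Announcement:
    prefix: str             # the address block being announced
    as_path: list[int]      # ASes the announcement has passed through, origin last

def best_route(announcements):
    # Shortest AS path wins. Note what is absent: nothing here asks whether the
    # origin AS at the end of the path is authorized to announce the prefix.
    return min(announcements, key=lambda a: len(a.as_path))

heard_from_neighbors = [
    Announcement("203.0.113.0/24", as_path=[64500, 64496]),         # two networks away
    Announcement("203.0.113.0/24", as_path=[64501, 64502, 64499]),  # a different, unverified origin
]
print(best_route(heard_from_neighbors))
```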
The most famous incident occurred on February twenty-fourth, two thousand eight. The government of Pakistan had ordered all internet service providers in the country to block access to YouTube because of a video it deemed offensive. Pakistan Telecom, one of the state-owned providers, decided to implement the block using BGP. Instead of simply dropping packets destined for YouTube at their own border, they announced a more specific route for YouTube's IP addresses, a slash-twenty-four covering part of YouTube's address space, and pointed it at a null interface, a black hole. This would have worked perfectly if the announcement had stayed inside Pakistan's network.
It did not stay inside Pakistan's network.
Pakistan Telecom's upstream provider, PCCW in Hong Kong, did not validate the announcement. PCCW accepted the route and forwarded it to its own peers and transit providers. Within minutes, the false route had propagated across the entire internet. Because Pakistan Telecom's announcement was more specific than YouTube's own, covering a smaller, more precise block of addresses, routers everywhere preferred it. YouTube traffic from users on every continent was suddenly being sent to Pakistan, where it vanished into a black hole. YouTube was offline for roughly two hours for most of the world.
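The preference for more specific routes is not a quirk of that incident. It is how packet forwarding works everywhere, a rule called longest-prefix match. The sketch below uses Python's standard ipaddress module and stand-in private address blocks rather than the real two thousand eight prefixes, and shows how a newly announced, more specific route silently captures traffic that a broader, legitimate route still covers.

```python
# Longest-prefix match with the standard ipaddress module. The blocks below are
# private-range stand-ins, not the real 2008 prefixes: a broad legitimate route
# and a bogus, more specific one covering part of the same space.
import ipaddress

routing_table = {
    ipaddress.ip_network("192.168.0.0/22"): "legitimate origin",
    ipaddress.ip_network("192.168.1.0/24"): "hijacker",
}

def lookup(address: str) -> str:
    ip = ipaddress.ip_address(address)
    matches = [net for net in routing_table if ip in net]
    # The most specific covering route, i.e. the longest prefix, wins.
    return routing_table[max(matches, key=lambda net: net.prefixlen)]

# Everything inside the /24 now goes to the hijacker, even though the broader
# /22 announcement never went away.
print(lookup("192.168.1.50"))   # hijacker
print(lookup("192.168.2.50"))   # legitimate origin, outside the hijacked /24
```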
I became interested in this immediately as I was concerned that I would not be able to spend my evening watching imbecilic videos of cats doing foolish things, even for a cat.
YouTube's engineers fought back. Within eighty minutes they began announcing their own matching slash-twenty-four routes to compete with Pakistan's false ones. Pakistan Telecom tried prepending their autonomous system number to make their route look less attractive, but it did not help much. The problem was finally resolved when PCCW manually blocked Pakistan Telecom's announcements for the hijacked addresses. The total outage lasted about two hours. But for those two hours, a censorship order in Islamabad had accidentally broken one of the most popular websites on Earth.
This was not the first BGP incident and it was not the last. In nineteen ninety-seven, a small ISP in Florida accidentally announced routes for a massive portion of the internet, partitioning Sprint's entire network from the rest of the world. In twenty thirteen, researchers discovered a BGP-based man-in-the-middle attack originating in Belarus that was redirecting traffic from major US financial institutions and government networks. In twenty seventeen, Russian state telecom Rostelecom leaked routes for financial institutions through what may or may not have been an accident. In twenty twenty-two, during Russia's invasion of Ukraine, Russian telecoms intentionally hijacked BGP routes for Twitter, and the hijack leaked out onto the global internet, briefly affecting Twitter access in other countries.
The internet's routing system is, in a very real sense, held together by gentlemen's agreements. Networks are supposed to filter their customers' BGP announcements, accepting only routes that the customer is authorized to originate. But many do not. The tools exist. RPKI lets networks cryptographically validate the origin of a route, and routing registries built on RPSL let them generate filters, but adoption of both has been slow and uneven. Decades after the first catastrophic BGP incident, the fundamental vulnerability remains. Any network, anywhere in the world, can announce that it is the rightful destination for any addresses on the internet, and if its neighbors are not paying attention, the rest of the world will believe it.
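The heart of RPKI, route origin validation, is conceptually simple even though the real system rests on certificates published by the regional internet registries. A signed Route Origin Authorization states which autonomous system may originate a prefix and how specific its announcements may be. The sketch below is a toy version of that check, with made-up AS numbers, hard-coded authorizations, and none of the actual cryptography.

```python
# A toy version of RPKI route origin validation. A ROA says: this origin AS may
# announce this prefix, down to max_length. Real validators verify signed objects
# fetched from registry repositories; here the authorizations are hard-coded.
import ipaddress
from dataclasses import dataclass

@dataclass
class Roa:
    prefix: ipaddress.IPv4Network
    max_length: int
    origin_as: int

ROAS = [Roa(ipaddress.ip_network("192.168.0.0/22"), max_length=22, origin_as=64496)]

def validate(prefix: str, origin_as: int) -> str:
    announced = ipaddress.ip_network(prefix)
    covering = [r for r in ROAS if announced.subnet_of(r.prefix)]
    if not covering:
        return "not-found"   # no ROA covers this space
    for roa in covering:
        if origin_as == roa.origin_as and announced.prefixlen <= roa.max_length:
            return "valid"
    return "invalid"         # covered by a ROA, but wrong origin or too specific

print(validate("192.168.0.0/22", 64496))   # valid: the authorized origin
print(validate("192.168.1.0/24", 64511))   # invalid: too specific and wrong origin
```

Had checks like this been widely deployed, and had the victim's address space been covered by an authorization, routers could have marked the hijacked more specific route invalid and dropped it.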
Stand back and look at the whole picture. Submarine cables stitching continents together, coming ashore at cable landing stations in coastal towns. Terrestrial fiber running inland to data centers and exchange points. IXPs where networks meet and exchange traffic, peering across switches in the same room. Root servers anchoring the DNS, translating names to numbers in milliseconds. BGP routing traffic between seventy-five thousand autonomous systems, each trusting the others to tell the truth about what they can reach. And underneath all of it, the physical reality of glass fibers, copper wires, power supplies, cooling systems, and the sixty-odd ships that can fix things when they break.
The geography of this system is not neutral. The thickest cables and the largest exchange points cluster in North America, Western Europe, and East Asia, the regions that built the internet and still generate the most traffic. Africa has seen enormous growth in connectivity over the last decade, but still relies on a relatively small number of submarine cable routes, many of which converge at a handful of landing points. When four cables off West Africa broke simultaneously in March twenty twenty-four, entire countries lost most of their internet connectivity because there were not enough alternative routes.
Chokepoints exist everywhere. The Strait of Malacca, between Malaysia and Indonesia, carries cables connecting Europe to Asia. The Red Sea is another bottleneck. The English Channel is crossed by dozens of cables, all passing through a narrow corridor. The South China Sea is increasingly contested territory, and multiple cables there have been damaged in recent years under circumstances that governments have described as suspicious. Intelligence agencies have tracked Russian vessels, including the research ship Yantar, loitering near critical cable junction points in European waters. China has reportedly developed deep-sea cable-cutting ships capable of operating at four thousand meters.
The ownership of this infrastructure has shifted dramatically in the last decade. Historically, submarine cables were built and owned by consortiums of telecommunications carriers. Today, the largest investors in new cable systems are the hyperscalers, Google, Meta, Amazon, and Microsoft. These companies need so much bandwidth between their data centers that it makes economic sense for them to build their own cables rather than buy capacity from carriers. Google alone has invested in or fully owns more than a dozen submarine cables. This means that a significant and growing share of the internet's physical infrastructure is owned by the same companies that run the largest services on the internet. Whether this concentration of ownership is a feature or a vulnerability depends on whom you ask.
The internet was designed, the story goes, to survive a nuclear war. The truth is a little more modest. The original ARPANET was a Department of Defense research network, and its packet-switched design drew on earlier work aimed at communications with no single point of failure, networks that could route around damage. And in many ways, this design philosophy survived the transition from a military research network to the global commercial internet. If one cable breaks, traffic reroutes through another. If one exchange point goes down, peering happens elsewhere. If a root server becomes unreachable, the anycast system directs queries to the next nearest instance. The redundancy is real.
But redundancy has limits. The redundancy exists in the core, where cables are numerous and alternative routes are plentiful. At the edges, where the developing world connects, the margins are thinner. A country served by two submarine cables is not resilient. A region where all cables pass through the same undersea canyon is not resilient. A routing system that trusts every announcement without verification is not resilient. The internet survives daily disasters, fishing trawlers snagging cables, misconfigured routers leaking bad routes, data center power failures, because the damage is always small and local. The question that keeps network engineers awake at night is what happens when the damage is large and coordinated.
A twenty twenty-five report by the security firm Recorded Future warned that state-backed attacks on submarine cables, particularly by Russia and China, are likely to increase as part of hybrid grey-zone tactics, actions that fall below the threshold of open warfare but are designed to degrade and disrupt. The report noted that at least five incidents involving anchor-dragging in the Baltic Sea and around Taiwan in twenty twenty-four and twenty twenty-five were attributed to Russian or Chinese-linked vessels operating under suspicious circumstances. Proving intent is nearly impossible, which is precisely the point. A ship drops anchor, a cable breaks, and the responsible party shrugs and calls it an accident.
The submarine cable industry estimates that it needs roughly three billion dollars in investment over the next fifteen years just to replace aging repair ships and meet growing demand. At least fifteen new ships are needed to replace retiring vessels, plus five additional ships for Asia. This is not Silicon Valley money. This is not venture capital money. This is heavy industrial maritime investment, the kind of spending that requires long-term contracts and patient capital and governments willing to classify submarine cables as critical national infrastructure, which many are only now beginning to do.
The internet has a shape. It is not a cloud. It is not a web, not really, though the metaphor is closer. It is a physical thing, built from glass and copper and steel, stretched across ocean floors and buried under city streets, concentrated in exchange points and cable landing stations and server rooms that hum with the heat of computation. It has geography. It has politics. It has chokepoints and vulnerabilities and dependencies that most of its users never think about.
The routing that holds it all together runs on trust and convention, not on proof. The ships that can repair it number fewer than the aircraft carriers of the world's navies. The root servers that anchor its naming system are operated by twelve organizations with no common authority. The cables that carry ninety-five percent of its intercontinental traffic are about as thick as a garden hose, lying on the ocean floor, vulnerable to fishing trawlers and anchors and earthquakes and, increasingly, to the deliberate actions of states.
And yet it works. Every day, trillions of packets traverse this system, finding their way from source to destination across tens of thousands of networks, through hundreds of exchange points, over cables spanning every ocean. The latency from New York to London is about seventy milliseconds. From Stockholm to Tokyo, about two hundred. A DNS lookup completes before you can finish a thought. Your streaming video arrives in packets so well-timed that the picture never stutters.
The people who built this, the cable engineers and the network operators and the protocol designers and the ship captains, did not set out to build something beautiful. They set out to send a message from here to there, as fast as possible, as cheaply as possible, with as few interruptions as possible. The beauty is accidental. It is the beauty of a system that grew without a master plan, shaped by economics and geography and physics and the stubborn insistence of engineers that things should work.
The next time someone tells you the internet is in the cloud, remember the garden hose at the bottom of the Atlantic. Remember the sixty ships. Remember the thirteen addresses and the thousand machines. Remember the routing protocol that runs on trust and the exchange point in Frankfurt handling fourteen terabits per second. Remember that all of this, every bit of it, is physical. It has a shape. And the shape is extraordinary.