Actually, AI
Neural Networks: A Million Knobs
S1 E2 · 9m · Apr 03, 2026
Frank Rosenblatt built a machine in 1958 that could learn from examples. Then a rival's theorem convinced the field his approach was a dead end, and neural network research went cold for a decade.

This is episode two of Actually, AI.

The Brain That Is Not a Brain

You have heard the phrase a thousand times. Neural network. And your brain, obligingly, pictures a brain. Little neurons firing, electrical signals jumping across synapses, a digital mind awakening in silicon. The metaphor is everywhere, in press releases, in TED talks, in the breathless coverage every time a chatbot does something surprising. And the metaphor is wrong. Not slightly wrong. Deeply, structurally, misleadingly wrong.

A neural network has about as much in common with a brain as a paper airplane has with a seven forty seven. Same vague principle, things moving forward through a structure, completely different mechanism. Your brain has roughly eighty six billion neurons, each one connected to thousands of others through electrochemical processes that neuroscience still does not fully understand. It learns through attention, emotion, sleep, dreams, social interaction, forgetting, remembering, getting confused and then suddenly seeing clearly. A neural network does none of that. It multiplies numbers together.

The "neural" part of the name comes from a single paper written in nineteen forty three, by a neuroscientist who wrote poetry and a teenager who was homeless. They borrowed the language of biology because they were modeling the brain. But what they actually built was a mathematical abstraction so stripped of biological reality that calling it "neural" is like calling a stick figure a portrait. The name stuck anyway, and it has been confusing people for eighty years.

The Psychologist Who Built a Learning Machine

In nineteen fifty seven, a psychologist at the Cornell Aeronautical Laboratory in Buffalo, New York, built a machine that could learn. His name was Frank Rosenblatt, and he was not what you would expect from someone building military research hardware. He played music, studied the stars through a three thousand dollar telescope he had bought for his home observatory, and once got into a discussion about Fermat's Last Theorem in a music lounge. A colleague named George Nagy later put it simply.

Knowing Frank made me appreciate the difference between very bright and genius.

The distinction mattered. Rosenblatt did not just build a machine. He grasped a problem that would define the next seven decades of computer science. He called his machine the Perceptron, and it was funded by the United States Navy.

The Mark One Perceptron was a physical thing you could touch. Four hundred photocells arranged in a twenty by twenty grid, wired through a massive plugboard to banks of potentiometers, those adjustable knobs that control electrical resistance. The knobs were the machine's memory. When the Perceptron saw a pattern and got the answer right, tiny electric motors turned the knobs slightly in one direction. When it got the answer wrong, they turned the other way. After fifty trials, it could reliably tell whether a shape was on the left side or the right side of its visual field.
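That update rule, turn the knobs slightly toward the right answer only after a mistake, can be sketched in software. The grid size matches the Mark One, but everything else here, the random bright patches, the step size, the function names, is an illustrative assumption, not a reconstruction of the hardware.

```python
import random

# A toy sketch of the Mark One's learning rule, not the actual machine:
# one weight ("knob") per photocell, nudged only when the answer is wrong.

GRID = 20  # the twenty-by-twenty photocell grid

def make_example():
    """A lit patch on the left half (label -1) or the right half (label +1)."""
    label = random.choice([-1, 1])
    image = [0.0] * (GRID * GRID)
    start = 0 if label == -1 else GRID // 2
    for row in range(GRID):
        for col in range(start, start + GRID // 2):
            image[row * GRID + col] = random.random()  # photocell brightness
    return image, label

weights = [0.0] * (GRID * GRID)  # the knobs, all starting flat

def predict(image):
    total = sum(w * x for w, x in zip(weights, image))
    return 1 if total >= 0 else -1

# Fifty trials: turn each knob slightly, but only after a mistake.
for _ in range(50):
    image, label = make_example()
    if predict(image) != label:
        for i, x in enumerate(image):
            weights[i] += 0.1 * label * x  # nudge toward the right answer
```

After a handful of mistakes the knobs settle, and the sketch reliably tells left from right, which is roughly the behavior the original demonstration reported.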

That does not sound impressive now. But in nineteen fifty eight, it was electrifying. The Navy held a press conference. The New York Times ran a story reporting that the machine was the embryo of a computer that the Navy expected would be able to walk, talk, see, write, reproduce itself, and be conscious of its existence. That was not what Rosenblatt said. But the damage was done. The expectations were set at a height no machine could reach for decades.

Then came the backlash. Marvin Minsky, a brilliant researcher at MIT, had been Rosenblatt's classmate at the Bronx High School of Science. They knew each other. Rosenblatt called Minsky "the loyal opposition." At conferences, they debated publicly while colleagues watched in amazement. In nineteen sixty nine, Minsky and his colleague Seymour Papert published a book called Perceptrons that proved, mathematically, that a single layer perceptron could not solve certain basic problems. The machine could not even learn XOR, the simple logical function that outputs true only when its two inputs disagree.
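To see what the book was pointing at: XOR's four cases cannot be separated by any single weighted-sum threshold, but wiring in one hidden layer of threshold units solves it. The weights below are hand-set for illustration, not drawn from Minsky and Papert.

```python
# XOR with threshold units. No single weighted sum can separate XOR's
# true cases from its false ones, but one hidden layer can.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)      # hidden unit: fires for "a OR b"
    h2 = step(a + b - 1.5)      # hidden unit: fires for "a AND b"
    return step(h1 - h2 - 0.5)  # output: "OR but not AND", exactly XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 only when inputs agree
```

The single-layer version has no h1 and h2 to combine, which is precisely the limitation the book proved.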

No machine can learn to recognize something unless it possesses, at least potentially, some scheme for representing it.

The field went cold. ARPA defunded neural network research. Researchers moved on. And Frank Rosenblatt, the musician, astronomer, and genius who had grasped the future two decades early, drowned on his forty third birthday while sailing in Chesapeake Bay. It was nineteen seventy one, just two years after Minsky's book. The machine he championed sat in storage. It is at the Smithsonian now.

How a Million Knobs Learn

Here is what a neural network actually does. Forget the brain. Think of a machine made entirely of knobs.

Each knob is one adjustable number, a weight. A unit of the machine takes in some inputs, multiplies each one by a knob's setting, adds the results together, and then decides whether to pass the sum forward or stay quiet. That is it. One unit is trivial. It is less capable than a pocket calculator. But connect thousands of them in layers, the output of one layer feeding into the input of the next, and something strange emerges. The combination starts to approximate patterns in data that no individual unit could capture alone.
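That multiply-add-decide step fits in a few lines of Python. Everything here, the function name, the numbers, the quiet-or-pass rule, is illustrative rather than taken from any real model.

```python
# One unit of the knob machine: multiply each input by a knob's setting,
# add everything up, then pass the result on or stay quiet.

def unit(inputs, knobs, bias_knob=0.0):
    total = sum(k * x for k, x in zip(knobs, inputs)) + bias_knob
    return max(0.0, total)  # 0.0 means "stay quiet"

# 0.5 * 1.0 + (-0.25) * 2.0 + 0.1 = 0.1
print(unit([1.0, 2.0], [0.5, -0.25], 0.1))
```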

When you type a message to a chatbot, your words get broken into tokens, the pieces from episode one, and each token gets turned into a list of numbers. Those numbers flow into the first layer of knobs. Each knob does its tiny multiplication, adds things up, decides what to pass on. The output flows into the next layer. And the next. In a modern large language model, there are billions of these knobs, arranged in dozens of layers, and every one of those knobs holds a single number that was set during training.
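That flow from layer to layer can be sketched the same way, as a minimal illustration with made-up layer sizes and random weights rather than anything trained.

```python
import random

# A forward pass through stacked layers of knobs. Real models have
# billions of weights across dozens of layers; this uses a handful.

def layer(inputs, weights, biases):
    # Every unit: multiply, add, then pass on or stay quiet.
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

random.seed(0)

def random_weights(n_units, n_inputs):
    return [[random.uniform(-1.0, 1.0) for _ in range(n_inputs)]
            for _ in range(n_units)]

vector = [0.2, -0.7, 0.5, 0.1]  # one token, already turned into numbers

for n_units in (8, 8, 4):  # the output of each layer feeds the next
    vector = layer(vector, random_weights(n_units, len(vector)),
                   [0.0] * n_units)

print(vector)  # numbers in, multiplications, numbers out
```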

The analogy that works best is not a brain. It is a mixing board in a recording studio. Thousands of sliders, each one controlling one small aspect of the sound. Any individual slider does almost nothing on its own. But someone who knows how to set all of them, in exactly the right combination, can produce music that makes you cry. The neural network is the mixing board. Training, which is episode three, is the process of finding the right setting for every slider. And here is where the analogy breaks down: nobody designed the settings. Nobody understands why one particular combination of billions of numbers produces coherent language while a slightly different combination produces nonsense. The settings were found by a process that is more like erosion than engineering, billions of tiny adjustments wearing a path through a mathematical landscape.

Why Understanding Vanishes

This matters because it explains something you have probably noticed. AI can write a perfect paragraph about quantum physics and then, in the next sentence, confidently state something completely wrong. It can solve a complex logic puzzle and then fail at counting the letters in a word. This is not a bug. It is the consequence of how the machine works.

There is no central processor inside a neural network that understands your question, reasons about it, and formulates a response. There are millions of numerical pathways that, when your specific input flows through them, produce an output that looks like understanding. Whether it is understanding, whether there is something it is like to be a pattern flowing through a billion knobs, is one of the great open questions of our time. But the mechanism itself is clear. Numbers in. Multiplications. Numbers out. Every impressive thing AI does, and every embarrassing failure, flows from the same simple process repeated at enormous scale.

The real surprise of neural networks is not that they work like brains. It is that they work at all. A collection of components, each one doing nothing more than multiplying and adding, somehow learns to write poetry, recognize faces, translate languages, and carry on conversations. The mathematics says this should be possible, and we will get to that proof in the deep dive. But the mathematics says nothing about why these particular knobs, in this particular arrangement, produce this particular magic. That gap between "we can prove it works in theory" and "we do not know why it works in practice" is where the real story of artificial intelligence lives.

The Thread

Everything in this series passes through the machine we just described. The tokens from episode one are the inputs that flow into these layers of knobs. Episode three will explain how all those knobs get set, the process called training, which turns a machine full of random numbers into something that can hold a conversation. And episode four covers the specific architecture, the particular arrangement of knobs, that dominates modern AI. It is called a transformer, and it is why we went from "interesting research curiosity" to "everyone on the planet talking about AI" in about five years.

The deep dive for this episode goes much further. The full drama between Rosenblatt and Minsky. The tragic story of Walter Pitts, the teenage genius behind that original nineteen forty three paper. The contested history of who really invented the training method that makes all of this work. And the theorem that proves neural networks can, in principle, approximate anything, while telling you absolutely nothing about how to actually do it.

That was episode two. The deep dive is next in your feed.