On a spring morning in nineteen seventy six, a programmer named Steve Johnson stormed into his colleague's office at Bell Labs, cursing. Johnson, the creator of yacc, one of the most important parsing tools in Unix history, had just wasted an entire morning debugging a program that was perfectly correct. He had found the bug days earlier. He had fixed it. He had edited the source file, saved it, and gone home satisfied. But the next morning, when he compiled and ran the program, the bug was still there. The fix was sitting in the source file, but the compiled binary on disk was the old one. He had forgotten to recompile.
The colleague he was ranting to was Stuart Feldman, and Feldman was nodding along because the exact same thing had happened to him the night before. He had spent part of the previous evening struggling with an executable that did not reflect his latest changes, chasing a phantom through code that was already correct. Two experienced programmers at one of the most prestigious research labs in the world, both defeated by the same trivial mistake. Not a logic error. Not a design flaw. Just the human tendency to forget a step in a sequence.
Feldman looked at the problem and saw something deeper than forgetfulness. He saw a graph. Every program is a web of relationships. This source file produces that object file. That object file, combined with three others, produces an executable. If any source file changes, the object files that depend on it are out of date, and everything downstream of those object files is out of date too. A human trying to keep track of this web in their head will eventually forget a node. Not because they are stupid, but because the web grows faster than human memory can follow.
That weekend, Feldman wrote a tool. He wrote it over a weekend, rewrote it the following weekend, and called it make. It would become one of the most widely used programs in the history of computing, earn him an award from the Association for Computing Machinery twenty seven years later, and inflict a single character of frustration on every programmer who would ever encounter it. This is the story of that tool, that character, and the man who could not bring himself to fix either one.
Stuart Feldman is not the person most people would imagine when they think of the creator of a foundational Unix utility. He studied astrophysical sciences at Princeton, the kind of degree where you learn about stellar nucleosynthesis and the large scale structure of the universe. He went to MIT for his doctorate, but in applied mathematics rather than astrophysics, a pivot that moved him from studying stars to studying the abstract structures underneath computation. After graduating, he joined Bell Labs, the research laboratory where Unix was being born on the sixth floor of a building in Murray Hill, New Jersey, the same building where Ken Thompson was writing grep and Dennis Ritchie was finishing the C programming language.
Feldman was part of the original group that created Unix, but his contributions were different from the operating system kernel work that Thompson and Ritchie are famous for. Feldman was a toolmaker. He wrote the first Fortran seventy seven compiler, an enormous undertaking that made it possible to run one of the world's most important scientific programming languages on Unix systems. He worked on ALTRAN, a language for symbolic algebra. He worked on EFL, the Extended Fortran Language. These are not the glamorous parts of computing history, but they are the parts that let scientists and engineers actually get work done. Feldman was, from the beginning, the person who noticed where the friction was and built something to reduce it.
His career after Bell Labs traced an arc through the upper reaches of American technology. He moved to Bellcore, the research arm that spun off from Bell Labs after the breakup of AT and T. He spent eleven years at IBM, rising to Vice President of Computer Science in IBM Research, where he drove the company's long term science strategy and founded the Institute for Advanced Commerce. He left IBM for Google, where he became Vice President of Engineering, responsible for all of Google's engineering offices in the eastern half of the Americas, plus Asia and Australia. In two thousand six, his peers elected him president of the Association for Computing Machinery, the oldest and most prestigious professional society in computing. He later became president of Schmidt Sciences, the philanthropic initiative funded by former Google CEO Eric Schmidt, where he oversees programs that fund scientific research.
He holds an honorary doctorate in mathematics from the University of Waterloo. He is a Fellow of the IEEE, a Fellow of the ACM, and a Fellow of the American Association for the Advancement of Science. He has taught courses at Harvard, Princeton, Berkeley, and Yale. He co-founded the magazine ACM Queue with Steve Bourne, the creator of the Bourne shell.
And yet. When Stuart Feldman stands at a podium to accept an award, the thing people want to talk about is a tab.
To understand why make mattered in nineteen seventy six, you need to understand what building software looked like before it existed. A programmer would write source code in one or more files. To turn that source code into a running program, they would type a sequence of commands. Compile this file. Compile that file. Link them together. Copy the result somewhere. If the program had ten source files, the build process might involve twenty or thirty commands, typed in the right order, with the right flags, every single time.
Most programmers dealt with this by writing shell scripts. A file called build dot sh that contained all the commands in sequence. Run the script, wait for it to finish, and your program is built. This worked, but it had a devastating flaw. The shell script ran every command every time. Changed one line in one file? The script recompiles everything. On a small program, this takes seconds. On a large program in nineteen seventy six, when compilers were slow and computers were shared among dozens of users, a full rebuild could take hours. And most of those hours were wasted, recompiling files that had not changed.
Feldman's insight was that the problem was not about commands. It was about relationships. A program is not a sequence of steps. It is a web of dependencies. And if you describe the web instead of the steps, the computer can figure out the minimum set of actions needed to bring everything up to date.
This was a genuinely radical idea. In nineteen seventy six, almost everything in computing was imperative. You told the computer what to do, step by step, in order. Feldman chose the opposite. He made a tool where you describe what depends on what, and the tool figures out the steps. You tell make that your program depends on main dot o and utils dot o, and that main dot o depends on main dot c, and that utils dot o depends on utils dot c. You tell it that the way to turn a dot c file into a dot o file is to run the C compiler. And then you walk away. Make looks at the file system, checks the modification timestamps on every file, builds a directed acyclic graph of all the dependencies, performs a topological sort to determine the correct order, and executes only the commands whose outputs are older than their inputs.
That last part, the timestamp comparison, was the beautiful trick. Unix had always tracked when files were last modified. Feldman realized he could use those timestamps as a free, automatic change detection system. If main dot c has a modification time of three fifteen and main dot o has a modification time of three ten, then main dot o is out of date because the source changed after the object was built. If utils dot c has a timestamp of two forty five and utils dot o has a timestamp of three twelve, then utils dot o is fine. Only main dot o needs to be rebuilt, and therefore only the final link step needs to run. On a project with a hundred source files where you changed one, make runs two commands instead of a hundred and one. The savings were not incremental. They were transformational.
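The timestamp rule fits in a few lines of Python. This is a sketch, not make's actual implementation, and it uses the hypothetical file names and clock times from the example above rather than real files:

```python
# Hypothetical modification times, in minutes, matching the example above.
mtimes = {
    "main.c": 315,   # source changed at three fifteen
    "main.o": 310,   # object built at three ten, so it is stale
    "utils.c": 245,
    "utils.o": 312,  # object is newer than its source, so it is fine
}

def is_stale(target, prerequisites, mtimes):
    """A target is out of date if it is missing or older than any prerequisite."""
    if target not in mtimes:
        return True
    return any(mtimes[p] > mtimes[target] for p in prerequisites)

print(is_stale("main.o", ["main.c"], mtimes))    # True: the source is newer
print(is_stale("utils.o", ["utils.c"], mtimes))  # False: the object is newer
```

In the real tool the numbers come from the file system, but the comparison is exactly this simple.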
The description of this web lived in a file that Feldman called a Makefile. The syntax was sparse to the point of brutality. A target, followed by a colon, followed by its prerequisites, on one line. Then, on the next line, indented with a tab character, the command to run. That was almost the entire language. Target, prerequisites, recipe. The world as it should be, not the steps to get there.
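Feldman's entire language is visible in a single small example. Here is a minimal Makefile for a two-object program like the one described above, with illustrative file names; note that each indented command line must begin with a literal tab:

```makefile
# The executable depends on two objects; each object depends on its source.
program: main.o utils.o
	cc -o program main.o utils.o

main.o: main.c
	cc -c main.c

utils.o: utils.c
	cc -c utils.c
```

Three rules, three recipes, and the whole dependency web is stated.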
Feldman published a paper describing make in the journal Software Practice and Experience in nineteen seventy nine, three years after he wrote it. By then, make had already spread through Bell Labs and out into the Unix world. The paper was titled simply Make, a Program for Maintaining Computer Programs. It was eleven pages long. The tool it described would outlive every other program mentioned in the same issue of the journal.
Now we need to talk about the tab.
In the Makefile syntax, the line that tells make what command to run, the recipe line, must begin with a tab character. Not spaces. Not any other form of whitespace. A tab. Specifically, a horizontal tab, ASCII code nine, in column one. If you use spaces, even if they look identical on your screen, make will not recognize the line as a recipe. It will throw an error, usually something cryptic about a missing separator, and refuse to do anything.
This has been the single most common source of frustration for every programmer who has ever written a Makefile. Editors replace tabs with spaces. Copy and paste from web pages converts tabs to spaces. Code formatters standardize indentation to spaces. Version control systems sometimes mangle whitespace. The error message does not say you used spaces instead of a tab. It says missing separator. Generations of programmers have stared at a Makefile that looks perfectly correct, character by character, and been unable to figure out why it does not work, until someone tells them to check for invisible whitespace differences.
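The failure mode is easy to demonstrate. A few lines of Python can scan a file for the classic mistake, leading spaces where a tab should be. This is a diagnostic sketch, not anything make itself provides, and it deliberately ignores subtleties like line continuations:

```python
def find_space_indented_lines(makefile_text):
    """Return line numbers where a non-empty line starts with spaces.

    In a Makefile, such lines are the usual cause of the cryptic
    'missing separator' error: a recipe must start with a tab.
    """
    bad_lines = []
    for lineno, line in enumerate(makefile_text.splitlines(), start=1):
        if line.startswith(" ") and line.strip():
            bad_lines.append(lineno)
    return bad_lines

good = "program: main.o\n\tcc -o program main.o\n"   # tab-indented recipe
bad = "program: main.o\n    cc -o program main.o\n"  # four spaces, looks identical

print(find_space_indented_lines(good))  # []
print(find_space_indented_lines(bad))   # [2]
```

The two strings above render identically in most editors, which is precisely the problem.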
Feldman knew it was a bad idea almost immediately. Here is the story, in his own words, as recorded in Eric Raymond's The Art of Unix Programming.
Why the tab in column one? Yacc was new, Lex was brand new. I had not tried either, so I figured this would be a good excuse to learn. After getting myself snarled up with my first stab at Lex, I just did something simple with the pattern newline tab. It worked, it stayed. And then a few weeks later I had a user population of about a dozen, most of them friends, and I did not want to screw up my embedded base. The rest, sadly, is history.
There is so much packed into that quote. Lex, the lexical analyzer generator, had just been created by Mike Lesk at Bell Labs. Feldman wanted to learn it, so he tried to use it to parse the Makefile syntax. He got tangled up in the complexity of writing proper lexical rules. So instead of designing a thoughtful, flexible syntax, he grabbed the simplest possible pattern, the regular expression for a newline followed by a tab, and used that to distinguish recipe lines from everything else. It was a hack. A weekend hack, in a tool written over a weekend. And it worked, so he kept it.
The second part of the quote is where the comedy turns into tragedy. Within weeks, roughly a dozen people were using make. Twelve users. Feldman, a thoughtful engineer who cared about his colleagues, decided that breaking backward compatibility for twelve people was not worth it. So the tab stayed.
Decades later, when Feldman received the two thousand three ACM Software System Award for make, he walked to the podium and began his acceptance speech with five words.
I would like to apologize.
Half the audience laughed. The other half, the non-programmers in the room, looked confused. Feldman later described this as a perfect bipartite graph of the audience, divided cleanly into two groups with no edges between them. Those who had been burned by the tab, and those who had not.
He summed up the entire arc of the tab in a single sentence that deserves to be carved into the wall of every computer science department on earth.
Even though I knew that tab in column one was a bad idea, I did not want to disrupt my user base. So instead I wrought havoc on tens of millions.
Twelve users. Tens of millions. The ratio between the people Feldman was trying to protect and the people he ultimately frustrated is roughly one to a million. It is the most consequential backward compatibility decision in the history of computing, and it was made in good faith, by a kind person, about a tool he wrote over a weekend, to avoid annoying his friends.
To understand why make survived, you have to understand what it actually is, underneath the infamous tab. And what it is, at its core, is the conductor of an orchestra.
The Unix philosophy, articulated by Doug McIlroy and embodied by Thompson, Ritchie, and their colleagues at Bell Labs, says that you should write programs that do one thing and do it well, and write programs to work together. The C compiler, cc, does one thing. It compiles C source code into object files. The linker, ld, does one thing. It combines object files into executables. The archiver, ar, does one thing. It bundles object files into libraries. Each tool is small, focused, and excellent at its specific task.
But someone has to coordinate them. Someone has to know that you run cc before ld, that you run ar only if you are building a library, that you skip recompiling files that have not changed, and that you do all of this in the right order. That coordinator is make. It is not a compiler. It is not a linker. It does not know anything about C, or Fortran, or any other language. It knows about files, timestamps, and dependencies. It is the conductor's score, the sheet music that tells each instrument when to play.
This abstraction, knowing about relationships rather than implementations, is what made make universal. Feldman designed it for C programs on Unix, but the Makefile does not mention C. It mentions files. Any process that transforms input files into output files, where the outputs should be rebuilt when the inputs change, can be described in a Makefile. This is not an accident. It is a consequence of Feldman's mathematical training. He did not solve the specific problem of compiling C programs. He solved the general problem of maintaining consistency in a directed acyclic graph of file dependencies.
The directed acyclic graph, the DAG, is the heart of make. Directed means each relationship has a direction, source produces target. Acyclic means there are no loops, you cannot have A depending on B depending on A, because that would be a paradox, an impossible build. Graph means it is a network of nodes and edges, not a simple chain. A single source file might be used by three different targets. A single target might depend on twenty sources. The graph can be wide, deep, and complex, far more complex than any human could navigate manually.
When you run make, it reads the Makefile, constructs this graph in memory, performs a topological sort to determine a valid build order, checks the modification timestamp on every file in the graph, identifies which targets are out of date, and executes only the recipes needed to bring them up to date. This entire process, from reading the file to starting the first compilation, takes milliseconds. It is the same algorithm that computer science students learn in their second year of undergraduate studies. But Feldman implemented it in a practical tool in nineteen seventy six, when most of those textbooks had not been written yet.
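The ordering step make performs is textbook graph theory. Here is a sketch of one standard approach, Kahn's topological sort, run over a small hypothetical dependency table mapping each target to its prerequisites:

```python
from collections import deque

# Hypothetical graph: each file maps to the files it depends on.
deps = {
    "program": ["main.o", "utils.o"],
    "main.o": ["main.c"],
    "utils.o": ["utils.c"],
    "main.c": [],
    "utils.c": [],
}

def build_order(deps):
    """Kahn's algorithm: every prerequisite comes before the targets that use it."""
    dependents = {n: [] for n in deps}                    # reverse edges
    remaining = {n: len(ps) for n, ps in deps.items()}    # unbuilt prerequisites
    for target, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(target)
    ready = deque(n for n, count in remaining.items() if count == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for t in dependents[node]:
            remaining[t] -= 1
            if remaining[t] == 0:
                ready.append(t)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order

order = build_order(deps)
# Sources come out before objects, objects before the final program.
assert order.index("main.c") < order.index("main.o") < order.index("program")
```

The algorithm visits each node and edge once, which is why make can plan a hundred-file build in milliseconds.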
By the mid nineteen eighties, Unix had spread from Bell Labs to universities, to workstations, to the early commercial computing world. But Unix was not free. AT and T controlled the source code, and as the company began to commercialize Unix, access to the tools became restricted and expensive. Richard Stallman, a programmer at the MIT AI Lab, launched the GNU Project in nineteen eighty three with the goal of creating a completely free Unix-like operating system. To build a free Unix, you needed free versions of every Unix tool. Including make.
GNU Make was written by Roland McGrath, and the circumstances of its creation are remarkable even by the standards of free software history. McGrath began working for the Free Software Foundation in the summer of nineteen eighty seven. He was fifteen years old. A teenager, writing one of the foundational tools of the GNU operating system. That same summer, he also began writing the GNU C Library, glibc, the piece of software that sits between every program and the operating system kernel, handling the most fundamental operations like reading files, allocating memory, and creating processes. McGrath would maintain glibc for the next thirty years, finally stepping down in two thousand seventeen as maintainer emeritus.
GNU Make took Feldman's original design and extended it in ways that made it powerful enough to build the entire Linux kernel and most of the free software ecosystem. Pattern rules, which let you write a single rule that applies to every dot c to dot o transformation instead of writing one rule per file. Functions for text manipulation, conditional directives that let the Makefile adapt to different operating systems, and critically, parallel builds.
The parallel build feature, activated with the minus j flag, is where the dependency graph pays its biggest dividend. Because make knows the full graph, it knows which targets are independent of each other and can be built simultaneously. On a multi-core machine, this changes everything. One measurement showed the Linux kernel compiling in one hundred and sixty seven minutes on a single processor core and twenty eight minutes on sixteen cores. Six times faster, with no changes to any source code or any Makefile. Make already knew which files could be compiled in parallel. The graph contained the answer. All the minus j flag did was let make act on what it already knew.
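The scheduling insight can be sketched directly from the graph. One simple scheme, not GNU Make's actual scheduler but the same underlying idea: give each node a depth of one plus its deepest prerequisite. Every edge then runs from a shallower node to a deeper one, so all nodes at the same depth are mutually independent and can build simultaneously. The graph here is invented for illustration:

```python
from functools import lru_cache

deps = {
    "program": ["main.o", "utils.o"],
    "main.o": ["main.c"],
    "utils.o": ["utils.c"],
    "main.c": [],
    "utils.c": [],
}

def parallel_stages(deps):
    """Group nodes into stages; everything within one stage can run at once."""
    @lru_cache(maxsize=None)
    def depth(node):
        prereqs = deps[node]
        return 0 if not prereqs else 1 + max(depth(p) for p in prereqs)

    stages = {}
    for node in deps:
        stages.setdefault(depth(node), set()).add(node)
    return [stages[d] for d in sorted(stages)]

# Both sources compile in parallel, then both objects, then the final link.
stages = parallel_stages(deps)
assert stages == [{"main.c", "utils.c"}, {"main.o", "utils.o"}, {"program"}]
```

With sixteen cores and a wide graph, most stages keep every core busy, which is where the measured speedups come from.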
GNU Make became the make. When people say make today, they almost always mean GNU Make. It ships with every Linux distribution, every Mac through the developer tools, and is available on Windows through multiple compatibility layers. The GNU Make manual, written by Stallman and McGrath, has been published as a physical book and is one of the most referenced technical documents in computing. A teenager's summer project from nineteen eighty seven became infrastructure that billions of devices depend on.
Since Feldman wrote make in nineteen seventy six, dozens of tools have been created to replace it. Not one of them has succeeded. The graveyard of make replacements is one of the most crowded cemeteries in computing, and the pattern of failure is almost always the same.
First came Autotools, the GNU Build System, a trio of programs called Autoconf, Automake, and Libtool that were supposed to solve the problem of building software portably across different Unix systems. Autoconf generates a configure script that tests the target system for available features. Automake generates Makefiles from simpler templates. Libtool handles the nightmare of building shared libraries. Together, they became the standard way to distribute open source software for over a decade. But they did not replace make. They wrapped it. The output of the Autotools pipeline is a Makefile. Make is still doing the actual work. The tools just made the Makefile portable. And the complexity of Autotools itself became legendary. Learning M4 macros, debugging the configure script when it produced the wrong results, understanding why the generated Makefile was ten thousand lines long when your project had twelve source files. Developers wrote entire blog posts with titles like Autotools Is the Worst Software Ever Written. The tools that were supposed to make make easier became harder than make itself.
Then came CMake, created in two thousand at Kitware, a small company that built scientific visualization software. CMake attacked a different problem. Instead of generating Makefiles from templates, CMake generates Makefiles from its own description language. Or it generates Visual Studio project files. Or Xcode projects. Or Ninja build files. CMake is a meta-build system, a tool that produces input for other build tools. It has become the de facto standard for C and C++ projects, with widespread adoption across the industry. But it did not replace make either. On Linux, the most common CMake workflow generates Makefiles and then runs make. CMake sits on top of make. The dependency graph still belongs to make.
Then Ninja, created by Evan Martin at Google in two thousand ten. Martin was working on the Chrome browser, and he was frustrated. Chrome's codebase had around thirty thousand input files, and GNU Make took ten to forty seconds just to figure out that nothing needed to be rebuilt. That overhead meant that every time Martin fixed a typo, he waited half a minute before the compiler even started. He built Ninja as a stripped-down, speed-obsessed alternative that does almost nothing except run the dependency graph as fast as possible. It has no fancy syntax, no conditionals, no functions. The files that Ninja reads are not meant to be written by humans. They are meant to be generated by CMake or Meson or some other higher level tool.
Then Meson, created by Jussi Pakkanen in two thousand twelve over a Christmas holiday. Pakkanen, a Finnish physicist, was frustrated by the complexity and slowness of existing build systems. He designed Meson to be fast, reliable, and easy to use. Meson generates Ninja files rather than Makefiles, and it has attracted a devoted following, particularly in the Linux desktop world. Projects like GNOME, systemd, and X dot Org have adopted it. But Meson did not replace make either. It replaced the human experience of writing build files, not the underlying execution model. And for the vast majority of projects on earth, make remains the default.
Then Bazel, Google's internal build system Blaze, open sourced in two thousand fifteen. Bazel is the most ambitious attempt to rethink build automation from scratch. It tracks dependencies at the individual file level, caches build results across the entire organization, supports remote execution on server clusters, and uses a deterministic, hermetic build model where every input is fully specified and every output is reproducible. It is powerful, sophisticated, and backed by one of the largest technology companies in the world. And most programmers have never used it. Bazel's complexity is proportional to its power, and for a project with fewer than a million lines of code, that complexity is almost never worth the investment.
Here is the pattern. Every replacement either wraps make, generates input for make, or reimagines the build process at a scale that only matters for very large projects. For the middle of the distribution, the tens of millions of projects with a dozen to a few thousand source files, make is still the simplest tool that solves the problem. It is preinstalled. It requires no setup. The Makefile syntax, tab and all, can be learned in an afternoon. And the dependency graph algorithm, Feldman's core insight from nineteen seventy six, has not been improved upon because it does not need to be improved upon. Topological sort on a DAG with timestamp comparison is the correct algorithm for this problem. Everything else is ergonomics.
In nineteen ninety seven, an Australian software developer named Peter Miller published a paper with one of the best titles in computing history. Recursive Make Considered Harmful. The title was a deliberate reference to Edsger Dijkstra's famous nineteen sixty eight letter Go To Statement Considered Harmful, and like Dijkstra's letter, Miller's paper attacked a practice that nearly everyone was doing without questioning it.
The practice was recursive make. In a large project with many subdirectories, each containing its own source files, the standard approach was to put a Makefile in each subdirectory and have the top level Makefile invoke make recursively in each one. Change directory to the library folder, run make there. Change directory to the server folder, run make there. Change directory to the client folder, run make there. This felt natural. It mirrored the directory structure. It let each part of the project manage its own build.
Miller showed that it was fundamentally broken. The problem is that each recursive invocation of make constructs its own dependency graph from its own Makefile. No single invocation sees the whole picture. If a header file in the library folder changes, the Makefile in the server folder might not know about it, because the dependency crosses a directory boundary. The result is builds that sometimes produce incorrect binaries, builds that compile too much or too little, and build orders that are fragile and sensitive to the sequence in which subdirectories are visited.
The irony is exquisite. Make is a declarative tool. You describe the full dependency graph, and make determines the correct actions. Recursive make is an imperative pattern imposed on top of a declarative tool. You are telling make to execute in a specific order, subdirectory by subdirectory, which is exactly the kind of step by step thinking that make was designed to replace. The users of a declarative tool had found a way to use it imperatively, and in doing so, they broke the very guarantee that made it valuable.
Miller's solution was simple. Use a single Makefile for the entire project. Let make see the complete dependency graph. This is more work to set up initially, but it produces correct, fast, parallelizable builds. The paper was influential, widely cited, and largely ignored. Recursive make remained the dominant pattern for years, because it was easier and because most projects were small enough that the bugs it produced were rare enough to be blamed on other causes.
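Miller's argument can be demonstrated in a few lines. Below, a whole-graph computation finds every target downstream of a changed header, while the same computation run one directory at a time, which is effectively what recursive make does, misses the dependent that lives across the boundary. The file layout is invented for illustration:

```python
def stale_after(changed, deps):
    """Targets whose transitive prerequisites include the changed file."""
    def reaches(node):
        if node == changed:
            return True
        return any(reaches(p) for p in deps.get(node, []))
    return {t for t in deps if reaches(t)}

# The full graph: the server object depends on a header in another directory.
full = {
    "lib/lib.o": ["lib/lib.c", "lib/lib.h"],
    "server/server.o": ["server/server.c", "lib/lib.h"],
}

# Recursive make: each directory sees only its own slice of the graph, and
# the server directory's Makefile never mentions lib/lib.h.
per_dir = {
    "lib": {"lib/lib.o": ["lib/lib.c", "lib/lib.h"]},
    "server": {"server/server.o": ["server/server.c"]},
}

whole = stale_after("lib/lib.h", full)
split = set().union(*(stale_after("lib/lib.h", d) for d in per_dir.values()))

assert whole == {"lib/lib.o", "server/server.o"}  # correct rebuild set
assert split == {"lib/lib.o"}                     # server object silently missed
```

The stale server binary is exactly the kind of wrong-but-plausible build result Miller's paper warned about.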
Peter Miller died in two thousand fourteen at the age of fifty three, after a long battle with leukemia. He was a significant contributor to the Debian project, the GNU gettext internationalization system, and several other open source projects. His build tools, Aegis and cook, were alternatives to make that implemented his ideas about correct dependency management. They never achieved mainstream adoption. But his paper lives on as one of the most cited critiques in software engineering, a warning about what happens when you take a tool that sees the whole picture and force it to look through a keyhole.
If make had remained a tool for compiling C programs, it would still be important. But the dependency graph is a universal concept, and make turned out to be a universal tool.
Data scientists use Makefiles to define analysis pipelines. Raw data goes into a cleaning script, which produces a cleaned dataset, which feeds into a statistical model, which produces a report. Change the raw data, and the entire pipeline runs. Change only the model, and only the model and the report are rebuilt. The same timestamp logic that Feldman used for C compilation in nineteen seventy six works perfectly for a Python data pipeline in two thousand twenty six. The language of the recipes changes. The logic of the graph does not.
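The graph logic transfers verbatim. A pipeline Makefile might look something like this, with invented script and file names, and with each recipe line tab-indented as always:

```makefile
# Raw data -> cleaned data -> fitted model -> final report.
report.pdf: model.out render_report.py
	python render_report.py model.out report.pdf

model.out: clean.csv fit_model.py
	python fit_model.py clean.csv model.out

clean.csv: raw.csv clean.py
	python clean.py raw.csv clean.csv
```

Touch raw dot csv and everything reruns; touch only fit underscore model dot py and make skips the cleaning step entirely.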
Researchers use Makefiles to build LaTeX documents. A thesis might have twenty chapters, each in its own file, with a bibliography database, a dozen figures generated by scripts, and cross references that require multiple compilation passes. A Makefile can express all of these dependencies, run the LaTeX compiler the right number of times, regenerate only the figures whose source data changed, and produce the final PDF with a single command.
DevOps engineers use Makefiles to orchestrate deployment workflows. Build the Docker image. Push it to the registry. Update the Kubernetes configuration. Run the database migration. Each step depends on the previous one, and if any step fails, the subsequent steps should not run. This is a dependency graph. Make handles it natively.
The reason make works for all of these use cases is that Feldman did not build a C compilation tool. He built a dependency resolution engine that happens to have been first used for C compilation. The Makefile does not care what language your project is written in. It does not care whether the things it is building are programs, documents, datasets, or Docker images. It cares about files, timestamps, and the relationships between them. Everything else is just the recipe, the shell command on the tab-indented line, and make runs that command without knowing or caring what it does.
This is the quality that separates tools that last from tools that are replaced. Make solves a problem at the right level of abstraction. Not too specific, not too general. Specific enough to be immediately useful, general enough to apply to problems its creator never imagined. Feldman, the applied mathematician, had found the mathematical core of the build problem and exposed it cleanly. Fifty years of attempts to replace make have mostly been attempts to improve the syntax, the ergonomics, the user experience. The core algorithm remains unchanged because it was correct from the beginning.
And now, the reason this episode exists. The question this series keeps asking. When should you not use AI?
Dependency resolution is a graph problem. Given a set of nodes, a set of edges defining which nodes depend on which, and a property on each node, its modification timestamp, determine the minimum set of nodes that need to be rebuilt and the correct order to rebuild them in. This is a well defined, fully deterministic problem with a known optimal algorithm. Topological sort on a directed acyclic graph, with timestamp comparison. The algorithm runs in time proportional to the number of nodes plus the number of edges. It gives the correct answer every time. It has given the correct answer every time since nineteen seventy six.
Now imagine asking a large language model to solve this problem. You give it a Makefile with fifty targets and two hundred dependencies. You ask it which targets need to be rebuilt after changing a specific source file. The model reads the Makefile, which is just text, and tries to simulate the dependency traversal in its attention mechanism. For a small graph, it might get the right answer. For a medium graph, it will probably miss a transitive dependency, a target that depends on a target that depends on the changed file. For a large graph, it will hallucinate, inventing dependencies that do not exist or missing ones that do.
This is not a temporary limitation that will be solved by larger models or better training. It is a structural mismatch. Language models are statistical pattern matchers. They predict the next token based on patterns in their training data. Graph traversal is not a pattern. It is an algorithm. The correct answer to which nodes are reachable from this node depends on following specific edges through a specific graph, and there is no statistical shortcut. You either traverse the graph or you get the wrong answer.
And even if a model happened to get the right answer for a particular graph, you could not trust it. You cannot prove that the model's output is correct without running the actual algorithm to verify it. At which point, you might as well have run the algorithm in the first place. Make gives you the correct answer in milliseconds, for free, with a mathematical guarantee. A language model gives you a probable answer in seconds, for money, with no guarantee at all.
The comparison extends beyond correctness. Make is reproducible. Run it twice with the same inputs, and you get the same outputs. Always. A language model is non-deterministic by design. Even with the temperature set to zero, the outputs can vary between runs due to floating point arithmetic, batch scheduling, and internal implementation details. For a build system, where the entire purpose is to produce consistent, reliable results, non-determinism is not a quirk. It is a disqualification.
Make is also auditable. A Makefile is a plain text file that any programmer can read, understand, and verify. The dependencies are stated explicitly. The recipes are visible. There is no black box. If the build does something wrong, you can trace exactly why by reading the Makefile and following the graph. An AI-generated build plan offers no such transparency. Why did it decide to rebuild this file? Why did it skip that one? The model cannot explain its reasoning because it does not have reasoning. It has pattern completion.
And make is free. Not just open source free, but computationally free. It uses negligible resources. It starts instantly. It runs on any machine. An LLM-based build system would require network access, API keys, compute resources, and ongoing costs. For a task that make solves in under a second on a forty-year-old computer, the idea of routing it through a neural network running on a GPU cluster is not just unnecessary. It is absurd.
Here is the decision that this episode proposes. When your problem is a graph, use a graph algorithm. When your problem is determining which files are out of date and what to do about it, use the tool that was purpose-built for exactly that problem in nineteen seventy six and has been giving the correct answer ever since. AI is extraordinary at tasks that require understanding meaning, generating creative content, recognizing patterns in ambiguous data. Dependency resolution is none of those things. It is arithmetic on graphs. Make does arithmetic on graphs the way a calculator does arithmetic on numbers. Perfectly, instantly, and without opinions.
Stuart Feldman wrote make over a weekend in April of nineteen seventy six because his friend Steve Johnson was angry about forgetting to recompile a file. The tool was a hack. The syntax was borrowed from a failed experiment with a lexical analyzer. The tab delimiter was a shortcut that Feldman knew was wrong within weeks of choosing it. He did not want to disrupt twelve users, so he left it in. Fifty years later, virtually every piece of software on earth, from the Linux kernel to the applications on your phone, was built using a tool that carries that original tab, that original hack, that original weekend's work in its DNA.
Feldman went on to write compilers, lead research divisions at IBM, run engineering at Google, preside over the ACM, and oversee scientific funding at Schmidt Sciences. He received honorary doctorates, fellowships, and awards. He taught at four of the most prestigious universities in the world. But when he stood up to accept the ACM Software System Award in two thousand three, twenty seven years after writing make, the first thing he did was apologize.
The apology is the perfect ending to the make story, because it captures something essential about the relationship between programmers and their tools. Make is not beautiful. The tab is indefensible. The syntax is cryptic. The error messages are hostile. Every replacement is more pleasant to use. And yet make endures, because underneath the ugly surface is a perfect algorithm. The dependency graph with timestamp comparison is the correct solution to the build problem, and no amount of syntactic improvement changes the fact that the hard part was already solved, over a weekend, by a mathematician who was thinking about graphs while his friends were complaining about forgetting to recompile.
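For the record, that perfect algorithm fits in a few lines. A target is rebuilt if it is missing, if any of its prerequisites had to be rebuilt, or if any prerequisite's file is newer than the target's. A minimal sketch of that recursive rule in Python follows; this is an illustration of the idea, not make's actual source, and the dictionary-of-recipes interface is invented for the example:

```python
import os

def build(target: str, deps: dict[str, list[str]], recipes: dict) -> bool:
    """Rebuild `target` if it is out of date; return True if it was rebuilt.

    `deps` maps each target to its prerequisites; `recipes` maps each
    target to a zero-argument function that produces it. Files with no
    entry in `deps` are sources and are never rebuilt.
    """
    prereqs = deps.get(target)
    if prereqs is None:
        return False  # a source file: always considered up to date

    # Recurse first: prerequisites must be current before comparing times.
    # (Materialize the list so short-circuiting cannot skip a child build.)
    child_rebuilt = any([build(p, deps, recipes) for p in prereqs])

    missing = not os.path.exists(target)
    outdated = (not missing) and any(
        os.path.getmtime(p) > os.path.getmtime(target) for p in prereqs
    )
    if missing or child_rebuilt or outdated:
        recipes[target]()  # run the recipe, e.g. invoke the compiler
        return True
    return False
```

Everything the episode praises is visible here: the graph is explicit, the timestamp comparison is one line, and running it twice on an up-to-date tree does nothing at all.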
The tool that works is the tool you should use. Sometimes that tool was written over a weekend. Sometimes it carries a fifty year old mistake in its first column. Sometimes the person who wrote it spent the rest of his career apologizing for it. And sometimes the math just got it right the first time.