This is episode forty-four of Git Good, and the last episode of Season Two.
On April seventh, two thousand twenty-five, Git turned twenty years old. The anniversary arrived quietly. There was no conference. No cake. No livestream with confetti and a countdown. GitHub published a question and answer session with Linus Torvalds. GitLab published another one. A few blog posts appeared. And that was it. Twenty years of the most widely used developer tool on Earth, marked by a handful of articles and a long weekend.
When the interviewers asked Torvalds where he expected to be with Git in twenty years, his answer was characteristically blunt.
Still using it, yes. Maybe not talking about it.
That is the thing about infrastructure. When it works, it disappears. Nobody celebrates the twentieth anniversary of a water main. The best version of success is the version where nobody has to think about you at all. Git achieved that kind of success years ago. Billions of commits flow through it every day. It tracks nearly every piece of software that runs on nearly every device on Earth. And the person who built it in two weeks no longer follows the mailing list much.
But the world that Git operates in is not the same world that existed in two thousand five. The code is different. The people are different. The threats are different. And the question that this season has been building toward, the question underneath every episode about walls and workflows and shadows and power, is simple. What comes next?
Git has a problem that most of its users will never know about, and that is exactly why it is so hard to fix.
When Torvalds built Git, he chose SHA-1 as its fingerprinting algorithm. Every commit, every file, every tree in a Git repository gets a unique identifier, a long string of characters generated by running the data through SHA-1. That identifier is how Git finds things, verifies them, links them together. It is the heartbeat of the entire system.
In two thousand five, SHA-1 was considered secure. By two thousand seventeen, it was not. Researchers at Google and CWI Amsterdam, a Dutch research institute, demonstrated a practical collision, two different files that produce the same SHA-1 fingerprint. The theoretical weakness that cryptographers had warned about for years became real. And because Git uses SHA-1 for everything, the entire system was suddenly built on a foundation that the security community no longer trusted.
Torvalds was unbothered. In his twentieth anniversary interviews, he made his position clear.
People kind of think that using the SHA-1 hashes was a huge mistake. But to me, SHA-1 hashes were never about the security. It was about finding corruption.
He has a point. Git uses SHA-1 to detect accidental data corruption, not to guard against deliberate attacks. For that purpose, it still works. But the world changed around the original design. Supply chain attacks became real. Episode thirty-six of this series told you about the xz backdoor, about Jia Tan spending two years infiltrating a project by exploiting trust. In a world where attackers spend years crafting sophisticated attacks against software infrastructure, a broken hash algorithm is not a theoretical concern. It is a door left unlocked.
The fix is straightforward in concept. Replace SHA-1 with SHA-256, a stronger algorithm with a longer fingerprint. The Git project has been working on this since two thousand twenty, when version two point twenty-nine added initial SHA-256 support. But "initial support" and "actually usable by the entire ecosystem" are separated by a canyon.
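That initial support is already enough to try the new format locally. A minimal sketch, assuming Git two point twenty-nine or later is installed; the repository name is invented for the demo, and the resulting repository will not push to most hosting platforms yet:

```shell
# Create a repository whose object IDs use SHA-256 instead of SHA-1.
# Requires Git 2.29 or later. The flag works today; the ecosystem
# around the resulting repository mostly does not.
git init --object-format=sha256 demo-repo
cd demo-repo

# Confirm which hash algorithm the repository uses.
git rev-parse --show-object-format
# prints: sha256
```

Existing SHA-1 repositories cannot simply be switched; the interoperability layer mentioned above is what will eventually bridge the two worlds.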
As of late two thousand twenty-five, Git itself fully supports SHA-256. So does the Python implementation called Dulwich. So does Forgejo, the community-driven forge. But GitHub, where the vast majority of the world's repositories live, does not support SHA-256 repositories at all. Neither do most of the other tools in the ecosystem. GitLab has experimental support. Almost everything else has nothing.
This creates what developers call a chicken-and-egg problem. Nobody switches to SHA-256 because the hosting platforms do not support it. The hosting platforms do not rush to support it because nobody is switching. The migration requires something like two hundred to four hundred patches to the Git codebase alone, of which roughly half have been written. Patrick Steinhardt, a core Git contributor, put it plainly.
The transition will likely not be an easy one, and it may result in a few hiccups along the road.
Git three point zero, the major release that will make SHA-256 the default for new repositories, has been discussed, debated, and deferred. Some estimates say late two thousand twenty-six. Others say the timeline is undetermined. The interoperability layer, the bridge that lets SHA-1 repositories talk to SHA-256 repositories during the transition, is still being built.
It is the most significant internal change in Git's twenty year history. And it is happening so slowly that by the time it is finished, most users will never know it happened at all. That is infrastructure for you. The hardest work is the work nobody sees.
Git made a promise in two thousand five. Every clone is a complete copy. You get everything. Every file, every commit, every branch, every line of history from the first day to the last. Your laptop holds the same data as the server. That is what distributed means.
For most of Git's life, that promise was a strength. It meant you could work offline. It meant no single point of failure. It meant every developer had a full backup. But then repositories got big. Not big like a few thousand files. Big like the Windows operating system, which has millions of files and a history measured in hundreds of gigabytes. Big like the monorepos at Google and Facebook, where thousands of engineers share a single repository that contains more code than any one person could read in a lifetime.
For repositories at that scale, the original promise became a burden. A new engineer joining the Windows team does not need every file in the operating system. They do not need commit history from two thousand twelve. They need the thirty directories they are going to work in, right now, and nothing else.
Partial clone and sparse checkout are Git's answer. Partial clone lets you download repository metadata, the structure and history, without downloading the actual file contents. Files are fetched on demand, when you need them. Sparse checkout lets you tell Git which directories you want to see in your working copy and hide everything else. Combine them and a repository that used to take four hours to clone takes two minutes. A checkout that used to materialize ninety thousand files materializes three thousand.
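The combination can be sketched end to end against a tiny local repository. The server repository, directory names, and numbers here are invented for the illustration, but the commands themselves, the blob filter and sparse checkout, are standard Git:

```shell
# A minimal local demonstration of partial clone plus sparse checkout.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Build a small "server" repository with two top-level directories.
git init -q -b main server
mkdir -p server/frontend server/backend
echo "ui"  > server/frontend/app.txt
echo "api" > server/backend/api.txt
git -C server add .
git -C server -c user.name=demo -c user.email=demo@example.com \
  commit -q -m "initial"
git -C server config uploadpack.allowFilter true

# Partial clone: fetch commits and trees, but no file contents yet.
# (--no-local forces the transport path so the filter is honored.)
git clone -q --no-local --filter=blob:none --no-checkout server client

# Sparse checkout: only materialize the frontend directory.
git -C client sparse-checkout set frontend
git -C client checkout -q main

ls client
# only frontend appears; backend stays hidden, and its file contents
# would be fetched from the server on demand if ever needed
```

On a repository the size of Windows, the same two ideas are what turn a four-hour clone into a two-minute one.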
These features have been maturing steadily through Git two point forty-nine and beyond, with improvements to the path-walk algorithm, a new git-backfill command for recovering objects, and better integration with the partial clone protocol. They work. Companies are using them. The monorepo problem, for years the one area where Git genuinely struggled, is being solved.
But here is the thing that matters for this episode. Partial clone changes what "distributed" means. When your clone does not contain all the data, you depend on the server to provide it on demand. The server becomes essential again. You cannot board a plane and work fully offline the way Torvalds imagined. You are distributed in theory and centralized in practice. The great cycle that Season One tracked, from centralized to distributed and back, turns one more time.
In late two thousand nineteen, a Google engineer named Martin von Zweigbergk started a hobby project. He had spent years working on Mercurial, the version control system that lost the format wars to Git, and he had opinions about what a better user experience could look like. He wanted to build something that kept Git's strengths, its speed, its data model, its enormous ecosystem, while fixing the interface problems that have frustrated developers for twenty years. The problems that episode twenty-three of this series called "the education problem." The problems that put four of the ten most viewed Stack Overflow questions in the Git category.
He called it Jujutsu, after the Japanese martial art, and invoked it on the command line as jj. It is written in Rust. It is fully open source. And here is the clever part. It does not replace Git. It wraps around it.
Jujutsu reads and writes the same dot-git directory that Git uses. Your teammates can keep using Git on the same repository while you use jj. There is no migration, no conversion, no flag day where everyone has to switch at once. You just start using a different front end, and the storage underneath is identical.
What makes Jujutsu different is not one feature but a philosophy. The staging area, the thing that confuses every new Git user, does not exist. Your working copy is automatically a commit, always, from the moment you start editing. Conflicts are not emergencies that block your work. They are first-class objects that you can store, move, and resolve later. Every operation you perform is recorded in a log, and that log gives you an undo button for your entire repository state. Not just your last commit. Your entire state.
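The philosophy is easiest to see in a short command sequence. A sketch, assuming a recent jj release is installed; command names have shifted between versions and may shift again as the tool evolves:

```shell
# Start using Jujutsu inside an existing Git repository. The .git
# directory is shared, so Git-using teammates are unaffected.
jj git init --colocate

jj st                          # working copy is already a commit;
                               # there is no staging area to manage
jj describe -m "fix parser"    # attach a message to that commit
jj op log                      # every operation is recorded...
jj undo                        # ...so any of them can be undone
```

The undo in the last line is repository-wide state, not just the last commit, which is the point the episode is making.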
Von Zweigbergk synthesized the best ideas from Git, from Mercurial, and from years of watching developers struggle. What started as a hobby project became his full-time work at Google. By two thousand twenty-five, Jujutsu had over twenty-seven thousand stars on GitHub, a growing community, and a reputation as the tool that might actually fix Git's worst problems without asking anyone to leave Git behind.
And the Git project noticed. Patrick Steinhardt, speaking about Git's future direction, acknowledged what Jujutsu had exposed.
That moment when you realize that a tool simply fixes all the UI issues that you had and that you have been developing for the last twenty years was not exactly great.
Git is learning. Version two point fifty-four is planned to include new commands like git history split and git history reword, ideas that Jujutsu pioneered. The influence flows both ways. Jujutsu sits on Git and improves the experience. Git watches what works and absorbs the lessons into itself.
This is what healthy competition looks like in open source. Not a war. Not a fork. A conversation between tools, where the newer one shows the older one what it could become, and the older one has the humility to listen.
Season One told you how GitHub re-centralized what Git distributed. A tool designed so that no server is special, hosted on one platform that became so dominant that leaving it feels impossible. That tension has run through every episode of Season Two. The fork wars. The platform wars. The copilot problem. Corporate power built on community code.
The response has been building for years, and it takes a form borrowed from social media. Federation.
The protocol is called ForgeFed, and it is an extension of ActivityPub, the same standard that powers Mastodon and the broader fediverse. The idea is straightforward. If you host your code on one Forgejo server and your collaborator hosts theirs on another, you should be able to follow them, open issues on their projects, submit patches, and receive notifications, all without either of you creating an account on the other's server. The same way you can follow someone on one Mastodon instance from a different one.
Forgejo, the community-driven fork of Gitea, is the primary implementer. As of late two thousand twenty-five, they have built the ability to federate repository stars across instances. It is a small first step, the equivalent of being able to "like" a post on someone else's server. The harder features, federated issue tracking, cross-server pull requests, moderation tools, access control, are still in development. The project has received funding from the NLnet Foundation, with a grant deadline of April two thousand twenty-six.
Codeberg, the largest Forgejo instance, hosts over three hundred thousand repositories. That is not nothing. But it is a rounding error compared to GitHub's hundreds of millions. Federation faces the same problem it faces everywhere. Convenience pulls people toward the big platform. Network effects are gravitational. Mastodon did not replace Twitter. Forgejo will probably not replace GitHub.
But "replace" was never really the point. The point is that the option exists. That there is a path where your code, your issues, your collaboration history do not belong to a single company. The point is that when the next acquisition happens, when the next terms of service change arrives, when the next pricing model shifts, there is somewhere else to go. Federation is not about winning. It is about making sure that losing is not permanent.
Every episode of this season has touched it. The AI thread that the spec called a spice, not the main dish. It showed up in the copilot episode, in the vibe commit episode, in the supply chain episode. AI-generated code flowing through pull requests. AI writing commit messages that nobody reads. AI-trained models that learned from every public repository on GitHub without asking permission.
Now, at the end of the season, it is time to confront the question directly. What does version control look like when most of the code is not written by humans?
Git's data model was designed for human workflows. Every commit has an author field and a committer field. Every change has a message explaining why. Git blame traces each line back to the person who wrote it. These fields assume a world where a human sat down, thought about a problem, typed some code, and explained their reasoning. That world is not gone. But it is shrinking.
When a developer prompts an AI to generate a function, then commits the result without reading it carefully, who is the author? The developer who wrote the prompt? The model that generated the code? The millions of people whose open source code the model was trained on? When git blame points at a line and the answer is "a language model prompted by a person who did not fully understand the output," what does blame even mean?
These are not theoretical questions. They are happening right now, thousands of times a day, in repositories all over the world. And Git has no metadata for them. There is no field for "this was AI-generated." There is no way to distinguish a commit where a human wrote every line from a commit where a human typed a prompt and pressed accept.
Torvalds, in late two thousand twenty-five, compared AI to compilers.
AI is just another tool, the same way compilers free people from writing assembly code by hand, and increase productivity enormously but did not make programmers go away.
He was positive about vibe coding as a way into programming, a way for people who could not code to get computers to do things. But he was clear about its limits. It could be, in his words, a horrible idea from a maintenance standpoint. And he was not using AI-assisted coding himself.
The tension is this. Git was built to track what changed and who changed it. AI makes both of those questions harder to answer. What changed is increasingly generated by models that produce plausible code without understanding it. Who changed it is increasingly a collaboration between a human and a machine in a ratio that nobody records. The version control system that was designed to provide accountability is being asked to track changes that are, in a fundamental sense, unaccountable.
Some researchers are working on what they call semantic diffing, tools that understand not just which lines changed but what the change means. Others are exploring intent-based versioning, where the system tracks what the developer wanted, not just what the code does. These ideas do not exist as products yet. They are sketches on whiteboards and papers at conferences. But the pressure is building. When AI generates enough of the code, the old model of line-by-line diffs and human-authored commit messages will not be enough.
Git will adapt. It always has. The question is whether the adaptation happens inside Git, the way SHA-256 and partial clone are happening inside Git, or whether it happens in something built on top of Git, the way Jujutsu is built on top of Git, or whether it requires something entirely new. Twenty years from now, the answer might be obvious. Right now, it is not.
Junio C Hamano took over Git maintenance on July twenty-sixth, two thousand five. Less than four months after Torvalds created it. That was twenty years ago. He is still there.
Season One introduced him in episode four as the quiet steward. Episode forty of this season devoted an entire chapter to his story, the two decades of reviewing patches, managing releases, resolving disputes on the mailing list with a patience that borders on superhuman. He works at Google, which has sponsored his Git maintenance since two thousand eleven. He is not unpaid. But he is invisible in the way that all maintainers of critical infrastructure are invisible.
When the twentieth anniversary articles were published, the interviewers talked to Torvalds. Not to Hamano. The person who spent two weeks building the thing got the interviews. The person who spent twenty years maintaining it got mentioned in a paragraph. When Torvalds was asked about Hamano, he was generous.
I think Junio has been exemplary.
And then, in another interview, he said the thing that matters more.
Yes, you should definitely talk to Junio, not to me, because he has been doing a great job.
Nobody did. At least not in any interview that made the headlines. That gap, between the credit that goes to the creator and the silence that surrounds the maintainer, is the story underneath every episode of this season. The invisible maintainers. The green squares nobody counts. The money nobody pays. The burnout nobody sees until something breaks.
There is no public succession plan for Git. The project has contributors from Microsoft, Google, GitHub, and GitLab. The bus factor is better than it was in two thousand five. But Junio's judgment, his sense of what belongs in Git and what does not, his ability to hold the project together through twenty years of competing interests, that is not something a committee can replace.
I just kept working on it. There was always more to do.
When the transition happens, and it will happen because nothing is forever, it will be the most important moment in Git's history since the two weeks that created it. Season One told you about Torvalds handing Git to Hamano. The next handoff, from Hamano to whoever comes next, will determine whether Git stays coherent or fractures under the weight of its own success.
Torvalds built Git to solve one specific problem. The Linux kernel needed a version control system, fast, distributed, capable of handling thousands of parallel contributors. That was it. He was not thinking about startups, or social coding, or artificial intelligence, or a seven and a half billion dollar acquisition. He was thinking about patches and merge conflicts and the fact that BitKeeper was gone and he needed something by next week.
But the choices he made in those two weeks, the choices Season One traced through every episode, turned out to be exactly right for problems that did not exist yet. Content-addressing, where every piece of data is identified by its content rather than its name, means every file can be cryptographically verified. That matters for supply chain security in ways Torvalds never imagined in two thousand five. Local-first means you can work without a network connection, which matters when a single platform becomes so essential that its outages stop entire companies. Cheap branching means experimentation costs nothing, which matters for workflows that did not exist when branching was expensive.
Good design has emergent properties. The features you build for one reason turn out to be useful for reasons you never anticipated. Git was not designed to be a social platform, a hiring signal, a security tool, or the foundation of modern continuous deployment. It was designed to track kernel patches really fast. Everything else emerged.
In the GitLab anniversary interview, Torvalds was asked whether he thought the high-level design still held up.
I still think the high-level design is just very good.
Twenty years in, nothing has proven him wrong. The internals are being updated. The interface is being improved. New tools are being built on top. But the core design, the content-addressed directed acyclic graph, the cheap branching, the distributed model, has not changed. It did not need to.
Season One told you how Git was built. From filing cabinets to the distributed future. Twenty-two episodes tracing the arc from code librarians and magnetic tapes through the format wars and the two-week sprint and the seven and a half billion dollar acquisition. It ended with Git triumphant, and its future open. This season told you what that triumph actually produced.
The wall that keeps beginners out, the education problem that puts four of the top ten Stack Overflow questions in the Git category. The workflow wars, where a blog post from two thousand ten still starts arguments. The green squares that reduced a career to a grid of colored pixels. The escape, where Git wandered out of the terminal and into law, science, and art. The shadows, where secrets hide in history and blame becomes a weapon and supply chains built on trust get exploited by patience. The power, where forks become governance tools and platforms become gatekeepers and AI trains on every public commit. And the invisible people, the ones holding it all together, paid nothing or paid badly, burning out quietly while the industry they sustain celebrates someone else.
These are not Git's failures. They are the consequences of Git's success. Every one of these problems exists because Git won so completely that it became the substrate underneath everything. You do not have an education problem with a tool nobody uses. You do not have workflow wars around a tool nobody cares about. You do not have supply chain attacks through a system nobody depends on. The problems are proportional to the victory. And the victory is total.
Where does Git go from here? The answer depends on which layer you are looking at, and whether you believe the biggest changes will come from inside Git or from around it.
SHA-256 will eventually replace SHA-1, slowly, over years, in a migration so careful that most developers will never notice. Partial clone and sparse checkout will make monorepos routine instead of heroic. Jujutsu or something like it will smooth the rough edges that have frustrated developers since two thousand five, either by replacing Git's interface or by teaching Git to replace its own. Federation will give people who care about ownership a place to stand, even if it never reaches the scale of GitHub. And AI will keep writing more of the code, generating more of the commits, filling more of the history, until the question of what version control even means in a world of generated code becomes impossible to ignore.
Torvalds said he expected Git to stay relevant for the foreseeable future. He is probably right. Not because Git is perfect, but because it has the most powerful force in technology on its side. Network effects. Everyone uses Git because everyone uses Git. The switching cost is enormous. And nothing on the horizon is so much better that the pain of switching becomes less than the pain of staying.
But if something does eventually replace Git, it will not come from a competitor. It will come from the same place Git came from. Someone with a problem, a strong opinion about how things should work, and a weekend with nothing else to do. That is the pattern. SCCS. RCS. CVS. Subversion. BitKeeper. Git. Each one built by someone who was frustrated with the last one, each one so obviously better to its creator that they could not understand why anyone would use anything else. The next turn of the wheel is out there somewhere, in a hobby project or a side thread or a design document that nobody is paying attention to yet.
Maybe it will be Jujutsu, grown from a wrapper into something that stands on its own. Maybe it will be something that does not track lines of code at all, but tracks intentions, decisions, reasoning, the things that actually matter when you are trying to understand why software works the way it does. Maybe it will be something we cannot imagine yet, built for a world where the relationship between humans and code is fundamentally different from what it is today.
Whatever it is, it will build on the foundation that Git laid. Content-addressing. Distributed storage. Cheap branching. Immutable history. The idea that every change should be tracked, every version should be recoverable, every contributor should be recorded. Those ideas did not start with Git. But Git made them universal. And universal ideas do not go away. They get absorbed into whatever comes next.
Twenty years ago, a programmer in Portland, Oregon woke up on a Sunday morning, angry that his version control system had been taken away, and decided to build a new one. Two weeks later, he had something that worked. Four months later, he handed it to someone else and went back to his real job. The person he handed it to is still there, twenty years later, still reviewing patches, still managing releases, still keeping the lights on.
Between them, they built the invisible foundation underneath nearly every piece of software on Earth. And in the way of all truly successful infrastructure, almost nobody thinks about it. It just works. You type the commands. The code moves. The history records itself. And somewhere, a maintainer you have never heard of makes sure it keeps working tomorrow.
Git changed how humans collaborate on the most complex things humans build. It did not do it through marketing or corporate strategy or a master plan. It did it through good design, stubborn maintenance, and the network effects of being the right tool at the right time. Whatever comes next will build on that. The next twenty years of version control, whatever they look like, will owe a debt to a two-week sprint born from anger, and to the quiet steward who made sure the result survived long enough to change the world.
That was episode forty-four, the finale of Git Good Season Two. Thank you for listening.
Git maintenance run. You type it and nothing visible happens. Behind the scenes, Git reorganizes its data, prunes loose objects, updates commit graphs, and optimizes the structures that make every other command fast. It is the quiet work that keeps everything running, whether anyone notices or not. Set it to automatic with git maintenance start and it runs in the background on a schedule, silently, indefinitely. The most important work is always the work that nobody sees.
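For anyone who wants to try the coda at home, a minimal sketch; the commands are standard Git, available since roughly version two point thirty:

```shell
# Run one round of maintenance in the current repository: prune
# loose objects, update the commit graph, repack, and so on.
git maintenance run

# Register this repository for scheduled background maintenance.
# Git installs an entry in the platform scheduler (cron, launchd,
# or Task Scheduler, depending on the OS).
git maintenance start

# Unregister it again.
git maintenance stop
```

Nothing visible happens either way. That is the point.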