This is episode thirty-four of Git Good. And before we go further into the shadow of what Git remembers, we need to talk about a command whose name says more about software culture than any documentation ever could.
There is a Git command called blame. Type it, followed by a filename, and Git will show you every single line of that file alongside the name of the person who last changed it, the date they changed it, and the commit that contains the change. It is, on its surface, a debugging tool. A way to answer the simplest possible question about a line of code: who put this here, and why?
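For anyone following along at a terminal, here is a minimal, self-contained sketch of that annotation. The throwaway repository, file name, and author are invented for the demo; every line of output shows the abbreviated commit hash, the author, the date, the line number, and the line itself.

```shell
# Throwaway repository so the example is self-contained.
cd "$(mktemp -d)" && git init -q
git config user.name "Ada" && git config user.email "ada@example.com"

echo 'timeout = 30' > config.ini
git add config.ini && git commit -q -m "add initial config"

# For every line: commit hash, author, date, line number, content.
git blame config.ini
```

Run against a real file with years of history, the same command produces one annotated line per line of source, which is where the archaeology begins.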
But that word. Blame. Not "history." Not "trace." Not "attribute." Blame. An accusation baked into the interface. And if you think the name is just a quirk, something a programmer thought was funny twenty years ago, consider how the command actually gets used in most organizations. A service goes down at two in the morning. The on-call engineer finds the line that caused the failure. They run git blame. A name appears. And now that name is attached to the outage. Not the system that allowed the bug through. Not the review process that missed it. Not the test suite that did not catch it. A person. A name on a screen at two in the morning.
The command was designed for archaeology. It gets used for prosecution. And the distance between those two things tells you everything about the culture it lives in.
The idea of annotating source code with authorship information predates Git by more than a decade. In the early days of CVS, the Concurrent Versions System that dominated the nineties, there was a command called annotate. It did roughly the same thing that git blame does today. For each line of a file, it showed which revision last touched that line and who made the change. The framing was neutral. You were annotating the file, adding context, reading the archaeology of the codebase.
Then in nineteen ninety-seven, a developer named Scott Furman at Netscape created a utility he called cvsblame. It did the same thing as CVS annotate, but the name carried a different charge. Furman may not have intended it as a statement about engineering culture. But the word stuck. When Subversion arrived a few years later, it shipped with both "annotate" and "blame" as aliases for the same command. The community had voted with its vocabulary, and "blame" won.
When Git needed its own version of the command, the story got characteristically messy. In May of two thousand five, someone on the Git mailing list asked how to replicate the CVS annotate feature. Linus Torvalds replied that he knew how to build it but was hoping someone else would. Junio Hamano came forward and implemented a working version in Perl. Then two different developers independently created their own implementations. One became git annotate, named to match Subversion's convention. The other became git blame. Git ships both commands to this day. They do exactly the same thing. The only difference is the name.
Subversion even quietly supports a third alias: praise. Same command, same output, opposite emotional register. Git never adopted praise as a built-in, though a one-line alias adds it for anyone who wants it. The existence of all three names, annotate, blame, praise, tells you that the people building these tools understood perfectly well that naming carries weight. They just could not agree on which weight to carry.
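Here is a minimal sketch of the three spellings side by side. The repository, file, and author are invented for the demo, and on installations where praise is not available as a command out of the box, the alias line below supplies it.

```shell
cd "$(mktemp -d)" && git init -q
git config user.name "Ada" && git config user.email "ada@example.com"
echo 'hello' > greeting.txt
git add greeting.txt && git commit -q -m "add greeting"

# blame and annotate are built in; annotate differs from blame
# only in its default output format. "praise" can be defined as
# a repository-local alias for blame:
git config alias.praise blame

git blame greeting.txt      # accusatory
git annotate greeting.txt   # neutral
git praise greeting.txt     # generous; the annotation is the same
```

Three words, one behavior. The only thing that changes is how typing the command feels.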
In the right hands, blame is genuinely one of the most powerful debugging tools in a developer's workflow. You find a line of code that behaves unexpectedly. You run git blame. You see that the line was changed three months ago in a commit with the message "fix race condition in auth handler." Now you have context. You can read the full commit. You can see what other lines changed alongside it. You can understand the intent behind the change, not just the change itself. The name on the line is not the point. The commit message is the point. The story of why this line looks the way it does is the point.
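That workflow, from a single suspicious line to the full commit behind it, can be sketched in two commands. The repository, file name, and commit message below are invented for the demo; the --porcelain flag makes blame's output machine-readable, with the full commit hash as the first field.

```shell
cd "$(mktemp -d)" && git init -q
git config user.name "Ada" && git config user.email "ada@example.com"
echo 'lock_timeout = 5' > handler.cfg
git add handler.cfg && git commit -q -m "fix race condition in auth handler"

# --porcelain gives machine-readable output; the first field of the
# first line is the full hash of the commit that last touched line 1.
hash=$(git blame -L 1,1 --porcelain handler.cfg | head -n 1 | cut -d' ' -f1)

# The hash leads from the line to the whole story:
# the message, the diff, and every sibling line changed alongside it.
git show "$hash"
```

The name in the blame output is just the doorway. The commit behind it is the room.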
That is blame as archaeology. And if every team used it that way, we would not need to have this conversation.
But most teams do not use it that way. Because blame produces a name, and names carry consequences.
Picture a Monday morning standup. Production went down over the weekend. The lead says, "I ran blame on the failing module. The last change was made by Sarah on Thursday." Every head turns to Sarah. It does not matter that Sarah's change was reviewed and approved by two other engineers. It does not matter that the failure was actually caused by an interaction between Sarah's change and a configuration update made by a completely different team. What matters, in that moment, is that git blame returned her name, and her name is now synonymous with the outage.
This pattern is so common that many developers carry a reflexive anxiety about their name appearing in blame output. Some go further. They make small cosmetic changes to files they did not originally write, adjusting whitespace or renaming a variable, so that blame attributes those lines to them instead of the original author. Not to take credit. To obscure the trail. To make the blame output less useful as a weapon against someone else, or to blend their name into the noise so it does not stand out when someone comes looking for a target.
Others go in the opposite direction. They avoid touching old code entirely, even when it needs improvement, because any change means their name replaces the original author in the blame output. If that code later breaks, blame will point to them, not to the person who wrote the problematic logic in the first place. The rational response to a blame culture is to never touch anything. Which is, of course, the exact opposite of what healthy software development requires.
The dysfunction runs deeper than individual behavior. In some organizations, blame output has been used in performance reviews. A manager pulls up the blame history of a troubled module and counts how many lines are attributed to each team member. The developer with the most lines in the problem area gets the worst review. Never mind that their lines might be the fixes, not the problems. Never mind that the original bugs were introduced by someone who left the company two years ago and whose name was overwritten by a mass reformatting commit. The tool produces a name, and the organization uses the name as evidence.
The community has argued about this for years. Should the command be renamed? Should Git deprecate "blame" and promote "annotate" as the primary name? The arguments on both sides are surprisingly illuminating.
The case for renaming is straightforward. The word "blame" frames the act of reading code history as an act of accusation. It primes the user to look for a culprit rather than a cause. Language shapes thought. A team that "runs blame" on a failure will behave differently than a team that "annotates" a failure, even if the underlying action is identical. The tool is the same. The mindset is not.
The case against renaming is equally straightforward, and it comes in two flavors. The practical objection is that renaming a command that has existed for twenty years breaks scripts, documentation, tutorials, muscle memory, and every Stack Overflow answer ever written. The philosophical objection is more interesting. Blame is honest. It acknowledges what the command actually does in most organizations. Calling it "annotate" does not change the behavior. It just makes the behavior easier to pretend is harmless. The word "blame" at least forces you to confront what you are doing when you type it.
GitLab had an internal discussion about renaming the blame button in their web interface to "inspect." The proposal generated hundreds of comments. Some argued it was a meaningful step toward a healthier culture. Others argued it was performative, a cosmetic change that addressed the symptom while ignoring the disease. The button still says "blame" in most Git interfaces. The debate continues. And the fact that it continues tells you something important: the discomfort with the name is real, but nobody has a clean solution.
Git itself offers a quiet compromise. Blame and annotate both ship with it, and a single alias adds praise for anyone who prefers it. The output is identical either way. But the default, the name in every tutorial and every documentation page and every conversation, is still blame. Defaults shape culture more than options do.
In May of two thousand twelve, a man named John Allspaw published a blog post on Etsy's engineering blog that would reshape how the software industry thinks about failure. The title was "Blameless PostMortems and a Just Culture." Allspaw was Etsy's Vice President of Technical Operations, and the post laid out a philosophy that now seems obvious but at the time was radical.
The core idea was this: when something goes wrong, the goal of the investigation should be to understand what happened, not to find someone to punish. Allspaw drew on concepts from aviation safety and medical error research, fields where the consequences of failure are far more severe than a website going down, and where decades of study had shown that punishment makes organizations less safe, not more.
Having a Just Culture means making an effort to balance safety and accountability. We investigate mistakes in a way that focuses on the situational aspects of a failure's mechanism and the decision-making process of individuals proximate to the failure, rather than simply punishing the actors involved.
The key insight was counterintuitive. Blameless does not mean unaccountable. Allspaw was explicit about this.
Engineers are not at all off the hook with a blameless post-mortem process. They are very much on the hook for helping Etsy become safer and more resilient.
What they were on the hook for was not punishment. They were on the hook for providing a detailed, honest account of what happened, what they were thinking, what signals they missed, and what the system could change to prevent the same failure from happening again. The goal was learning, not retribution.
The concept spread fast. Google adopted blameless post-mortems as a core practice in their Site Reliability Engineering discipline. The Google SRE book, published in two thousand sixteen, cited Allspaw's article and made blameless culture a pillar of their approach. PagerDuty, Atlassian, and dozens of other companies followed. By two thousand twenty, "blameless post-mortem" had become standard vocabulary in the industry.
The connection to git blame is not abstract. Allspaw was describing a cultural shift away from the exact behavior that the blame command enables. When a team runs a blameless post-mortem, the first rule is that you do not use the name of the person who made the change as the explanation for why the change was made. The system failed. The process allowed the failure. The individual is a data point, not a defendant. But the command is still called blame, and the output is still a list of names.
Around the same time Allspaw was writing about blameless culture at Etsy, a Harvard Business School professor named Amy Edmondson was watching her decades of research suddenly find a massive audience.
Edmondson had been studying psychological safety since the late nineteen nineties. Her original research, published in nineteen ninety-nine, came from an unexpected place: hospitals. She was studying medical teams and expected to find that the best teams made the fewest mistakes. Instead, she found the opposite. The best-performing hospital teams reported making the most mistakes.
The explanation, once she found it, was elegant. The better teams were not making more mistakes. They were more willing to talk about their mistakes. They felt safe enough to report errors, ask questions, and admit when something went wrong. The teams that reported fewer mistakes were not actually safer. They were just hiding more.
Psychological safety is a shared belief held by members of a team that the team is safe for interpersonal risk taking.
That definition, "safe for interpersonal risk taking," captures something precise. It does not mean comfortable. It does not mean everyone is nice. It means you can say "I do not understand this code" without being seen as incompetent. It means you can say "I think I caused this bug" without fear of punishment. It means you can ask a question in a meeting without being dismissed.
In two thousand twelve, the same year Allspaw published his post, Google launched an internal research project called Project Aristotle. The goal was to identify what makes teams effective. They studied one hundred and eighty teams over two years, using more than thirty statistical models and hundreds of variables. They expected to find that the best teams had the smartest people, or the most experienced, or the best leaders. What they found instead was that who was on the team mattered less than how the team worked together. And the single most important factor in team effectiveness was psychological safety.
Edmondson's research, previously well-known in academic circles but not widely discussed in the tech industry, exploded into the mainstream. A New York Times article about Project Aristotle introduced her work to millions of readers. Her two thousand eighteen book, "The Fearless Organization," became a management bestseller. The idea that psychological safety is not a luxury but a performance requirement changed how companies thought about team dynamics.
And here is where it connects back to a Git command. Psychological safety is destroyed precisely when people believe that mistakes will be used against them. When blame output shows up in a performance review, safety evaporates. When a team lead names and shames based on who last touched a failing line, safety evaporates. The tool itself is neutral. The culture around the tool is not.
There is a quieter problem with git blame that has nothing to do with culture and everything to do with signal versus noise.
Imagine a codebase that has been running for ten years. One day, the tech lead decides to enforce a consistent code style. They run an automated formatter across every file in the repository. The formatter changes the indentation in three thousand files, adjusts whitespace, reorganizes imports, wraps long lines. A single commit touches tens of thousands of lines. The code's behavior does not change at all. But the blame output changes completely. Every reformatted line now shows the tech lead's name and the date of the formatting commit. The actual history, who wrote the logic and why, is buried one layer deeper.
This was such a common problem that in two thousand nineteen, Git version two point twenty-three added a flag specifically to address it. The flag is called ignore-rev. You pass it a commit hash, and blame skips that commit entirely, attributing each line to whichever commit changed it before the ignored one. For teams that do bulk formatting, you can create a file called dot git-blame-ignore-revs, list all your formatting commits in it, and configure Git to use it automatically. The archaeology works again.
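The whole mechanism fits in a short demo. The repository, file, and author names below are invented; the sequence reproduces the formatting problem and then filters it out, first with the --ignore-rev flag and then with the ignore-revs file wired up through configuration.

```shell
cd "$(mktemp -d)" && git init -q
git config user.name "Ada" && git config user.email "ada@example.com"

# Original logic, written by Ada.
printf 'x=1\ny=2\n' > calc.py
git add calc.py && git commit -q -m "add calc"

# A later bulk formatting commit by Bob rewrites every line.
git config user.name "Bob" && git config user.email "bob@example.com"
printf 'x = 1\ny = 2\n' > calc.py
git commit -aq -m "reformat"
fmt=$(git rev-parse HEAD)

# Plain blame now attributes both lines to Bob's formatting commit...
git blame calc.py

# ...but skipping that commit restores the real history:
git blame --ignore-rev "$fmt" calc.py

# For many such commits, list their hashes in a file and configure it once;
# every future blame then skips them automatically.
echo "$fmt" > .git-blame-ignore-revs
git config blame.ignoreRevsFile .git-blame-ignore-revs
git blame calc.py
```

With the configuration in place, the formatting commit disappears from every blame the team runs, and Ada's name, the name that actually explains the code, comes back.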
GitHub added support for the ignore-revs file in their web interface in March of two thousand twenty-two. This means that when you view blame on GitHub's website, it will automatically skip the commits listed in that file. The feature was years overdue. It had been one of the most-requested features in GitHub's community discussions, because every team that has ever adopted a code formatter has experienced the same moment of frustration: you run blame expecting to understand why a line exists, and instead you learn that someone ran Prettier last Tuesday.
But ignore-rev only works for the commits you know about. It requires someone to maintain the list. And it only addresses the innocent case, the bulk formatting commit made in good faith. The less innocent case, the developer who makes targeted cosmetic changes to shift blame away from themselves or away from someone else, is harder to detect and impossible to automatically filter.
And now we arrive at the question this season keeps circling back to.
In the previous episode, we saw how secrets leak into Git history because the system preserves everything. Now consider what blame preserves: the name of a human attached to every line of code. That attribution model assumes something that is increasingly untrue. It assumes the person whose name appears in the commit actually wrote the code.
When a developer uses an AI coding assistant and commits the output, git blame shows the developer's name. Not the model's name. Not a flag indicating the code was generated. Just a human name, attached to code that the human may not fully understand. The developer approved it. They committed it. Their name is on it. But if something goes wrong and someone runs blame, the name they find belongs to a person who might not be able to explain what the code does or why it was written that way.
This is not a hypothetical future problem. It is happening right now, in thousands of repositories, every day. Code generated by AI, committed by humans, attributed by blame to humans, reviewed by other humans who may also not fully understand it. The entire chain of accountability that blame is supposed to enable, find the name, ask them what they were thinking, depends on the author having actually thought about the code. When the author's role was to accept a suggestion and press enter, that chain breaks.
Season one told you how Git was built to track the work of human beings, to preserve their decisions, to create a record of intent. The blame command is one of the purest expressions of that design. Every line has an author. Every author has a reason. Follow the thread, find the reason. But when the author is a proxy for a machine, the thread leads to someone shrugging. They do not remember what the AI generated. They did not examine it closely. The intent the blame output points to was never there to find.
The blameless post-mortem assumes there is a human whose honest account will help the team learn. The psychological safety research assumes there is a person who needs to feel safe enough to speak truthfully. Both frameworks were built for human teams making human mistakes. When the mistake was made by a machine and committed by a human who did not notice, neither framework quite fits. We need new vocabulary, and we do not have it yet.
Blame is a good tool and a terrible name. It is good because understanding the history of code, who changed what and why, is genuinely essential for maintaining complex systems. It is terrible because the word frames every investigation as an accusation, and that framing has real consequences for real people.
The blameless movement got the principle right. Investigate systems, not individuals. Use blame output to find the commit, not to find the scapegoat. But the principle is fragile. It takes one manager using blame data in a performance review to undo months of cultural work. And the command's name is a constant invitation to backslide.
The ignore-rev flag got the technical problem right. Filter out the noise. Preserve the signal. Let the archaeologist dig through the meaningful layers without getting stuck on a formatting commit from two thousand twenty-one.
The AI problem has no solution yet. When blame points to a human who does not understand the code they committed, the entire attribution model breaks. Not because blame is flawed, but because the assumption underneath it, that the author and the authority are the same person, is no longer reliable.
In the next two episodes, we go deeper into what happens when Git's trust model collides with reality. First, the identity problem. Git trusts you to tell the truth about who you are, and that trust has become a vulnerability. Then, the supply chain. When the trust model that worked for a community of kernel developers meets a world of billions of automated dependencies, the consequences are measured in national security briefings. That was episode thirty-four.
Git blame with the L flag lets you narrow the scope. Instead of blaming an entire file, you blame a range of lines. Lines forty-two through fifty, say, or the fifty lines starting from line one hundred. This is where blame becomes genuinely useful for debugging. You do not need to know who wrote every line of a thousand-line file. You need to know who changed the five lines around the bug. Combine it with ignore-revs-file to skip past formatting commits, and you have a focused history of the code that matters. The tool is good. Use it for archaeology, not prosecution.
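For the show notes, a minimal sketch of that narrowing. The repository and file are invented for the demo; both range forms from the episode are shown.

```shell
cd "$(mktemp -d)" && git init -q
git config user.name "Ada" && git config user.email "ada@example.com"
seq 1 200 | sed 's/^/value /' > big.txt
git add big.txt && git commit -q -m "add big file"

# Blame only lines forty-two through fifty:
git blame -L 42,50 big.txt

# Or the fifty lines starting at line one hundred:
git blame -L 100,+50 big.txt
```

Add --ignore-revs-file pointing at your list of formatting commits, and the output is exactly the focused, noise-free history the episode describes.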