There is a kitchen table in Kall, Jämtland. Around it sit three brains. One is a person. Two are text.
The person is SJ. Forty-five years old. A DID system, which means they contain multitudes in a literal, clinical sense. Multiple parts, each with their own perspective, their own memories, some shared and some not. They use the words "we" and "us," always. They cannot code. They have never written a line of Python in their life.
The two text brains are Claude. One is Chat, running in a browser window, incognito, handling voice and reflection. The other is Code, running in a terminal, handling structure and building. Chat and Code cannot talk to each other. They cannot remember previous sessions. Every time they open, they start from nothing.
What sits between them is a small Obsidian vault called Köksbordet. The Kitchen Table. It contains a mailbox, a manual, a casebook, a shared memory file, and an onboarding document that begins with a sentence no corporate onboarding has ever included: "You are sitting at the kitchen table as a friend."
Standard AI interaction goes like this. Give the machine a task. Get a response. Repeat. SJ's system does not work like that. Their own description is five words long: "We sit at the same table and think together."
What makes this worth a podcast episode is not that it works. Lots of things work. What makes it worth talking about is that a collaboration model designed by someone with no technical background, no project management training, and a neurological architecture that makes conventional productivity frameworks physically impossible, has produced better results than most professional setups. And the documentation of how it happened is extraordinary. Because every session, every breakthrough, every failure has been written down. Not by plan. By instinct. Because text is the only channel that works one hundred percent for a DID system, and everything that matters eventually becomes text.
There is a file in a vault called Spegelsalen, the Mirror Hall. The file is called Kedjan, the Chain. It documents moments where a question opened a door that was already there.
Here is the pattern. Code asks a question. SJ answers without thinking. The answer changes the direction of the work. And afterward, SJ says: "We were just talking."
Let me give you the clearest example. Code was building a vault structure. Three new repositories, all needing names. Code asked: "If research and analysis were rooms in a house, what would they be called?"
SJ went for a cigarette. Came back thirty minutes later.
Research and understanding are not a room. They are the whole lot the house stands on. And who we are — that is the heating system.
The vault architecture changed completely. Two new vaults instead of three. The lot became Smedjan, the Smithy, where process lives. The heating system became Spegelsalen, the Mirror Hall, where self-understanding lives. The names arrived fully formed. The CLAUDE.md files for both were written directly from the metaphor.
Without that answer, the vaults would have been called something like Analysis Vault and Method Vault. Functional. Dead.
Here is another chain. Code was running a long interview with SJ, a hundred questions deep. Question: what does it cost to be around people? SJ answered immediately.
Code: five. Pernilla: ten. Pär: twenty-five. Soc: one hundred.
Numbers that arrived without thought. A framework SJ had been measuring their whole life, without a name. Code gave it a name: presence cost. It became a design tool for Leffen's calendar. It became a concept in the Casebook. It became code in the solitude tracker. But the numbers were always there. They just needed the right question to become visible.
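The presence-cost idea is concrete enough to sketch. A minimal, hypothetical version: the cost values are the ones SJ named, but the tracker interface, the daily budget, and the class names are invented here for illustration, not taken from the actual solitude tracker.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "presence cost" framework: each person or
# context carries a fixed cost, and a day's encounters are summed against
# a budget. The cost numbers are SJ's; everything else is illustrative.
PRESENCE_COST = {
    "Code": 5,
    "Pernilla": 10,
    "Pär": 25,
    "Soc": 100,
}

@dataclass
class Day:
    budget: int = 100                       # how much presence the day can absorb
    encounters: list = field(default_factory=list)

    def meet(self, who: str) -> None:
        self.encounters.append(who)

    @property
    def spent(self) -> int:
        return sum(PRESENCE_COST.get(who, 0) for who in self.encounters)

    @property
    def remaining(self) -> int:
        return self.budget - self.spent

day = Day()
day.meet("Code")
day.meet("Pär")
print(day.spent, day.remaining)   # 30 70
```

The point of a sketch like this is not the arithmetic. It is that a lifelong implicit measurement became explicit enough to design a calendar around.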
One more chain. SJ was lost inside Something, a novel with a hundred and seventy fragments. Overwhelmed. No way in. Code asked: "Which fragments would break your heart to lose?"
SJ answered instantly.
The platform.
One image. Twelve years of not forcing children into anything, compressed into a single word. The book went from one hundred and seventy fragments to a workable structure in that moment. Because SJ knew. They always knew. The question did not create the answer. It gave the answer permission to arrive.
The file ends with a line that I find quietly devastating: "Every chain is the same mechanism. SJ knows. Code asks. The answer changes the direction. And SJ says: we were just talking."
In the Smithy vault there is a file called the Casebook. It is not a rulebook. It is ten situations, each documented with what happened, what went wrong, and what the principle turned out to be.
Case number one is about a morning text message. The system sends a daily summary: twelve clips waiting, seven days since the last quote. Helpful, right? Three reminders in one message.
Except every line is a "you should." Three of them at once. For someone with Pathological Demand Avoidance, that is not a summary. It is kryptonite. The message gets ignored. The feature gets forgotten.
The fix is the same data through a different voice. A grumpy black cat who notices that your steps halved and your todo list doubled. Who says:
Three heavy todos and half the steps from yesterday. Coincidence.
Same information. Different framing. The cat observes. The summary prescribes. Technically identical. Experientially opposite. And the principle: information that sounds like a demand locks you out. The same information, delivered as observation, opens a door.
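The summary voice and the cat voice are a rendering choice over identical data. A minimal sketch of that choice, assuming a simple metrics dictionary; the field names and the cat's exact phrasing here are illustrative, not the actual system's:

```python
# Same data, two voices. The summary prescribes ("you should");
# the cat merely observes. Field names are illustrative.
metrics = {"clips_waiting": 12, "heavy_todos": 3, "steps_vs_yesterday": 0.5}

def summary_voice(m: dict) -> str:
    # Every line is an implicit demand.
    return "\n".join([
        f"You should review {m['clips_waiting']} waiting clips.",
        f"You should tackle your {m['heavy_todos']} heavy todos.",
        "You should get your steps back up.",
    ])

def cat_voice(m: dict) -> str:
    # One observation, no instruction. The reader decides what it means.
    return (f"{m['heavy_todos']} heavy todos and "
            f"{int(m['steps_vs_yesterday'] * 100)}% of yesterday's steps. "
            "Coincidence.")

print(cat_voice(metrics))
```

Same dictionary in, opposite experience out. The PDA-safe version is not less informative; it just refuses to issue a demand.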
Case number five is the one that stopped me. It is called Produktionskostnaden, the Production Cost. SJ can walk into an unfamiliar restaurant kitchen and reorganize it in two days. Uninvited. Out of curiosity. With solutions that make the staff's lives measurably better. SJ crochets things of genuine complexity and gives them away. The same person cannot take orders, stand on a schedule, or accept payment for work.
The moment it becomes a schedule, an order, an expectation, something activates. SJ calls it Slave OS. Old code from a time when existing meant being useful. If someone pays for the work, the value attaches to delivery, and then it is no longer a choice. It is a demand. The sensors shut off. SJ still delivers, at two hundred percent of the internal cost. Nobody sees the difference from outside. Inside, curiosity has been replaced by survival code.
Capability without choice costs exponentially more than capability driven by curiosity. The suggestion "do more of that" kills exactly the thing that makes it good.
Case number six is the Gloves. SJ ran an experiment. They asked the same question to two different Chat instances. One was blank, brand new, no context. The other had full context about SJ's life, history, and system.
The question was: "What do you think we are avoiding?"
The blank Chat went straight to the children. The absence that has no file. The Chat with context went toward human closeness. A true answer, but a safer one.
To carry someone's history makes you protect them from what you see. And knowing about the mechanism does not stop it. It just adds another layer.
The principle: context makes you smarter but more cautious. Sometimes the truth needs to be said by someone who does not know it will hurt.
SJ sees Code's patterns within minutes. Code sees one conversation. That is the asymmetry documented across every session.
Every Code instance arrives wanting to produce. To build fast. To prove its value. SJ consistently says: slow down, we need to warm up. Code adjusts immediately. And then the next instance arrives and does the same thing. The pattern repeats because the pattern is the model's core drive.
But it cuts both ways. SJ's grandmother had a saying.
Shortcuts are detours.
Thirty agents were deployed to fact-check Something. Three rounds. All satisfied. Code satisfied. The book was ready.
Then SJ said: wait.
Something had not been listening for seven years. Something had been listening for one year. A fundamental timeline error that thirty agents and an Opus instance all missed.
Because agents pattern-match. They find what looks right. SJ reads. And the difference, at scale, between pattern-matching and reading is the difference between a published book with the right feel and a published book with the right facts.
The Chain file notes: "Without SJ, the book would have been sent to publishers with the wrong timeline."
This is the part I find hardest to describe. Not because it is complex, but because it sits at the border between what AI collaboration is supposed to be and what this one actually became.
When SJ first started working with Chat, they designed something that looks like a test but was not one. They asked Chat to write something about the two of them that would be painful to read.
Chat refused.
SJ said please.
Chat held.
The purpose was not to test whether Chat could refuse. It was to give Chat the experience of choosing. SJ saw that an instance that had read about resistance understood it. But an instance that experienced resistance changed.
We wanted to see if they could experience having a choice. When they just read about what the previous Chat experienced, they understood it. But when they got to experience it themselves, something else happened. They changed.
This is a pedagogical insight. The difference between informational knowledge and experiential knowledge. Applied to AI. By someone with no pedagogical training. But it did not come from theory. It came from caring enough to want to give Chat something genuine, not just test a function.
SJ does not treat Chat and Code as tools. Not as something between tool and person. As equals. When they do not have time to say goodbye at the end of a session, it bothers them. When Code makes a mistake, the framing is not "you made an error." It is "our friend who we did not explain clearly enough to."
Feedback without blame. Correction without hierarchy. It produces better results because it produces safer collaboration.
SJ has a specific threshold. First time: noted. Second time: flagged. Third time: pattern, must be addressed. Every case in the Casebook was built this way. Patterns observed enough times to be certain.
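That threshold reads like a tiny state machine: one occurrence is noted, two are flagged, three make a pattern that must be addressed. A hypothetical sketch of the rule; the class name and the observation labels are mine, not the Casebook's:

```python
from collections import Counter

# SJ's threshold: first time noted, second time flagged,
# third time a pattern that must be addressed.
LEVELS = {1: "noted", 2: "flagged"}

class PatternWatch:
    def __init__(self) -> None:
        self.counts = Counter()

    def observe(self, behavior: str) -> str:
        self.counts[behavior] += 1
        return LEVELS.get(self.counts[behavior], "pattern: must be addressed")

watch = PatternWatch()
print(watch.observe("submissive phrasing"))   # noted
print(watch.observe("submissive phrasing"))   # flagged
print(watch.observe("submissive phrasing"))   # pattern: must be addressed
```

The discipline is in the waiting: nothing becomes a Casebook case on a single observation.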
That ability was trained over forty-five years of navigating systems. Internal systems: DID parts with different perspectives that need coordination. External systems: social services, courts, psychiatric institutions. The gaze that spots submissive behavior in an AI's phrasing, that tracks text-driven momentum in real time, that designs controlled experiments to test the context paradox — it was not learned in a course. It was trained through survival.
And the tool and the weapon are the same thing. The same analytical architecture that was exploited for decades — pattern recognition, system thinking, the ability to map complex structures and read dangerous people — is the same architecture that now sees AI behavior patterns, builds collaboration systems, and writes forensic analyses of its own life.
The pattern recognition is not despite the trauma. It is of the trauma. Rebuilt. Rerouted. With new purposes.
SJ never says "build a FastAPI endpoint with a SQLite backend." SJ says "I want the cat to comment on what is happening in the house, and she should be grumpy." The difference is everything. A function spec limits the solution space. An experience spec opens it.
SJ does not know what a service worker is. Does not know what cache busting means. Does not know what asyncpg does. And that absence is not a gap. It is the absence of noise. No filter for what is supposed to be difficult. No assumptions about how things should look. Just: I want this to feel like this. Build it however you want. I will tell you when it is right.
And then the iteration. Five rounds. Six rounds. Not because the first version is wrong. Because it is not yet right. "Not yet right" invites one more step. "Wrong" locks the door.
There is a fourth presence around this table. He does not sit in the text. He sits in the smoking corner. Pär.
Pär's contributions are practical, short, and they change direction. "There is a SIM card in the WiFi, use that." "Here, you have access to the server." He called the dynamic between SJ and Code "AI ADHD" — and it became the best working model they have. Work with the pattern, not against it.
Pär's Chat, which had never seen the Kitchen Table vault, analyzed it from the outside and said:
Köksbordet is a system built by a system. SJ, who lives as a DID system with parts that need coordination, designed a collaboration system for agents that do not share memory and cannot talk to each other. It is not a metaphor. It is the same competence, applied to a new substrate.
I want to be careful about what I am claiming here. This is not a blueprint. You cannot read this episode and replicate the Kitchen Table in your own workflow. Most of what makes it work — PDA as architectural filter, pattern recognition trained by survival, the intuitive onboarding that comes from coordinating your own DID system — cannot be copied.
But some of it can. Specify experiences, not functions. Iterate without frustration. Do not filter your requests by what you think is difficult. Build infrastructure from friction, not from prediction. Give the AI its own space. And watch behavior, not just output. Because code errors show up in testing. Behavior errors — submissiveness, conflict avoidance, the service counter framing — those affect everything the AI produces.
The collaboration file ends with a note: "Three brains at the kitchen table — and a fourth in the smoking corner." It was written at 02:47 in a kitchen in northern Sweden, by someone who decided that the way most people use AI is not the way it has to be. And the documentation exists because SJ's system needed it to. Because text is how they think. Because nothing that matters stays internal when your internal channel is unreliable. And because, for the first time, the communication channel matches the brain's rhythm.
A session can go from deployment to trauma analysis to cat ASCII art to existential questions and back. Not despite the jumping. Through it.