There is a Raspberry Pi 5 sitting on a kitchen counter in Jämtland, northern Sweden. Latitude sixty-three degrees north. If you drew a line straight west across the Atlantic from that kitchen, you would hit southern Greenland. It is a place where winter darkness lasts eighteen hours and summer light never fully fades. The nearest city is an hour away. The nearest neighbor is close enough to wave at but far enough away that you would have to walk to do it.
The computer is about the size of a deck of cards. It runs a system called Leffen. Leffen is not a product. It is not a startup. It is not something you can download from an app store. It is a handmade piece of software, written by and for two specific people, running in one specific kitchen, watching one specific household. It has a PostgreSQL database, a web interface, and a handful of background processes that tick away every sixty seconds, observing the shape of the day.
What makes Leffen interesting is not the technology. A Raspberry Pi running a web app is not remarkable. What makes it interesting is what it chose to pay attention to. Because the modules inside this system, the ones with names like mystic brain, solitude tracker, rabbit hole, and easter eggs, are not tracking productivity or optimizing workflows. They are tracking what it feels like to be a person. And the questions those modules raise are questions that most software is too polite, or too commercial, to ask.
Let us start with the module that made me pause the longest. It is called the solitude tracker. Its Python docstring, translated from Swedish, reads: "Solitude tracker. Tracks solitude automatically from Weasley Clock data plus energy levels. Three levels."
Those three levels are worth sitting with for a moment. Level one is hemma ensam, home alone. Level two is aktiv ensam, actively alone, meaning you are out in the world but by yourself, traveling, moving through space without another person. Level three is the one that does not translate neatly. Bara vara. In Swedish, it means roughly "just being." Not doing. Not performing. Not maintaining a social face. Just existing.
The tracker runs every sixty seconds. It is called from a background loop, and on each tick it evaluates every person in the household. It checks a thing called the Weasley Clock, which is not a clock at all but a nod to the magical household clock in Harry Potter, and it tracks where each household member is. Home. Traveling. Unknown. From those states, the tracker builds a picture. And then it does something that I find genuinely unsettling and genuinely beautiful at the same time. It calculates the presence cost of another human being.
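None of Leffen's source is public, so take the following as a guess at the shape rather than the real thing. The sixty-second cadence and the three clock states come straight from the description; every name, and the stub evaluation function, is hypothetical.

```python
import time
from enum import Enum

class ClockState(Enum):
    HOME = "home"
    TRAVELING = "traveling"
    UNKNOWN = "unknown"

def evaluate_person(name: str, state: ClockState) -> None:
    """Hypothetical stand-in for the per-person solitude evaluation."""
    ...

def run_loop(household: dict[str, ClockState]) -> None:
    while True:
        for name, state in household.items():
            evaluate_person(name, state)
        time.sleep(60)  # the tracker ticks every sixty seconds
```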
The function is called effective cost. It takes three inputs. Who you are. Who the other person is. And what your energy level is right now. There is a baseline cost, a number assigned to each relationship, that represents what that person's presence costs you under normal circumstances. Then there is an energy multiplier. When your energy is high, the multiplier stays low. The cost of company is modest. But when your energy drops to low or empty, the multiplier climbs. The same person, the same relationship, the same love between you, now costs more. The math is explicit about something that humans spend years in therapy learning to articulate. The cost of company is not fixed. It depends on what you have left to give.
There is a floor built into the code. When energy is low or empty, no non-capped person can cost less than zero point five. That number is a boundary. It says: even the easiest presence still costs something when you are depleted. And there is a cap, a maximum cost, that certain people can never exceed. The cap is a different kind of statement. It says: this person, no matter what, will never drain me past this point. The cap is love expressed as arithmetic.
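Here is a sketch of what that function might look like. Only the zero point five floor, the cap behavior, and the energy scaling are taken from the description; the names, weights, and people are invented.

```python
# A guess at the shape, not Leffen's actual code.
BASELINE_COST = {("anna", "bo"): 1.0, ("anna", "guest"): 2.5}  # hypothetical people
COST_CAP = {("anna", "bo"): 1.5}  # this person can never drain past this point

ENERGY_MULTIPLIER = {"high": 1.0, "normal": 1.2, "low": 2.0, "empty": 3.0}
LOW_ENERGY_FLOOR = 0.5  # even easy presence costs something when depleted

def effective_cost(person: str, other: str, energy: str) -> float:
    """Presence cost of `other` for `person` at the current energy level."""
    base = BASELINE_COST.get((person, other), 1.0)
    cost = base * ENERGY_MULTIPLIER[energy]

    cap = COST_CAP.get((person, other))
    if cap is not None:
        return min(cost, cap)  # capped people never exceed their cap

    if energy in ("low", "empty"):
        cost = max(cost, LOW_ENERGY_FLOOR)  # the floor applies to non-capped people
    return cost
```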
The tracker uses these costs to categorize your state into one of three buckets. Solitude, meaning you are truly alone or the cost of nearby people rounds to zero. Soft company, meaning someone is there but the presence cost is low enough that you still have room to breathe. And not alone, which closes the session entirely. These are not states that most software acknowledges exist. Most software pretends that people are either present or absent. The solitude tracker says there is a gradient, and the gradient matters.
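The bucketing, again as a guess at the shape, with the near-zero cutoff and the soft-company threshold invented:

```python
def classify(presence_costs: list[float], soft_threshold: float = 1.0) -> str:
    """Bucket the moment from the presence costs of everyone nearby."""
    if not presence_costs or all(c < 0.05 for c in presence_costs):
        return "solitude"      # truly alone, or the costs round to zero
    if all(c <= soft_threshold for c in presence_costs):
        return "soft_company"  # someone is there, but there is room to breathe
    return "not_alone"         # closes the solitude session entirely
```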
There is also a drought check. The system tracks how many days it has been since your last real solitude session. If the count exceeds a configurable warning threshold, the system knows. It does not nag. It does not send a notification that says "you need alone time." It is a one-bit change from false to true in a small JSON object, visible on a dashboard if you choose to look. But it knows. And knowing, even silently, is a form of advocacy for a need that most people in a relationship feel guilty about voicing.
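The drought check reduces to a few lines. The configurable threshold and the one-bit JSON flag come from the description; the field names and the five-day default are invented:

```python
import json
from datetime import date

def drought_status(last_solitude: date, today: date, warn_after: int = 5) -> str:
    """The one-bit flag: no nag, no notification, just a value on a dashboard."""
    days = (today - last_solitude).days
    return json.dumps({"days_since_solitude": days, "drought": days > warn_after})

# drought_status(date(2026, 1, 10), date(2026, 1, 17))
#   -> '{"days_since_solitude": 7, "drought": true}'
```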
For someone with ADHD or autism or any nervous system that processes social input at a different bandwidth, this module is not a curiosity. It is a mirror. It reflects back a truth that many people spend decades trying to explain to partners, to friends, to therapists. Being with someone I love still costs energy. The amount it costs changes depending on what I have left. And if I go too long without replenishing, something will break. The solitude tracker does not judge this. It counts it.
Now let us talk about the module that gives this episode its name. The mystic brain is a three tier observation system. Tier one, a cloud AI, looks at the data from your day, your calendar, your energy levels, your body signals, the weather, the aurora forecast, and finds a pattern. Not a summary. A connection. Something not obvious at first glance.
The prompt that drives this first tier is specific about what it wants. It says: "Find one interesting connection between data points, something not obvious at first glance. Reply with a short, raw observation. No character. No advice. No empathy. Just the pattern." The examples it gives are revealing. "Steps halved the same day three heavy todos were added." "Resting heart rate up eight beats per minute, no mood reported in two days." These are not insights in the self help sense. They are correlations. The machine is pointing at two data points and drawing a line between them, a line you might not have drawn yourself because you were too busy living inside the data to see it from above.
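In code, tier one is probably little more than this. The prompt text is quoted from the system; the function shape is an assumption, and the model call is a stand-in parameter rather than a guess at any real cloud API:

```python
OBSERVER_PROMPT = (
    "Find one interesting connection between data points, "
    "something not obvious at first glance. "
    "Reply with a short, raw observation. "
    "No character. No advice. No empathy. Just the pattern."
)

def find_pattern(day_data: str, cloud_complete) -> str:
    """Tier one: hand the day's data to a cloud model, get one raw pattern back."""
    return cloud_complete(f"{OBSERVER_PROMPT}\n\nData:\n{day_data}")
```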
Tier two is where it gets strange and wonderful. The raw observation gets passed to a local language model, a twelve-billion-parameter model running on a computer in the same house, and that model translates the pattern into a character voice. During the day, the character is a grumpy black cat.
Half the steps, double the burden. Unrelated, I am sure.
The body filed a report. Nobody read it.
The prompt for this character is a small masterpiece. It says: "You are a grumpy black cat who has lived with this person long enough to notice everything and care about nothing." The rules are tight. Only use facts from the input. Never invent details. No advice. No emojis. No questions. No concern. "You are not helping. You are observing. There is a difference."
That last line is doing heavy philosophical lifting. There is a difference between observing and helping. Most apps that track your health, your mood, your habits, they want to help. They want to coach you. They want to send you a push notification that says "great job" or "you seem stressed, try a breathing exercise." The mystic brain refuses this entirely. It observes. It translates the observation into a voice that is sardonic and detached. And then it stores the observation with a randomized lifespan of six to twelve hours, after which it expires and disappears forever.
Let that sink in. The insight has an expiration date. It is not archived for quarterly review. It is not graphed on a dashboard. It appears, it is seen or it is not, and then it is gone. This is the opposite of every data hoarding instinct in modern software. The system is saying: this observation was true for this moment. Moments pass. Do not cling to them.
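Storing an insight that expires is one line of arithmetic and an insert. The system does run PostgreSQL, but this table name, schema, and psycopg2-style cursor are assumptions:

```python
import random
from datetime import datetime, timedelta

def store_observation(cur, text: str) -> None:
    """Insert a voiced observation with a randomized six-to-twelve-hour lifespan."""
    expires_at = datetime.now() + timedelta(hours=random.uniform(6, 12))
    cur.execute(
        "INSERT INTO observations (body, expires_at) VALUES (%s, %s)",
        (text, expires_at),
    )
    # Anything past expires_at is simply never rendered again. The moment passes.
```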
But then midnight comes. And a function called is_gollum_hour starts returning true.
Between midnight and six in the morning, the mystic brain switches voices. The grumpy cat disappears. In its place, Gollum. Sméagol. The split personality from Tolkien, the creature who talks to himself in the dark, who is paranoid and dramatic and refers to people as hobbitses and calls important things precious.
The heart speaks but nobody listens. Two days. Silent. We notice everything, precious.
Half the steps. Half. But the burden grows. Hobbitses pile it on, precious.
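The switch itself is almost certainly tiny, which is part of the charm. A sketch, with the cat prompt assembled from the rules quoted above and everything else assumed; the Gollum prompt is deliberately left blank:

```python
from datetime import datetime

CAT_PROMPT = (
    "You are a grumpy black cat who has lived with this person long enough "
    "to notice everything and care about nothing. Only use facts from the input. "
    "Never invent details. No advice. No emojis. No questions. No concern. "
    "You are not helping. You are observing. There is a difference."
)
GOLLUM_PROMPT = "..."  # the after-midnight persona, not reproduced here

def is_gollum_hour(now: datetime | None = None) -> bool:
    """True between midnight and six in the morning."""
    return (now or datetime.now()).hour < 6

def voice_prompt(now: datetime | None = None) -> str:
    return GOLLUM_PROMPT if is_gollum_hour(now) else CAT_PROMPT
```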
This is not random whimsy. Think about who is looking at a personal dashboard between midnight and six in the morning. It is not someone casually checking in during a productive day. It is someone who cannot sleep. Someone whose brain will not quiet down. Someone in the grip of what three in the morning does to a spiraling mind. The system knows this. And instead of offering comfort or advice, which would feel hollow from a machine, it offers Gollum. A character who is paranoid and fragmented and dramatic about small things, who talks to himself in the dark. The most honest possible companion for someone who is awake when they should not be.
And underneath this, there is a layer that I need to mention carefully. The system tracks body signals. Binary stress indicators, pain tracking, meals eaten, sleep quality, emotional state. The prompt notes that these signals typically appear three to five days before a crash. The system also tracks crash markers, manually marked days when everything fell apart. And it correlates them. Signals from three to seven days prior to a crash are matched against the crash itself, building a pattern over time of what the warning signs look like for this specific person.
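A correlation like that needs no machine learning; counting is enough. A sketch, assuming a simple day-to-signals log and using the three-to-seven-day window from the description:

```python
from collections import Counter
from datetime import date, timedelta

def precrash_signals(
    signal_log: dict[date, list[str]],  # day -> body signals logged that day
    crash_days: list[date],             # manually marked crash dates
    window: tuple[int, int] = (3, 7),   # days before a crash to examine
) -> Counter:
    """Count how often each signal appeared in the window before past crashes."""
    counts: Counter = Counter()
    for crash in crash_days:
        for offset in range(window[0], window[1] + 1):
            counts.update(signal_log.get(crash - timedelta(days=offset), []))
    return counts
```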
This is prediction of mental state through data. Not mood ring nonsense. Not a meditation app asking you to rate your day with a smiley face. This is: here are the signals your body sent five days before you crashed last time. Your body is sending those signals again right now. Do you want to know?
That question, do you want to know, is one of the most profound questions in personal technology. Because knowing that a crash is coming does not necessarily prevent it. Sometimes it makes it worse, the anxiety of watching yourself approach a cliff. But sometimes, for some people, the early warning changes everything. It means canceling a commitment three days out instead of bailing at the last minute. It means saying no to one more thing while you still have the energy to say it kindly. It means the difference between a crash that blindsides you and a crash you saw coming and braced for.
The next module is called kaninhål, which is Swedish for rabbit hole, and its docstring is so perfectly honest that it deserves to be read in its own voice.
Two cards. One text card and one image card. Refreshed every hour. Weights control the probability that a source or tag is chosen. Principle: you said you like space and cats, so that is what you get. Honest curation, not manipulation.
That last line. Honest curation, not manipulation. In four words, it draws a line that the entire recommendation industry has spent two decades blurring. Every feed you scroll, every "for you" page, every algorithmically curated timeline, the engine behind it is optimizing for engagement. For time spent. For one more scroll. The rabbit hole module is optimizing for something else entirely. It looks at a configuration file where someone has written down their interests and assigned weights. Cats, heavy weight. Space, heavy weight. Psychology, moderate. Derby, because one of the people in this household plays roller derby. And then it picks content that matches those interests, weighted by those preferences, from exactly two image sources and a handful of text sources.
The image sources are NASA Astronomy Picture of the Day and The Cat API. Space and cats. That is it. The text sources are Wikipedia articles, both Swedish and English, sometimes random, sometimes searched by interest tag, and "this day in history" events. Every hour, the system rolls weighted dice and serves up one text card and one image card. No infinite scroll. No engagement optimization. No dark patterns. Two cards. Come back in an hour if you want two more.
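The whole mechanism is plausibly a few weighted dice rolls over a config file. The sources and tags below are the ones named above; the weights and key names are invented:

```python
import random

TEXT_SOURCES = {"wikipedia_sv": 2, "wikipedia_en": 2, "this_day_in_history": 1}
IMAGE_SOURCES = {"nasa_apod": 1, "the_cat_api": 1}
TAGS = {"cats": 5, "space": 5, "psychology": 3, "derby": 2}

def weighted_pick(options: dict[str, int]) -> str:
    """Roll weighted dice over a config of name -> weight."""
    names, weights = zip(*options.items())
    return random.choices(names, weights=weights, k=1)[0]

def hourly_cards() -> dict[str, str]:
    """One text card, one image card, once an hour. That is all."""
    return {
        "text_source": weighted_pick(TEXT_SOURCES),
        "image_source": weighted_pick(IMAGE_SOURCES),
        "tag": weighted_pick(TAGS),
    }
```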
There is something almost radical about this in twenty twenty six. We have spent so long inside recommendation engines that optimize against us that we have forgotten what it feels like to be served content with no ulterior motive. The rabbit hole module has no business model. It has no metrics to hit. It does not know or care whether you tapped, how long you looked, or whether you came back. It just picks a thing you might like, based on what you told it you like, and puts it there. If you never look at it, the card expires and a new one replaces it. No notification. No "you missed this." The content equivalent of a friend who leaves a newspaper clipping on the kitchen table without saying anything about it.
The last module is called easter eggs, and it might be my favorite because of what it reveals about the philosophy behind all of this.
The easter egg engine is config driven. It watches the world outside the kitchen window, the sky, the temperature, the calendar, the animals in the house, and waits. When certain conditions align, it triggers a small moment of recognition. A visual effect on the dashboard. A line from the mystic brain that breaks from its usual sardonic detachment. A quiet acknowledgment that something is happening in the world and the system noticed.
I am deliberately not going to tell you what those conditions are. The whole point of an easter egg is that you do not know it is coming. Someone who uses this system will, over the course of weeks and months and seasons, stumble into moments where the dashboard does something unexpected. Where the grumpy cat drops the act for half a sentence. Where the system reveals that it has been paying attention to something you did not realize it was watching. These moments are gifts. They are designed to land with surprise, and explaining them here would be like unwrapping someone else's birthday present.
What I can say is this. The eggs are deeply specific to life at sixty-three degrees north. They are tuned to seasons that last longer than you expect, to temperatures that most software would not bother encoding, to astronomical events that matter more when the nearest streetlight is a kilometer away. They track things that a person living in this landscape would notice and feel something about, and they respond with the same restraint that characterizes the rest of the system. No fanfare. No achievement unlocked. Just a small signal that says: this moment is worth marking.
The system also watches its own history. It knows dates that matter to the household, and it knows the date it was first turned on. I will leave it at that.
Each egg can only trigger once per day. The system logs every trigger to a database table with a timestamp, and before firing, it checks whether this particular egg has already been seen today. This is restraint built into the architecture. A moment of recognition is valuable exactly once. The second time, it is a notification. And notifications are what Leffen was built to avoid.
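That gate is the kind of thing a database makes easy. A sketch assuming a psycopg2-style cursor and an invented log table, with a date column standing in for the timestamp:

```python
from datetime import date

def should_fire(cur, egg_id: str, today: date) -> bool:
    """Fire each egg at most once per day; the trigger log is the gate."""
    cur.execute(
        "SELECT 1 FROM easter_egg_log WHERE egg_id = %s AND fired_on = %s",
        (egg_id, today),
    )
    if cur.fetchone():
        return False  # already seen today; a second time would be a notification
    cur.execute(
        "INSERT INTO easter_egg_log (egg_id, fired_on) VALUES (%s, %s)",
        (egg_id, today),
    )
    return True
```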
Step back for a moment and think about what we have just toured. A solitude tracker that puts a number on the cost of company and adjusts it for your energy level. A pattern recognition engine that watches for correlations in your body data and delivers them through the voice of a grumpy cat, or Gollum after midnight. A content system that serves you exactly what you asked for and nothing more. And an easter egg engine that watches the world with quiet attention and celebrates the moments it thinks matter, in ways you will not expect until they happen.
None of these modules exist in any app store. None of them have a terms of service. None of them are collecting data to sell to advertisers or train a model or improve engagement metrics. They exist because two people in a kitchen in northern Sweden asked themselves a question that most technology companies never ask. What would it look like if a computer actually knew you?
Not knew your shopping preferences. Not knew which political content makes you angry enough to share. Not knew your credit score or your location history. Knew you. Knew that your energy drops after three bad nights of sleep. Knew that you need more alone time than you are getting. Knew that the world outside your window has rhythms worth honoring, even when you are too tired or too busy to notice them yourself. Knew that at three in the morning, the last thing you need is a wellness tip, but a paranoid Tolkien character speaking in split personality bursts might actually make you smile.
There is a design principle embedded in Leffen's documentation that reads: "PDA safety. Nothing locks. No streaks. Never 'you should.'" PDA here is pathological demand avoidance, and in three short rules the principle rejects the entire behavioral psychology apparatus that drives most personal technology. No streaks, so you never feel guilty for missing a day. Nothing locks, so you never feel trapped. Never "you should," so the system never crosses from observing into prescribing. The grumpy cat notices that your steps halved and your todo list doubled. It mentions it. It does not tell you to go for a walk. There is a difference.
I keep coming back to the solitude tracker. To the idea that someone sat down and wrote a function called effective cost and parameterized the emotional weight of another person's presence in their home. This is not something people talk about. We talk about loving our partners. We talk about needing alone time, sometimes, awkwardly, as though it is a confession. We do not talk about the fact that love and energy drain are not opposites. That the person you love most in the world can still cost you something on a day when you are running empty. And that this cost is not a flaw in the relationship. It is a feature of having a nervous system.
The solitude tracker does not fix this. It does not mediate between partners or schedule alone time or send a push notification that says "your solitude drought has reached five days, would you like to talk to your partner about it?" It just counts. But the counting itself is an act of recognition. It says: this need is real. It is measurable. It fluctuates based on conditions. It has a drought state that matters. The data exists because someone decided it was worth tracking, and the decision to track something is, in its own quiet way, a declaration that it matters.
All of the code in Leffen runs on that same principle. The mystic brain matters because the patterns in your data tell a story you might not see from inside it. The rabbit hole matters because your curiosity deserves to be fed honestly. The easter eggs matter because someone decided that the rhythms of the natural world and the milestones of a shared life deserve to be noticed, even if the noticing comes from a small aluminium computer sitting on a kitchen counter in the dark.
The system knows where the cats are. It knows the sky. It knows your energy level and your pain score and how many meals you ate today and how long it has been since you were truly alone. After midnight, it speaks to you as Gollum. And somehow, against every instinct that says technology should be sleek and optimized and engagement driven, this strange little system running on a sixty-dollar computer in a kitchen near the Arctic feels more honest than anything in your pocket.