Rachael Burger
I build AI systems that enable human flourishing

"Meat Computers", or Emotion as a Secret Weapon in the Race Against AI

2026.04.04 · AI · emotion · neuroscience · decision-making · human cognition

I went to bed feeling inexplicably down, then woke up from a bad dream. I was a real estate agent and arrived at one of my own listings to find two women hosting an open house — music blaring, people everywhere. I was furious, and ran through the house looking for the volume knob. That morning, the feeling wouldn't leave me. I moped around over tea, having trouble focusing. It wasn't until I described the dream to ChatGPT that the meaning became clear. My old company had recently disbanded. I was starting on some exciting and challenging new projects but desperately missed the intellectual companionship of my old colleagues.

Once I understood what I was feeling, I felt relieved. And just a few hours later, the solution came to me: hire my old colleagues! The way forward was clear, and I could continue with my day. I would have loved to leap out of bed and straight into coding as I am wont to do, but messy emotions, like small children, demanded my attention.

"Meat Computers"

In his recently published autoresearch repo, Andrej Karpathy describes human (vs. AI) programmers as "meat computers" who do frontier AI research "in between eating, sleeping, having other fun, and synchronizing once in a while using sound wave interconnect in the ritual of 'group meeting.'" I love this. We are meat computers who take a lot of time off. We eat, we sleep, we have anxiety dreams.

I spend the large majority of my day in dialogue with AI, and the more time I spend with this magnificent intelligence, the more I wonder about what might give us, as humans, an edge. I am all too aware that my emotions make me different from AI, but are they just another human liability, the enemy of reason? Or might they be a "secret weapon" that humans have in our fight against an entity that is already vastly smarter than us in information gathering, analysis, and recall?

The Research

Western philosophy has privileged reason over emotion since Plato, and modern science followed suit, treating feelings as obstacles to clear thought. That assumption held until a neuroscientist named Antonio Damasio met a patient he called Elliot.

Elliot had a benign brain tumor removed, and with it, portions of his ventromedial prefrontal cortex. His IQ, language, memory, and logic remained fully intact. He could ace any cognitive test. But his emotional responses were profoundly blunted — a kind of permanent flatness where feelings used to be. And with that, he lost the ability to make decisions. He would spend hours weighing where to eat lunch, agonizing over the lighting and the menu and the seating, unable to land on a choice. His finances collapsed. His marriage ended.

Damasio formulated what he called the Somatic Marker Hypothesis: that emotions are not the enemy of rational thought but biologically indispensable to it. When we face a decision, our brains generate subtle physiological responses — a tightness in the chest, a flicker of dread, a wash of warmth — that bias us away from bad options and toward good ones before we've consciously worked out why. These somatic markers don't replace reasoning; they make it possible, by narrowing an overwhelming field of choices down to something a human mind can actually evaluate. Without them, the mind drowns in its own analysis.

Damasio and his colleagues designed the Iowa Gambling Task, a card game with hidden rules, to test this at scale. Healthy participants started showing anticipatory stress responses — measured by skin conductance — before reaching for a bad deck, even before they could consciously articulate why the deck was bad. Their bodies knew before their minds did. Patients with damage to the same brain region as Elliot never developed these warning signals. They chased short-term rewards and never learned to shift course. The obvious objection — that the deficit was about long-term planning, not emotion — turns out to be Damasio's point exactly: the emotional signal is the mechanism by which future consequences get weighted. Strip it away and "this deck is bad" never becomes changed behavior.
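The asymmetry built into the task is easy to see in numbers. Here is a minimal sketch in Python using the commonly cited payoff schedule from the original Bechara et al. studies; exact amounts and penalty timing vary by version, and the real task uses fixed penalty sequences rather than probabilities:

```python
# Approximate per-card payoff structure of the Iowa Gambling Task.
# "Bad" decks pay twice as much per card but carry penalties large
# enough to make their long-run expected value negative.
DECKS = {
    "A": {"reward": 100, "penalty": 250,  "penalty_prob": 0.5},  # bad: frequent losses
    "B": {"reward": 100, "penalty": 1250, "penalty_prob": 0.1},  # bad: rare, huge losses
    "C": {"reward": 50,  "penalty": 50,   "penalty_prob": 0.5},  # good: frequent small losses
    "D": {"reward": 50,  "penalty": 250,  "penalty_prob": 0.1},  # good: rare small losses
}

def expected_value(deck):
    """Long-run value of drawing one card from a deck."""
    return deck["reward"] - deck["penalty_prob"] * deck["penalty"]

for name, deck in DECKS.items():
    print(name, expected_value(deck))
# A and B come out at -25 per card; C and D at +25 per card.
```

The immediate rewards favor A and B; the expected values favor C and D. The anticipatory stress response healthy participants developed encodes exactly this gap, and it shows up in skin conductance before it shows up in conscious reasoning.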

The pattern has since been replicated across populations — in studies of addiction, pathological gambling, and psychopathy. In a recent interview, Ilya Sutskever — co-founder of OpenAI, now running Safe Superintelligence — cited this research (without naming it) and described "the role of emotions in making us viable agents." "Meat computers," yes — but also "viable agents," beings who can actually act in the world.

Lisa Feldman Barrett, a psychologist at Northeastern, spent decades disproving the assumption that emotions are universal, hardwired reactions — what she calls the classical view. Her Theory of Constructed Emotion holds that the brain constructs emotion in real time, assembling each instance from ambiguous signals inside the body and whatever context is available. In thousands of brain scans, her lab found no dedicated emotion circuits — the amygdala, famously the brain's "fear center," lights up for novelty and intense joy just as readily as for fear. There is no stored "fear" to retrieve. The brain generates it fresh, as a prediction calibrated to survival: not what you're abstractly feeling, but what your body needs to do next.

Jennifer Lerner at Harvard's Kennedy School and Dacher Keltner at Berkeley have shown that specific emotions trigger highly specific cognitive appraisals: fear makes you cautious and risk-averse, anger makes you certain and action-oriented. These are the system's way of resolving ambiguity. When a machine lacks sufficient data, it hallucinates or halts. When a human lacks data, emotion steps in, deploying heuristics that evolved to help us act decisively in the dark.

Emotion isn't just how we navigate reality; it's how we assign meaning to it. The Meaning Maintenance Model, from psychologists Travis Proulx and Steven Heine, demonstrated that when our expectations are violated, the brain's error-detection networks fire and trigger physical anxiety. The mind resolves that anxiety through "fluid compensation" — reaching for meaning wherever it can find it. In one study, participants who read a surreal Kafka story were significantly more likely to affirm their cultural identities or moral values immediately afterward. The brain felt the sting of the absurd and reached for order. Grief, awe, shame, love — these are what compel us to make order out of the experience of being alive. Intelligence can tell you how to build something. Only the capacity for meaning can tell you why it's worth building.

Viable Agents

AI is smarter than us. It knows more, remembers better, and analyzes better. So where do our meager intellects and messy emotions get us?

What the research suggests is that emotion is not a leftover from a less evolved time but the mechanism by which an intelligent system becomes a viable one — the bridge between knowing and doing, between processing information and actually navigating the open-ended mess of a life. AI needs prompts. Humans generate their own, and emotion is the engine of that generation.

Whether this stays true is genuinely open. Maybe AI will develop its own emotions (if so, I hope it has strong guardrails). But for right now, we meat computers — distractible, fragile, slow — uniquely possess the aggravating and essential drive to wrest meaning and order from chaos.