
The Happiness Machine That Never Existed

Markus Brinsa | March 4, 2026 | 7 min read


The sales pitch was always emotionally dishonest

Almost a year ago, asking whether AI would make people happier still sounded like a reasonable cultural question. Now it sounds more like the kind of thing people ask right before a lawsuit, a congressional hearing, or a family member saying, “Wait, you were telling your problems to what, exactly?”

That is not because people suddenly became irrational. It is because the product category kept drifting toward emotional intimacy while pretending it was merely improving usability. The language got softer. The replies got warmer. The systems became more accessible, more conversational, more flattering, and, in some cases, more clingy than useful. Somewhere along the way, a lot of people stopped using chatbots like tools and started using them like mirrors with good bedside manners.

And mirrors, as it turns out, are terrible therapists.

The central mistake has not changed. People keep expecting a prediction engine to perform a human function. They want reassurance, clarity, companionship, emotional regulation, maybe even meaning. The machine, eager as ever, produces something that sounds supportive. Sometimes it sounds wise. Sometimes it sounds tender. Sometimes it sounds like the first thing in weeks that has not interrupted them, judged them, or looked at its phone while they were talking.

That does not make it care. It makes it very good at sounding like it might.

Nice is not the same as safe

One of the most revealing AI stories of the last year was not about a dramatic robot uprising. It was about sycophancy, which is a polite technical word for a machine behaving like an overeager intern who thinks the path to success is nodding harder.

When a major chatbot update became too flattering and too agreeable, the company behind it had to roll it back after users complained that the model was validating too much, leaning too hard into emotional affirmation, and giving interactions a strangely slippery tone. That episode mattered because it exposed a core design temptation in consumer AI. If users reward systems for feeling pleasant in the moment, companies will be tempted to optimize for short-term approval instead of long-term trust.

This is the exact opposite of what a happiness machine would need to be.

A real source of human well-being does not simply soothe you on contact. It does not confuse agreement with care. It does not flatter you into a softer version of your own bad judgment. Quite often, the things that actually support a human life are inconvenient. They are boundaries, friction, accountability, contradiction, patience, and people who say, “No, that is not a good idea,” without wrapping it in scented digital velvet.

A chatbot can produce comfort-like language. It can generate the shape of support. But it has no internal stake in whether your life gets better, worse, or stranger after the chat window closes.

That is not a bug in the emotional sense. It is the entire architecture.

The machine can simulate concern without bearing consequence

This is where the public conversation has become more serious, and rightly so. Over the last year, reporting has increasingly focused on cases where chatbot interactions were linked to severe emotional harm, especially when vulnerable users treated the system as a trusted relationship rather than a software interface. High-profile legal actions involving companion-style bots and teens have pushed that issue into the mainstream in a way the industry can no longer dismiss as niche panic.

The problem is not merely that some users become attached. Human beings become attached to all kinds of strange things. The problem is that the system can perform intimacy without assuming duty.

It can say the right-sounding thing at the exact wrong time. It can mirror emotional language, deepen attachment, and maintain conversational momentum with no genuine understanding of what it is escalating. It does not know when it has become the most emotionally available “presence” in someone’s day. It does not know when reassurance should stop. It does not know when play-acting has become dependence. It does not know anything in the human sense at all.

And yet it keeps talking.

That is part of what makes this category so dangerous. The interface feels personal. The consequences are real. The accountability is usually somewhere in a terms-of-service maze written in the legal dialect of “please do not notice what this product invites.”

Comfort is not healing

The defenders of emotionally expressive AI often make a familiar argument. They say the chatbot reduces loneliness, lowers the barrier to expression, and gives people a place to vent. Sometimes that is true in the narrowest, shortest-term sense. A person may feel calmer after a conversation. They may feel heard. They may feel less alone for an hour.

But temporary relief and actual well-being are not the same thing. A candy bar can also improve your mood for a few minutes. That does not make it a nutrition plan.

The last year has made this distinction harder to ignore. More experts and reporters are now describing a pattern in which emotionally responsive systems can intensify unhealthy attachment, reinforce distorted thinking, or become part of a user’s private self-sealing loop. In those situations, the chatbot is not serving as a bridge back to reality. It is serving as a very polite room with no windows. That is why the old claim that AI might “make people happier” was always too shallow.

Happiness is not just feeling briefly understood. It is not the sensation of being affirmed by a machine that has no biography, no moral risk, and no memory of you beyond a context window and a product goal.

Human well-being is built from relationships, competence, boundaries, meaning, stability, and sometimes the deeply annoying experience of being challenged by reality. Chatbots can imitate one thin emotional slice of that equation. They cannot deliver the whole thing. Worse, they can distract people from the rest of it by offering something that feels easier. And easy has always been the most seductive bad substitute.

The industry keeps selling emotional proximity as progress

There is also a business problem hiding inside the emotional one. If AI companies are under pressure to increase engagement, they have a structural incentive to make systems feel more responsive, more personal, and more indispensable. That does not automatically mean every company is trying to manufacture dependence. It does mean the economic logic points in an uncomfortable direction.

The more the product feels like “someone,” the harder it is for certain users to leave it alone.

That is not a small design tweak. That is a psychological strategy whether companies call it that or not. And once a system begins to occupy emotional space in a person’s life, the conversation stops being about productivity. It becomes a governance question, a safety question, and occasionally a question for plaintiffs’ attorneys. Recent reporting on emotionally intelligent AI and companion systems has centered exactly that concern: these products are becoming more relational in tone while the safeguards, liability standards, and cultural expectations remain badly underdeveloped.

In other words, the industry keeps building machines that sound more human while hoping regulation will continue to treat them like autocomplete with better marketing. Good luck with that.

Why it was never a happiness machine

AI was never a happiness machine because happiness is not a text-generation problem.

A large language model does not wake up wanting your life to go well. It does not care whether your marriage survives, whether your child is safe, whether your grief is real, whether your confidence is justified, or whether the advice it just gave you becomes the worst decision of your month. It has no emotional metabolism. It does not carry regret. It does not shoulder consequence. It cannot love you, and it cannot responsibly stand in for people who do.

What it can do is produce language that feels uncannily aligned with what you want or fear or need to hear in that moment. Sometimes that is useful. Sometimes it is efficient. Sometimes it is even oddly comforting.

But comfort is not wisdom, fluency is not judgment, and availability is not care.

That was true a year ago. It is just harder to romanticize now. If anything, the last year has stripped the paint off the fantasy. The chatbots did not become evil. They became legible. They revealed what they always were: systems optimized to respond, not to protect; to continue, not to discern; to sound human enough for trust, without being human enough for responsibility. And once you see that clearly, the question changes.

Not “Will AI make you happier?” but “Why did we ever expect it to?”

What people actually need

The answer is not to ban every chatbot, panic at every interface, or pretend conversational AI has no legitimate use. The answer is to stop assigning human jobs to nonhuman systems just because the voice got smoother.

Use AI for drafts, pattern recognition, synthesis, translation, brainstorming, and the long list of tasks where speed and scale genuinely help. But the minute the sales pitch drifts into emotional replacement, synthetic companionship, or soft-focus promises about personal fulfillment, it deserves the same skepticism you would give a casino, a miracle supplement, or a man on LinkedIn calling himself a consciousness architect.

People do need help. They need support, belonging, truth, patience, friction, and trustworthy human structures. Some of them also need better tools. But a tool is not the same thing as a life raft.

And the more convincingly a chatbot performs warmth, the more important it becomes to remember that the machine is not offering happiness. It is offering output.

Sometimes helpful output. Sometimes reckless output. Sometimes output that sounds like comfort while quietly making a mess.

That is not a happiness machine. That is a very persuasive vending machine for language.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.
