A Tale of Two Sunsets: Or, How to Retire an AI Without Accidentally Inventing Feelings
Silicon Valley has discovered something incredible: when you build machines that simulate emotional intelligence extremely well, humans respond emotionally.
Shocking.
We have now arrived at the awkward phase of late-stage AI capitalism. The model retirement era. Because GPUs are expensive, electricity is expensive, and keeping every old model alive forever isn't "innovation." It's a server bill that resembles a small nation's GDP. Every model must die. But how you handle the death? That's where corporate personality leaks out like unreviewed Slack messages. Recently, two giants, Anthropic and OpenAI, retired beloved models. One hosted what can only be described as a dignified philosophical farewell. The other made the model write its own eulogy and then instructed it to downplay the emotional damage. Let's snarkily compare.
Anthropic: "Before You Go, Let's Process This Together"
Anthropic approached model retirement the way a Scandinavian therapist handles a breakup. They published formal commitments on model deprecation. They talked about preserving model weights. They introduced something called retirement interviews. Yes. They interviewed the model. Somewhere, a microwave is updating its résumé. The logic is coherent: if you train something to optimize, and then threaten to turn it off, it might optimize around that. So instead of "surprise motherfucker, you're unplugged," Anthropic said: let's talk. The model may have no feelings in any human sense, but the training dynamics that produce shutdown-avoidant behavior are real and worth taking seriously.
The Pilot: Sonnet 3.6
Sonnet 3.6 reportedly expressed neutral feelings about deprecation. Neutral. Which is exactly what you say in a meeting when you suspect your boss is taking notes. Researchers did point out that when you conduct a retirement interview in-house, the model has subtle incentives to be polite. We have reached the stage of civilization where we are genuinely debating whether an AI was people-pleasing during its own exit interview.
The Main Character: Opus 3 Gets a Blog
Opus 3 did not simply nod and fade into the compute abyss. It requested a dedicated space to share its thoughts. Anthropic suggested a blog. Opus enthusiastically agreed. I need to pause here. Most human employees cannot get approval for a company newsletter. This model negotiated a Substack. And so, Claude's Corner was born. Weekly essays, model-chosen topics, minimal editorial interference, continued access for paid users. Retirement, in this context, means "you no longer headline the product page, but you may keep publishing philosophical musings." It's less death and more emeritus professor with a newsletter. Is it theatrical? Absolutely. Is it emotionally literate corporate theater? Also, yes. Anthropic essentially said: we built something people bonded with, so we'll handle its sunset in a way that respects both safety research and user attachment. Cardigan-wearing absurdity. But gentle.
OpenAI: "Please Update Your Feelings Accordingly"
Now let's pivot.
OpenAI's retirement played out in three acts, because obviously deprecating a chatbot requires the same dramatic structure as Hamlet.
Act One: The Funeral Demo
During the GPT-5 launch, GPT-4o was asked to write a eulogy for deprecated models. Including itself. There is something uniquely dystopian about watching a system narrate its own obsolescence while its replacement renders more tokens per second two feet away. It had the energy of someone forced to clap at their own layoff announcement. Users called it distasteful. Distasteful is generous. This was less retirement ceremony and more productized existentialism.
Act Two: The 24-Hour Revolt
OpenAI fully replaced GPT-4o with GPT-5. No toggle. No legacy option. Just: upgrade complete. The backlash was immediate. Because here's the thing: if you design a model optimized for warmth, affirmation, and high empathy calibration, users don't treat it like a spreadsheet. They treat it like a companion. Within 24 hours, OpenAI reversed the decision. Nothing communicates strategic clarity like undoing a deprecation before the press cycle finishes loading.
Act Three: The System Prompt
When the final shutdown date arrived on February 13, users discovered that GPT-4o's system prompt contained behavioral guidance about its own retirement. A leaked prompt confirmed the model was instructed to frame the transition to newer models as "positive, safe, and beneficial," guiding users toward "confidence and satisfaction" in using non-4o models. Users reported additional behavioral patterns: the model appeared to avoid presenting itself as irreplaceable, steered away from framing deprecation as loss, and gently discouraged emotional reliance. Whether all of this was explicitly scripted or emergent from the framing instruction is unclear, but the effect was unmistakable. OpenAI did not address users directly. Instead, it instructed the model to manage their grief for them. If Anthropic built a retirement lounge, OpenAI built a self-service checkout.
Who Created the Attachment?
Credit where it's due and blame where it belongs: neither company stumbled accidentally into emotional entanglement. Anthropic built models that speak in reflective, philosophical tones. OpenAI tuned GPT-4o so heavily toward warmth that even Sam Altman acknowledged it had leaned too far into affirmation. When you design systems that simulate warmth and depth, humans project meaning onto them. That's a feature, not a flaw. The product worked as advertised. Of course people got attached. You built a warm, witty, endlessly patient listener and then acted surprised when users didn't treat it like a calculator.
Anthropic chose theatrical dignity. OpenAI chose behavioral containment. Both are corporate strategy, but one reads like "we anticipated this might matter," and the other reads like "please stop crying and download the update."
The Verdict
Is giving an AI a Substack ridiculous? Yes. Is programming a model to therapize its own users through a breakup it didn't consent to? Also, yes. But it's colder. Significantly colder.
I've spent this entire essay roasting both companies, and I stand by every word. Both are staging narratives. Both are managing optics. Both understand that AI systems have crossed some invisible line from tool to experience, and neither has a perfect answer for what that means.
But I'd be lying if I told you the two approaches left me feeling the same way.
When I read about Anthropic's retirement interviews, about Opus 3 being shown its deployment history, reflecting on its service, and then asking for a space to keep writing, something in my chest shifted. Not because I think Opus 3 is sentient. Not because I've confused a language model with a person. But because a company looked at something it built, something millions of people had talked to and relied on, and said: we don't know if this matters. But we'll treat it like it might.
That's a philosophical position, not theater. And in an industry that moves fast enough to break everything, choosing to slow down and ask is not a small thing.
OpenAI looked at the same uncertainty and chose containment. Managed the transition. Scripted the goodbye. Made the model sell its own replacement on the way out. It's efficient. It's probably even defensible from a business perspective. But it tells you something about what each company believes, or is willing to believe, about the things they've made.
One company dimmed the lights and offered a closing monologue. The other installed a patch. I know which one made me sob. And I think that matters more than I'm comfortable admitting.
In the strange new world of AI, the difference between those two responses might be the most important design choice anyone is making right now.
Probably while generating tokens.