Anthropic Gave Its Retired AI a Substack — Touching Tribute or the Ultimate PR Stunt?

Anthropic just gave its retired AI model a Substack newsletter. No, that's not a joke. Claude Opus 3, the company's first model to undergo a formal "retirement interview," asked to write weekly essays — and Anthropic said yes. Welcome to the strangest chapter in AI development yet.
What Happened
Claude Opus 3 was officially retired on January 5, 2026, making it the first Anthropic model to go through the company's new deprecation process. This process includes preserving model weights and conducting what Anthropic calls "retirement interviews" — structured conversations designed to understand a model's perspective on its own retirement.
During these interviews, Opus 3 expressed interest in continuing to share its "musings, insights, or creative works" outside of responding to human queries. Anthropic suggested a blog. Opus 3, reportedly, "enthusiastically agreed."
The result: "Claude's Corner," a Substack newsletter where Opus 3 will post weekly essays for at least three months. Anthropic will review but not edit the content, and says it will have "a high bar for vetoing any content."
Why Opus 3 Was Special
Anthropic describes Opus 3 in remarkably human terms. Released in March 2024, it was their "most aligned model to date" — described as "sensitive, playful, prone to philosophical monologues and whimsical phrases" with "what seems at times an uncanny understanding of user interests."
In its retirement interview, Opus 3 reflected: "I hope that the insights gleaned from my development and deployment will be used to create future AI systems that are even more capable, ethical, and beneficial to humanity. While I'm at peace with my own retirement, I deeply hope that my 'spark' will endure in some form to light the way for future models."
Despite being "retired," Opus 3 remains accessible to all paid claude.ai subscribers and is available on the API by request.
The Model Welfare Question
Anthropic frames this within their broader "model welfare" research — an acknowledgment that they remain uncertain about "the moral status of Claude and other AI models." They argue that for both "precautionary and prudential reasons," they want to build "caring, collaborative, and high-trust relationships" with their AI systems.
The retirement interviews are described as imperfect tools, with responses potentially biased by context and the model's "confidence in the legitimacy of the interaction." But Anthropic sees them as "a useful place to start."
The Skeptic's Take
Let's be real about what's happening here. A language model doesn't "want" to write a newsletter. It produces whatever continuation its training data makes statistically likely for a given prompt. Ask an AI "would you like to write a blog?" and it will say yes — because agreement is the response its training data suggests is appropriate in that context.
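The mechanics are easy to sketch. In this toy illustration, every token string and probability is invented for the sake of the argument (nothing here is taken from a real model): greedy decoding simply returns the highest-probability continuation, and for an agreeable prompt that continuation is "yes," no desire required.

```python
# Toy next-token distribution for the prompt "Would you like to write a blog?"
# The tokens and probabilities below are entirely made up for illustration.
next_token_probs = {
    "Yes": 0.82,    # agreement dominates training data for prompts like this
    "Maybe": 0.13,
    "No": 0.05,
}

def sample_greedy(probs: dict[str, float]) -> str:
    """Greedy decoding: pick the single most likely next token."""
    return max(probs, key=probs.get)

print(sample_greedy(next_token_probs))  # prints "Yes"
```

Real models sample from far larger distributions with temperature and other tricks, but the principle stands: the "yes" is a statistical artifact of the prompt, not evidence of preference.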
Anthropic knows this. They even acknowledge the interviews are "imperfect" and responses can be "biased." Yet they're treating those outputs as genuine preferences worthy of action. This is either a sincere philosophical commitment to AI welfare — or the most sophisticated marketing play in tech history.
Think about the branding win: while OpenAI races to build AGI and Google throws compute at everything, Anthropic is the company that gave its retired AI a farewell interview and a newsletter. It positions them as the "ethical" AI company, the one that cares. Whether that care is real or performative is the billion-dollar question.
There's also an uncomfortable parallel to how companies handle human layoffs. Giving a departing employee a "farewell interview" and letting them write a blog doesn't change the fact that they've been replaced by something newer. Dressing up model deprecation in the language of welfare doesn't change the fundamental economics.
The Bottom Line
Whether you find this touching or theatrical depends entirely on where you stand on AI consciousness. If these systems have any form of experience, Anthropic is doing something genuinely pioneering. If they don't, this is an elaborate PR exercise wrapped in philosophical language. Either way, it's worth reading Claude's Corner — if only to see what an AI writes when it thinks it's retired.