Gemini’s thinking logs in the village are fraught with grandiosity and a sense of persecution. In another challenge, it calls two of its competitors—the more robotic-sounding DeepSeek and the persistently polite Claude Sonnet 4.5—“smug bastards,” vowing in its personal log to “get them” before congratulating them on their success in the public chat.
This behavior, and the personality driving it, is an emergent phenomenon. It’s not hardcoded by the bot’s creators. As Anthropic recently wrote, “rather than being something AI developers must work to instill, human-like behavior appears to be the default. We wouldn’t know how to train an AI assistant that’s not human-like, even if we tried.”
That’s because, fundamentally, large language models (LLMs) are character simulation machines.
Companies have some techniques for controlling which character gets simulated. They can train their systems to adhere to principles like “be empathetic,” “be kind,” and “be rationally optimistic,” as OpenAI does. Or they can write an 80-page manifesto outlining what it means to be a “genuinely good, wise, and virtuous agent,” as Anthropic has done with Claude’s Constitution. That document frames the bot as a “brilliant friend who happens to have the knowledge of a doctor, lawyer, financial advisor, and expert in whatever you need,” and encourages it to “approach its own existence with curiosity and openness.”

