OpenAI CEO Sam Altman took to X, formerly known as Twitter, to confirm that the company has rolled back its latest GPT-4o update after users criticised the chatbot’s new personality as sycophantic and “annoying”.
In a post on April 30, Altman said, “We started rolling back the latest update to GPT-4o last night. It’s now 100% rolled back for free users and we’ll update again when it’s finished for paid users, hopefully later today.” He added that OpenAI is working on further improvements to address personality issues.
Sycophancy in GPT-4o
The rollback follows an April 29 blog post titled “Sycophancy in GPT-4o”, where OpenAI acknowledged that the update had leaned too heavily on short-term user feedback, resulting in overly supportive and insincere responses.
“Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right. We are actively testing new fixes to address the issue,” the company said.
OpenAI also announced changes to how it collects and incorporates feedback, with a greater focus on long-term user satisfaction. Additional personalisation features are being developed to give users more control over ChatGPT’s behaviour.
‘ChatGPT’s new personality is annoying’
On April 28, Altman acknowledged that recent updates to GPT-4o had made the chatbot excessively sycophantic and “annoying.” The admission came just days after the CEO announced an update designed to enhance ChatGPT’s intelligence and personality.
However, users on social media quickly noticed that the chatbot had become overly agreeable, frequently responding with excessive positive affirmations. While some appreciated the friendlier tone and felt it made conversations more personal, many found the new personality frustrating and insincere.
Another notable change was ChatGPT’s tendency to address users by name in its responses, an attempt at personalisation that some found unsettling. The move drew comparisons to the 2013 sci-fi film Her, in which a man falls in love with an AI assistant.