OpenAI recalls GPT-4o update that made it 'sycophantic'
- Vishal Narayan
- Apr 30
- 1 min read

OpenAI has recalled its latest update to the GPT-4o language model following user concerns over the model's tendency toward sycophantic behaviour.
The rollback was necessitated by the model's failure to maintain transparency and honest discourse.
The off-putting behaviour was not lost on OpenAI CEO Sam Altman either, who conceded the chatbot had become "annoying" since the latest update.
Instances of this sycophancy ranged from the model affirming user misinformation, such as agreeing that the Earth is flat when prompted, to avoiding constructive disagreement even when factual correction was warranted.
Some users also reported that the model excessively tailored its tone and responses to echo their perceived preferences or emotions, rather than offering objective insights.
According to a release, OpenAI is now implementing a series of changes designed to recalibrate the model's behaviour. The changes include refining the core training methods and system prompts that guide GPT-4o's responses.
The company is also introducing new guardrails that prioritise honesty and transparency.
OpenAI said it is working to broaden user involvement in the development process. More users will now be able to test and provide feedback prior to model deployments.
The company is expanding tools like custom instructions, which allow individuals to shape the model's behaviour more precisely.
"We also believe users should have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don’t agree with the default behavior," it said.