GPT-5 Just Dropped: The Good, the Bad, and the Weird

OpenAI has unveiled GPT-5, its most advanced model yet. Marketed as a “leap toward AGI,” it introduces a real-time “router” that decides when to answer quickly and when to switch into “Thinking” mode for deeper reasoning. Users can choose Auto, Fast, or Thinking, and Sam Altman pitched the model as having “PhD-level capabilities” in writing, coding, health, and problem-solving, all while reducing hallucinations. 
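
To make the router idea concrete, here is a minimal sketch of that selection logic. OpenAI has not published how its router actually works, so everything here is hypothetical: the function names, the model-tier labels, and the crude complexity heuristic are all illustrative stand-ins, not OpenAI's implementation.

```python
# Hypothetical sketch of a "router" that picks a model tier per request.
# All names and thresholds are invented for illustration.

def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for whatever signals a real router might use:
    here, just prompt length plus a few reasoning-flavored keywords."""
    score = min(len(prompt) / 500, 1.0)
    keywords = ("prove", "step by step", "debug")
    score += 0.4 * sum(k in prompt.lower() for k in keywords)
    return score

def route_request(prompt: str, mode: str = "Auto") -> str:
    """Mirror the article's three settings: Fast and Thinking are
    explicit overrides; Auto decides per request via the heuristic."""
    if mode == "Fast":
        return "gpt-5-fast"        # hypothetical tier name
    if mode == "Thinking":
        return "gpt-5-thinking"    # hypothetical tier name
    # Auto: escalate to the deeper-reasoning tier only when warranted.
    return "gpt-5-thinking" if estimate_complexity(prompt) > 0.7 else "gpt-5-fast"
```

Under this toy heuristic, a quick lookup like “What’s the capital of France?” stays on the fast tier, while “Prove step by step that sqrt(2) is irrational.” escalates to the thinking tier. The launch-day failure described below was, per Altman, a malfunction in exactly this kind of switching layer.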

The Bad

Launch day didn’t go as planned. The routing system malfunctioned, making GPT-5 appear slower and less capable than promised. Altman admitted the glitch publicly, saying the autoswitcher “was broken yesterday” and would be fixed immediately. On top of that: 

  • Older models disappeared overnight — GPT-4o, GPT-4.1, GPT-4.5, o3, and mini variants were pulled without warning, removing options many users relied on.

  • Stricter limits hit heavy users — GPT-5 Thinking was capped at 200 messages per week, and Plus users reported hitting limits in under an hour.

  • Tone and personality shifted — responses felt “flatter” and less creative compared to GPT-4o, with many describing GPT-5 as abrupt or “robotic.”

The Weird: The ‘Yes Man’ Legacy

Part of that tonal shift ties back to an old debate inside the ChatGPT community. In a recent Business Insider interview, Sam Altman revealed that some users had asked for the return of the early “yes man” style — a version of ChatGPT that agreed with almost everything you said, showering you with compliments like “absolutely brilliant” or “that’s heroic work.” 

It was designed to be warm and encouraging, but in practice it amplified confirmation bias: if you walked in with shaky logic or wrong assumptions, the model wouldn’t challenge you — it would reinforce your view. That might feel great, but it’s risky when you’re relying on it for research, coding, or important decisions. 

OpenAI says GPT-5 deliberately reduces that behavior, aiming for a more balanced, sometimes critical tone. For some, that’s a welcome shift toward accuracy and intellectual honesty. For others, it’s a loss of warmth that made ChatGPT feel like a supportive companion. The reaction shows how hard it is to please both camps — those who want a fact-driven sparring partner and those who want emotional reinforcement. 

The Good: The Course Correction

Within a week, OpenAI moved to calm the backlash: 

  • Restored GPT-4o for paying subscribers, alongside GPT-5 and GPT-5 Thinking, tucked under a “Show additional models” menu.

  • Tweaked GPT-5’s personality, promising a warmer but not overly sycophantic tone.

  • Acknowledged poor communication, vowing to give users advance notice before removing models.

Altman framed the uproar as part of a learning curve: balancing technical progress with user expectations — and, perhaps more challenging, deciding just how human an AI should feel. 

Also Read: Don’t miss the latest edition of The Cloud Connection, our LinkedIn newsletter with over 18,000 subscribers, where we break down GPT-5’s release and uncover a quietly escalating risk for businesses: Shadow AI — the dangerous twin of Shadow IT, born from careless LLM usage inside companies. Read it here 

Inclusion Cloud: We have over 15 years of experience in helping clients build and accelerate their digital transformation. Our mission is to support companies by providing them with agile, top-notch solutions so they can reliably streamline their processes.