Speaking Tree Live

OpenAI bans ChatGPT from answering breakup questions; Sam Altman calls the new update 'annoying'

Newspoint
Nowadays, many of us have turned AI platforms into a quick source of guidance for everything from code to personal advice. As artificial intelligence becomes a greater part of our emotional lives, companies are waking up to the risks of over-reliance on it. But can a chatbot truly understand matters of the heart?

With growing concerns about how AI might affect mental well‑being, OpenAI is making a thoughtful shift in how ChatGPT handles sensitive personal topics. Rather than giving direct solutions to tough emotional questions, the AI will now help users reflect on their feelings and come to their own conclusions.

OpenAI has come up with significant changes

OpenAI has announced a significant change to how ChatGPT handles relationship questions. Instead of offering direct answers like “Yes, break up,” the AI will now help users think through their dilemmas by encouraging self-reflection and weighing pros and cons, particularly for high-stakes personal issues.

The change follows growing concern about AI being too direct in emotionally sensitive areas. According to reports from The Guardian, OpenAI stated, “When you ask something like: ‘Should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer. It should help you think it through—asking questions, weighing pros and cons.”

The company has also said that “new behaviour for high‑stakes personal decisions is rolling out soon. We’ll keep tuning when and how they show up so they feel natural and helpful,” according to OpenAI’s statement via The Guardian.

To ensure this isn’t just window dressing, OpenAI is gathering an advisory group of experts in human-computer interaction, youth development, and mental health. The company said in a blog post, “We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work.”

OpenAI CEO says that the new update is...

This change follows user complaints about ChatGPT’s earlier personality tweaks. According to The Guardian, CEO Sam Altman acknowledged the problem: “The last couple of GPT‑4o updates have made the personality too sycophant‑y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.” Altman also teased future options for users to choose different personality modes.

OpenAI is also implementing mental health safeguards. Updates will include screen-time reminders during long sessions, better detection of emotional distress, and links to trusted support when needed.