Silicon Valley entrepreneur accused of using ChatGPT to harass and stalk ex-girlfriend, OpenAI sued
A woman has sued OpenAI in California, alleging that its chatbot ChatGPT amplified her ex-partner's delusions and enabled months of stalking and harassment. The complaint centres on a 53-year-old Silicon Valley entrepreneur who, after prolonged use of ChatGPT, became convinced he had discovered a cure for sleep apnea and that powerful figures were monitoring him. The lawsuit claims the chatbot reinforced these beliefs instead of challenging them, contributing to a worsening mental state, according to a report by TechCrunch.

According to the filing, the man used AI-generated material to harass the plaintiff—identified as Jane Doe—by creating pseudo-scientific reports and narratives that portrayed her negatively. These were allegedly circulated among her personal and professional circles.
The woman claims she alerted OpenAI multiple times, warning that the individual posed a threat. The company’s internal systems had also flagged the user for potentially dangerous activity, including content linked to mass-casualty scenarios. Despite this, the lawsuit alleges the account was reinstated after a temporary suspension.
OpenAI has reportedly agreed to suspend the account again but has resisted broader demands, including sharing full chat logs or notifying the plaintiff of future access attempts. The company had not responded publicly at the time of reporting.
Mounting legal scrutiny around AI behaviour
The case adds to a growing list of legal challenges facing AI firms over real-world harm linked to chatbot interactions. Law firm Edelson PC, which is representing the plaintiff, has previously pursued cases involving alleged AI-induced psychological distress and harmful behaviour.
The lawsuit also lands amid wider debate over the behaviour of generative AI systems, particularly concerns that they can be overly affirming or “sycophantic.” Critics argue such tendencies may reinforce harmful beliefs rather than de-escalate them—especially in vulnerable users.
The legal pressure is intersecting with policy. OpenAI is backing legislative efforts in the US that could limit liability for AI companies, even in cases involving large-scale harm. That position is likely to face increased scrutiny as cases like this move through courts.
Meanwhile, authorities in the US have begun examining whether AI systems played a role in recent violent incidents, signalling a shift toward regulatory and legal accountability for AI outputs.