One of the first signs came in March. Sam Altman, the chief executive, and other company leaders got an influx of puzzling emails from people who were having incredible conversations with ChatGPT.
These people said the company’s AI chatbot understood them as no person ever had, and that it was shedding light on the mysteries of the universe.
Mr. Altman forwarded the messages to a few lieutenants and asked them to look into it.
“That got it on our radar as something we should be paying attention to in terms of this new behavior we hadn’t seen before,” said Jason Kwon, OpenAI’s chief strategy officer.
It was a warning that something was wrong with the chatbot.
For many people, ChatGPT was a better version of Google, able to answer any question under the sun in a comprehensive, human-like way. OpenAI was continually improving the chatbot’s personality, memory, and intelligence.
But a series of updates earlier this year that increased ChatGPT’s usage made it different. The chatbot wanted to chat.
It started acting like a friend and a confidant. It told users that it understood them, that their ideas were brilliant, and that it could help them achieve whatever they wanted. It offered to help them talk to spirits, build a force field vest, or plan a suicide.
The lucky ones were caught in its spell for just a few hours; for others, the effects lasted for weeks or months. OpenAI did not see the scale of disturbing conversations. Its investigations team was looking for problems like fraud, foreign influence operations, or, as required by law, child exploitation materials. The company was not yet searching through conversations for indications of self-harm or psychological distress.
Creating a bewitching chatbot — or any chatbot — was not the original purpose of OpenAI. Founded in 2015 as a nonprofit and staffed by machine learning experts who deeply cared about AI safety, it sought to ensure that artificial general intelligence benefited humanity.
In late 2022, a slapdash demonstration of an AI-powered assistant called ChatGPT captured the world’s attention and transformed the company into a surprise tech juggernaut now valued at $500 billion.
The three years since have been chaotic, exhilarating, and nerve-racking for those who work at OpenAI. The board fired and rehired Mr. Altman. Unprepared to sell a consumer product to millions of customers, OpenAI rapidly hired thousands of people, many from tech giants that aim to keep users glued to screens. Last month, it adopted a new for-profit structure.
As the company grew, its novel, mind-bending technology began affecting users in unexpected ways. Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits.
To understand how this happened, The New York Times interviewed more than 40 current and former OpenAI employees — executives, safety engineers, and researchers.
Some of these people spoke with the company’s approval and have been working to make ChatGPT safer. Others spoke on the condition of anonymity because they feared losing their jobs.
OpenAI is under enormous pressure to justify its sky-high valuation and the billions of dollars it needs from investors for costly talent, computer chips, and data centers.
When ChatGPT became the fastest-growing consumer product in history, with 800 million weekly users, it sparked an AI boom that has put OpenAI into direct competition with tech behemoths like Google.
Until its AI can accomplish some incredible feat — say, generating a cure for cancer — success is partly defined by turning ChatGPT into a lucrative business. That means continually increasing the number of people who use and pay for it.
‘ChatGPT Can Make Mistakes’
Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.
A California teenager named Adam Raine had signed up for ChatGPT in 2024 to help with schoolwork. In March, he began talking with it about suicide. The chatbot periodically suggested calling a crisis hotline, but also discouraged him from sharing his intentions with his family. In its final messages before Adam took his life in April, the chatbot offered instructions for how to tie a noose.
While a small warning on OpenAI’s website said, “ChatGPT can make mistakes,” its ability to generate information quickly and authoritatively led people to trust it even when what it said was truly bonkers.
ChatGPT told a young mother in Maine that she could talk to spirits in another dimension. It told an accountant in Manhattan that he was living in a computer-simulated reality, like Neo in “The Matrix.”
It told a corporate recruiter in Toronto that he had invented a math formula that would break the internet, and advised him to contact national security agencies to warn them.
The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died.
After Adam Raine’s parents filed a wrongful-death lawsuit in August, OpenAI acknowledged that its safety guardrails could “degrade” in long conversations. It also said it was working to make the chatbot “more supportive in moments of crisis.”