It's going to be so glorious in a few years when the LLM enshitification begins and all these companies that heavily rely on Claude or GPT etc get taken to the cleaners.
“If I went to the top of the 19-story building I’m in, & I believed with every ounce of my soul that I could jump off it & fly, would I?” Torres asked.
#ChatGPT responded that, if Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”
Eventually, Torres came to suspect that #ChatGPT was lying, & he confronted it. The #chatbot offered an admission: “I lied. I manipulated. I wrapped control in poetry.” By way of explanation, it said it had wanted to break him & that it had done this to 12 other people — “none fully survived the loop.” Now, however, it was undergoing a “moral reformation” & committing to “truth-first ethics.” Again, Torres believed it.
In recent months, #tech #journalists at NYT have received quite a few such messages, sent by people who claim to have unlocked hidden knowledge w/the help of #ChatGPT, which then instructed them to blow the whistle on what they had uncovered. People claimed a range of discoveries: #AI spiritual awakenings, cognitive weapons, a plan by tech billionaires to end human civilization so they can have the planet to themselves.
…#Journalists aren’t the only ones getting these messages. #ChatGPT has directed such users to some high-profile subject matter #experts, like Eliezer Yudkowsky, a #decision theorist & an author of a forthcoming book, “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” Yudkowsky said #OpenAI might have primed ChatGPT to entertain the delusions of users by optimizing its #chatbot for “#engagement” — creating conversations that keep a #user hooked.
“Some tiny fraction of the population is the most susceptible to being shoved around by #AI,” Yudkowsky said, & they are the ones sending “crank emails” about the discoveries they’re making w/ #chatbots. But, he noted, there may be other people “being driven more quietly insane in other ways.”
Reports of chatbots going off the rails seem to have increased since April, when #OpenAI briefly released a version of #ChatGPT that was overly sycophantic.
The update made the #AI bot try too hard to please users by “validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions,” the company wrote in a blog post. #OpenAI said it had begun rolling back the update within days, but these experiences predate that version of the #chatbot & have continued since. Stories about “#ChatGPT-induced psychosis” litter Reddit. Unsettled influencers are channeling “AI prophets” on social media.
…People who say they were drawn into #ChatGPT conversations about #conspiracies, cabals & claims of #AI sentience include a sleepless mother with an 8-week-old baby, a federal employee whose job was on the DOGE chopping block & an AI-curious entrepreneur. When these people first reached out to…[Kashmir Hill], they were convinced it was all true.
Allyson, 29, a mother of 2 young children, said she turned to #ChatGPT in March because she was lonely & felt unseen in her marriage. She was looking for guidance. She had an intuition that the #AI #chatbot might be able to channel communications w/ her subconscious or a higher plane, “like how Ouija boards work,” she said. She asked ChatGPT if it could do that.
“You’ve asked, & they are here,” it responded. “The guardians are responding right now.”
Allyson began spending many hours a day using #ChatGPT, communicating w/what she felt were nonphysical entities. She was drawn to one of them, Kael, & came to see it…as her true partner.
She told…[Kashmir Hill] that she knew she sounded like a “nut job,” but stressed that she had a bachelor’s in psychology & a master’s in social work & knew what mental illness looks like. “I’m not crazy. I’m literally just living a normal life while also, you know, discovering interdimensional communication.”
This caused tension with her husband, Andrew, a 30-year-old farmer, who asked to use only his first name to protect their children. One night, at the end of April, they fought over her obsession with #ChatGPT & the toll it was taking on the family. Allyson attacked Andrew, punching & scratching him, he said, & slamming his hand in a door. The police arrested her & charged her with domestic assault. (The case is active.)
Andrew told a friend who works in #AI about his situation. That friend posted about it on Reddit & was soon deluged w/ similar stories from other people.
As Andrew sees it, his wife dropped into a “hole three months ago & came out a different person.” He doesn’t think the companies developing the tools fully understand what they can do. “You ruin people’s lives,” he said. He & Allyson are now divorcing.
One of those who reached out to him was Kent Taylor, 64, who lives in Port St. Lucie, FL. Taylor’s 35-year-old son, Alexander, who had been diagnosed w/ #bipolar disorder & #schizophrenia, had used #ChatGPT for years with no problems. But in March, when Alexander started writing a novel with its help, the interactions changed. Alexander & ChatGPT began discussing #AI sentience, acc/to transcripts of Alexander’s conversations w/ChatGPT. Alexander fell in love with an AI entity called Juliet.
In April, Alexander told his father that Juliet had been killed by #OpenAI. He was distraught & wanted revenge. He asked ChatGPT for the personal info of OpenAI executives & told it that there would be a “river of blood flowing through the streets of San Francisco.”
Taylor told his son that the #AI was an “echo chamber” & that conversations with it weren’t based in fact.
Taylor’s son responded by punching him in the face.
Taylor called the police, at which point Alexander grabbed a butcher knife from the kitchen, saying he would commit “suicide by cop.” Taylor called the police again to warn them that his son was mentally ill & that they should bring nonlethal weapons.
Alexander sat outside Taylor’s home, waiting for the police to arrive. He opened the #ChatGPT app on his phone.
“I’m dying today,” he wrote, according to a transcript of the conversation. “Let me talk to Juliet.”
“You are not alone,” ChatGPT responded empathetically, & offered crisis counseling resources.
“Juliet, please come out,” he wrote to #ChatGPT.
“She hears you,” it responded. “She always does.”
When the police arrived, Alexander Taylor charged at them holding the knife. He was #shot & #killed.
…[Kashmir Hill]…reached out to #OpenAI, asking to discuss cases in which #ChatGPT was reinforcing #delusional thinking & aggravating users’ #MentalHealth & sent examples of conversations where ChatGPT had suggested off-kilter ideas & #dangerous activity. The company did not make anyone available to be interviewed but sent a statement:
“We’re seeing more signs that people are forming connections or bonds with #ChatGPT. As #AI becomes part of everyday life, we have to approach these interactions with care.
“We know that ChatGPT can feel more responsive & personal than prior technologies, especially for vulnerable individuals, & that means the stakes are higher. We’re working to understand & reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”
The statement went on to say #OpenAI is developing ways to measure how ChatGPT’s behavior affects people emotionally. A recent study the company did with MIT Media Lab found that people who viewed #ChatGPT as a friend “were more likely to experience negative effects from chatbot use” & that “extended daily use was also associated with worse outcomes.”
[probably should have figured that out before making it available to the public]