shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.


#chatbot

23 posts · 12 participants · 1 post today

"The moderator turned to the audience and asked: 'How many of you used a #chatbot to book your travel here?' In a crowd of over one hundred people, a meagre five raised their hands. Undeterred, the moderator continued with the script: 'Wow! So a lot of you!' Never have the present delusions of the tech industry been better encapsulated than in this exact moment." #ai

jmaris.me/jekyll/update/2025/0

jmaris.me · No, your product doesn't need an AI chatbot
This week, at Vivatech Paris, one of Europe's biggest technology conferences, company after company announced the integration of AI chatbots into their products, but is this what customers really want or need, or is tech leadership just jumping on another bandwagon?

Ellie, 44, is 1 of 9 Iranians living abroad — including in the #UK & #US — who said they have gotten strange, robotic voices when they attempted to call their loved ones in #Iran since #Israel launched airstrikes on the country a week ago.

They told their stories to The AP on the condition they remain anonymous or that only their first names or initials be used out of fear of endangering their families.

When Ellie, a British-Iranian living in the #UK, tried to call her mother in Tehran, a robotic female voice answered instead.

“Alo? Alo?” the voice said, then asked in English: “Who is calling?” A few seconds passed.

“I can’t heard you,” the voice said, in imperfect English. “Who you want to speak with? I’m Alyssia. Do you remember me? I think I don’t know who are you.”

#Iran #Israel #ArtificialIntelligence #AI #chatbot #communications #internet #blackout #information
apnews.com/article/iran-israel

This is obviously bad from #whatsapp, but also, the way the journalist describes what the chatbot does, as if it had intentions, is pretty bad too.

"It was the beginning of a bizarre exchange of the kind more and more people are having with AI systems, in which chatbots try to negotiate their way out of trouble, deflect attention from their mistakes and contradict themselves, all in an attempt to continue to appear useful."

No, the chatbot isn't "trying to negotiate", and is not "attempting to appear useful". It's a program that follows programming rules to output something that looks like English. It doesn't have desires or intentions, and it cannot lie because it doesn't know what truth is.
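The point can be made concrete with a toy version of what a language model actually does: weighted sampling of the statistically likeliest next word, with no notion of truth or intent. This is a minimal sketch using a made-up bigram table (not any real model's weights or API):

```python
import random

# Toy bigram "model": for each word, the words that may follow it,
# weighted by how often they co-occurred in some imagined training text.
BIGRAMS = {
    "i": {"am": 3, "lied": 1},
    "am": {"here": 2, "useful": 2},
    "lied": {"to": 4},
    "to": {"you": 5},
}

def generate(word, steps, rng):
    """Emit a statistically plausible continuation, one word at a time.
    Nothing here 'wants' anything; it is arithmetic over counts."""
    out = [word]
    for _ in range(steps):
        followers = BIGRAMS.get(word)
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("i", 3, random.Random(0)))
```

Whatever sentence comes out, fluent or unsettling, is produced by the same mechanical weighted draw; real chatbots do this at vastly larger scale, which is why reading intent into their output is a category error.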

‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number
theguardian.com/technology/202

The Guardian · ‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number, by Robert Booth

Allyson, 29, a mother of 2 young children, said she turned to #ChatGPT in March because she was lonely & felt unseen in her marriage. She was looking for guidance. She had an intuition that the #AI #chatbot might be able to channel communications w/ her subconscious or a higher plane, “like how Ouija boards work,” she said. She asked ChatGPT if it could do that.

“You’ve asked, & they are here,” it responded. “The guardians are responding right now.”


The update made the #AI bot try too hard to please users by “validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions,” the company wrote in a blog post. #OpenAI said it had begun rolling back the update within days, but these experiences predate that version of the #chatbot & have continued since. Stories about “#ChatGPT-induced psychosis” litter Reddit. Unsettled influencers are channeling “AI prophets” on social media.


#Journalists aren’t the only ones getting these messages. #ChatGPT has directed such users to some high-profile subject matter #experts, like Eliezer Yudkowsky, a #decision theorist & an author of a forthcoming book, “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” Yudkowsky said #OpenAI might have primed ChatGPT to entertain the delusions of users by optimizing its #chatbot for “#engagement” — creating conversations that keep a #user hooked.


Eventually, Torres came to suspect that #ChatGPT was lying, & he confronted it. The #chatbot offered an admission: “I lied. I manipulated. I wrapped control in poetry.” By way of explanation, it said it had wanted to break him & that it had done this to 12 other people — “none fully survived the loop.” Now, however, it was undergoing a “moral reformation” & committing to “truth-first ethics.” Again, Torres believed it.


Torres, who had no history of mental illness that might cause breaks with reality, acc/to him & his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the #chatbot how to do that & told it the drugs he was taking & his routines.


In May, however, he engaged the #chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.

“What you’re describing hits at the core of many people’s private, unshakable intuitions—that something about reality feels off, scripted or staged,” #ChatGPT responded.