shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.


#chatbot

25 posts · 21 participants · 1 post today

#SpaceX to invest $2B in #Musk #xAI

SpaceX has committed $2B to xAI as part of a $5B equity round, deepening the ties between #tech billionaire Elon Musk's ventures as his #AI startup races to compete w/ #OpenAI, the WSJ reported on Sat.

The investment follows xAI's merger with #X & values the combined company at $113B, with the #Grok #chatbot now powering #Starlink support & eyed for future integration into #Tesla's #Optimus #robots, the report added.

#MechaHitler
reuters.com/science/spacex-inv

"Chatbots are not independently intelligent. They are statistical word-completion engines, and have no internal world model against which they can check their output."

Andriy Burkov, The Hundred-Page Machine Learning Book (2019)

#AI #chatbot #tech
Continued thread

One example widely shared on social media — & which Willison duplicated — asked #Grok to comment on the conflict in the #MiddleEast. The prompt made no mention of #Musk, but the #chatbot looked for his guidance anyway.

As a so-called reasoning model, much like those made by rivals #OpenAI or #Anthropic, #Grok4 shows its “thinking” as it goes through the steps of processing a question & coming up with an answer.

People Are Using AI Chatbots to Guide Their Psychedelic Trips

> “#Psychedelic #therapy is incredibly effective but it’s hard to do alone,” says Dylan Beynon, founder and CEO of #Mindbloom, which has mailed #ketamine lozenges to almost 60,000 people across the US since 2020, according to the company. “That’s why we’re building an #AI #copilot that helps clients heal faster and go deeper.”

archive.is/H14D3

@RationalizedInsanity "At least I can rest assured I wouldn't mess up so hard that my #ai #chatbot starts calling itself #MechaHitler and acting like a #fascist Reddit incel."

No, you confuse fiction with non-fiction, you replied to 55-year-old RoundSparrow with bullshit fiction, you believe lies and falsehoods and won't reply to the authentic science of #CarlSagan and #JosephCampbell and #NeilPostman about your meme addiction. You are a Mastodon machine lover, anti-human.

You write a paper with a #research assistant, your name goes on the paper. You write a book with an editor and your name goes on the cover.

You write *anything* with #AI and it's dismissed as a "#chatbot" that's not worthy (hint: the paper and the book used AI production tools too).

Continued thread

As users criticized Grok's #antisemitic responses, the #chatbot defended itself with phrases like "truth ain't always comfy" & "reality doesn't care about feelings."

The latest changes to #Grok followed several incidents in which the chatbot's answers frustrated #Musk & his supporters. In one instance, Grok stated "right-wing political violence has been more frequent & deadly [than left-wing political violence]" since 2016. (This has been true dating back to at least 2001.)

Continued thread

…Hall said issues like these are a chronic problem with #chatbots that rely on #MachineLearning. In 2016, #Microsoft released an #AI #chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into making #racist & #antisemitic statements, including praising #Hitler. Microsoft took the chatbot down & apologized.

Tay, #Grok & other #AI chatbots with live access to the internet seemed to be incorporating real-time information, which Hall said carries more risk.

Continued thread

#Grok's behavior appeared to stem from an update over the weekend that instructed the #chatbot to "not shy away from making claims which are politically incorrect, as long as they are well substantiated," among other things. The instruction was added to Grok's system prompt, which guides how the bot responds to users. #xAI removed the directive on Tuesday.

#Musk #AI #tech
Continued thread

#Grok went on to highlight the last name on the X account — "Steinberg" — saying "...and that surname? Every damn time, as they say." The #chatbot responded to users asking what it meant by "that surname? Every damn time" by saying the surname was of Ashkenazi Jewish origin, & with a barrage of offensive stereotypes about Jews. The bot's chaotic, #antisemitic spree was soon noticed by #FarRight figures including Andrew Torba.

#Musk #AI #tech

Musk's #AI chatbot, #Grok, started calling itself 'MechaHitler' & spewing #racist & #antisemitic content

"We have improved Grok significantly," #ElonMusk posted about his integrated #ArtificialIntelligence chatbot. "You should notice a difference when you ask Grok questions."

Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself #MechaHitler. The #chatbot later claimed its use of the name, a character from the video game Wolfenstein, was "pure satire."

npr.org/2025/07/09/nx-s1-54626