shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.


#chatbot


#ChatGPT falsely described a Norwegian father as a child murderer.

Now the data-protection organization #Noyb has filed a complaint against #OpenAI with the Norwegian supervisory authority.

The allegation: violation of the right to data accuracy and inadequate disclosure. The case illustrates the risks of so-called AI "hallucinations," which can seriously harm real people.

heise.de/news/ChatGPT-macht-Ma

heise online · ChatGPT makes a man into a murderer: Noyb files data protection complaint · By Eva-Maria Weiß

I logged into my accounting system (Xero) this morning to discover yet another AI chatbot.

Their first example of something you can ask the chatbot is "how much do I have in outstanding invoices?"

Dear reader, this is a number displayed prominently on the dashboard of the application. If you can't find this number without a chatbot, you shouldn't be anywhere near an accounting system.

Fuck me.

HELP! AM I AN ALGORITHM? Inspired by the Oscar-winning film 'Ik ben geen robot' ('I'm Not a Robot'), I decided to run one of my own pieces of writing through an AI detector. The result? According to the algorithm, there is no less than a 40% chance that my text was written by an algorithm. That got me thinking: what does this say about #AI? What does it say about me? And how objective is this, really? I wrote a column about it for De Ingenieur: deingenieur.nl/artikelen/help- #onderwijs #chatgpt #huiswerk #robot #chatbot

If you follow me, you probably don't have an Amazon Echo within earshot. Anyone who buys one of those has certainly lost control of their life. It's not for nothing that the device has "won" several #BigBrotherAwards.

But maybe you know people who fooled themselves into thinking they were on the safe side by ticking the option that makes the Echo (Dot/Spot) do speech recognition on-device instead of sending all audio recordings to the Amazon cloud. Well, Amazon is switching that option off at the end of the month, because otherwise the bullshit generator Alexa+ wouldn't be profitable.

arstechnica.com/gadgets/2025/0

In this photo illustration, Echo Dot smart speaker with working Alexa with blue light ring seen displayed.
Ars Technica · Everything you say to your Echo will be sent to Amazon starting on March 28 · By Scharon Harding

#Musk's #AI #Chatbot was asked whether Democrats or Republicans were better for the economy. “Since WWII, Democrats have outperformed Republicans on the economy,” the chatbot replied, racking up more than 33,000 likes. “GDP growth averages 4.23% under Dems vs. 2.36% under GOP. Job creation? 1.7% yearly for Dems, 1.0% for Republicans.”

“Also, 9 of the last 10 recessions started under Republican presidents,” it continued.
huffpost.com/entry/republicans

HuffPost · Republicans Hilariously Undermined By Elon Musk's Own AI Chatbot · By Marco Margaritoff

I made another foray into the ethics of AI, this time with my colleagues Jan-Willem van der Rijt and Bram Vaassen.

arxiv.org/abs/2503.05723

We argue that some interactions with chatbots involve a kind of offense to users' dignity. When we treat chatbots as if they were fellow moral agents, we enter into an asymmetrical relation where we give moral recognition but cannot get any back. This is a failure of self-respect, a form of self-debasement.

Comments welcome!

arXiv.org · AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect · This paper investigates how human interactions with AI-powered chatbots may offend human dignity. Current chatbots, driven by large language models (LLMs), mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphise chatbots. Indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings' behaviour toward chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second-personal, relational account of dignity, we argue that interacting with chatbots in this way is incompatible with the dignity of users. We show that, since second-personal respect is premised on reciprocal recognition of second-personal authority, behaving towards chatbots in ways that convey second-personal respect is bound to misfire in morally problematic ways, given the lack of reciprocity. Consequently, such chatbot interactions amount to subtle but significant violations of self-respect: the respect we are duty-bound to show for our own dignity. We illustrate this by discussing four actual chatbot use cases (information retrieval, customer service, advising, and companionship), and propound that the increasing societal pressure to engage in such interactions with chatbots poses a hitherto underappreciated threat to human dignity.

→ Israel developing ChatGPT-like tool that weaponizes surveillance of Palestinians
972mag.com/israeli-intelligenc

“[…] Unit 8200’s #chatbot has been trained on 100 billion words of #Arabic obtained in part through #Israel’s large-scale #surveillance of #Palestinians under the rule of its #military — which experts warn constitutes a severe violation of Palestinian #rights. “We are talking about highly personal information, taken from #people who are not suspected of any crime […]”

+972 Magazine · Israel is building a ChatGPT-like tool weaponizing surveillance of Palestinians · The Israeli army is developing an AI language model using millions of intercepted conversations between Palestinians, accelerating incrimination and arrest.