

#gpt

2 posts · 2 participants · 0 posts today

In the spirit of calling Republican bigots "Mrs Cruz" and "Mrs Graham", I wonder what it's like to dehumanise extreme MAGA people like the White House Press Secretary. I guess it's an aberrant mole-rat now, since it seems only fair to do to it what it does to others.

The picture of it that AI generated for me isn't pleasant, so I marked it as sensitive.

Oh, joy. Lee Zeldin, the new EPA head:

"I've been told the Endangerment Finding is considered the Holy Grail of the Climate Change religion. For me, the US Constitution, and the laws of this nation will be strictly interpreted and followed. No exceptions".

To start with, how are the two sentences linked?

"I have been told that a wooden cup, that used to be in a gilded metal box along with the shin bone of St Benedict, found in a field in Berkshire, is the Holy Grail of the Christian religion. For me, the US Constitution, and the laws of this nation will be strictly interpreted and followed. No exceptions".

Also, the snivelly little arse-nugget knows full well that Climate Change is a real, scientific fact that people can see, smell, touch and taste, but for now he's rich and he'll be dead in 30 years, so he doesn't give a toss when he kills potentially millions of people.

This takes us back to the whole idea that political decision-makers in his position who deliberately spread and trade in lies they know will kill people should be charged with second-degree murder for their deliberate actions.

#EPA #Zeldin #Arsenugget
#Murder #ClimateChange #Lies #Misinformation #Propaganda #USPol #Politics #Sorry #Greta #Trump #HolyGrail #Weaponised #Religion #GPT #AI

Revealed: How the UK tech secretary uses ChatGPT for policy advice
New Scientist has used freedom of information laws to obtain the ChatGPT records of Peter Kyle, the UK's technology secretary, in what is believed to be a world-first use of such legislation
From https://www.newscientist.com/article/2472068-revealed-how-the-uk-tech-secretary-uses-chatgpt-for-policy-advice/
These records show that Kyle asked ChatGPT to explain why the UK’s small and medium business (SMB) community has been so slow to adopt AI. ChatGPT returned a 10-point list of problems hindering adoption, including sections on “Limited Awareness and Understanding”, “Regulatory and Ethical Concerns” and “Lack of Government or Institutional Support”.
Apparently it didn't say "because it's unhelpful and probably harmful to most SMB problems" or "what on earth are you doing asking a computer this, you fool?".

#GPT #ChatGPT #AI #UKPol #policy #BadIdeas
New Scientist · Revealed: How the UK tech secretary uses ChatGPT for policy advice, by Chris Stokel-Walker

Hi #Admins 👋,

Can you give me quotes that explain your fight against #KIScraping (AI scraping)? I'm looking for verbal images, metaphors, comparisons, etc. that explain to non-techies what's going on: the effort, the goals, the resources. (An illustrative sketch of one such measure follows at the end of this post.)

I intend to publish your quotes in a post on @campact's blog¹ (Campact is a German NGO).

The quotes should make your work 🙏 visible in a way that is generally understandable.

¹ blog.campact.de/author/friedem

Campact Blog · Friedemann Ebelt: Friedemann Ebelt campaigns for digital civil rights. On the Campact blog he writes about how digitalisation can succeed in a fair, free and sustainable way. He studied ethnology and communication science and is interested in everything that happens between politics, technology and society. His interim conclusion: we need to digitalise better!
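
For non-techies wondering what this "fight against AI scraping" can look like in practice, here is a minimal, purely illustrative sketch of one common measure: refusing requests whose User-Agent identifies a known AI crawler. GPTBot (OpenAI), CCBot (Common Crawl) and ClaudeBot (Anthropic) are published crawler names; the blocklist and the toy handler are assumptions for illustration, not any particular admin's actual configuration.

# Illustrative only: a toy version of "block known AI crawlers by User-Agent".
# GPTBot, CCBot and ClaudeBot are published crawler names; the list and the
# handler below are assumptions, not a real server's configuration.

AI_CRAWLER_AGENTS = ("GPTBot", "CCBot", "ClaudeBot")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent string names a known AI crawler."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in AI_CRAWLER_AGENTS)

def handle_request(user_agent: str) -> int:
    """Toy request handler: known AI crawlers get HTTP 403, everyone else 200."""
    return 403 if is_ai_crawler(user_agent) else 200

if __name__ == "__main__":
    print(handle_request("Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"))  # 403
    print(handle_request("Mozilla/5.0 (X11; Linux x86_64) Firefox/126.0"))  # 200

In real deployments this kind of rule usually lives in robots.txt or in the web server's configuration rather than in application code, and it only deters crawlers that identify themselves honestly.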

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation
misinforeview.hks.harvard.edu/
"Roughly two-thirds of the retrieved papers were found to have been produced, at least in part, through undisclosed, potentially deceptive use of GPT. The majority (57%) of these questionable papers dealt with policy-relevant subjects (i.e., environment, health, computing), susceptible to influence operations. Most were available in several copies on different domains (e.g., social media, archives, and repositories).
Two main risks arise from the increasingly common use of #GPT to (mass-)produce #fake, scientific #publications. First, the abundance of fabricated “studies” seeping into all areas of the #research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. A second risk lies in the increased possibility that convincingly scientific-looking content was in fact deceitfully created with #AI tools and is also optimized to be retrieved by publicly available academic search engines, particularly #GoogleScholar. However small, this possibility and awareness of it risks undermining the basis for #trust in #scientificKnowledge and poses serious societal risks."
#science #AIEthics
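
The quote above covers the study's findings rather than its method, but one generic way such GPT boilerplate can be surfaced is a phrase search for stock chatbot wording that occasionally survives into fabricated manuscripts. A rough sketch of that idea, assuming the third-party scholarly package for querying Google Scholar (the package choice, the result fields and the example phrases are assumptions, not the study's actual tooling or queries):

# Rough sketch, not the study's method: look for stock chatbot wording in
# Google Scholar results via the third-party "scholarly" package (an assumption;
# Scholar aggressively rate-limits scrapers, so treat this as illustrative).
from scholarly import scholarly

# Example phrases sometimes left behind by chatbots; chosen for illustration.
TELLTALE_PHRASES = [
    '"as of my last knowledge update"',
    '"I don\'t have access to real-time data"',
]

def suspect_hits(phrase: str, limit: int = 5):
    """Yield (year, title) for the first few Scholar results matching the phrase."""
    results = scholarly.search_pubs(phrase)
    for _, pub in zip(range(limit), results):
        bib = pub.get("bib", {})
        yield bib.get("pub_year", "?"), bib.get("title", "<no title>")

if __name__ == "__main__":
    for phrase in TELLTALE_PHRASES:
        print(phrase)
        for year, title in suspect_hits(phrase):
            print(f"  {year}  {title}")

Any hit would still need a human read before being called fabricated; the phrase search only narrows the haystack.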

Misinformation Review · GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation | HKS Misinformation Review. Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of…