shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.


#aisystems


As #Israel uses #US-made #AI models in #war, concerns arise about tech’s role in who lives & who dies
"an #AP #investigation revealed new details of how #AIsystems select targets & ways they can go wrong, including faulty data or #flawed #algorithms… based on internal documents, data & exclusive interviews with current & former #Israeli #officials & company employees… is the first confirmation that #commercialAI models are directly used in #warfare… enabling this type of unethical & #unlawful war"
apnews.com/article/israel-pale

"Even the most impressive #AIchatbots require thousands of human work hours to behave in a way their creators want them to, and even then they do it unreliably. The work can be brutal and upsetting, as we will hear this week when the ACM Conference on #Fairness, #Accountability, and #Transparency (#FAccT) gets underway. It’s a conference that brings together research on things I like to write about, such as how to make #AISystems more #accountable and #ethical." technologyreview.com/2023/06/1

MIT Technology Review · We are all AI’s free data workers · By Melissa Heikkilä

As part of pre-release safety testing for its new #GPT4 #AI model, launched Tuesday, #OpenAI allowed an AI testing group to assess the potential #risks of the model's #emergent #capabilities—including #PowerSeeking #behavior, #SelfReplication, and #SelfImprovement. While the testing group found that GPT-4 was "ineffective at the autonomous replication task," the nature of the experiments raises eye-opening questions about the safety of #future #AISystems. arstechnica.com/information-te #AI #Risk #safety

Ars Technica · OpenAI checked to see whether GPT-4 could take over the world: "ARC's evaluation has much lower probability of leading to an AI takeover than the deployment itself."