#modelcollapse

#GenerativeAI is saturating the #internet with algorithm-generated content, leading to #ModelCollapse over time.

Model collapse occurs when #AI trains on its own outputs, resulting in declining quality and diversity of data.

#GenerativeAI's "model collapse" will cause it to poison itself, here's what that means
xda-developers.com/generative-

XDA Developers · Generative AI's "model collapse" will cause it to poison itself, here's what that means: Model collapse is an impending issue when it comes to AI, and here's what it means.
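Not from the linked article, but a minimal toy sketch of the mechanism described above, under a deliberately crude assumption: a one-dimensional Gaussian stands in for the generative model, and each new "generation" is fit only to samples drawn from the previous one.

```python
# Toy illustration of model collapse: a 1-D Gaussian stands in for a
# generative model, and each new "generation" is refit only on a finite
# sample drawn from its predecessor.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0      # generation 0: the original, human-generated data
n = 50                    # finite training set drawn at each generation

for gen in range(1, 201):
    synthetic = rng.normal(mu, sigma, n)           # "scrape" the previous model's outputs
    mu, sigma = synthetic.mean(), synthetic.std()  # refit the model on them
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mean={mu:+.4f}  std={sigma:.4f}")

# With the default ddof=0, the fitted variance shrinks by (n-1)/n in
# expectation every generation, so over enough rounds the distribution
# narrows toward a point: later models have forgotten the spread (the
# tails) of the original data, which is the diversity loss described above.
```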

SHOT:

“Against the threat of model collapse, what is a hapless machine-learning engineer to do? The answer could be the equivalent of prewar steel in a Geiger counter: data known to be free (or perhaps as free as possible) from generative AI’s touch.”

scientificamerican.com/article

Scientific American · AI-Generated Data Can Poison Future AI Models: As AI-generated content fills the Internet, it's corrupting the training data for models to come. What happens when AI eats itself?

CHASER:

#PreWarSteel is the equivalent of clean, human-generated data.

superversive.co/blog/synthetic
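A minimal sketch of the "prewar steel" idea in the SHOT above: keep only documents whose provenance predates the generative-AI era. The cutoff date, the record fields, and the is_prewar helper are illustrative assumptions, not a real pipeline.

```python
# Sketch of a "pre-war steel" data filter: retain only documents archived
# before the generative-AI boom, so the corpus is (as far as possible) free
# of model-generated text. Cutoff and fields are assumptions for illustration.
from datetime import date

GENAI_ERA_CUTOFF = date(2022, 11, 30)   # assumed cutoff (ChatGPT's public release)

def is_prewar(doc: dict) -> bool:
    """True if the document was archived before the cutoff date."""
    return doc["archived_on"] < GENAI_ERA_CUTOFF

corpus = [
    {"text": "field notes from a 2014 blog post", "archived_on": date(2014, 6, 1)},
    {"text": "listicle published last month",     "archived_on": date(2024, 5, 20)},
]

prewar_corpus = [doc for doc in corpus if is_prewar(doc)]
print(len(prewar_corpus), "of", len(corpus), "documents kept")
```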

Synthetic machines purpose-built to automate synthetic websites full of synthetic content (which is not protected IP), whose output is then re-ingested by other synthetic machines to generate still more unprotected synthetic content, are a good way to cause #ModelCollapse and to pollute our information ecosystem.

I feel like #LLMs were the Chernobyl of the internet and everyone is inside the containment zone.

www-bbc-com.cdn.ampproject.org

BBC News · NY Times sues Microsoft and OpenAI for 'billions': The US news organisation claims millions of its articles were used without permission to train ChatGPT.

I've seen a lot of people excited about #AI model collapse, hoping that AI-generated content will poison the public well of the internet, leading to less effective language models overall...

For language models, at least, this problem has already been solved and has even gone in the other direction. The language models we have now can filter and distinguish between good and garbage content in a data set. They can even be used to _generate higher-quality input than humans_.

The top-performing open-source models have mostly been refined on AI-generated content.

As far as I'm aware, there isn't an equivalent for diffusion-style models (the model type generally used for image generation). We have multi-modal models of varying quality now, so I suspect it's not long before the language-model techniques can be applied to them directly...
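A rough sketch of the filtering step that post describes: score candidate training examples (human- or model-generated) with some quality judge and keep only the ones above a threshold before further training. quality_score and toy_judge are hypothetical stand-ins, not any particular library's API.

```python
# Filter a mixed corpus with a quality judge before using it as training data.
# The judge is a placeholder: in practice it could be a classifier, a reward
# model, or an LLM prompted to rate the text.
from typing import Callable

def filter_corpus(
    examples: list[str],
    quality_score: Callable[[str], float],  # hypothetical judge: higher = better
    threshold: float = 0.5,
) -> list[str]:
    """Keep only examples the judge rates at or above the threshold."""
    return [ex for ex in examples if quality_score(ex) >= threshold]

# Toy usage with a stand-in judge that rewards longer, properly punctuated text.
def toy_judge(text: str) -> float:
    return min(1.0, len(text) / 100) * (1.0 if text.rstrip().endswith(".") else 0.5)

candidates = [
    "Short spam!!!",
    "A self-contained explanation of model collapse with a worked example.",
]
print(filter_corpus(candidates, toy_judge))  # keeps only the second example
```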

talking with some @rspec folks the other night about #LLM #ModelCollapse & everyone just going to #ouroboros as a metaphor, but it's really not adequate because ouroboros has a more cosmic connotation - it's not necessarily bad, might even signify achieving a kind of wisdom.

No, the better metaphor for "#AI" model collapse is a kind of #HumanCentipede, except that the centipede is just stitched into a circle. A Human Centipede Ouroboros, if you will.

"This 'pollution' with #AiGenerated data results in models gaining a distorted perception of reality.

Even when researchers trained the models not to produce too many repeating responses, they found #ModelCollapse still occurred, as the models would start to make up erroneous responses to avoid repeating data too frequently." #GenerativeAI #ViciousCycle

The AI feedback loop: Researchers warn of 'model collapse' as #AI trains on AI-generated content | VentureBeat
venturebeat.com/ai/the-ai-feed

VentureBeat · The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated content. By Carl Franzen
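Not what the cited researchers did (they adjusted training), but a minimal decoding-time illustration of the anti-repetition pressure the quote mentions: a CTRL-style repetition penalty that down-weights tokens the model has already produced. Vocabulary and numbers are made up; the point is that suppressing repeats only changes which token wins, it does not make the substitute answer correct.

```python
# CTRL-style repetition penalty applied to next-token logits at sampling time:
# logits of already-generated tokens are divided (if positive) or multiplied
# (if negative) by the penalty, discouraging the model from repeating itself.
import numpy as np

def penalize_repeats(logits: np.ndarray, generated_ids: list[int], penalty: float = 1.3) -> np.ndarray:
    """Return logits with already-seen tokens down-weighted."""
    out = logits.copy()
    for tok in set(generated_ids):
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = np.array([2.0, 1.8, 0.5, -1.0])   # the model strongly prefers token 0 again
already_generated = [0, 0, 0]              # ...but it has emitted token 0 three times
print(np.argmax(penalize_repeats(logits, already_generated)))  # now picks token 1 instead
```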