e516 with Michael and Michael - #AI #prompts, #browsers, #ModelCollapse & #automation along with a teeny tiny #PicoMacNano and so much more.
NEW: Rage Against The Machine Learning (deluxe edition)
https://martinh.bandcamp.com/
Here's what happens when you ask a hot new "AI" music generator to write some songs about deceptive AIs, lamenting billionaires, and catgirl hackers with their ThinkPads and geodesic domes.
#GenerativeAI is saturating the #internet with algorithm-generated content, leading to #ModelCollapse over time.
Model collapse occurs when #AI trains on its own outputs, resulting in declining quality and diversity of data.
#GenerativeAI's "model collapse" will cause it to poison itself, here's what that means
https://www.xda-developers.com/generative-ai-poison-itself/
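The feedback loop these posts describe can be sketched with a deliberately tiny toy "model" (this is an illustration, not any real training pipeline): fit a Gaussian to data, sample synthetic data from the fit, then fit the next generation only on that synthetic data. Diversity (the standard deviation) drifts toward zero over generations.

```python
import random
import statistics

def resample_generation(data, n, rng):
    """Fit a toy 'model' (mean, stdev) to data, then sample n synthetic points from it."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(42)
n = 50

# Generation 0: "human" data from a standard normal distribution.
data = [rng.gauss(0.0, 1.0) for _ in range(n)]

# Each subsequent generation trains only on the previous generation's output.
for generation in range(2000):
    data = resample_generation(data, n, rng)

final_sigma = statistics.stdev(data)
print(f"stdev after 2000 generations: {final_sigma:.4f}")
```

The small sample size (50) exaggerates the effect, but the direction is the point: every generation's estimation error compounds, and variance only ratchets down, which is the "declining quality and diversity" the posts above refer to.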
@jamiemccarthy @waldoj
It will be a very expensive #ModelCollapse.
More botshit than AGI.
https://www.superversive.co/blog/synthetic-data-is-fentanyl-for-ai
SHOT:
“Against the threat of model collapse, what is a hapless machine-learning engineer to do? The answer could be the equivalent of prewar steel in a Geiger counter: data known to be free (or perhaps as free as possible) from generative AI’s touch.”
https://www.scientificamerican.com/article/ai-generated-data-can-poison-future-ai-models/
CHASER:
#PreWarSteel is the equivalent of clean, human-generated data.
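The prewar-steel remedy can be sketched in the same toy setting (again, an illustration, not a real pipeline): keep a fixed reserve of clean "pre-AI" data, and mix a sample of it into every generation's training set alongside the synthetic output. Anchoring each fit to clean data keeps the distribution from collapsing.

```python
import random
import statistics

def fit(data):
    """Fit the toy 'model': estimate mean and stdev."""
    return statistics.fmean(data), statistics.stdev(data)

rng = random.Random(7)
n = 50

# The "prewar steel": a clean corpus drawn once, before any synthetic data exists.
reserve = [rng.gauss(0.0, 1.0) for _ in range(10_000)]

data = reserve[:n]
for generation in range(2000):
    mu, sigma = fit(data)
    # Half the next training set is synthetic output from the current model...
    synthetic = [rng.gauss(mu, sigma) for _ in range(n // 2)]
    # ...and half is sampled from the clean reserve, anchoring the fit.
    clean = rng.sample(reserve, n // 2)
    data = synthetic + clean

final_sigma = fit(data)[1]
print(f"stdev after 2000 mixed generations: {final_sigma:.4f}")
```

With the 50/50 mix the standard deviation fluctuates around its original value instead of ratcheting toward zero, which is exactly why verifiably human-generated data is being compared to steel smelted before atmospheric nuclear testing.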
/imagine salt-and-thorium mini-reactors designed by #cyberpunk and #solarpunk at #Microsoft.
Widespread LLM usage was the Chernobyl of the internet.
Synthetic machines purpose-built to automate synthetic websites full of synthetic content (which is not protected IP), whose output is then re-ingested by other synthetic machines to generate still more unprotected synthetic content: a good way to cause #ModelCollapse, and to pollute our information ecosystem too.
I feel like #LLMs were the Chernobyl of the internet and everyone is inside the containment zone.
https://www-bbc-com.cdn.ampproject.org/c/s/www.bbc.com/news/technology-67826601.amp
I wrote about model collapse in August like it was a thing that would take years. It’s occurred in four months: https://www.fastcompany.com/90998360/grok-openai-model-collapse
#ModelCollapse begs for new coinages. #AICentipede, maybe?
https://mastodon.social/@FeralRobots/111023640006528981
My predicted Word of the Year for 2024: #ModelCollapse
#DataPoisoning
https://mas.to/@carnage4life/111556407042548417
#ai people need to be saying #AIncest instead of #modelcollapse to describe the deterioration that occurs when #LLM models are fed AI-generated content.
I've seen a lot of people excited about #AI model collapse, hoping that AI generated content will poison the public well of the internet leading to less effective language models overall...
For language models at least, this problem has already been solved and has even gone in the other direction. The language models we have now can filter a data set and distinguish good content from garbage; they can now even be used to _generate higher-quality input than humans do_.
The top performing open source models have mostly been refined on AI generated content.
As far as I'm aware there isn't an equivalent for diffusion-style models (the model type generally used for image generation). We have multi-modal models of varying quality now, so I suspect it won't be long before the language-model techniques can be applied to them directly...
talking with some @rspec folks the other night about #LLM #ModelCollapse &amp; how everyone just goes to #ouroboros as a metaphor, but it's really not adequate: ouroboros has a more cosmic connotation. It's not necessarily bad, and might even signify achieving a kind of wisdom.
No, the better metaphor for "#AI" model collapse is a kind of #HumanCentipede, except that the centipede is just stitched into a circle. A Human Centipede Ouroboros, if you will.
"This 'pollution' with #AiGenerated data results in models gaining a distorted perception of reality.
Even when researchers trained the models not to produce too many repeating responses, they found #ModelCollapse still occurred, as the models would start to make up erroneous responses to avoid repeating data too frequently." #GenerativeAI #ViciousCycle
The AI feedback loop: Researchers warn of 'model collapse' as #AI trains on AI-generated content | VentureBeat
https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/