@JustinDerrick
May I suggest tagging with #bullshitification rather than #hallucination alongside #LLM.
@timnitGebru @emilymbender
Yup, today I did a search for something, and an #LLM summary / result combined an article I wrote with another, completely unrelated article, and provided a bullshit response that appeared factual.
Admittedly, it did give my site credit alongside the other article, but NOW, the problem is that it’s smearing my reputation as my article is correct, but the summary is wrong — implying that I’m the one that fucked up.
It’s all hallucinations
The discourse on “AI” systems, chat bots, “assistants” and “research helpers” is defined by a lot of future promises. Those systems are dysfunctional, or at least not working great, right now, but there’s the promise of things getting better in the future.
Which is how we often perceive tech to work: Early versions might be a bit wonky, but there’s constant iteration and work going on to improve systems to be more capable, more robust and maybe even cheaper at some point.
The most pressing problem for many modern “AI” systems, especially the generative systems that are all the rage these days, is so-called “hallucinations” – a term describing an AI system generating incorrect information. Think of a research agent inventing a paper to quote from that doesn’t exist, for example (Google’s AI assistant telling you to put glue on pizza is not a hallucination in that regard, because that is just regurgitating information from Reddit that every toddler would recognize as a joke). Hallucinations are the big issue that many researchers are trying to address – with mixed results. Methods like RAG (retrieval-augmented generation) shift the probabilities a bit but still don’t solve the problem: Hallucinations keep happening.
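To make the RAG reference a bit more concrete, here is a minimal sketch in Python of what retrieval-augmented generation does. The document store, the word-overlap scoring, and the `generate()` call are all invented placeholders, not any real system’s API; the only point is that retrieved text gets prepended to the prompt, which shifts the model’s output probabilities toward the retrieved material without anchoring the answer to reality.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Everything here is a toy stand-in: a real system would use a vector
# database for retrieval and an actual LLM instead of the placeholder generate().

from collections import Counter

DOCUMENTS = [
    "Retrieval-augmented generation prepends retrieved documents to the prompt.",
    "Large language models predict the next token from a probability distribution.",
    "Hallucinations are outputs that are not grounded in any source.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of shared words between query and document."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a model here."""
    return f"<model output conditioned on {len(prompt)} prompt characters>"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    # The retrieved context only shifts the output distribution; it does not
    # guarantee a true answer, so hallucinations remain possible.
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("What does retrieval-augmented generation do?"))
```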
But I think that this discourse misses an important thing: Anything an LLM generates is a hallucination.
That doesn’t mean that everything LLMs generate is incorrect, far from it. What I am referring to is how hallucinations are actually defined: A hallucination is a perception you have that is not connected to any actual stimulus. You hallucinate when you perceive something in the world that you have no sensor data for.
The term hallucination itself is an anthropomorphization of these statistical systems. They don’t “know” or “think” or “lie” or do any such things. They iteratively calculate the most probable sequence of words and characters based on the data they were trained on. But if we look at how the term is applied to “AI” systems, I think there is a big misunderstanding, because it implies a difference between true and false statements that just isn’t there.
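As a rough illustration of that “iteratively calculate the most probable words” point, here is a toy next-token loop in Python. The tiny hand-written probability table is invented purely for illustration; real models learn such conditional distributions over tens of thousands of tokens, but the relevant property is the same: the sampling step has no notion of whether the resulting sentence is true or false.

```python
# Toy autoregressive generation loop. The probability table is invented for
# illustration; a real LLM learns these distributions from training data.
import random

NEXT_TOKEN_PROBS = {
    "the": {"moon": 0.4, "paper": 0.4, "glue": 0.2},
    "moon": {"is": 1.0},
    "paper": {"is": 1.0},
    "glue": {"is": 1.0},
    "is": {"made": 0.5, "cited": 0.5},
    "made": {"of": 1.0},
    "of": {"cheese.": 0.6, "rock.": 0.4},
    "cited": {"below.": 1.0},
}

def generate(prompt: str, max_tokens: int = 6) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop
            break
        # Sample the next token from the conditional distribution.
        # Nothing in this step distinguishes true continuations from false ones.
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

if __name__ == "__main__":
    print(generate("the"))  # e.g. "the moon is made of cheese." or "the paper is cited below."
```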
For humans we separate “real” perceptions from hallucinations by the link to sensor data/stimuli: If there is an actual stimulus behind the touch you feel, it’s real; if you just think you are being touched, it’s a hallucination. But for LLMs that distinction is meaningless.
A line of text that is true has – for the LLM – absolutely no different quality than one that is false. There is no link to reality, no sensor data or anchoring, there’s just the data one was trained on (that also doesn’t necessarily have any connection to reality). If using the term hallucination is useful to describe LLM output it is to illustrate the quality of all output. Everything an LLM generates is a hallucination, some just might accidentally be true.
And in that understanding the terminology might actually be enlightening, might actually help people understand what those systems are doing, where it might be appropriate to use them and – more importantly – where not.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Damn! Exactly! #Halluci_Nation "this takes #Sisters to another level"
https://www.youtube.com/shorts/ZRFDBWJAwSQ
#HalluciNation #ATribeCalledRed #MMIW
Transcription: “Let us know what you think in the comments below and don't forget to subscribe to our channel for more videos like this.”
ACTUAL AUDIO: “Ladder.”
The Halluci Nation Puts a New Spin On a Traditional Beat | Native America | PBS
Bear Witness and Tim 2oolman Hill, the duo behind @TheHalluciNation, an electronic music group, are putting a new spin on a traditional beat and taking power over how they represent themselves and Indigenous people.
#HalluciNation #Native #Indigenous
https://youtu.be/XUtNoDl5cDY
The Halluci Nation
It's Over Ft. Chippewa Travellers (Official Music Video)
#HalluciNation #Native #Indigenous
https://youtu.be/h823jU-LVO8
Hah! You figured me out, YouTube!
A collection of videos that YouTube thinks I'm interested in: #REM's "Don't Go Back To Rockville"; a mashup of #SheWantsRevenge's "Tear You Apart" and #Bauhaus' "Bela Lugosi's Dead"; #JohnOliver's "Last Week Tonight"; #BernieSanders calling Trump a Criminal; #NoamChomsky asking if HUMANITY NEARING ITS END; and "Burn Your Village To the Ground" by The #HalluciNation.
The Hallucinations Leaderboard, an Open Effort to Measure Hallucinations in Large Language Models
https://huggingface.co/blog/leaderboard-hallucinations #ai #machinelearning #hallucination
2/
The Halluci Nation
Phoenix Suns ORIGINATIV Halftime Show
#HalluciNation #Native #Indigenous
https://youtu.be/5A3SZzaaP84?si=fUxGZPGCWeOOMqmr
1/
The Halluci Nation
Babylon Ft. Northern Cree (Official Audio)
#HalluciNation #Native #Indigenous
https://youtu.be/N1wS963MFkg
#AI #hallucination is a fixture, not a bug.
The nut the #techbros really want to crack is inuring the masses to an *acceptable* level of hallucination.
Which ensures, of course, that only insiders can say the #BlackBox is hallucinating.
Which one has to believe is a pretty handy feature for people in the business of inventing oracles for the masses.
Some would say Addams Groove by MC Hammer is the best track to come from the Addams Family movies. While it is no doubt a banger, the real MVP is a late addition sampling the legendary Thanksgiving speech by Christina Ricci as Wednesday Addams from Addams Family Values:
Burn Your Village To The Ground, by The Halluci Nation.
https://youtu.be/qnGnj_e6gBw
Twirling body horror in gymnastics video exposes AI’s flaws - On Wednesday, a video from OpenAI's newly launched Sora AI video generator... - https://arstechnica.com/information-technology/2024/12/twirling-body-horror-in-gymnastics-video-exposes-ais-flaws/ #aivideogenerator #aiconfabulation #aihallucination #machinelearning #videosynthesis #aijabberwocky #confabulation #hallucination #jabberwocky #gymnastics #olympics #biz #openai #sora #ai
3/ I pressed again.
[Me] That's bull. It makes your developers look deeply uninformed. Are you sure you're giving me the real reasons why you're not allowed to share URLs?
[It] To be honest, I don't have any specific information about why I'm not allowed to share URLs. My training data simply doesn't include URLs, and I've been instructed to provide information in a way that doesn't include direct links.
PS: This isn't #hallucination. It's #waffling.