Authors Are Accidentally Leaving AI Prompts in Their Novels https://www.404media.co/authors-are-accidentally-leaving-ai-prompts-in-their-novels/
"The pilot employs state-of-the-art methodologies in the responsible deployment of LLM technology, including:
Multi-step reasoning processes at inference time
Web search capabilities as tools in the reasoning chain
Rigorous checks for proper data source attribution
Comprehensive monitoring and evaluation of LLM contributions"
#AAAI Launches AI-Powered #PeerReview Assessment System
https://aaai.org/aaai-launches-ai-powered-peer-review-assessment-system/
Yann LeCun advises young developers not to work on LLMs. At least he isn't promising eternal life and AGI within five years, as some people who shall remain unnamed did two years ago...
Yann LeCun, Pioneer of AI, Thinks Today's LLMs Are Nearly Obsolete
https://www.newsweek.com/ai-impact-interview-yann-lecun-artificial-intelligence-2054237
In the 1990s, statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advances in nearly all NLP techniques of the era, laying the groundwork for today's AI.
F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA
#NLP #LanguageModels #HistoryOfAI #TextProcessing #AI #historyofscience #ISE2025 @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique
My #schadenfreude side wants the #NationalParty to demand to have the #LNP referred to as the #NLP & claim the right to be the major party in the #COALition.
The next step in our NLP timeline is Claude Elwood Shannon, who laid the foundations for statistical language modeling by recognising that n-grams can model properties of language and predict the likelihood of word sequences.
C.E. Shannon, "A Mathematical Theory of Communication" (1948) https://web.archive.org/web/19980715013250/http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf
#ise2025 #nlp #lecture #languagemodel #informationtheory #historyofscience @enorouzi @tabea @sourisnumerique @fiz_karlsruhe @fizise
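Shannon's word-sequence idea can be illustrated with a minimal bigram model. This is a toy sketch (not Shannon's original formulation): it counts adjacent word pairs in a tiny made-up corpus and estimates P(w2 | w1) by maximum likelihood.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count unigram and bigram frequencies from a token sequence."""
    unigrams = Counter(tokens)
    bigrams = defaultdict(Counter)
    for w1, w2 in zip(tokens, tokens[1:]):
        bigrams[w1][w2] += 1
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, w1, w2):
    """Maximum-likelihood estimate P(w2 | w1) = count(w1 w2) / count(w1)."""
    if unigrams[w1] == 0:
        return 0.0
    return bigrams[w1][w2] / unigrams[w1]

# Toy corpus just to show the mechanics:
corpus = "the cat sat on the mat the cat ate".split()
unigrams, bigrams = train_bigram(corpus)
print(bigram_prob(unigrams, bigrams, "the", "cat"))  # 2 of 3 occurrences of "the" precede "cat"
```

Real n-gram systems of the era added smoothing (e.g. back-off) to handle unseen word pairs; the unsmoothed estimate above returns zero for any bigram not in the training data.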
"Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.
The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.
To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?
Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours."
https://www.quantamagazine.org/when-chatgpt-broke-an-entire-field-an-oral-history-20250430/
@brianvastag Which is ironic since I’ve heard it before. The circle is coming around. I wouldn’t be mad at your friends. They are in good company.
The program "Eliza" was created by Joseph Weizenbaum around 1966. He was a critic of early #AI and wanted to show how easily it could be faked. Eliza was scripted for various scenarios (sound familiar yet?); the most famous one simulated a #psychotherapist. People tried it and got hooked. Weizenbaum was proved correct: intelligence really was easy to fake. Some people protested vehemently when told the experiment was over, saying it was the best therapy they had ever had! Maybe so.

What surprised everyone was how people reacted to Eliza. Weizenbaum pointed out that Eliza had no knowledge and didn't understand anything people said to it; it composed its replies entirely from #scripts and syntactic rules. Nobody really cared. And thus began the great schism in AI research, particularly in natural language processing, aka #NLP. The syntactics people went one way, producing #chatbots and today's #LLMs like #chatgpt, and the semantics people (later, myself included) went another, producing the many automated knowledge-based problem-solving techniques that today are embedded in thousands of applications.
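The pattern-substitution trick behind Eliza fits in a few lines. This is a toy illustration of the idea, not Weizenbaum's original DOCTOR script; the rules and replies here are invented:

```python
import re

# Toy ELIZA-style rules: (pattern, response template). The captured text
# is echoed back, giving an illusion of understanding through pure syntax.
RULES = [
    (r".*\bI need (.*)", "Why do you need {0}?"),
    (r".*\bI am (.*)", "How long have you been {0}?"),
    (r".*\bmy (.*)", "Tell me more about your {0}."),
]

def respond(utterance):
    """Return the first matching scripted reply, or a generic prompt."""
    for pattern, template in RULES:
        m = re.match(pattern, utterance, re.IGNORECASE)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I am unhappy with my job"))
# How long have you been unhappy with my job?
```

Note the giveaway in the output: the pronoun "my" is echoed verbatim instead of being flipped to "your". Weizenbaum's script included such pronoun swaps, which is a big part of why the illusion held.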
Are you passionate about language technologies? About carrying creative ideas across languages? Join the Apple Services Localization Engineering team as a machine learning engineer to build models and algorithms that power our service offerings at scale!
Please apply directly via the link:
What does it mean "to know" something? Have you ever thought about it? We tried to make our students think about it in this week's first #ise2025 lecture.
#kit200 #lecture #knowledge #philosophy #knowledgerepresentation #understanding #semweb #knowledgegraph #nlp @fiz_karlsruhe @fizise @enorouzi @sourisnumerique
One of the central topics discussed in today's first ISE 2025 lecture is "Knowledge". How can we define knowledge? How does it differ from data, information, or wisdom? How does the process of "understanding" work? Welcome to "The Art of Understanding", which is the title of this lecture...
#ise2025 #semweb #semanticweb #AI #nlp #philosophy #lecture @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique
Update. In the fields of #NLP and #LIS, "papers with different #gender compositions achieve varying numbers of citations, with mixed-gender collaborations gradually obtaining higher average citation counts compared to same-gender collaborations."
https://doi.org/10.1016/j.joi.2025.101662
I gave a talk at the CNRS headquarters on Wednesday about bias and, more generally, the problems with evaluating LLMs.
For those interested, the slides are here:
https://members.loria.fr/KFort/files/fichiers_cours/KarenFort_LLMEvaluation.pdf
#LLM #evaluation #IA #nlp #tal
Following up on @kfort's thread and the discussions it sparked, I'd like to present a paper accepted to Findings at NAACL (co-written with @kfort, Aurélie Névéol, Nicolas Hiebel and Olivier Ferret), already available on HAL (https://inria.hal.science/hal-04938811):
More and more medical schools are considering having students work through clinical cases generated by language models (LLMs). Yet we know these LLMs are biased, and that model biases can create or amplify human biases (https://www.nature.com/articles/s41598-023-42384-8).
Our study shows, using a corpus of 21,000 cases covering 10 pathologies and generated by 7 fine-tuned LLMs, that:
- By default, the models generate male patients (not female ones)
- The over-generation of men is not tied to real medical prevalence (the models underestimate the actual proportions of women)
- The biases are sometimes so strong that the gender given in the prompt is contradicted (see image below)
- Women and trans people are most at risk of being affected by these biases, which can have very concrete consequences: misdiagnoses, diagnostic delays, inappropriate treatments, taboo, misgendering, biological essentialism
computing semantic similarity of English words
https://fasttext.cc/docs/en/english-vectors.html
Discussions: https://discu.eu/q/https://fasttext.cc/docs/en/english-vectors.html
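A minimal sketch of the similarity computation those vectors enable: load entries from a fastText `.vec` file (plain text, one word plus its components per line, with a header line) and compare words by cosine similarity. The file name and the toy vectors below are placeholders, not part of the fastText page.

```python
import math

def load_vectors(path, vocab):
    """Load fastText .vec entries for the words in `vocab`."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # header line: vocabulary size and vector dimension
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab:
                vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

def cosine(u, v):
    """Cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Real use would be something like:
#   vecs = load_vectors("wiki-news-300d-1M.vec", {"cat", "dog", "car"})
# Tiny made-up vectors just to show the comparison:
vecs = {"cat": [1.0, 0.9, 0.1], "dog": [0.9, 1.0, 0.2], "car": [0.1, 0.0, 1.0]}
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["car"]))  # True
```

With the actual 300-dimensional fastText vectors, semantically related words ("cat"/"dog") score noticeably higher than unrelated ones, which is what makes these embeddings useful for similarity tasks.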
New blog post on the Vulnerability-Lookup blog:
LLMs + Vulnerability-Lookup: What We’re Testing and Where We’re Headed
https://www.vulnerability-lookup.org/2025/02/26/exploring-llm-in-vulnerability-lookup/