#nlp
Hacker News
Context Engineering Guide
https://nlp.elvissaravia.com/p/context-engineering-guide

#HackerNews #Context #Engineering #Guide #ContextEngineering #NLP #Elvissaravia

Alexandre Dulaunoy
VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification.

This paper presents VLAI, a transformer-based model that predicts software vulnerability severity levels directly from text descriptions. Built on RoBERTa, VLAI is fine-tuned on over 600,000 real-world vulnerabilities and achieves over 82% accuracy in predicting severity categories, enabling faster and more consistent triage ahead of manual CVSS scoring. The model and dataset are open source and integrated into the Vulnerability-Lookup service.

We (@cedric and I) decided to write a paper to better document how VLAI is implemented. We hope it gives others ideas for improving such models.

#vulnerability #cybersecurity #vulnerabilitymanagement #ai #nlp #opensource

@circl

🔗 https://arxiv.org/abs/2507.03607

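For readers who want to try this style of model, here is a minimal sketch of severity classification with a fine-tuned RoBERTa checkpoint via Hugging Face transformers; the model ID and label set below are placeholders for illustration, not the published VLAI artifact.

    # Hedged sketch: vulnerability description -> severity label.
    # "example-org/vlai-severity" is a hypothetical model ID, not the real VLAI checkpoint.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="example-org/vlai-severity")

    description = "Heap-based buffer overflow in the image parser allows remote code execution."
    print(classifier(description))  # e.g. [{'label': 'High', 'score': 0.91}] (labels assumed)
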
Aaron
How big of a deal would it be if someone developed a language model (kind of like ChatGPT) which didn't hallucinate, didn't use prodigious amounts of electricity/water/compute/memory, which ran locally or on a distributed user mesh instead of corporate server farms, and which remembered and learned from what you say if you want it to? Something which was reliable and testable and even interpretable -- meaning you could pop the hood and see what it's really doing. Would you be inclined to use a system like this? Are there other things you'd still take issue with?

#LLM #ChatGPT #NLP #NLU

Mark Wyner Won’t Comply :vm:
What I told Siri to say (“I hate this sap smear across our windshield”) vs. what Siri said.

#VoiceRecognition #Siri #Apple #VoiceUX #NaturalLanguageDetection #NLP

Aaron
Also, because the system learns entirely from context, it's domain-agnostic. It works for any topic, not just the ones being discussed here.

#NLP #NLU

Aaron
The only actual machine learning the system uses, aside from its purely emergent ability to learn language from context, is in the parser, where I adjust the probability of matching certain patterns based on prior success in understanding the user.

#NLP #NLU

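A minimal sketch of what that could look like, assuming nothing about Aaron's actual implementation: each parse pattern carries a weight, selection samples in proportion to the weights, and feedback from a confirmed (or failed) understanding nudges them.

    # Hedged sketch of success-weighted parse-pattern selection; all names are illustrative.
    import random

    class PatternBank:
        def __init__(self, patterns):
            # every pattern starts with the same weight
            self.weights = {p: 1.0 for p in patterns}

        def pick(self):
            # sample a pattern in proportion to its past success
            patterns = list(self.weights)
            return random.choices(patterns, weights=[self.weights[p] for p in patterns])[0]

        def feedback(self, pattern, understood):
            # reinforce patterns that led to a confirmed understanding, dampen the rest
            self.weights[pattern] *= 1.1 if understood else 0.9

    bank = PatternBank(["NP VP", "NP VP NP", "NP copula ADJ"])
    p = bank.pick()
    bank.feedback(p, understood=True)
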
Aaron
I wrote this code in my spare time, over about a decade: the parser, the knowledge graph, the conversational engine, everything from scratch in pure Python. It starts out literally knowing nothing except English grammar and learns everything from interactions with the user. In the tests I posted, it is literally learning the words from context as I use them. It had no idea what a dog or an apple or the color red is until I tell it things about them.

#NLP #NLU

Aaron
Do you see how the system answers questions it doesn't know? That's right. It says, "I don't know." Say no to hallucinations.

#NLP #NLU

Aaron
Here's another passing unit test. Again, this is stuff I wrote almost a decade ago, before LLMs were even invented.

 > The brown dog bites me.
 > The black dog barks at me.
 > The dog that bites me hates me.
 > Which dog hates me?
 < The brown dog.
 > Does the brown dog bite me?
 < Yes.
 > Does the brown dog hate me?
 < Yes.
 > Does the black dog hate me?
 < I don't know.
 > Does the dog that barks at me hate me?
 < I don't know.
 > Is the brown dog the dog that bites me?
 < Yes.

#NLP #NLU

Aaron
Just dug out some of my code from almost a decade ago, pre-LLM, and got it working again. Some examples of what it could do, taken from actual passing unit tests, are below. Lines starting with > are what the user says, and lines starting with < are what the system says.

 > The apple is red.
 > The worm is in the apple.
 > The apple is in the basket.
 > Where is the worm?
 < In the apple.
 > Where is the apple?
 < In the basket.
 > Is the worm in the apple?
 < Yes.
 > Is the apple in the basket?
 < Yes.
 > Is the worm in the basket?
 < I don't know.
 > What is in the apple?
 < The worm.
 > Is the apple red?
 < Yes.

#NLP #NLU

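A toy sketch of the behavior the transcript shows (assumptions throughout; this is not Aaron's system): assertions are stored as subject-relation-object triples, yes/no questions are answered only from stored facts, and anything not derivable gets "I don't know" instead of a guess.

    # Hedged toy fact store illustrating "answer only what you know".
    facts = set()

    def tell(subj, rel, obj):
        facts.add((subj, rel, obj))

    def ask(subj, rel, obj):
        # no transitive inference here, so "worm in basket" stays unknown even
        # though the worm is in the apple and the apple is in the basket
        return "Yes." if (subj, rel, obj) in facts else "I don't know."

    tell("apple", "is", "red")
    tell("worm", "in", "apple")
    tell("apple", "in", "basket")

    print(ask("worm", "in", "apple"))   # Yes.
    print(ask("worm", "in", "basket"))  # I don't know.
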
Seán Fobbe
Question for the digital humanities people:

Is there any good #OpenSource graphical tool for natural language processing that is both easy to use and performs a reasonable number of analyses?

I am looking for something that the average lawyer or student with a couple of weeks' training could operate.

Thanks!

#NLP #DigitalHumanities

DiSC_uibk
Have you ever struggled to find the best document retrieval model for your project? Or had to combine multiple frameworks just to get a basic #InformationRetrieval pipeline running?

Check out Rankify, developed by Abdelrahman Abdallah from the #DataScience Group @uniinnsbruck, which provides an all-in-one retrieval, re-ranking, and retrieval-augmented generation toolkit: https://www.doi.org/10.48763/000013

#AI #MachineLearning #RAG #OpenSource #FOSS #NLP #research

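For readers new to the pipeline Rankify bundles, here is a generic retrieve-then-rerank sketch using rank_bm25 and a sentence-transformers cross-encoder as stand-ins; it illustrates the two stages only and makes no claims about Rankify's actual API.

    # Generic two-stage retrieval sketch (not the Rankify API).
    from rank_bm25 import BM25Okapi
    from sentence_transformers import CrossEncoder

    docs = [
        "Mastodon is a decentralized social network.",
        "BM25 is a classic lexical retrieval function.",
        "Cross-encoders re-rank candidate documents by relevance.",
    ]
    query = "How do I re-rank retrieved documents?"

    # Stage 1: cheap lexical retrieval over the whole corpus.
    bm25 = BM25Okapi([d.lower().split() for d in docs])
    candidates = bm25.get_top_n(query.lower().split(), docs, n=2)

    # Stage 2: costlier cross-encoder re-ranking of the short candidate list.
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = reranker.predict([(query, d) for d in candidates])
    ranked = [d for _, d in sorted(zip(scores, candidates), reverse=True)]
    print(ranked[0])
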
Harald Klinke
💬 Want to use GPT-4, Claude, Gemini, Ollama & more directly from R?
Meet {ellmer}: a powerful wrapper to access a wide range of LLM providers via a unified interface.
Includes function/tool calling, structured output, image input & streaming!

📦 install.packages("ellmer")
📘 Docs: https://ellmer.tidyverse.org/

#rstats #LLM #AI #OpenSource #DataScience #RPackage #NLP

Seán Fobbe
🔔 NEW 🔔

All 4,566 plenary protocols of the German Bundestag from 1949 to 2025 (cutoff date: 24 May) are now available in the 'Corpus der Plenarprotokolle des Deutschen Bundestages' (CPP-BT; Corpus of the Plenary Protocols of the German Bundestag).

Individual speeches are included too, with each speaker's name, ID, and parliamentary group!

🔶 Download 🔶

💾 Dataset - https://doi.org/10.5281/zenodo.4542661
📒 Codebook - https://zenodo.org/records/15462956/files/CPP-BT_2025-05-24_Codebook.pdf?download=11
💻 #RStats Source Code - https://doi.org/10.5281/zenodo.4542665

🔶 Features 🔶

+ Up to 35 variables in the CSV variant
+ Plenary protocols from the 1st legislative term through the latest, 21st term as of the cutoff date
+ Split into individual speeches, including the speaker's ID, name, parliamentary group, and office (from the 18th term onward)
+ Split into protocol components: table of contents, session proceedings, annexes, list of speakers (from the 18th term onward)
+ Continuous updates (the dataset can additionally be refreshed daily via a pipeline)
+ Free of copyright restrictions
+ Open and platform-independent formats (PDF, TXT, CSV, XML, Parquet)
+ Linguistic metrics
+ Extensive codebook
+ Compilation report explaining the creation process
+ Dozens of diagrams and tables for every purpose
+ Diagrams in formats optimized for print (PDF) and the web (PNG)
+ Cryptographic signatures
+ Published source code (open source)

@rstats @politicalscience @histodons #OpenAccess #OpenSource #OpenScience #Parliament #Bundestag #Plenarprotokoll #Histodons #HistodonsDE #NLP #Dataviz #Legislative #Debate

Harald Sack
Building on this, in the 1990s statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advances in nearly all NLP techniques of the era, laying the groundwork for today's AI.

F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA.

#NLP #LanguageModels #HistoryOfAI #TextProcessing #AI #historyofscience #ISE2025 @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique

John :au: :60: :05: :12: :GP:
My #schadenfreude side wants the #NationalParty to demand to have the #LNP referred to as the #NLP & claim the right to be the major party in the #COALition. 🤗

#auspol

Harald Sack
Next step in our NLP timeline is Claude Elwood Shannon, who had already laid the foundations of statistical language modeling by recognising that n-grams can model properties of language and predict the likelihood of word sequences.

C.E. Shannon, "A Mathematical Theory of Communication" (1948): https://web.archive.org/web/19980715013250/http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf

#ise2025 #nlp #lecture #languagemodel #informationtheory #historyofscience @enorouzi @tabea @sourisnumerique @fiz_karlsruhe @fizise

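To make the n-gram idea concrete, here is a tiny bigram model in the spirit of Shannon's word-sequence probabilities; the corpus and the unsmoothed maximum-likelihood estimate are illustrative assumptions.

    # Toy bigram language model: estimate P(w_i | w_{i-1}) from counts.
    from collections import Counter

    corpus = "the dog barks and the dog bites and the cat sleeps".split()

    bigrams = Counter(zip(corpus, corpus[1:]))  # counts of adjacent word pairs
    unigrams = Counter(corpus[:-1])             # counts of context words

    def p(next_word, prev_word):
        # maximum-likelihood estimate; real systems add smoothing for unseen pairs
        return bigrams[(prev_word, next_word)] / unigrams[prev_word]

    print(p("dog", "the"))  # 2/3: "the" is followed by "dog" twice, "cat" once
    print(p("cat", "the"))  # 1/3
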
Miguel Afonso Caetano
"Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.

The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.

To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?

Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours."

https://www.quantamagazine.org/when-chatgpt-broke-an-entire-field-an-oral-history-20250430/

#AI #GenerativeAI #ChatGPT #NLP #OralHistory #LLMs #Chatbots

Erika Varis Doggett
At #NAACL this week and I’m delighted to see the name change to “Nations of the Americas” as well as this year’s special theme of multi- and cross-culturalism in #NLP.

#NLProc #AI #LLMs #LanguageModels #CompLing #ComputationalLinguistics

David J. Atkinson
@brianvastag Which is ironic, since I’ve heard it before. The circle is coming around. I wouldn’t be mad at your friends. They are in good company.

The app “Eliza” was created by Joseph Weizenbaum c. 1966. He was a critic of early #AI and wanted to show how easily it could be faked. The Eliza app was scripted for various scenarios (sound familiar yet?). The most famous one simulated a #psychotherapist. People tried it and got hooked. Weizenbaum was proved correct: intelligence was really easy to fake. Some people protested vehemently when told the experiment was over, saying it was the best therapy they ever had! Maybe so. What surprised everyone was how people reacted to Eliza. Weizenbaum pointed out that Eliza had no knowledge and didn’t understand anything people said to it. Eliza composed its replies based entirely on #scripts and syntactic rules. Nobody really cared. And thus began the great schism in AI research, particularly natural language processing aka #NLP. The syntactics people went one way, producing #chatbots and today’s #LLMs like #chatgpt, and the semantics people (later, myself included) went another, producing many automated knowledge-based problem-solving techniques that today are embedded in thousands of applications.

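For readers who have never seen it, a few lines of Python capture the flavor of Eliza's script-plus-syntax trick; the patterns below are illustrative, not Weizenbaum's actual DOCTOR script.

    # Toy Eliza-style responder: regex patterns plus pronoun reflection, no understanding.
    import re

    RULES = [
        (r"i am (.*)", "How long have you been {0}?"),
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"(.*) mother(.*)", "Tell me more about your family."),
    ]

    def reflect(fragment):
        # flip first/second person so the echo sounds like a reply
        swaps = {"my": "your", "i": "you", "me": "you", "am": "are"}
        return " ".join(swaps.get(w, w) for w in fragment.lower().split())

    def respond(utterance):
        for pattern, template in RULES:
            m = re.match(pattern, utterance.lower())
            if m:
                return template.format(*(reflect(g) for g in m.groups()))
        return "Please go on."  # default when no script rule matches

    print(respond("I am worried about my code"))
    # -> How long have you been worried about your code?
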