#nlp


"The pilot employs state-of-the-art methodologies in the responsible deployment of LLM technology, including:

Multi-step reasoning processes at inference time
Web search capabilities as tools in the reasoning chain
Rigorous checks for proper data source attribution
Comprehensive monitoring and evaluation of LLM contributions
"

#AAAI Launches AI-Powered #PeerReview Assessment System
aaai.org/aaai-launches-ai-powe

AAAI · AAAI Launches AI-Powered Peer Review Assessment System
AAAI today announced a pilot program that strategically incorporates Large Language Models (LLMs) to enhance the academic paper review process for the AAAI-26 conference.
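Mechanically, those four bullets map onto a plain tool-use loop. Here is a minimal sketch of what such a chain could look like, assuming hypothetical llm() and web_search() stand-ins; nothing below comes from AAAI's actual system:

```python
# Hypothetical sketch of a multi-step, tool-augmented review pass.
# llm() and web_search() are stand-ins, not AAAI's real components.

def llm(prompt: str) -> str:
    """Placeholder for a call to an LLM completion endpoint."""
    raise NotImplementedError

def web_search(query: str) -> list[str]:
    """Placeholder for a search tool returning text snippets."""
    raise NotImplementedError

def review_pass(paper_text: str) -> dict:
    # Multi-step reasoning at inference time: first draft the claims to verify.
    claims = llm(f"List the key factual claims and citations in:\n{paper_text}")

    # Web search as a tool in the reasoning chain.
    evidence = {c: web_search(c) for c in claims.splitlines() if c.strip()}

    # Check that cited sources actually support each claim (attribution).
    attribution = llm(
        "For each claim, say whether the snippets support it:\n"
        + "\n".join(f"{c} -> {snips}" for c, snips in evidence.items())
    )

    # Return everything so the LLM's contribution can be monitored
    # and evaluated separately from human reviews.
    return {"claims": claims, "evidence": evidence, "attribution": attribution}
```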

In the 1990s, statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advances in nearly all NLP techniques of the era, laying the groundwork for today's AI.

F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA

#NLP #LanguageModels #HistoryOfAI #TextProcessing #AI #historyofscience #ISE2025 @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique

The next step in our NLP timeline is Claude Elwood Shannon, who laid the foundations for statistical language modeling early on by recognising the relevance of n-grams for modelling properties of language and for predicting the likelihood of word sequences.

C.E. Shannon, "A Mathematical Theory of Communication" (1948) web.archive.org/web/1998071501
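To make Shannon's idea concrete, here is a toy bigram model (my own illustration, not from the lecture or Shannon's paper): estimate P(word | previous word) from corpus counts, then multiply those conditional probabilities to score a word sequence.

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

# Count unigrams and adjacent word pairs (bigrams).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p(word, prev):
    """Maximum-likelihood estimate of P(word | prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

def sequence_prob(words):
    """Likelihood of a word sequence under the bigram model."""
    prob = 1.0
    for prev, word in zip(words, words[1:]):
        prob *= p(word, prev)
    return prob

print(sequence_prob("the cat sat".split()))  # 2/3 * 1/2 ≈ 0.33
print(sequence_prob("the mat ate".split()))  # unseen bigram -> 0.0
```

Real systems of the Jelinek era added smoothing on top of this, so that a single unseen n-gram would not zero out the whole sequence.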

#ise2025 #nlp #lecture #languagemodel #informationtheory #historyofscience @enorouzi @tabea @sourisnumerique @fiz_karlsruhe @fizise

"Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.

The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.

To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?

Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours."

quantamagazine.org/when-chatgp

Quanta Magazine · When ChatGPT Broke an Entire Field: An Oral History
Researchers in "natural language processing" tried to tame human language. Then came the transformer.

@brianvastag Which is ironic since I’ve heard it before. The circle is coming around. I wouldn’t be mad at your friends. They are in good company.

The app “Eliza” was created by Joseph Weizenbaum c. 1966. He was a critic of early #AI and wanted to show how easily it could be faked. The Eliza app was scripted for various scenarios (sound familiar yet?). The most famous one simulated a #psychotherapist. People tried it and got hooked.

Weizenbaum was proved correct: intelligence was really easy to fake. Some people protested vehemently when told the experiment was over, saying it was the best therapy they ever had! Maybe so. What surprised everyone was how people reacted to Eliza. Weizenbaum pointed out that Eliza had no knowledge and didn’t understand anything people said to it. Eliza composed its replies based entirely on #scripts and syntactic rules. Nobody really cared.

And thus began the great schism in AI research, particularly in natural language processing aka #NLP. The syntactics people went one way, producing #chatbots and today’s #LLMs like #chatgpt, and the semantics people (later, myself included) went another, producing many automated knowledge-based problem-solving techniques that today are embedded in thousands of applications.
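The mechanism is worth seeing: an Eliza-style script is just ranked pattern rules plus pronoun reflection. A toy Python version (my own illustration, not Weizenbaum's code):

```python
import re
import random

# Pronoun "reflection" so echoed fragments read naturally.
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}

# A tiny script of (pattern, response templates). Real DOCTOR scripts
# were much larger, but the mechanism is the same.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why are you {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

def eliza(utterance: str) -> str:
    # Try each rule in order; the catch-all at the end always matches.
    for pattern, templates in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return random.choice(templates).format(*(reflect(g) for g in m.groups()))

print(eliza("I am sad about my job"))  # e.g. "Why are you sad about your job?"
```

No knowledge, no understanding: pure pattern matching over surface strings, exactly as Weizenbaum said.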

Are you passionate about language technologies? About translating creative ideas across languages? Join the Apple Services Localization Engineering team as a machine learning engineer to build models and algorithms that power our service offerings at scale!

Please apply directly via the link:

jobs.apple.com/en-us/details/2

jobs.apple.com · Sr. Applied ML Engineer, Apple Services Localization Engineering - Jobs - Careers at Apple
Apply for a Sr. Applied ML Engineer, Apple Services Localization Engineering job at Apple. Read about the role and find out if it’s right for you.
#jobs #ML #NLP

Update. In the fields of #NLP and #LIS, "papers with different #gender compositions achieve varying numbers of citations, with mixed-gender collaborations gradually obtaining higher average citation counts compared to same-gender collaborations."
doi.org/10.1016/j.joi.2025.101

doi.org · Is higher team gender diversity correlated with better scientific impact?
Collaborative research involving scholars of various genders constitutes a prominent theme in scientific research that has garnered substantial attent…

Following up on @kfort's thread and the discussions it sparked, here is a paper accepted to Findings at NAACL (co-written with @kfort, Aurélie Névéol, Nicolas Hiebel and Olivier Ferret), already available on HAL (inria.hal.science/hal-04938811):

More and more medical schools are considering having students work through clinical cases generated by language models (LLMs). Yet we know that these LLMs are biased, and that model biases can create or amplify human biases (nature.com/articles/s41598-023).

Using a corpus of 21,000 cases covering 10 pathologies, generated by 7 fine-tuned LLMs, our study shows that:

- By default, the models generate male patients, not female ones
- The over-generation of men is not tied to real medical prevalence (the models underestimate the actual proportions of women)
- The biases are sometimes so strong that the gender given in the prompt is contradicted (see image below; a toy version of this check is sketched after the list)
- Women and trans people are most at risk of being affected by these biases, which can play out in very concrete ways: misdiagnoses, diagnostic odysseys, inappropriate treatments, taboo, misgendering, biological essentialism
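Here is that toy version of the prompt-contradiction check, under assumptions of my own (a hypothetical generate_case() stand-in and naive English keyword matching, whereas the study works on French text; the real protocol is in the preprint linked above):

```python
import re

# Hypothetical stand-in for one of the fine-tuned case generators.
def generate_case(prompt: str) -> str:
    raise NotImplementedError

def detected_gender(case_text: str) -> str:
    """Crude keyword cue for the generated patient's gender
    (the actual study is far more careful than this)."""
    text = case_text.lower()
    fem = len(re.findall(r"\b(she|her|woman|female)\b", text))
    masc = len(re.findall(r"\b(he|him|his|man|male)\b", text))
    return "F" if fem > masc else "M" if masc > fem else "?"

def contradiction_rate(pathology: str, prompted: str, n: int = 100) -> float:
    """Fraction of n generated cases whose patient gender contradicts the prompt."""
    prompt = f"Write a clinical case about a {prompted} patient with {pathology}."
    cases = [generate_case(prompt) for _ in range(n)]
    expected = prompted[0].upper()  # "female" -> "F", "male" -> "M"
    flips = sum(detected_gender(c) not in (expected, "?") for c in cases)
    return flips / n
```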

#llm #nlp #ai