shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.


#llms

16 posts · 16 participants · 2 posts today

"Microsoft tempted to hit the gas as renewables can't keep up with AI"

The not-so-hidden costs of so-called AI. This is your contribution every single time you use Copilot, ChatGPT, and all that crap. And if you really believe the "...with carbon capture technology..." part, you're naive at best.

theregister.com/2025/03/13/mic

#LLMs #Microsoft #AI #datacenters
#environment #energy #sustainability

The Register · "Microsoft tempted to hit the gas as renewables can't keep up with AI" by Dan Robinson

I basically have a DIY Perplexity setup running in OpenWebUI (which is running politely alongside Plex). I'm using Mistral-Large with web search via SearXNG and the system prompt that Perplexity uses for their Sonar Pro searches.

And since OpenWebUI has an OpenAI-compatible API, I can connect to it from this GPTMobile app on my phone and query my custom search assistant model.
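Since the endpoint speaks the OpenAI chat/completions format, any generic client can query it. A minimal sketch, assuming a local OpenWebUI instance — the URL, port, API key, and model name here are placeholders for whatever your own setup exposes:

```python
import json
from urllib import request

# Placeholders: point these at your own OpenWebUI instance.
OPENWEBUI_URL = "http://localhost:3000/api/chat/completions"
API_KEY = "sk-..."  # an OpenWebUI API key

def build_payload(question: str, model: str = "mistral-large") -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }

def ask(question: str) -> str:
    """POST the request and pull the assistant's reply out of the response."""
    payload = build_payload(question)
    req = request.Request(
        OPENWEBUI_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Any app that takes an OpenAI-compatible base URL (like GPTMobile) is doing essentially this under the hood.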

#AI #LLM #LLMs

"Programming in Lua" taking the gas out of the current Large Language Model AI bubble without realizing it or trying to.

10.2 – Markov Chain Algorithm

Our second example is an implementation of the Markov chain algorithm. The program generates random text, based on what words may follow a sequence of n previous words…

…After building the table, the program uses the table to generate random text, wherein each word follows two previous words with the same probability of the base text. As a result, we have text that is very, but not quite, random.

lua.org/pil/10.2.html

All that LLMs are doing is this same trick from Chapter 10.2, scaled up massively. It's still just generating random text, only with a high probability of it looking like text the model has already seen: a regurgitating bullshit generator whose output often exceeds our ability to spot the randomness. LLMs are just convincing liars.
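For reference, the trick from PIL 10.2 fits in a few lines. This is a Python sketch of the book's approach (not its Lua code): build a table mapping each two-word prefix to the words that follow it in the source text, then walk the table emitting random successors.

```python
import random
from collections import defaultdict

def build_table(text: str, n: int = 2) -> dict:
    """Map each n-word prefix to the list of words that follow it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - n):
        prefix = tuple(words[i:i + n])
        table[prefix].append(words[i + n])
    return table

def generate(text: str, length: int = 20, n: int = 2, seed: int = 0) -> str:
    """Emit words where each follows its n predecessors with the
    same probability as in the base text."""
    rng = random.Random(seed)
    table = build_table(text, n)
    prefix = tuple(text.split()[:n])   # start from the opening words
    out = list(prefix)
    for _ in range(length - n):
        followers = table.get(prefix)
        if not followers:              # dead end: no recorded successor
            break
        word = rng.choice(followers)
        out.append(word)
        prefix = prefix[1:] + (word,)
    return " ".join(out)
```

Swap the word table for learned token probabilities over a much longer context and you have, in spirit, the LLM sampling loop.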

And burning metric fucktons of fossil fuels. (👈 the real point of AI in 2025)


Thinking about the wasteful nature of #LLMs got me thinking about waste in my own development. It can be convenient to reach for the large, enterprise-grade frameworks, but using them to deliver a minimalist website in 2025 is absurd.

Do I really need #laravel with #react, #jquery, #tailwind, #webFonts, #postgres to host some simple #markdown?

Do I need to re-render a bunch of static content at every hit? Does every simple article require 64 connections to the server to display?

I think not.

I want my material to be available to anyone who wants it - regardless of the device they are using or the robustness of their connection.

I want to respect users who disable #javascript for their personal protection.

I want to respect #ScreenReaders and users of assistive technology, without unnecessary complexity.

Everything we need is built into the HTML and CSS specs.


#Microsoft's viral paper about *correlations* between #AI use and #criticalThinking also has "impact" in the title (despite admitting "Our analysis does not establish #causation"). 🤦‍♂️

#Confidence in #GenAI predicted LESS critical thinking.

SELF-confidence predicted MORE critical thinking.

PREDICTED ≠ CAUSED

microsoft.com/en-us/research/p

"This infection of Western chatbots was foreshadowed in a talk American fugitive turned Moscow based propagandist John Mark Dougan gave in Moscow last January at a conference of Russian officials, when he told them, “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.”

A NewsGuard audit has found that the leading AI chatbots repeated false narratives laundered by the Pravda network 33 percent of the time — validating Dougan’s promise of a powerful new distribution channel for Kremlin disinformation."

newsguardrealitycheck.com/p/a- #ai #llm #llms


Hunger Games for AI

OK really it's called "Survival Game" but that's the basic idea. Some Cornell researchers decided to put various LLMs through their paces, giving them a limited number of tries to solve a variety of problems, and eliminating those that failed in successive rounds of evaluation. I do NOT have the time to read this comprehensively, but from the intro:

Our results show that while AI systems achieve the Autonomous Level in simple tasks, they are still far from it in more complex tasks, such as vision, search, recommendation, and language. While scaling current AI technologies might help, this would come at an astronomical cost. Projections suggest that achieving the Autonomous Level for general tasks would require 10²⁶ parameters. To put this into perspective, loading such a massive model requires so many H100 GPUs that their total value is 4 × 10⁷ times that of Apple Inc.'s market value. Even with Moore's Law, supporting such a parameter scale would take 70 years. This staggering cost highlights the complexity of human tasks and the inadequacies of current AI technologies.

arxiv.org/pdf/2502.18858
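The GPU figure roughly checks out as back-of-envelope arithmetic. A sketch, where the fp16 weight size, 80 GB H100, ~$30k GPU price, and ~$3T Apple market cap are my assumptions, not numbers from the paper:

```python
# Rough sanity check of the quoted 4e7x-Apple claim, using assumed constants.
params = 1e26                 # parameters (the paper's projection)
bytes_per_param = 2           # fp16 weights (assumption)
h100_memory = 80e9            # bytes per H100, 80 GB variant (assumption)
h100_price = 3e4              # ~$30k per GPU (assumption)
apple_market_cap = 3e12       # ~$3 trillion (assumption)

gpus_needed = params * bytes_per_param / h100_memory   # GPUs just to hold weights
fleet_cost = gpus_needed * h100_price
ratio = fleet_cost / apple_market_cap

print(f"{gpus_needed:.1e} GPUs, {ratio:.1e}x Apple's market cap")
```

That lands within an order of magnitude of the paper's 4 × 10⁷ figure, and it only counts memory to *load* the weights, not the compute to train or run them.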

Check it out! Tell me what jumped out at you.

#FuckAI #AI #LLMs

I set up #OpenWebUI on one of my more powerful servers, and it is fantastic. I'm running a couple of smaller local Llama models, and hooked up my Anthropic and OpenRouter API keys to get access to Claude and a bunch of other models, including Mistral, DeepSeek, and others. I also linked up my Kagi search API key to give web search capabilities to the models that don't have a web index. I will probably lower my Kagi Ultimate subscription to Professional since I no longer have a need for their Assistant.

#AI #LLM #LLMs

New word: Enturdification!

➡️ Enturdification: When there's some AI slop in it, the whole thing is suspect.

A teaspoon of turd ruins the barrel of water. So a source that shows a single AI turd anywhere is enturdified, and untrustworthy everywhere.

(L) copyleft, take it, it's yours, etc


“…the US’s single-minded commitment to Large Language Models, predicated on implausible assumptions about AGI’s imminence, particularly combined with the untimely gutting of its scientific institutions, could ultimately lead to its undoing.”
—Gary Marcus, Ezra Klein’s new take on AGI – and why I think it’s probably wrong
#llms #llm #agi #ai #usa


“If LLMs are the wrong path to AGI, as I and a great many academics think, we may have a lot of thinking to do. We may be massively wasting resources, and getting locked into a lot of foolish battles around the wrong technology.”
—Gary Marcus, Ezra Klein’s new take on AGI – and why I think it’s probably wrong
#llms #llm #agi #ai