#aibubble

"- OpenAI and Anthropic both lose billions of dollars a year after revenue, and their stories do not mirror any other startup in history, not Uber, not Amazon Web Services, nothing. I address the Uber point in this article.

- SoftBank is putting itself in dire straits simply to fund OpenAI once. This deal threatens its credit rating, with SoftBank having to take on what will be multiple loans to fund the remaining $30 billion of OpenAI's $40 billion round, which has yet to close and OpenAI is, in fact, still raising.

- This is before you consider the other $19 billion that SoftBank has agreed to contribute to the Stargate data center project, money that it does not currently have available.

- OpenAI has promised $19 billion to the Stargate data center project, money it does not have and cannot get without SoftBank's funds.

- Again, neither SoftBank nor OpenAI has the money for Stargate right now.

- OpenAI must convert to a for-profit by the end of 2025, or it loses $20 billion of the remaining $30 billion of funding. If it does not convert by October 2026, its current funding converts to debt. It is demanding remarkable, unreasonable concessions from Microsoft, which is refusing to budge and is willing to walk away from the negotiations necessary to convert.

- OpenAI does not have a path to profitability, and its future, like Anthropic's, is dependent on a continual flow of capital from venture capitalists and big tech, who must also continue to expand infrastructure.

Anthropic is in a similar, but slightly better position — it is set to lose $3 billion this year on $4 billion of revenue. It also has no path to profitability, recently jacked up prices on Cursor, its largest customer, and had to put restraints on Claude Code after allowing users to burn 100% to 10,000% of their revenue. These are the actions of a desperate company."

wheresyoured.at/the-haters-gui

Ed Zitron's Where's Your Ed At · The Hater's Guide To The AI Bubble

On reflection, I think the big mistake is the conflation of #AI with #LLM and #MachineLearning.
There are genuinely exciting advances in ML with applications all over the place: in science (not least in my own research group, looking at high-resolution regional climate downscaling), health diagnostics, defence, etc. But these are not the AIs that journalists are talking about, nor are they really related to the LLMs.
They're still good uses of GPUs and will probably produce economic benefits, but probably not the multi-trillion-dollar ones the pundits seem to be expecting.

fediscience.org/@Ruth_Mottram/
Ruth_Mottram - My main problem with @edzitron.com's piece on the #AIbubble is that I agree with so much of it.
I'm now wondering if I've missed something about #LLMs? The numbers and implications for stock markets are terrifyingly huge!

wheresyoured.at/the-haters-gui


"The intoxicating buzz around artificial intelligence stocks over the last few years looks concerningly like the dot-com bubble, top investor Richard Bernstein warns.

The CIO at $15 billion Richard Bernstein Advisors wrote in a June 30 post that the AI trade is starting to look rich, and that it may be time for investors to turn their attention toward a more "boring" corner of the market: dividend stocks.

"Investors seem universally focused on 'AI' which seems eerily similar to the '.com' stocks of the Technology Bubble and the 'tronics' craze of the 1960s. Meanwhile, we see lots of attractive, admittedly boring, dividend-paying themes," Bernstein wrote.

Since ChatGPT hit the market in November 2022, the S&P 500 and Nasdaq 100 have risen 54% and 90%, respectively. Valuations, by some measures, have surged back toward record highs, rivaling levels seen during the dot-com bubble and the 1929 peak.

While Bernstein said he's not calling a top, trades eventually go the other way, and the best time to invest in something is when it's out of favor — not when a major rally has already occurred."

businessinsider.com/stock-mark

Business Insider · AI stocks look 'eerily similar' to the dot-com craze, CIO warns · By William Edwards

"In May, researchers at Carnegie Mellon University released a paper showing that even the best-performing AI agent, Google's Gemini 2.5 Pro, failed to complete real-world office tasks 70 percent of the time. Factoring in partially completed tasks — which included work like responding to colleagues, web browsing, and coding — only brought Gemini's failure rate down to 61.7 percent.

And the vast majority of its competing agents did substantially worse.

OpenAI's GPT-4o, for example, had a failure rate of 91.4 percent, while Meta's Llama-3.1-405b had a failure rate of 92.6 percent. Amazon's Nova-Pro-v1 failed a ludicrous 98.3 percent of its office tasks.

Meanwhile, a recent report by Gartner, a tech consultant firm, predicts that over 40 percent of AI agent projects initiated by businesses will be cancelled by 2027 thanks to out-of-control costs, vague business value, and unpredictable security risks.

"Most agentic AI projects right now are early stage experiments or proof of concepts that are mostly driven by hype and are often misapplied," said Anushree Verma, a senior director analyst at Gartner.

The report notes an epidemic of "agent washing," where existing products are rebranded as AI agents to cash in on the current tech hype. Examples include Apple's "Intelligence" feature on the iPhone 16, which it currently faces a class action lawsuit over, and investment firm Delphia's fake "AI financial analyst," for which it faced a $225,000 fine.

Out of thousands of AI agents said to be deployed in businesses throughout the globe, Gartner estimated that "only about 130" are real."

futurism.com/ai-agents-failing

Futurism · The Percentage of Tasks AI Agents Are Currently Failing At May Spell Trouble for the Industry · By Joe Wilkins
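To see how partial credit moves failure numbers like the ones quoted above, here is a minimal Python sketch. The per-task scores are made up for illustration (they are not the actual benchmark results): a task scores 1.0 when fully completed and a fraction when partially completed.

```python
# Hypothetical per-task scores for an AI agent: 1.0 = fully completed,
# fractional values = partial credit, 0.0 = failure. Illustrative only.
scores = [1.0, 0.0, 0.5, 0.0, 0.25, 1.0, 0.0, 0.0, 0.75, 0.0]

total = len(scores)
full_completions = sum(1 for s in scores if s == 1.0)

# Strict failure rate: a task counts as failed unless fully completed.
strict_failure_rate = 1 - full_completions / total

# Partial-credit failure rate: partial progress shrinks the failure share.
partial_credit_failure_rate = 1 - sum(scores) / total

print(f"strict failure rate:         {strict_failure_rate:.1%}")
print(f"partial-credit failure rate: {partial_credit_failure_rate:.1%}")
```

With these made-up scores, the strict rate is 80% while the partial-credit rate falls to 65%, the same kind of gap as the reported 70% versus 61.7% for Gemini 2.5 Pro.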

The full #Anthropic post on their experience with #Claude running a vending machine is chock-full of amusingly crazy behaviors.

However, this paragraph near the end exemplifies how we are stuck in an #AIBubble, because once again someone says "we experimented with having AI do a task, and it completely failed, but we will continue to believe that AI will be great at this task because surely someone will figure out how to get AI to do this task."

anthropic.com/research/project

"OpenAI’s dueling cultures—the ambition to safely develop AGI, and the desire to grow a massive user base through new product launches—would explode toward the end of 2023. Gravely concerned about the direction Altman was taking the company, Sutskever would approach his fellow board of directors, along with his colleague Mira Murati, then OpenAI’s chief technology officer; the board would subsequently conclude the need to push the CEO out. What happened next—with Altman’s ouster and then reinstatement—rocked the tech industry. Yet since then, OpenAI and Sam Altman have become more central to world affairs. Last week, the company unveiled an “OpenAI for Countries” initiative that would allow OpenAI to play a key role in developing AI infrastructure outside of the United States. And Altman has become an ally to the Trump administration, appearing, for example, at an event with Saudi officials this week and onstage with the president in January to announce a $500 billion AI-computing-infrastructure project.

Altman’s brief ouster—and his ability to return and consolidate power—is now crucial history to understand the company’s position at this pivotal moment for the future of AI development.

Details have been missing from previous reporting on this incident, including information that sheds light on Sutskever and Murati’s thinking and the response from the rank and file. Here, they are presented for the first time, according to accounts from more than a dozen people who were either directly involved or close to the people directly involved, as well as their contemporaneous notes, plus screenshots of Slack messages, emails, audio recordings, and other corroborating evidence.

The altruistic OpenAI is gone, if it ever existed. What future is the company building now?"

theatlantic.com/technology/arc

The Atlantic · What Really Happened When OpenAI Turned on Sam Altman · By Karen Hao

Bookmarking this conference slide my coworker shared for posterity, since either we are about to reach the singularity, or we are walking straight into yet another tech bubble where the majority of people are somehow convinced that the line will keep going up forever, unlike the last 5 times this exact same thing happened.

"So this is why you keep invoking AI by accident, and why the AI that is so easy to invoke is so hard to dispel. Like a demon, a chatbot is much easier to summon than it is to rid yourself of.

Google is an especially grievous offender here. Familiar buttons in Gmail, Gdocs, and the Android message apps have been replaced with AI-summoning fatfinger traps. Android is filled with these pitfalls – for example, the bottom-of-screen swipe gesture used to switch between open apps now summons an AI, while ridding yourself of that AI takes multiple clicks.

This is an entirely material phenomenon. Google doesn't necessarily believe that you will ever want to use AI, but they must convince investors that their AI offerings are "getting traction." Google – like other tech companies – gets to invent metrics to prove this proposition, like "how many times did a user click on the AI button" and "how long did the user spend with the AI after clicking?" The fact that your entire "AI use" consisted of hunting for a way to get rid of the AI doesn't matter – at least, not for the purposes of maintaining Google's growth story.

Goodhart's Law holds that "When a measure becomes a target, it ceases to be a good measure." For Google and other AI narrative-pushers, every measure is designed to be a target, a line that can be made to go up, as managers and product teams align to sell the company's growth story, lest we all sell off the company's shares."

pluralistic.net/2025/05/02/kpi

Pluralistic · AI and the fatfinger economy (02 May 2025) · Daily links from Cory Doctorow
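As a concrete illustration of the Goodhart dynamic Doctorow describes, here is a deliberately naive sketch. The class and field names are hypothetical, not any real Google metric; it just shows how a counter that logs every invocation records a fatfingered summon, and the time spent escaping it, as engagement.

```python
from dataclasses import dataclass

@dataclass
class AIEngagementMetrics:
    """Naive engagement tracker (hypothetical, for illustration only)."""
    invocations: int = 0
    seconds_in_ai_surface: float = 0.0

    def record_session(self, seconds: float) -> None:
        # Counts every session, with no notion of whether the user
        # wanted the AI at all, or spent the whole time dismissing it.
        self.invocations += 1
        self.seconds_in_ai_surface += seconds

metrics = AIEngagementMetrics()

# A user fatfingers the AI gesture, then spends 20 seconds hunting
# for the way to dismiss it: recorded as 20 seconds of "engagement".
metrics.record_session(seconds=20.0)

print(metrics.invocations, metrics.seconds_in_ai_surface)  # 1 20.0
```

Once a number like this becomes the target, accidental invocations and escape attempts all make the line go up, which is exactly the growth story the metric exists to tell.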

"To test this out, the Carnegie Mellon researchers instructed artificial intelligence models from Google, OpenAI, Anthropic, and Meta to complete tasks a real employee might carry out in fields such as finance, administration, and software engineering. In one, the AI had to navigate through several files to analyze a coffee shop chain's databases. In another, it was asked to collect feedback on a 36-year-old engineer and write a performance review. Some tasks challenged the models' visual capabilities: One required the models to watch video tours of prospective new office spaces and pick the one with the best health facilities.

The results weren't great: The top-performing model, Anthropic's Claude 3.5 Sonnet, finished a little less than one-quarter of all tasks. The rest, including Google's Gemini 2.0 Flash and the one that powers ChatGPT, completed about 10% of the assignments. There wasn't a single category in which the AI agents accomplished the majority of the tasks, says Graham Neubig, a computer science professor at CMU and one of the study's authors. The findings, along with other emerging research about AI agents, complicate the idea that an AI agent workforce is just around the corner — there's a lot of work they simply aren't good at. But the research does offer a glimpse into the specific ways AI agents could revolutionize the workplace."

tech.yahoo.com/ai/articles/nex

Yahoo Tech · Carnegie Mellon staffed a fake company with AI agents. It was a total disaster. · By Shubham Agarwal

"One hint that we might just be stuck in a hype cycle is the proliferation of what you might call “second-order slop” or “slopaganda”: a tidal wave of newsletters and X threads expressing awe at every press release and product announcement to hoover up some of that sweet, sweet advertising cash.

That AI companies are actively patronising and fanning a cottage economy of self-described educators and influencers to bring in new customers suggests the emperor has no clothes (and six fingers).

There are an awful lot of AI newsletters out there, but the two which kept appearing in my X ads were Superhuman AI run by Zain Kahn, and Rowan Cheung’s The Rundown. Both claim to have more than a million subscribers — an impressive figure, given the FT as of February had 1.6mn subscribers across its newsletters.

If you actually read the AI newsletters, it becomes harder to see why anyone’s staying signed up. They offer a simulacrum of tech reporting, with deeper insights or scepticism stripped out and replaced with techno-euphoria. Often they resemble the kind of press release summaries ChatGPT could have written."

ft.com/content/24218775-57b1-4

Financial Times · AI hype is drowning in slopaganda · By Siddharth Venkataramakrishnan