#perplexity

People have low expectations for AI. I asked ChatGPT (paid account, not the free stuff) what would make ChatGPT better. I think the answer is much higher quality than people keep saying we should expect. This is what I got:

--- My Prompt:
With DeepSeek, there are things it is very good at, but it fails quickly and rather spectacularly. Your models tend to break down as well, but somewhere there must be a model that will be able to stay on topic, even when the topic changes, so that conversations move around like human conversations move around subject-to-subject. What would take ChatGPT over the top?

--- ChatGPT reply:
Here’s what it’ll take to get closer to that human-like conversation flow:

  1. Long-Term Memory That Actually Works. Most models either have no memory or they remember only in a clunky, artificial way.

A better memory system would not just recall facts, but contextual relevance—like “we talked about LLM efficiency two days ago and you hate cheerleader language.”

It has to be smart recall, not just data storage.

  2. Thread Awareness
    Imagine a model that can say: “You mentioned DeepSeek earlier. Should I compare it again now that we’ve brought up open-source tools?”
    That requires models to track not just words, but conversational branches—like a tree, not a straight line.

  3. Graceful Fallbacks
    When a model doesn’t know, it should gracefully pivot, not hallucinate or go off-topic. This kind of conversational “stability” is rare and subtle.

Most models that fail “spectacularly,” as you said, are trying to bluff through complexity instead of narrowing scope or clarifying.

  4. Hybrid Contextual Reasoning
    The future model needs to reason across subjects rather than switching on and off.

If we're talking about language models, and you shift to science fiction, it should be able to bring up Asimov or neural networks in storytelling—bridging topics, not just hopping between them.

Right now, most models either:

- Stay sharp but shallow (good at Q&A, bad at memory), or
- Drift into friendly nonsense (chatty, but brittle)

The sweet spot will be a system that grows with you across time and topics—and doesn’t forget what you value in a conversation.

....

When you see stuff posted online meant to minimize the quality of output you can expect from AI, keep in mind: this was a single question, with no leading prompt. I think this is the kind of answer we want from our AI devices.

Thoughts? Feel free to tear this response apart. It needs that kind of human analysis.
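
One way to picture the "thread awareness" idea in point 2 of the reply: store the conversation as a tree of messages rather than a flat transcript, so an earlier subject can be located and resumed in a new branch. Here is a minimal sketch in Python; every name in it (Node, find_topic, the topic labels) is an illustrative assumption, not any vendor's actual design.

```python
# Sketch of "a tree, not a straight line": each message node can branch,
# so the assistant can walk back to an earlier subject (e.g. DeepSeek)
# and resume it even after the conversation has moved on.
from dataclasses import dataclass, field

@dataclass
class Node:
    role: str                      # "user" or "assistant"
    text: str
    topic: str                     # coarse subject label used for recall
    children: list["Node"] = field(default_factory=list)

    def branch(self, role: str, text: str, topic: str) -> "Node":
        child = Node(role, text, topic)
        self.children.append(child)
        return child

def find_topic(root: Node, topic: str) -> list[Node]:
    """Depth-first search for every node on a given subject."""
    hits = [root] if root.topic == topic else []
    for child in root.children:
        hits.extend(find_topic(child, topic))
    return hits

# Usage: the conversation drifts to open-source tools, then the
# assistant locates the earlier DeepSeek thread to compare against.
root = Node("user", "DeepSeek fails spectacularly at...", "deepseek")
reply = root.branch("assistant", "Here's where it breaks down...", "deepseek")
reply.branch("user", "What about open-source tools?", "open-source")
print([n.text for n in find_topic(root, "deepseek")])
```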

#ai #openai #deepseek

#US / Discredited pro-Israel organization targets Wikipedia while ignoring its own documented bias

The ADL published a report titled "Editing for Hate" claiming 30 "bad faith" Wikipedia editors conducted a "coordinated campaign" to insert anti-Israel and pro-Hamas narratives. The ADL is demanding that #Google and other #AI companies refrain from using Wikipedia as a source until Wikipedia takes steps to combat what the ADL considers bias in its coverage of Israel.

Background

Right-wing figures are mounting a coordinated assault on #Wikipedia following #trump's reelection: #musk has called to "defund" the site after it documented his Nazi-style salute at the inauguration; tech executives like #Perplexity's CEO are demanding an "unbiased" alternative; the Heritage Foundation is planning to use facial recognition and hacked data to "identify and target" volunteer editors; and the New York Post has published an editorial citing a "bombshell report" claiming Wikipedia equates "true" with "left wing". All of this appears to be a strategic campaign to intimidate and undermine one of the internet's last major platforms for transparent, collaborative information that remains free from paywalls and corporate influence.

The infamously unreliable (as per Wikipedia itself) pro-Israel Islamophobic organization, whose CEO, Jonathan Greenblatt, isn't bothered at all by Musk's sieg heil, has been accused of:

[…] Even though the ADL is integrated into community work on a range of issues, it has a history and ongoing pattern of attacking social justice movements led by communities of color, queer people, immigrants, Muslims, Arabs, and other marginalized groups, while aligning itself with police, right-wing leaders, and perpetrators of state violence. More disturbing, it has often conducted those attacks under the banner of “civil rights.” This largely unpublicized history has come increasingly to light as activists work to make sense of the ADL’s role in condemning the Movement for Black Lives, Palestinian rights organizing, and Congressional Representative Ilhan Omar, among others.

See droptheadl.org/

The #Wikimedia Foundation responded that they found the ADL report "flawed" with "troubling conclusions that are not supported by the Anti-Defamation League's data":

[…] “The values of Wikipedia and the Wikimedia Foundation reflect our commitment to integrity and accuracy, and we categorically condemn antisemitism and all forms of hate. Though our preliminary review of this report finds troubling and flawed conclusions that are not supported by the Anti-Defamation League's data, we are currently undertaking a more thorough and detailed analysis. It is unfortunate that we were not asked to provide context that might have helped allay some of the concerns raised,”

@palestine
@israel
#IsraelWarCrimes
#GazaGenocide

I basically have a DIY Perplexity setup running in OpenWebUI (which is running politely alongside Plex). I'm using Mistral-Large with web search via SearXNG and the system prompt that Perplexity uses for their Sonar Pro searches.

And since OpenWebUI has an OpenAI-compatible API, I can connect to it from this GPTMobile app on my phone and query my custom search assistant model.
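
For anyone wanting to reproduce the phone-to-server hookup: because OpenWebUI speaks the OpenAI wire format, any standard OpenAI client can talk to it by overriding the base URL. A minimal sketch in Python; the host, port, /api path, API key, and the model name "sonar-pro-diy" are all assumptions to swap for your own instance's values.

```python
# Query a custom OpenWebUI model through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/api",  # assumed OpenWebUI host and API path
    api_key="sk-...",                      # key generated in your OpenWebUI settings
)

response = client.chat.completions.create(
    model="sonar-pro-diy",  # hypothetical name of the custom search-assistant model
    messages=[{"role": "user", "content": "What's new in SearXNG this year?"}],
)
print(response.choices[0].message.content)
```

This is the same trick GPTMobile uses: point any OpenAI-compatible client at the OpenWebUI endpoint and the custom model shows up like any hosted one.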

#AI #LLM #LLMs

A new report has exposed the stark reality behind AI search engine promises, revealing a plunge in referral traffic for publishers and a surge in covert website scraping by AI giants like OpenAI and Perplexity.

computing.co.uk/news/2025/ai/a

www.computing.co.uk: "AI search is hurting publisher traffic - badly". Analysis shows a massive fall in traffic to publishers' websites since the advent of AI search.

📜 Large Language Models in #archives : Prompt Engineering for Archivists

Can LLMs assist in writing archival finding aids? 🤔 Paul M. Näger & @dom3nic tested #gpt4o on metadata from the Westphalian Literary Archive.
🔍 Approach:
➡️ Naïve prompt: “Summarize” → Generic, cliché-filled results.
➡️ Advanced prompt engineering: Importing CVs (generated with #perplexity), persona prompting as an archivist, and structured guidance → Much better performance!
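
As a rough illustration of the contrast between the two approaches (not the authors' actual prompts; the persona text, metadata string, and model call here are assumptions):

```python
# Naive "summarize" prompting vs. persona prompting for finding aids.
from openai import OpenAI

client = OpenAI()

metadata = "Letter, 1923, correspondence of a Westphalian author..."  # illustrative record

# Naive prompt: tends to produce generic, cliche-filled summaries.
naive = [{"role": "user", "content": f"Summarize: {metadata}"}]

# Persona prompt: frame the model as an archivist and give structured guidance.
persona = [
    {"role": "system", "content": (
        "You are an experienced archivist writing a finding aid. "
        "Describe scope and content soberly; no marketing language. "
        "Structure: 1) creator, 2) date range, 3) contents, 4) context."
    )},
    {"role": "user", "content": metadata},
]

for messages in (naive, persona):
    out = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(out.choices[0].message.content, "\n---")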
#Dhd2025 #Dhd #AI #DigitalHumanities


@devontechnologies It would be really neat if you would collaborate with @bbedit on a common way to specify AI chat models. Users of #BBEdit’s AI Chat Worksheets could then have the same models available across both applications.

See barebones.com/support/technote for how it’s currently done with a fairly simple JSON-based format. As an example, I’ve successfully used it to add #Perplexity to BBEdit.

www.barebones.com: Bare Bones Software | BBEdit AI Worksheet Models