shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.


#LargeLanguageModel


LLMs don’t know your PDF.
They don’t know your company wiki either. Or your research papers.

What they can do with RAG is look through your documents in the background and answer using what they find.

But how does that actually work? Here’s the basic idea behind RAG:
:blobcoffee: Chunking: The document is split into small, overlapping parts so the LLM can handle them. The overlap preserves structure and context across chunk boundaries.
:blobcoffee: Embeddings & Search: Each part is turned into a vector (a numerical representation of meaning). Your question is also turned into a vector, and the system compares them to find the best matches.
:blobcoffee: Retriever + LLM: The top matches are sent to the LLM, which uses them to generate an answer based on that context.
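The steps above can be sketched end to end. This is a toy illustration, not a production pipeline: it stands in bag-of-words count vectors for learned embeddings, and the document, question, and function names are all invented for the example.

```python
import math
import re
from collections import Counter

def chunk(text, size=8, overlap=2):
    """Split text into overlapping chunks of `size` words."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    A real system would use a learned dense embedding model."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    if not a or not b:
        return 0.0
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

def retrieve(question, chunks, k=2):
    """Rank chunks by similarity to the question; the top-k
    would then be pasted into the LLM prompt as context."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("The wiki describes the deployment process. "
       "Deployments run every Friday after review. "
       "Rollbacks are handled by the on-call engineer.")
top = retrieve("When do deployments run?", chunk(doc))
```

The question shares the words "deployments" and "run" with the middle chunk, so that chunk ranks first and ends up in the prompt, which is the whole trick: the LLM never "knows" the document, it just gets the relevant slice pasted in.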

Replied in thread

@boelder
RE
corporations putting confidential data in #insecure #datastorage owned by Amazon

@not2b
RE
using it for training...

@Haste
RE
AI scribe taking session notes.... the rooms are capable of recording now, but assured me that it is "completely deleted" in a "timely fashion"

IMO, the #AI is listing and using the sentences that go into the #LLM; after this, the original TEXT and AUDIO can be deleted.
This deletion is not the issue, right⁉️

#LargeLanguageModel
en.wikipedia.org/wiki/Large_la

Large language model - Wikipedia
Replied in thread

@skribe Conversely, the cost of printing, distribution, and storage puts up a barrier to spamming people on other continents with mass quantities of low value slop.

Just think through the logistics of a hostile Eurasian state sending a mass quantity of printed materials to Australia or North America.

Or, for that matter, a hostile North American state sending a mass quantity of printed materials to Europe or Asia.

You would either need:

a) At least one printing press on each continent;
b) You could try shipping the magazines, but they'd be a month out of date when they arrive; or
c) You could try flying them overseas, but that would be very expensive very quickly.

That's before you worry about things like delivery drivers (or postage), and warehouses.

These are less of an issue for books than they are for newspapers or magazines.

And if a particular newspaper or magazine is known to be reliable, written by humans, researched offline, and the articles are not available online, then there's potentially value in people buying a physical copy.

Had a very insightful conversation about the limitations on AI with a marketing copywriter.

Her comment was that actually writing marketing materials is a small part of her job.

If it was just about writing something that persuades a customer to buy a product, it would be a cakewalk.

What takes time is the stakeholder management.

It's navigating conflicting and contradictory demands of different departments.

Legal wants to say one thing, Sales something different, and another department something else entirely.

There's higher-up managers who need their egos soothed.

There's different managers with different views about what the customers want and what their needs are.

And there's a big difference in big bureaucratic organisations between writing marketing collateral, and writing something that gets signed off by everyone who needs to.

She's tried using AI for some tasks, and what that typically involves is getting multiple AI responses, and splicing them together into a cohesive whole.

Because it turns out there's a big difference in the real world between generating a statistically probable output, and having the emotional intelligence to navigate humans.

#AI #LLM #ChatGPT
Replied in thread

@paninid I draw great optimism from a study finding that use of AI (aka LLMs) reduces people's belief in conspiracy theories. Sure, AI makes mistakes, but it's more important that AI is modeling fact-based learning, reasoning, and decision making. I literally believe that AI could be the tech to save American democracy.

mitsloan.mit.edu/ideas-made-to

MIT study: An AI chatbot can reduce belief in conspiracy theories | MIT Sloan
Replied in thread

@lyndamerry484 Ah. OK, that's a different question. A #LargeLanguageModel, although it is an example of a neural network system, is certainly not 'intelligent' in this sense. It has no semantic layer and no concept of truth or falsity. All it does is string together symbols (which it does not understand the meanings of) into sequences which represent plausible responses to the sequence of symbols that it was fed.

There is no semantic significance to its answer.
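That "stringing together symbols" can be made concrete with a toy bigram model: it learns only which token tends to follow which, and generates plausible-looking sequences with no notion of what any token means. The corpus and function names here are invented for illustration; real LLMs are vastly larger, but the objective is the same kind of next-token statistics.

```python
import random
from collections import defaultdict

# A tiny "training corpus" of tokens (made up for the example).
corpus = "the model predicts the next token the model repeats patterns".split()

# Record which token follows which: pure co-occurrence counts, no semantics.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Emit up to n tokens by repeatedly sampling a statistically
    plausible successor. Nothing here represents truth or meaning."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

sentence = generate("the", 5)
```

Every output token is one the model has seen follow its predecessor, so the result reads plausibly, yet at no point does the program "understand" any of it.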

“In this life, we are very careful knowing that the wrong kind of domain, the wrong kind of #LargeLanguageModel, can allow the #AI to hallucinate, which would be very dangerous.

Here, we will be engaging experts to assist in machine language so that the data that comes out of #ArtificialIntelligence will be very accurate,” Leonen said. #Philippines

Supreme Court starts AI pilot test for transcription, research
msn.com/en-ph/news/other/supre


Included in the #ExcelForPython release is a #LargeLanguageModel integration that will allow Excel users to ask the #Copilot to build scripts for them with plain language commands.

First teased last year, the new feature allows #Excel users to run #Python scripts inside workbooks for analytics and other purposes. #DataAnalytics #Spreadsheets #AI #GenerativeAI #Microsoft365
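For a sense of what such generated scripts look like, here is a sketch of the kind of pandas snippet a plain-language prompt like "total sales by region" might produce. In Excel the data would come from a worksheet reference rather than the literal DataFrame used here, and the column names are invented for the example.

```python
import pandas as pd

# Hypothetical sales data standing in for a worksheet range.
df = pd.DataFrame({
    "Region": ["East", "West", "East", "West"],
    "Amount": [100, 250, 150, 50],
})

# The kind of one-liner analysis a plain-language
# prompt ("total sales by region") might generate:
totals = df.groupby("Region")["Amount"].sum()
```

The point of the integration is that users describe the analysis in words and get a script like this back, instead of writing the group-by themselves.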

Python in Excel goes live – but only for certain #Windows users
msn.com/en-us/news/technology/
