
#superintelligence


"If you think this sounds weird, mystical, and god-like, you’d be correct. The last bizarre direction of discourse about #AGI is that it plays into the idea of a big, possibly benevolent #robotgod who will rescue humans from ourselves—that is, if we happen to imbue it with the right values. These people believe in one of two versions of a technological future: either an AGI that is trained with proper values will lead to a world of limitless #abundance, where we live in #post-human forms, or a big robot #superintelligence will wipe us out."

techpolicy.press/the-myth-of-a

Tech Policy Press · The Myth of AGI
Alex Hanna and Emily M. Bender write that claims of "Artificial General Intelligence" are a cover for abandoning the current social contract.

Sam Altman, CEO of OpenAI, has set the tone for the year ahead in AI with a bold declaration:

OpenAI believes it knows how to build #AGI (artificial general intelligence) and is now turning its sights towards #superintelligence.

While there is no consensus as to what AGI is exactly, OpenAI defines AGI as "highly autonomous systems that outperform humans in most economically valuable work".

Altman believes superintelligent tools could accelerate scientific discovery and innovation beyond current human capabilities, leading to increased abundance and prosperity.

Altman said:
"We are now confident we know how to build AGI as we have traditionally understood it.
We believe that, in 2025, we may see the first AI agents
“join the workforce” and materially change the output of companies.
We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that -- to superintelligence in the true sense of the word.

Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own,
and in turn massively increase abundance and prosperity."

Multiple AI researchers from leading labs have now expressed similar sentiments about the timeline for AGI.

In fact, last June, Ilya Sutskever (who played a key role in the failed attempt to oust Altman as CEO) departed OpenAI and founded what he described as the world's first "straight-shot superintelligence lab".

In September, Sutskever secured $1 billion in funding at a $5 billion valuation.

Altman’s reflections come as OpenAI prepares to launch its latest reasoning model, o3, later this month.

The company debuted o3 in December at the conclusion of its "12 Days of OpenAI" event with some impressive benchmarks.

maginative.com/article/openai-

Maginative · OpenAI Says it Knows how to Build AGI and Sets Sights on Superintelligence
Altman says that in 2025, we may see the first AI agents "join the workforce" and materially change the output of companies.

I take that back... maybe it WOULD reduce emissions.

Human: What's the cause of climate change? OMG!

AI superintelligence: Humans are consuming too many fossil fuels and resources on this planet.

Human: HOW DO WE SOLVE THIS?

AI superintelligence: I'm building a robot army!

Human: Awesome! Why?

AI superintelligence: Alas, it's been nice to know you. We had to exterminate your species to save the planet!

#satire #ai #climate

> #NickBostrom’s previous book, #Superintelligence: Paths, Dangers, Strategies focused on what might happen if #AI development goes wrong. But what if things go right?

> In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day?

> #DeepUtopia shines new light on these old questions, and gives us glimpses of a different kind of existence, which might be ours in the future.

library.lol/main/FA1E843BB12CC

Oxford shuts down institute run by Elon Musk-backed philosopher

Oxford University this week shut down an academic institute run by one of Elon Musk's favorite philosophers. The Future of Humanity Institute, dedicated to the long-termism movement and other Silicon Valley-endorsed ideas such as effective altruism, closed this week after 19 years of operation.

Musk had donated £1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence. He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter.

The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers.

OpenAI chief executive Sam #Altman, Microsoft founder Bill #Gates and Tesla chief #Musk all wrote blurbs for his 2014 bestselling book #Superintelligence.
“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” Musk tweeted in 2014.

#Bostrom resigned from Oxford following the institute's closure.

The closure of Bostrom's center is a further blow to the "#effective #altruism" and #longtermism movements that the philosopher has spent decades championing, which in recent years have become mired in scandals related to #racism, #sexual #harassment and #financial #fraud.
Bostrom himself issued an apology last year after a decades-old email surfaced in which he claimed
“Blacks are more stupid than whites” and used the N-word.
theguardian.com/technology/202

The Guardian · Oxford shuts down institute run by Elon Musk-backed philosopher
By Nick Robins-Early

Researchers have been warning of the potential risks of #superintelligence for decades, and the #CenterForAISafety (#CAIS) has identified eight categories of catastrophic and existential risk that AI development could pose. It also takes into account other pernicious harms.

#OpenAI leaders call for regulation to prevent #AI destroying humanity | #AISafety | The Guardian
theguardian.com/technology/202

Can We Stop Runaway #AI? archive.is/AHYCQ
#Technologists warn about the #dangers of the so-called #singularity. But can anything actually be done to prevent it? There's a good chance that current A.I. technology will develop into artificial general intelligence, or #AGI, a higher form of A.I. capable of thinking at a human level in many or most regards. A smaller group argues that A.G.I.'s power could escalate exponentially. #AI #AGI and #superintelligence