shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
#superintelligence

IT News<p>AGI may be impossible to define, and that’s a multibillion-dollar problem - When is an AI system intelligent enough to be called artific... - <a href="https://arstechnica.com/ai/2025/07/agi-may-be-impossible-to-define-and-thats-a-multibillion-dollar-problem/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">arstechnica.com/ai/2025/07/agi</span><span class="invisible">-may-be-impossible-to-define-and-thats-a-multibillion-dollar-problem/</span></a> <a href="https://schleuss.online/tags/largelanguagemodels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>largelanguagemodels</span></a> <a href="https://schleuss.online/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> <a href="https://schleuss.online/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> <a href="https://schleuss.online/tags/aichatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>aichatbots</span></a> <a href="https://schleuss.online/tags/microsoft" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>microsoft</span></a> <a href="https://schleuss.online/tags/features" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>features</span></a> <a href="https://schleuss.online/tags/chatbots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatbots</span></a> <a href="https://schleuss.online/tags/chatgpt" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatgpt</span></a> <a href="https://schleuss.online/tags/chatgtp" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatgtp</span></a> <a href="https://schleuss.online/tags/biz" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>biz</span></a> <a href="https://schleuss.online/tags/openai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>openai</span></a> <a href="https://schleuss.online/tags/agi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>agi</span></a> <a href="https://schleuss.online/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a></p>
Xamanismo Coletivo<p>"If you think this sounds weird, mystical, and god-like, you’d be correct. The last bizarre direction of discourse about <a href="https://hachyderm.io/tags/AGI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AGI</span></a> is that it plays into the idea of a big, possibly benevolent <a href="https://hachyderm.io/tags/robotgod" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>robotgod</span></a> who will rescue humans from ourselves—that is, if we happen to imbue it with the right values. These people believe in one of two versions of a technological future: either an AGI that is trained with proper values will lead to a world of limitless <a href="https://hachyderm.io/tags/abundance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>abundance</span></a>, where we live in <a href="https://hachyderm.io/tags/post" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>post</span></a>-human forms, or a big robot <a href="https://hachyderm.io/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> will wipe us out."</p><p><a href="https://techpolicy.press/the-myth-of-agi" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">techpolicy.press/the-myth-of-a</span><span class="invisible">gi</span></a></p>
Europe Says<p><a href="https://www.europesays.com/2171306/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">europesays.com/2171306/</span><span class="invisible"></span></a> Meta Assembling ‘Superintelligence Group’ to Pursue Artificial General Intelligence — THE Journal <a href="https://pubeurope.com/tags/AGI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AGI</span></a> <a href="https://pubeurope.com/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://pubeurope.com/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://pubeurope.com/tags/Meta" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Meta</span></a> <a href="https://pubeurope.com/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a></p>
audioflyer79🇺🇦🇨🇦🇪🇺🏳️‍🌈<p>What are some good <a href="https://mstdn.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> accounts to follow? Especially <a href="https://mstdn.social/tags/watchdog" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>watchdog</span></a> monitoring <a href="https://mstdn.social/tags/AGI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AGI</span></a> and <a href="https://mstdn.social/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Superintelligence</span></a> development? <a href="https://mstdn.social/tags/AI2027" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI2027</span></a></p>
Chloé Messdaghi<p>The next step toward achieving superintelligence might not stem from training on datasets curated by humans. Instead, it could arise from agents acquiring knowledge through their own experiences—by experimenting, reasoning, and forming non-human ways of thinking. Discover more: <a href="https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">storage.googleapis.com/deepmin</span><span class="invisible">d-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf</span></a> <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://infosec.exchange/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Superintelligence</span></a> <a href="https://infosec.exchange/tags/Innovation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Innovation</span></a></p>
Chuck Darwin<p>Sam Altman, <br>CEO of OpenAI, <br>has set the tone for the year ahead in AI with a bold declaration: </p><p>OpenAI believes it knows how to build <a href="https://c.im/tags/AGI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AGI</span></a> (artificial general intelligence) <br>and is now turning its sights towards <a href="https://c.im/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a>.</p><p>While there is no consensus as to what AGI is exactly, OpenAI defines AGI as <br>"highly autonomous systems that outperform humans in most economically valuable work". </p><p>Altman believes superintelligent tools could accelerate scientific discovery and innovation beyond current human capabilities, <br>leading to increased abundance and prosperity.</p><p>Altman said:<br>"We are now confident we know how to build AGI as we have traditionally understood it. <br>We believe that, in 2025, we may see the first AI agents <br>“join the workforce” and materially change the output of companies. <br>We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.<br> <br>We are beginning to turn our aim beyond that -- to superintelligence in the true sense of the word. </p><p>Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, <br>and in turn massively increase abundance and prosperity."</p><p>Multiple AI researchers from leading labs have now expressed similar sentiments about the timeline for AGI. </p><p>In fact, last June, Ilya Sutskever (who played a key role in the failed attempt to oust Altman as CEO) departed OpenAI and founded what he described as the world's first "straight-shot superintelligence lab".</p><p>In September, Sutskever secured $1 billion in funding at a $5 billion valuation.</p><p>Altman’s reflections come as OpenAI prepares to launch its latest reasoning model, o3, later this month. </p><p>The company debuted o3 in December at the conclusion of its "12 Days of OpenAI" event with some impressive benchmarks.</p><p><a href="https://www.maginative.com/article/openai-says-it-knows-how-to-build-agi-and-sets-sights-on-superintelligence-2/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">maginative.com/article/openai-</span><span class="invisible">says-it-knows-how-to-build-agi-and-sets-sights-on-superintelligence-2/</span></a></p>
IT News<p>Sam Altman says “we are now confident we know how to build AGI” - On Sunday, OpenAI CEO Sam Altman offered two eye-catching predictions abou... - <a href="https://arstechnica.com/information-technology/2025/01/sam-altman-says-we-are-now-confident-we-know-how-to-build-agi/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">arstechnica.com/information-te</span><span class="invisible">chnology/2025/01/sam-altman-says-we-are-now-confident-we-know-how-to-build-agi/</span></a> <a href="https://schleuss.online/tags/simulatedreasoning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>simulatedreasoning</span></a> <a href="https://schleuss.online/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> <a href="https://schleuss.online/tags/claude3" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>claude3</span></a>.5sonnet <a href="https://schleuss.online/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> <a href="https://schleuss.online/tags/garymarcus" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>garymarcus</span></a> <a href="https://schleuss.online/tags/anthropic" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>anthropic</span></a> <a href="https://schleuss.online/tags/samaltman" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>samaltman</span></a> <a href="https://schleuss.online/tags/srmodels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>srmodels</span></a> <a href="https://schleuss.online/tags/biz" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>biz</span></a> <a href="https://schleuss.online/tags/claude" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>claude</span></a> <a href="https://schleuss.online/tags/o1" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>o1</span></a>-pro <a href="https://schleuss.online/tags/openai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>openai</span></a> <a href="https://schleuss.online/tags/agi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>agi</span></a> <a href="https://schleuss.online/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://schleuss.online/tags/o1" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>o1</span></a> <a href="https://schleuss.online/tags/o3" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>o3</span></a> <a href="https://schleuss.online/tags/sr" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>sr</span></a></p>
Steve Dustcircle 🌹<p>Why <a href="https://masto.ai/tags/BillGates" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>BillGates</span></a> believes <a href="https://masto.ai/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://masto.ai/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> will require some <a href="https://masto.ai/tags/selfawareness" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>selfawareness</span></a></p><p><a href="https://www.fastcompany.com/91150606/bill-gates-ai-superintelligence" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">fastcompany.com/91150606/bill-</span><span class="invisible">gates-ai-superintelligence</span></a> </p><p>Current systems display ‘genius’ but need strategies for thinking through problems, the <a href="https://masto.ai/tags/Microsoft" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Microsoft</span></a> cofounder says.</p>
IT News<p>Major shifts at OpenAI spark skepticism about impending AGI timelines - Enlarge (credit: Benj Edwards / Getty Images) </p><p>Over the past we... - <a href="https://arstechnica.com/?p=2041450" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arstechnica.com/?p=2041450</span><span class="invisible"></span></a> <a href="https://schleuss.online/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> <a href="https://schleuss.online/tags/benjamindekraker" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>benjamindekraker</span></a> <a href="https://schleuss.online/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> <a href="https://schleuss.online/tags/gregbrockman" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gregbrockman</span></a> <a href="https://schleuss.online/tags/johnschulman" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>johnschulman</span></a> <a href="https://schleuss.online/tags/peterdeng" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>peterdeng</span></a> <a href="https://schleuss.online/tags/chatgpt" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatgpt</span></a> <a href="https://schleuss.online/tags/chatgtp" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>chatgtp</span></a> <a href="https://schleuss.online/tags/biz" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>biz</span></a> <a href="https://schleuss.online/tags/openai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>openai</span></a> <a href="https://schleuss.online/tags/gpt" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gpt</span></a>-4 <a href="https://schleuss.online/tags/agi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>agi</span></a> <a href="https://schleuss.online/tags/asi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>asi</span></a> <a href="https://schleuss.online/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a></p>
AI6YR Ben<p>I take that back... maybe it WOULD reduce emissions. </p><p>Human: What's the cause of climate change? OMG!</p><p>AI superintelligence: Humans are consuming too many fossil fuels and resources on this planet.</p><p>Human: HOW DO WE SOLVE THIS?</p><p>AI superintelligence: I'm building a robot army!</p><p>Human: Awesome! Why?</p><p>AI superintelligence: Alas, it's been nice to know you. We had to exterminate your species to save the planet!</p><p><a href="https://m.ai6yr.org/tags/satire" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>satire</span></a> <a href="https://m.ai6yr.org/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://m.ai6yr.org/tags/climate" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>climate</span></a> <a href="https://m.ai6yr.org/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://m.ai6yr.org/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a></p>
Matjö<p>Mind = blown 🤯 </p><p><a href="https://situational-awareness.ai/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">situational-awareness.ai/</span><span class="invisible"></span></a></p><p><a href="https://mastodon.social/tags/aschenbrenner" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>aschenbrenner</span></a> <a href="https://mastodon.social/tags/agi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>agi</span></a> <a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> <a href="https://mastodon.social/tags/alignement" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>alignement</span></a> <a href="https://mastodon.social/tags/intelligenceexplosion" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>intelligenceexplosion</span></a></p>
John Leonard<p>Ilya Sutskever forms new AI startup to pursue 'safe superintelligence'</p><p>Former OpenAI chief scientist promises efforts will be insulated from commercial pressures</p><p><a href="https://www.computing.co.uk/news/4325343/ilya-sutskever-forms-ai-startup-pursue-safe-superintelligence" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">computing.co.uk/news/4325343/i</span><span class="invisible">lya-sutskever-forms-ai-startup-pursue-safe-superintelligence</span></a></p><p><a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/ssi" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ssi</span></a> <a href="https://mastodon.social/tags/ilyasutskever" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ilyasutskever</span></a> <a href="https://mastodon.social/tags/sutskever" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>sutskever</span></a> <a href="https://mastodon.social/tags/openai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>openai</span></a> <a href="https://mastodon.social/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> <a href="https://mastodon.social/tags/technews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>technews</span></a></p>
🧿🪬🍄🌈🎮💻🚲🥓🎃💀🏴🛻🇺🇸<p>&gt; <a href="https://mastodon.social/tags/NickBostrom" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NickBostrom</span></a>’s previous book, <a href="https://mastodon.social/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Superintelligence</span></a>: Paths, Dangers, Strategies focused on what might happen if <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> development goes wrong. But what if things go right?</p><p>&gt; In such a solved world, what is the point of human existence? What gives meaning to life? What do we do all day?</p><p>&gt; <a href="https://mastodon.social/tags/DeepUtopia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepUtopia</span></a> shines new light on these old questions, and gives us glimpses of a different kind of existence, which might be ours in the future.</p><p><a href="https://library.lol/main/FA1E843BB12CC24567006C299D268644" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">library.lol/main/FA1E843BB12CC</span><span class="invisible">24567006C299D268644</span></a></p><p><a href="https://mastodon.social/tags/book" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>book</span></a> <a href="https://mastodon.social/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://mastodon.social/tags/technology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>technology</span></a> <a href="https://mastodon.social/tags/books" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>books</span></a></p>
Chuck Darwin<p>Oxford shuts down institute run by Elon Musk-backed philosopher </p><p>Oxford University this week shut down an academic institute run by one of Elon Musk’s favorite philosophers. <br>The 🔸Future of Humanity Institute, 🔸dedicated to the long-termism movement <br>and other Silicon Valley-endorsed ideas such as effective altruism, <br>closed this week after 19 years of operation. </p><p>Musk had donated £1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence. <br>He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter.</p><p>The center was run by <br>💥Nick Bostrom💥, a Swedish-born philosopher whose writings about the long-term 👉threat of AI replacing humanity <br>turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers. </p><p>OpenAI chief executive Sam <a href="https://c.im/tags/Altman" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Altman</span></a>, Microsoft founder Bill <a href="https://c.im/tags/Gates" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gates</span></a> and Tesla chief <a href="https://c.im/tags/Musk" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Musk</span></a> all wrote blurbs for his 2014 bestselling book <a href="https://c.im/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Superintelligence</span></a>.<br>“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” Musk tweeted in 2014.</p><p>⭐️<a href="https://c.im/tags/Bostrom" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Bostrom</span></a> resigned from Oxford following the institute’s closure. ⭐️</p><p>The closure of Bostrom’s center is a 👍further blow to the "<a href="https://c.im/tags/effective" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>effective</span></a> <a href="https://c.im/tags/altruism" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>altruism</span></a>" and <a href="https://c.im/tags/longtermism" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>longtermism</span></a> movements 👍that the philosopher has spent decades championing, <br>which in recent years have become mired in scandals related to <a href="https://c.im/tags/racism" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>racism</span></a>, <a href="https://c.im/tags/sexual" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>sexual</span></a> <a href="https://c.im/tags/harassment" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>harassment</span></a> and <a href="https://c.im/tags/financial" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>financial</span></a> <a href="https://c.im/tags/fraud" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>fraud</span></a>. <br>Bostrom himself issued an apology last year after a decades-old email surfaced in which he claimed <br>“Blacks are more stupid than whites” and used the N-word.<br><a href="https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes?CMP=Share_iOSApp_Other" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">theguardian.com/technology/202</span><span class="invisible">4/apr/19/oxford-future-of-humanity-institute-closes?CMP=Share_iOSApp_Other</span></a></p>
IT News<p>Elon Musk: AI will be smarter than any human around the end of next year - Enlarge / Elon Musk, owner of Tesla and the X (formerly Twitter) platfo... - <a href="https://arstechnica.com/?p=2015706" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arstechnica.com/?p=2015706</span><span class="invisible"></span></a> <a href="https://schleuss.online/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> <a href="https://schleuss.online/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> <a href="https://schleuss.online/tags/socialmedia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>socialmedia</span></a> <a href="https://schleuss.online/tags/gradybooch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gradybooch</span></a> <a href="https://schleuss.online/tags/elonmusk" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>elonmusk</span></a> <a href="https://schleuss.online/tags/twitter" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>twitter</span></a> <a href="https://schleuss.online/tags/biz" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>biz</span></a> <a href="https://schleuss.online/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://schleuss.online/tags/x" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>x</span></a></p>
Matt Willemsen<p>AI singularity may come in 2027 with artificial 'super intelligence' sooner than we think, says top scientist<br><a href="https://www.livescience.com/technology/artificial-intelligence/ai-agi-singularity-in-2027-artificial-super-intelligence-sooner-than-we-think-ben-goertzel" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">livescience.com/technology/art</span><span class="invisible">ificial-intelligence/ai-agi-singularity-in-2027-artificial-super-intelligence-sooner-than-we-think-ben-goertzel</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/singularity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>singularity</span></a> <a href="https://mastodon.social/tags/SuperIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SuperIntelligence</span></a></p>
Norobiik @Norobiik@noc.social<p>Researchers have been warning of the potential risks of <a href="https://noc.social/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a> for decades, and the <a href="https://noc.social/tags/CenterForAISafety" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CenterForAISafety</span></a> (<a href="https://noc.social/tags/CAIS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CAIS</span></a>) has identified eight categories of catastrophic and existential risk that AI development could pose. It also takes into account other pernicious harms.</p><p><a href="https://noc.social/tags/OpenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAI</span></a> leaders call for regulation to prevent <a href="https://noc.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> destroying humanity | <a href="https://noc.social/tags/AISafety" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AISafety</span></a> | The Guardian<br><a href="https://www.theguardian.com/technology/2023/may/24/openai-leaders-call-regulation-prevent-ai-destroying-humanity" rel="nofollow noopener" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">theguardian.com/technology/202</span><span class="invisible">3/may/24/openai-leaders-call-regulation-prevent-ai-destroying-humanity</span></a></p>
Joey de Villa 🪗<p>If you’re worried about the threat to humanity that a superintelligent AI might present, here are arguments that the threat is unlikely. They cite examples ranging from Stephen Hawking’s cat to the fact that AI weenies are often wrong.</p><p><a href="https://mastodon.cloud/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.cloud/tags/AGI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AGI</span></a> <a href="https://mastodon.cloud/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.cloud/tags/ArtificialGeneralIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialGeneralIntelligence</span></a> <a href="https://mastodon.cloud/tags/SuperIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SuperIntelligence</span></a> <a href="https://mastodon.cloud/tags/keynote" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>keynote</span></a> <a href="https://mastodon.cloud/tags/funny" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>funny</span></a> <a href="https://mastodon.cloud/tags/RokosBasilisk" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RokosBasilisk</span></a> <a href="https://mastodon.cloud/tags/ideas" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ideas</span></a></p><p><a href="https://www.globalnerdy.com/2023/05/23/maciej-ceglowskis-reassuring-arguments-for-why-an-ai-superintelligence-might-not-be-a-threat-to-humanity/" rel="nofollow noopener" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">globalnerdy.com/2023/05/23/mac</span><span class="invisible">iej-ceglowskis-reassuring-arguments-for-why-an-ai-superintelligence-might-not-be-a-threat-to-humanity/</span></a></p>
beSpacific<p>Can We Stop Runaway <a href="https://newsie.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a>? <a href="https://archive.is/AHYCQ" rel="nofollow noopener" target="_blank"><span class="invisible">https://</span><span class="">archive.is/AHYCQ</span><span class="invisible"></span></a><br><a href="https://newsie.social/tags/Technologists" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Technologists</span></a> warn about the <a href="https://newsie.social/tags/dangers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dangers</span></a> of the so-called <a href="https://newsie.social/tags/singularity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>singularity</span></a>. But can anything actually be done to prevent it? ...there’s a good chance that current A.I. technology will develop into artificial general intelligence, or <a href="https://newsie.social/tags/AGI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AGI</span></a> —a higher form of A.I. capable of thinking at a human level in many or most regards. A smaller group argues that A.G.I.’s power could escalate exponentially. <a href="https://newsie.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://newsie.social/tags/AGI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AGI</span></a> and <a href="https://newsie.social/tags/superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>superintelligence</span></a></p>
HistoPol (#HP) 🏴 🇺🇸 🏴<p><span class="h-card"><a href="https://fedi.simonwillison.net/@simon" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>simon</span></a></span> <span class="h-card"><a href="https://wandering.shop/@annaleen" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>annaleen</span></a></span> <span class="h-card"><a href="https://mstdn.party/@voron" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>voron</span></a></span> </p><p>(6/n)</p><p>...it'll have human <a href="https://mastodon.social/tags/bias" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>bias</span></a>. Humans have always been great at bending or breaking the law when it suited their interests. How could a <a href="https://mastodon.social/tags/Superintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Superintelligence</span></a> created with human values *not* arrive at the same, self-preserving conclusion?</p><p>A gloomy, yet, IMO, quite fitting assessment of the shape of things to come unless there's a <a href="https://mastodon.social/tags/Chernobyl" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chernobyl</span></a>-style "fallout" before <a href="https://mastodon.social/tags/GAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GAI</span></a> evolves into <a href="https://mastodon.social/tags/AGI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AGI</span></a> + humanity gets its act together and, as Prof. <a href="https://mastodon.social/tags/Tegmark" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Tegmark</span></a> admonishes: "Just look up!"</p>