shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.

#computation

Yesterday, my #soundinstallation "Antechamber" opened at KW Institute for #ContemporaryArt in Berlin 🥵 and will be on display until May 4. It is an inquiry into various timekeeping systems through rhythm, which sprang from my broader ongoing research on the origins and plurality of #computation

kw-berlin.de/en/jessica-ekoman


The discovery of the fifth Busy Beaver number highlights the boundaries of computation itself. This elusive concept reveals complex tasks that even the most advanced machines can’t solve. As we push the limits of what can be calculated, we confront deeper questions: are there problems forever beyond the reach of algorithms?
#Computation #Mathematics #LimitsOfKnowledge
quantamagazine.org/amateur-mat

Calculating Empires – A Genealogy of Technology and Power since 1500
calculatingempires.net/

Check out this astonishing large-scale research visualization that explores how technical & social structures co-evolved over five centuries - focusing on 4 themes: communication, computation, classification, & control. Definitely use your largest screen!


"Claude Shannon's 1950 paper Programming a Computer for Playing Chess was a seminal work in the field of Artificial Intelligence [...] In this paper, Shannon employs the Minimax Algorithm, which takes as its premise that your opponent will always choose the move that is best for them, and worst for you."

The Unimax Algorithm is a cooperation-centric alternative to this foundational yet adversarial paradigm in #computation.
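Shannon's adversarial premise is easy to sketch. Here is a minimal, hypothetical illustration of minimax over a toy game tree of nested lists (my example, not Shannon's chess evaluation and not the Unimax algorithm):

```python
def minimax(node, maximizing):
    """Score a game tree, assuming the opponent always chooses the
    move that is best for them and worst for us.

    Leaves are numeric scores from the maximizer's point of view;
    internal nodes are lists of child nodes.
    """
    if isinstance(node, (int, float)):
        return node  # leaf: the position's score
    children = (minimax(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# In this toy tree the maximizer's best guaranteed outcome is 3:
# whichever branch we pick, the minimizing opponent replies with
# the move that hurts us most.
tree = [[3, 5], [2, 9]]
```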

unimax.run/


“Dynamicland is a way for real people in the real world to explore ideas together, not just with words and pictures, but with computation. But, for us, computation doesn't mean scrolling around in screens. It means working out in the real world.”

#technology #computation

dynamicland.org/


What is the relationship between information, causation, and entropy?

The other day, I was reading a post from Corey S. Powell on how we are all ripples of information. I found it interesting because it resonated with my own understanding of information (i.e. it flattered my biases). We both seem to see information as something active rather than passive. In my case I see it fundamentally related to causation itself, more specifically a snapshot of causal processing. Powell notes that Seth Lloyd has an excellent book on this topic, so I looked it up.

Lloyd’s 2006 book is called Programming the Universe, which by itself gives you an idea of his views. He sees the entire universe as a giant computer, specifically a quantum computer, and much of the book is about making a case for it. It’s similar to the “it from qubit” stance David Chalmers explores in his book Reality+. (I did a series of posts on Chalmers’ book a while back.)

One of the problems with saying the universe is a computer is it invites an endless metaphysical debate, along with narrow conceptions of “computer” leading people to ask things like what kind of hardware the universe might be running on. I’ve come to think a better strategy is to talk about the nature of computation itself. Then we can compare and contrast that nature with the universe’s overall nature, at least to the extent we understand it.

Along those lines, Chalmers argues that computers are causation machines. I think it helps to clarify that we’re talking about logical processing, which is broader than just calculation. I see logical processing as distilled causation, specifically a high degree of causal differentiation (information) at the lowest energy levels currently achievable, in other words, a high information to energy ratio.

The energy point is important, because high causal differentiation tends to be expensive in terms of energy. (Data centers are becoming a major source of energy consumption in the developed world, and although the brain is far more efficient, it’s still the most expensive organ in the body, at least for humans.)

Which is why computational systems always have input/output interfaces that reduce the energy levels of incoming effects from the environment to the levels of their internal processing, and amplify the energy of outgoing effects. (Think keyboards and screens for traditional PCs, or sense organs and muscles for nervous systems.)

Of course, there’s no bright line, no sharp threshold in the information / energy ratio where a system is suddenly doing computation. As a recent Quanta piece pointed out, computation is everywhere. But for most things, like stars, the magnitude of their energy level plays a much larger role in the causal effects on the environment than their differentiation.

However, people like Lloyd or Chalmers would likely point out that the energy magnitude is itself a number, a piece of information, one that has computational effects on other systems. In a simulation of that system, the simulation wouldn’t have the same causal effects on other physical systems as the original, but it would within the environment of the simulation. (Simulated wetness isn’t wet, except for entities in the simulation.)

Anyway, the thing that really caught my eye with Lloyd was his description of entropy. I’ve covered before my struggles with the customary description of entropy as the amount of disorder in a system. Disorder according to whom? As usually described, it leaves the amount of entropy a particular system has observer dependent, which seems problematic for a fundamental physics concept. My reconciliation of this is to think of entropy as disorder with respect to transformation, or in engineering terms: work.

Another struggle has been the relationship between entropy and information. I’ve long wanted to say that entropy and information are closely related, if not the same thing. That seems like the lesson from Claude Shannon’s theory of information, which uses an equation similar to Ludwig Boltzmann’s for entropy. Entropy is a measure of the complexity in a system, and higher values result in a system’s energy gradients being fragmented, making much of the energy in the system unavailable for transformation (work), at least without adding additional energy into the system.
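The formal similarity mentioned here is direct. Writing Shannon's entropy next to the Boltzmann/Gibbs form (a standard comparison, not from the post):

```latex
H = -\sum_i p_i \log_2 p_i \quad \text{(Shannon)}
\qquad
S = -k_B \sum_i p_i \ln p_i \quad \text{(Gibbs)}
```

where the Gibbs form reduces to Boltzmann's $S = k_B \ln W$ when all $W$ microstates are equally probable. Apart from the choice of logarithm base and the constant $k_B$, the two expressions are the same function of the probability distribution.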

However, people like Sean Carroll often argue that a high entropy state is one of low information, although Carroll does frequently note that there are several conceptions of “information” out there. His response makes sense for what is often called “semantic information”, that is, information whose meaning is known and useful to some kind of agent. The equivalence seems more apt for “physical information”, the broader concept of information as generally used in physics (and whose possible loss in black holes causes hand-wringing).

Lloyd seems to be on the same page. He sees entropy as information, although he stipulates that it’s hidden information, or unavailable information (similar to how energy is present but unavailable). But this again seems to result in entropy being observer dependent. If the information is available to you but not me, does that mean the system has higher entropy for me than it does for you? If so, then computers are high entropy systems since none of us have access to most of the current information in the device you’re using right now.

My reconciliation here is to include the observer as part of the accounting. So if a system is in a highly complex state, one you understand but I don’t, then the entropy for the you + system under consideration is lower than the entropy for the me + system combo. In other words, your knowledge, the correlations between you and the system, makes the combined you + system more ordered for transformation than the me + system combo. At least that’s my current conclusion.

But that means for any particular system considered in isolation, the level of entropy is basically the amount of complexity, of physical information it contains. That implies that the ratio I was talking about above, of information to energy, is also of entropy to energy. And another way to refer to these computational systems, in addition to information processing systems, is as entropy processing systems, or entropy transformers.

This might seem powerfully counterintuitive, because we’re taught to think of entropy as bad. But computational systems seem to be about harnessing their entropy, their complexity, and making use of it. And we have to remember that these aren’t closed systems. As noted above, they’re systems that require a lot of inbound energy. It’s that supply of energy that enables transformation of their highly entropic states. (It’s worth noting that these systems also produce a lot of additional entropy that requires energy to remove, such as waste heat or metabolic waste.)

So computers are causation machines and entropy transformers. Which kind of sounds like the universe, but maybe in a very concentrated form. Viewing it this way keeps us more aware of the causal relations not yet captured by current conventional computers. And the energy requirements remind us that computation may be everywhere, but the useful versions only seem to come about from extensive evolution or engineering. As Chalmers notes in his book, highly computational systems don’t come cheap.

What do you think? Are there differences between physical information and entropy that I’m overlooking? And how would you characterize the nature of computation? Does a star, rock, or hurricane compute in any meaningful sense? What about a unicellular organism?


https://selfawarepatterns.com/2024/07/28/entropy-transformers/

#Avi #Wigderson is the recipient of the 2023 ACM A.M. #Turing #Award.

Wigderson's groundbreaking contributions to theoretical computer science include results that elucidated both the power and limitations of #randomness in #computation.
Wigderson is the Herbert H. Maass Professor in the School of Mathematics at the Institute for Advanced Study in Princeton, New Jersey.
Prior to Wigderson's contributions, it appeared quite plausible that randomized algorithms might be qualitatively faster than deterministic computation for many fundamental problems. For example, efficient algorithms for finding large primes - a computation essential to modern cryptography - use randomization.
In a landmark series of works, Wigderson and colleagues proved that, under standard and widely believed computational assumptions, every efficient randomized algorithm can in fact be fully derandomized.
In other words, randomness is not necessary for efficient computation.
These results revolutionized our understanding of the role of randomness in algorithms, and the way we think about randomness more generally in mathematics and computer science.
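As an illustration of the kind of randomized algorithm these results apply to, here is a minimal sketch of the standard Miller-Rabin primality test, the usual tool for finding large primes (my example, not from the award announcement):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: a randomized primality test. Each round picks a
    random base; a composite n survives all rounds with probability
    at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True
```

Derandomization in the sense above would mean replacing the random choice of bases with a deterministic one, at only a polynomial cost in efficiency.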

awards.acm.org/award-recipient


A long and fascinating story on the people working on understanding the Mandelbrot Set.

"For decades, a small group of mathematicians has patiently unraveled the mystery of what was once math’s most popular picture. Their story shows how technology transforms even the most abstract mathematical landscapes."

quantamagazine.org/the-quest-t

Yale Environment 360:

As Use of A.I. Soars, So Does the Energy and Water It Requires

Generative artificial intelligence uses massive amounts of energy for computation and data storage and billions of gallons of water to cool the equipment at data centers. Now, legislators and regulators — in the U.S. and the EU — are starting to demand accountability.

e360.yale.edu/features/artific


An Easy-Sounding Problem Yields Numbers Too Big for Our Universe...
quantamagazine.org/an-easy-sou
Researchers hope that a better understanding of relatively simple cases will help them develop new tools to study other models of #computation that are more elaborate than vector addition systems. Currently, we know almost nothing about these more elaborate models.


#introduction

I'm a political scientist at FSU. I work on computational methods and experimental design, applied to comparative electoral institutions, state policy, nuclear weapons, and political knowledge.

My best-known paper addresses the fallacy of concluding “no effect” from “statistical insignificance” and shows how to make a better argument. (carlislerainey.com/papers/nme.)