shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.

Server stats: 252 active users

#bayesian

2 posts · 2 participants · 0 posts today

Sunken British superyacht Bayesian is raised from the seabed.

A superyacht that sank off the coast of the Italian island of Sicily last year has been raised from the seabed by a specialist salvage team.

Seven of the 22 people on board died in the sinking, including the vessel's owner, British tech tycoon Mike Lynch, and his 18-year-old daughter.

The cause of the sinking is still under investigation.

mediafaro.org/article/20250620

The Bayesian superyacht being lifted by a sea crane.
BBC · Sunken British superyacht Bayesian is raised from the seabed.
#Italy #UK #Bayesian

Interested in trying out *Bayesian nonparametrics* for your statistical research?

I'd be very grateful if people tried out this R package for Bayesian nonparametric population inference, called "inferno":

<pglpm.github.io/inferno/>

It is aimed especially at clinical and medical researchers, and allows for thorough statistical studies of subpopulations or subgroups.

Installation instructions are here: <pglpm.github.io/inferno/index.>.

A step-by-step tutorial, guiding you through an example analysis of a simple dataset, is here: <pglpm.github.io/inferno/articl>.

The package has already been tested and used in concrete research on Alzheimer's disease, Parkinson's disease, drug discovery, and applications to machine learning.

Feedback is very welcome. If you find the package useful, feel free to advertise it a little :)

pglpm.github.io · Bayesian nonparametric population inference: Functions for Bayesian nonparametric population (or exchangeable, or density) inference. From a machine-learning perspective, it offers a model-free, uncertainty-quantified prediction algorithm.

aeon.co/essays/no-schrodingers

This is a pretty good article for showing how confused the interpretation of QM is. It's also a good article for understanding why I personally side with Bohm and Bell in thinking pilot-wave theory is the most reasonable one to believe, because pilot-wave theory has the following quality: the theory is a mapping from initial position at time t=0 to final position at time t=1. It's deterministic, but our knowledge of the initial condition is not.
#quantum #bohm #bayesian
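For context, that deterministic-dynamics / uncertain-initial-condition split is exactly how the textbook de Broglie–Bohm guidance equation is usually presented (standard form, quoted from memory, so check a reference before relying on the details):

```latex
% The particle configuration Q(t) moves deterministically along the
% velocity field generated by the wavefunction \psi:
\frac{\mathrm{d}Q}{\mathrm{d}t}
  = \frac{\hbar}{m}\,
    \operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\!\Bigg|_{Q(t)}
% Randomness enters only through the initial condition: Q(0) is
% distributed as |\psi(Q,0)|^2 (the "quantum equilibrium" hypothesis).
```

So the trajectories are fully determined once Q(0) is fixed; the probabilities describe our ignorance of Q(0), not the dynamics.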

Aeon · No, Schrödinger’s cat is not alive and dead at the same time: The weird paradox of Schrödinger’s cat has found a lasting popularity. What does it mean for the future of quantum physics?

A #bayesian blogpost, by two of my undergraduate students! It's their report on learning Bayesian modeling by applying it to my lab's data.
alexholcombe.github.io/brms_ps
Summary: we learned to use brms, but had trouble when we added more than one or two factors to the model. Little idea why; we haven't had time to tinker much with that.

alexholcombe.github.io · Bayesian analysis of psychophysical data using brms

I got an email from the author promoting this benchmark comparison of #Julialang + StanBlocks + #Enzyme vs #Stan runtimes.

StanBlocks is a macro package for Julia that mimics the structure of a Stan program. This is the first I've heard about it.

A considerable number of these models, maybe even most of them, run faster in Julia than in Stan.

nsiccha.github.io/StanBlocks.j

nsiccha.github.io · StanBlocks.jl - Julia vs Stan performance comparison

Everyone thinks big data should be better. But if you have a good model that makes good predictions, more data is often an enormous nuisance: the posterior distributions are so narrow that you can never sample properly. It's like finding a hydrogen atom in your bedroom. The thing is just too damn small. So anyway, we are trying tempering for our migration model just so we can get convergence. I don't care about tiny uncertainty intervals; I just want to be near the right answer. #Bayesian #statistics
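A minimal sketch of the shrinking-posterior problem, using a toy conjugate Normal model rather than the migration model from the post (all numbers are made up): with a flat prior on the mean and known noise, the posterior standard deviation falls like 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0    # known observation noise
mu_true = 0.3  # the "right answer" we want to be near

widths = {}
for n in (100, 10_000, 1_000_000):
    data = rng.normal(mu_true, sigma, size=n)
    # Flat prior + Normal likelihood: the posterior for the mean is
    # Normal(data.mean(), sigma**2 / n), so its sd shrinks like 1/sqrt(n).
    widths[n] = sigma / np.sqrt(n)
    print(f"n={n:>9,}  posterior mean ~ {data.mean():+.4f}  posterior sd = {widths[n]:.5f}")
```

At n = 10^6 the posterior sd is 0.001: a sampler whose proposal scale was reasonable at small n almost never lands inside such a narrow mode, which is one motivation for tempering.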

A couple of years ago, we took various tools for the analysis of repeated measures in very large datasets for a ride.

We tested various packages in #rstats and #stan, and it turns out that the best tool for the task was Nelder's hierarchical likelihood (which has a #Bayesian interpretation too).

The repository is bitbucket.org/chrisarg/laplace
Link to the paper 👇
pmc.ncbi.nlm.nih.gov/articles/

During the process we bracketed what we think should be the reference range for serum potassium.

@AeonCypher @paninid

"A p-value is an #estimate of p(Data | Null Hypothesis). " – not correct. A p-value is an estimate of

p(Data or other imagined data | Null Hypothesis)

so not even just of the actual data you have. Which is why p-values depend on your stopping rule (and do not satisfy the "likelihood principle"). In this regard, see Jeffreys's quote below.

Imagine you design an experiment this way: "I'll test 10 subjects, and in the meantime I apply for a grant. At the time the 10th subject is tested, I'll know my application's outcome. If the outcome is positive, I'll test 10 more subjects; if it isn't, I'll stop". Not an unrealistic situation.

With this stopping rule, your p-value will depend on the probability that you get the grant. This is not a joke.

"*What the use of P implies, therefore, is that a hypothesis that may be true may be rejected because it has not predicted observable results that have not occurred.* This seems a remarkable procedure. On the face of it the fact that such results have not occurred might more reasonably be taken as evidence for the law, not against it." – H. Jeffreys, "Theory of Probability" § VII.7.2 (emphasis in the original) <doi.org/10.1093/oso/9780198503>.
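The dependence of p-values on the stopping rule is easy to simulate. The sketch below uses a simpler, data-dependent rule than the grant example (peek after every batch of 10 subjects and stop as soon as p < .05), because its effect shows up directly: a z-test p-value that is calibrated at 0.05 for a single fixed-n analysis rejects a true null far more often once peeking is allowed. All numbers are made up for illustration.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def p_value(x):
    """Naive two-sided z-test of 'mean = 0' with known sd 1."""
    z = abs(x.mean()) * sqrt(len(x))
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

def peeking_experiment(max_batches=5, batch=10):
    """Collect data in batches; stop (and 'publish') as soon as p < .05.
    The null hypothesis is true by construction."""
    x = np.empty(0)
    for _ in range(max_batches):
        x = np.concatenate([x, rng.normal(0.0, 1.0, size=batch)])
        if p_value(x) < 0.05:
            return True  # false rejection of a true null
    return False

rate = np.mean([peeking_experiment() for _ in range(5_000)])
print(f"false-rejection rate with peeking: {rate:.3f} (nominal: 0.05)")
```

Same data-generating process, different stopping rule, different error rate: exactly the sense in which the p-value depends on "other imagined data" and not just on the data in hand.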

Replied in thread

@paninid p-values, to a large extent, exist because calculating the posterior is computationally expensive. Not all fields use the .05 cutoff.

A p-value is an #estimate of p(Data | Null Hypothesis). If the two #hypotheses are equally likely and they are mutually exclusive and they are closed over the #hypothesis space, then this is the same as p(Hypothesis | Data).

Meaning that, under certain assumptions, the p-value does represent the actual probability of being wrong.

However, given modern computers, there is no reason that #Bayesian odds-ratios can't completely replace their usage and avoid the many many problems with p-values.
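A toy sketch of the odds-ratio computation advocated here, with two made-up simple hypotheses about a coin (the numbers are purely illustrative): when the prior odds are equal and the hypotheses are mutually exclusive and exhaustive, the posterior odds reduce to the likelihood ratio, with no p-value machinery involved.

```python
from math import comb

def binom_lik(k, n, theta):
    """Likelihood of k heads in n flips given P(heads) = theta."""
    return comb(n, k) * theta**k * (1.0 - theta)**(n - k)

k, n = 14, 20                  # observed: 14 heads in 20 flips
l0 = binom_lik(k, n, 0.5)      # H0: fair coin
l1 = binom_lik(k, n, 0.7)      # H1: coin biased to 0.7 (hypothetical)

# Equal prior odds + mutually exclusive, exhaustive hypotheses:
# posterior odds = likelihood ratio (the Bayes factor).
bayes_factor = l1 / l0
posterior_h0 = l0 / (l0 + l1)  # P(H0 | data) under 50/50 priors
print(f"Bayes factor (H1 vs H0): {bayes_factor:.2f}")
print(f"P(H0 | data): {posterior_h0:.3f}")
```

Unlike a p-value, this posterior conditions only on the data actually observed, not on other data that might have been observed under some stopping rule.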

Small advertisement for my Ph.D. thesis and code, focused on #computervision for #robotics.
Using #julialang to implement #Bayesian inference algorithms for the 6D pose estimation of known objects in depth images.
TLDR: it works even with occlusions; needs <1 s on a GPU; does not need training; future research could focus on including color images / semantic information, since the state of the art performs much better when color images are available.
doc: publications.rwth-aachen.de/re
code: github.com/rwth-irt/BayesianPo