#reproducibility


And yet another one in the ever-increasing list of analyses showing that top journals are bad for science:

"Thus, our analysis show major claims published in low-impact journals are significantly more likely to be reproducible than major claims published in trophy journals. "

biorxiv.org/content/10.1101/20

bioRxiv · A retrospective analysis of 400 publications reveals patterns of irreproducibility across an entire life sciences research field

The ReproSci project retrospectively analyzed the reproducibility of 1006 claims from 400 papers published between 1959 and 2011 in the field of Drosophila immunity. This project attempts to provide a comprehensive assessment, 14 years later, of the replicability of nearly all publications across an entire scientific community in experimental life sciences. We found that 61% of claims were verified, while only 7% were directly challenged (not reproducible), a replicability rate higher than previous assessments. Notably, 24% of claims had never been independently tested and remain unchallenged. We performed experimental validations of a selection of 45 unchallenged claims, which revealed that a significant fraction of them (38/45) is in fact non-reproducible. We also found that high-impact journals and top-ranked institutions are more likely to publish challenged claims. In line with the reproducibility crisis narrative, the rates of both challenged and unchallenged claims increased over time, especially as the field gained popularity. We characterized the uneven distribution of irreproducibility among first and last authors. Surprisingly, irreproducibility rates were similar between PhD students and postdocs, and did not decrease with experience or publication count. However, group leaders who had prior experience as first authors in another Drosophila immunity team had lower irreproducibility rates, underscoring the importance of early-career training. Finally, authors with a more exploratory, short-term engagement with the field exhibited slightly higher rates of challenged claims and a markedly higher proportion of unchallenged ones. This systematic, field-wide retrospective study offers meaningful insights into the ongoing discussion on reproducibility in experimental life sciences.

Competing Interest Statement: The authors have declared no competing interest. Funding: Swiss National Science Foundation, 310030_189085; ETH-Domain's Open Research Data (ORD) Program (2022).
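As a rough back-of-the-envelope, the abstract's own figures can be turned into approximate claim counts; the final extrapolation of the 38/45 spot-check result to all unchallenged claims is purely illustrative and not a finding of the paper:

```python
# Back-of-the-envelope from the figures quoted in the abstract above.
# Assumption: the reported percentages apply exactly to the 1006 claims;
# the 38/45 extrapolation is for illustration only.
total_claims = 1006

verified = round(0.61 * total_claims)      # ~614 claims later reproduced
challenged = round(0.07 * total_claims)    # ~70 claims directly contradicted
unchallenged = round(0.24 * total_claims)  # ~241 claims never independently tested

# In the authors' spot-check, 38 of 45 unchallenged claims failed to reproduce.
sample_failure_rate = 38 / 45              # ~0.84

# If (a big if) that rate held across all unchallenged claims:
implied_hidden_failures = round(sample_failure_rate * unchallenged)  # ~204

print(verified, challenged, unchallenged, implied_hidden_failures)
```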

To my knowledge, first time that not only prestigious journals, but also prestigious institutions are implicated as major drivers of irreproducibility:

"Higher representation of challenged claims in trophy journals and from top universities"

biorxiv.org/content/10.1101/20


We invite staff and students at the University of #Groningen to share how they are making #research or #teaching more open, accessible, transparent, or reproducible, for the 6th annual #OpenResearch Award.

Looking for inspiration?
Explore the case studies submitted in previous years:
🔗 rug.nl/research/openscience/op

More info:
🔗 rug.nl/research/openscience/op

#OpenScience #OpenEducation #OpenAccess #Reproducibility
@oscgroningen


7/ Wei Mun Chan, Research Integrity Manager

With 10+ years in publishing and data curation, Wei Mun ensures every paper meets our high standards for ethics and #reproducibility. From image checks to data policies, he’s the quiet force keeping the scientific record trustworthy.

🎙"#Reproducibility isn’t just about repeating results, it’s about making the #research process transparent, so others can follow the path you took and understand how you got there."

🎧Listen to our new OpenScience podcast with Sarahanne Field @smirandafield

🔗 rug.nl/research/openscience/po

⏳ In this 10 min episode, Sarahanne reimagines reproducibility for #qualitative research.
She addresses challenges in ethical #data sharing of transcripts, and the importance of clear methodological reporting.

New study: #ChatGPT is not very good at predicting the #reproducibility of a research article from its methods section.
link.springer.com/article/10.1

PS: Five years ago, I asked this question on Twitter/X: "If a successful replication boosts the credibility of a research article, then does a prediction of a successful replication, from an honest prediction market, do the same, even to a small degree?"
x.com/petersuber/status/125952

What if #LLMs eventually make these predictions better than prediction markets? Will research #assessment committees (notoriously inclined to resort to simplistic #metrics) start to rely on LLM replication or reproducibility predictions?

SpringerLink · ChatGPT struggles to recognize reproducible science - Knowledge and Information Systems

The quality of answers provided by ChatGPT matters with over 100 million users and approximately 1 billion monthly website visits. Large language models have the potential to drive scientific breakthroughs by processing vast amounts of information in seconds and learning from data at a scale and speed unattainable by humans, but recognizing reproducibility, a core aspect of high-quality science, remains a challenge. Our study investigates the effectiveness of ChatGPT (GPT-3.5) in evaluating scientific reproducibility, a critical and underexplored topic, by analyzing the methods sections of 158 research articles. In our methodology, we asked ChatGPT, through a structured prompt, to predict the reproducibility of a scientific article based on the extracted text from its methods section. The findings of our study reveal significant limitations: out of the assessed articles, only 18 (11.4%) were accurately classified, while 29 (18.4%) were misclassified, and 111 (70.3%) faced challenges in interpreting key methodological details that influence reproducibility. Future advancements should ensure consistent answers for similar or same prompts, improve reasoning for analyzing technical, jargon-heavy text, and enhance transparency in decision-making. Additionally, we suggest the development of a dedicated benchmark to systematically evaluate how well AI models can assess the reproducibility of scientific articles. This study highlights the continued need for human expertise and the risks of uncritical reliance on AI.
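The structured-prompt setup described above can be sketched roughly as follows. This is a minimal illustration, assuming the current OpenAI Python SDK and an API key in the environment; the prompt wording, the `predict_reproducibility` helper, and the model name are illustrative stand-ins, not the exact materials used in the paper:

```python
# Minimal sketch: ask a chat model for a one-word reproducibility verdict
# based only on a paper's methods section (illustrative, not the paper's code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = (
    "You are assessing the reproducibility of a scientific article.\n"
    "Based only on the methods section below, answer with exactly one word:\n"
    "reproducible, not-reproducible, or unclear.\n\n"
    "Methods section:\n{methods}"
)

def predict_reproducibility(methods_text: str, model: str = "gpt-3.5-turbo") -> str:
    """Return the model's one-word verdict for the given methods section."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(methods=methods_text)}],
        temperature=0,  # reduce run-to-run variation, one of the issues the paper flags
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(predict_reproducibility("We cultured cells at 37 °C and measured expression by qPCR ..."))
```

Even with a setup along these lines, the study reports that the model struggled with the key methodological details in 70.3% of the articles, which is the point of the post above.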

🎙️"#Reproducibility should be a key factor in all your #research, in all your projects. It costs time, but it's also shifting the time, and in the end it can save time again."

🎧Listen to the latest episode of our #OpenScience Bites podcast with Michiel de Boer of the @Dutch_Reproducibility_Network
🔗 rug.nl/research/openscience/po

⏳Open Science Bites is a series of short #podcast episodes - each around 10 minutes long - focusing on one specific open science practice.

rug.nl/opensciencebites

Recently, I got several opportunities to discuss the reproducibility crisis in science. To help discuss that complex topic, we need to agree on a vocabulary.

My favorite one has been published by Manuel López-Ibáñez, Juergen Branke and Luis Paquete, and is summarized in the attached diagram, which you can also find here: nojhan.net/tfd/vocabulary-of-r

It's good that this topic is not fading away, but is gaining traction. "Slowly, but surely", as we say in French.

If you want a high-resolution version suitable for printing, do not hesitate to ask!

It's been known for quite some time that more prestigious journals publish less reliable #science. Now two papers provide compelling empirical evidence as to potential underlying mechanisms:

journals.uchicago.edu/doi/10.1
academic.oup.com/qje/advance-a

The gist of the story is that scientists are so afraid of being scooped that they cut corners. Corner-cutting is rewarded by high-ranking journal publications, and the successful authors then teach their students how to get ahead in science.

These two papers provide compelling empirical evidence that competition in #science leads to sloppy work being preferentially published in high-ranking journals:
journals.uchicago.edu/doi/10.1
academic.oup.com/qje/advance-a
Using the example of structural #biology, the authors report that scientists overestimate the damage of being scooped, leading to corner-cutting and sloppy work in the race to be first. Faster scientists then end up publishing sloppier work in higher-ranking journals.

Now this would have been an interesting article if the title had been:

"Eleven strategies for getting research institutions to implement infrastructure supporting reproducible research and open science"

But why would one train researchers to do something their institution does not adequately support?

"Eleven strategies for making reproducible research and open science training the norm at research institutions"

elifesciences.org/articles/897

eLife · Eleven strategies for making reproducible research and open science training the norm at research institutions

Researchers and administrators can integrate reproducible research and open science practices into daily practice at their research institutions by adapting research assessment criteria and program requirements, offering training, and building communities.