#psychometrics


Very interesting paper on the use of exploratory factor analysis strategies to investigate indicators of psychopathology. The Intro and Discussion are packed with interesting literature from the field:
journals.sagepub.com/doi/full/

Thanks for the HT to our paper on some conceptual challenges in using and interpreting factor models for this work. Below is one of the parts of the paper the team may have referred to.
rdcu.be/ed1Vd
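
To make the factor-analytic setup concrete, here is a minimal sketch (mine, not code from either paper): indicator data simulated from two hypothetical latent dimensions and then refitted with an exploratory factor model. The indicator count, loading pattern, factor labels, and the two-factor/varimax choices are all invented for illustration.

```python
# Minimal sketch, not from the linked papers: exploratory factor analysis of
# simulated symptom indicators. All names and numbers are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 500

# Two hypothetical latent dimensions (say, "distress" and "somatic"), uncorrelated here
latent = rng.normal(size=(n, 2))

# Hypothetical loading pattern: six indicators, three mainly loading on each dimension
loadings = np.array([
    [0.8, 0.0], [0.7, 0.1], [0.6, 0.0],   # indicators of dimension 1
    [0.0, 0.8], [0.1, 0.7], [0.0, 0.6],   # indicators of dimension 2
])
indicators = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 6))

# Exploratory fit: the number of factors and the rotation are analyst choices
efa = FactorAnalysis(n_components=2, rotation="varimax")
efa.fit(indicators)
print(np.round(efa.components_.T, 2))  # estimated loadings, indicators x factors
```

Even in this clean simulation, the analyst has to decide how many factors to retain and which rotation to use before interpreting the loadings, which is where many of the conceptual challenges start.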

This one time in maybe 2018 I pissed off a few uni colleagues. At a "senate" meeting (it deserves quotation marks at my school) some people with #psychometrics training & experience, including me, explained that a specific kind of #assessment was invalid--as in zero validity for its intended purpose--so we should not use it. It literally provides no information about what it claims to measure, so we might as well roll dice or draw #random words from a hat.

The administration argued vehemently for keeping the assessment. They claimed it was valid (we showed them it wasn't). They claimed that a "caring" or "astute" instructor could glean valuable information from it (we showed them that this wasn't possible). They appealed to the other professors, implying that not using this assessment meant they didn't care about their students, and that voting to eliminate it meant letting themselves be dunked on by the eggheaded intellectuals.

The motion failed and we still use the assessment. After the meeting, someone else and I were bemoaning the result. I said something like "What happened in there was Trumpian." A colleague walking by overheard and angrily asked, "What do you mean by that?!"

I said, "The faculty heard from the experts telling them something they didn't like and they chose to go with the people who had no expertise telling them what they wanted to hear."

(I am sometimes not diplomatic; this makes a good story, but I really wish I had found a better way to say that.)

The person audibly huffed, actually turned on their heel, and walked away. They haven't spoken to me since.

Been thinking about this graphic for quite a while. A couple of years, I think. It's been going around social media for at least that long. At one point I hunted down its origin, and... anyway, I'm going to basically dump all my issues with it right here.

Right up front: some people will think I'm obviously racist for criticizing something intended to be antiracist. And I do think the creator (and definitely the people sharing this around) have their hearts in the right place. Unfortunately (maybe just for me), that's not how science works, and I think this graphic was created in a way that essentially borrows the legitimacy of science despite having no scientific foundation.

The unloadening of mine issues shall begin in the reply to this.

Another #PeerReview done.
Manuscript: ca. 2,300 words
Review: ca. 1,600 words
Time: 1 hr 45 min

Using multiple indicators of the same construct as predictors in a regression equation makes it quite difficult to understand what the unique contribution of each indicator is.
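
A quick simulation of that point (my illustration, not anything from the manuscript under review; all parameters are invented): two noisy indicators of the same latent construct entered together as predictors.

```python
# Minimal sketch: the construct drives y, but the regression has to split that
# effect between two highly correlated indicators, so each "unique" slope is
# small and unstable across replications. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
slopes = []
for _ in range(200):
    n = 200
    construct = rng.normal(size=n)
    x1 = construct + rng.normal(scale=0.3, size=n)  # indicator 1 of the construct
    x2 = construct + rng.normal(scale=0.3, size=n)  # indicator 2 of the same construct
    y = 0.5 * construct + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    slopes.append(beta[1:])                          # the two indicator slopes

slopes = np.array(slopes)
print("mean of slopes:", slopes.mean(axis=0).round(2))  # each roughly half the construct's effect
print("sd of slopes:  ", slopes.std(axis=0).round(2))   # wide spread: 'unique' contributions are unstable
```

Neither coefficient recovers the construct's effect on its own, and their sampling variability is much larger than it would be with a single indicator (or a composite score) as the predictor.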

And another plug for the #STROBE reporting guideline
strobe-statement.org/

#Psychometrics #Epidemiology

STROBE stands for an international, collaborative initiative of epidemiologists, methodologists, statisticians, researchers and journal editors involved in the conduct and dissemination of…

#ISOQOL

#1/
Excellent talk by Kevin Weinfurt in #Plenary1 discussing multiple types of measures (go also to #Plenary4 for that!), increased use of high-intensity longitudinal data, and generative AI for assessments.

Start reading his work:
rdcu.be/dWXSz
#HRQL #Psychometrics

#2/
Not sure this is what the #ISOQOL_NewInvestigators wanted when asking me to talk about advice for manuscript writing, but this is the slide people talked to me about 👇
#NightshiftEditor #ScientificPublishing

"The EQUATOR executive supports the practice of #DataSharing when reporting all research.
Data sharing is important and should be a checklist item in all reporting guidelines."

This is only from the highlights section of [bmj.com/content/386/bmj-2024-0]; the full text is even better.

The BMJ · Reporting on data sharing: executive position of the EQUATOR Network
The EQUATOR (Enhancing the Quality and Transparency Of health Research) Network supports the practice of data sharing, and the reporting of data management and sharing plans, in all reports of biomedical research. Both practices should be included as checklist items when developing new or updating reporting guidelines. More focus should be given to structuring and standardising data management and sharing plans to help provide a similar impact as reporting guidelines.

New #AcademicYear - New #introduction

I am interested in health-related quality of life
#HRQL #Psychometrics

I teach #ResearchMethods modules in the #DundeeUni #ProfDoc
#Interdisciplinary #RD62001 #RD62002

I convene our School's #ResearchEthics committee

My term as academic editor of 'Quality of Life Research' is coming to an end, and I am mulling over roles and purpose in #AcademicPublishing
#NightshiftEditor

There may be some occasional #Deutsch #Svenska #Gaidhlig and local stuff in my feed.

The #PrismaCosmin #guideline
was published jointly in Quality of Life Research, J of Patient Reported Outcomes, Health and Quality of Life Outcomes & J of Clinical Epidemiology.

As lead editors, Brittany Lapin and I offer some thoughts on content, process, and future
osf.io/preprints/osf/ukw93

#PRISMA #Psychometrics #HRQL #ISOQOL

The four versions of the guidelines are published here:
link.springer.com/article/10.1

jpro.springeropen.com/articles

hqlo.biomedcentral.com/article

sciencedirect.com/science/arti


A team investigated the degree to which anchor-based methods for constructing minimal important differences are sensitive to the distribution of the #ChangeScore
link.springer.com/article/10.1
(Code in supplement)
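
As a rough illustration of the sensitivity they describe (this is my own sketch with invented distributions and a hypothetical binary anchor, not the paper's supplementary code), two common anchor-based MID estimators can disagree once the change score is skewed, even with identical group means:

```python
# Minimal sketch: mean-change and ROC/Youden anchor-based MID estimates under a
# symmetric vs a right-skewed change-score distribution with the same group means.
# The anchor, sample size, and distributions are all made up for illustration.
import numpy as np

def mid_estimates(change, improved):
    """Return (mean-change MID, ROC/Youden cut-off MID) for a binary anchor."""
    mid_mean = change[improved].mean()               # mean change in the 'improved' group
    cuts = np.unique(change)
    youden = [(change[improved] >= c).mean() - (change[~improved] >= c).mean()
              for c in cuts]
    mid_roc = cuts[int(np.argmax(youden))]           # cut-off maximising sensitivity + specificity - 1
    return mid_mean, mid_roc

rng = np.random.default_rng(7)
n = 400
improved = rng.random(n) < 0.5                       # hypothetical binary anchor

# Scenario A: roughly symmetric change scores (group means 5 and 0)
change_sym = np.where(improved, rng.normal(5, 8, n), rng.normal(0, 8, n))
# Scenario B: right-skewed change scores with the same group means
change_skew = np.where(improved, -3 + rng.gamma(1.0, 8.0, n), -8 + rng.gamma(1.0, 8.0, n))

print("symmetric:", np.round(mid_estimates(change_sym, improved), 1))
print("skewed:   ", np.round(mid_estimates(change_skew, improved), 1))
```

In this toy setup the mean-change estimate barely moves, while the ROC/Youden cut-off shifts noticeably once the change score is skewed, which is the kind of distribution sensitivity such simulations probe.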

Terluin et al argue that such simulations need to consider measurement error (& illustrate how)
rdcu.be/dNAJz
OSF osf.io/m5tuy/

The original authors respond with a community call👇
rdcu.be/dNAJ9
OSF osf.io/hgsu7