shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A community for live music fans with roots in the jam scene. Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.


#DigitalEthics


🚨 Explosive allegations are hitting Elon Musk’s DOGE team.

A whistleblower says:
📂 10GB of NLRB data was exfiltrated
🔓 Security settings were disabled
📸 Photos of staff were used to intimidate
⚖️ Claims involve surveillance, union suppression, and cyber intrusion

Musk called it “insane,” but the NLRB is reportedly cooperating with federal investigations. Whether true or not — this underscores the growing overlap of cybersecurity, labor rights, and executive power.

#CyberSecurity #Whistleblower #ElonMusk #NLRB #DigitalEthics #security #privacy #cloud #infosec

npr.org/2025/04/15/nx-s1-53558

I’ve spent 37+ years in forensic handwriting analysis, including consulting on cold cases and training law enforcement internationally.

With a PhD in Applied Ethics and ongoing studies in constitutional + human rights law, my focus is on integrity, evidence, and reputation in high-conflict digital environments.

I’ll be using this space to quietly track patterns that apply to my work.

What people tend to forget about messaging services is that privacy isn't just about the content of your message. It is also about your metadata: the information about you, your message and your contacts.
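A toy sketch can make the point concrete. The field names below are hypothetical, not any real messaging protocol's schema: even when the message body is end-to-end encrypted, the server still sees who talks to whom, when, and from where.

```python
from dataclasses import dataclass, asdict

# Illustrative sketch (hypothetical field names): what a messaging server
# can log even when it cannot read the message content itself.
@dataclass
class MessageRecord:
    sender: str        # who sent it
    recipient: str     # who received it
    timestamp: str     # when it was sent
    sender_ip: str     # where it was sent from
    ciphertext: bytes  # the content: opaque to the server with E2E encryption

msg = MessageRecord(
    sender="+46701234567",
    recipient="+46709876543",
    timestamp="2023-10-05T14:12:00Z",
    sender_ip="203.0.113.7",
    ciphertext=b"\x8f\x02",  # unreadable without the keys
)

# Everything except the ciphertext is metadata, visible to the operator.
visible_to_server = {k: v for k, v in asdict(msg).items() if k != "ciphertext"}
print(sorted(visible_to_server))  # ['recipient', 'sender', 'sender_ip', 'timestamp']
```

Encrypting the content protects the message; it does nothing about this surrounding metadata, which is the point the post is making.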

https://axbom.com/signal-whatsapp/

#digitalethics #surveillance #signal #whatsapp
Axbom: “Why Signal is more secure than WhatsApp”, a walkthrough of why you should choose Signal over WhatsApp, and how a focus on encrypted communication can be deceptive.
Generative AI can not generate its way out of prejudice

The concept of "generative" suggests that the tool can produce what it is asked to produce. In a study uncovering how stereotypical global health tropes are embedded in AI image generators, researchers found it challenging to generate images of Black doctors treating white children. Using Midjourney, they could not get an output matching the prompt even after hundreds of attempts. I tried their experiment with Stable Diffusion's free web version and found it every bit as concerning as you might imagine.

https://axbom.com/generative-prejudice/

#AIEthics #DigitalEthics
Here's what happens when machine learning needs vast amounts of data to build statistical models for responses: historical, debunked data makes it into the models and is favoured in the output. There is far more outdated, harmful information published than there is updated, correct information, making the outdated material statistically dominant.
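The statistical dynamic can be sketched with a toy example using synthetic data, not a real model: a system that simply echoes whatever answer dominates its training corpus will prefer an outdated claim whenever it outnumbers the correction.

```python
from collections import Counter

# Toy sketch with synthetic data: older, widely republished misinformation
# outnumbers the newer correction in the "training corpus".
corpus = (
    ["debunked claim"] * 70    # outdated material, heavily republished
    + ["corrected claim"] * 30  # the correction, published far less often
)

def most_frequent_answer(corpus):
    # A purely frequency-driven model picks the statistically dominant answer.
    return Counter(corpus).most_common(1)[0][0]

print(most_frequent_answer(corpus))  # "debunked claim"
```

Real language models are far more complex than a frequency count, but the underlying pressure is the same: what is most common in the data is what the model is most likely to reproduce.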

"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."

In this regard the tools don't take us to the future, but to the past.

No, you should never use language models for health advice. But there are many people arguing for exactly this to happen. I also believe these types of harmful biases make it into more machine learning applications than language models specifically.

In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.

Of course many of these historical biases are part of the source material used to make today's "intelligent" machines - bringing with them the risk of eradicating decades of progress.

It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.

https://www.nature.com/articles/s41746-023-00939-z

#DigitalEthics #AIEthics
npj Digital Medicine (Nature): “Large language models propagate race-based medicine”, a study finding that all four commercially available LLMs tested perpetuated debunked, race-based medical content in their responses.
Judy Estrin is on fire in her op-ed for Time Magazine.

“What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.”

She goes on to talk about the politics of inevitability and how we are tricked into thinking the future dictated by big tech is unavoidable. (Pro tip: it isn’t)

But this sentence actually caught me off-guard: “On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest.”

Do read it.

https://time.com/6302761/ai-risks-autonomy/

#DigitalEthics
Time: “The Case Against AI Everything, Everywhere, All at Once” by Judy Estrin.
While a misleading narrative of doom is being widely pushed, I want to contribute to bringing attention to the real harms and risks of AI. And I do this, honestly, with an intent to show how the harms are tangible and can be addressed.

This is a good thing.

If we can be open and honest about the actual risks then we stand a better chance of owning them and acting to evade or mitigate them. If we don't talk about them, our chances of managing them responsibly are zero.

All the harms are human-made and hence are under human control. What we need to do is demand more transparency around each issue, and acknowledge how all teams that deploy or make use of AI need a mitigation strategy for many different types of harms.

I give you The Elements of AI Ethics, which borrows from and builds on my chart from 2021, The Elements of Digital Ethics.

Read about it here and download the chart in PDF format for your own free use: https://axbom.com/aielements/

Let me know what you think.

#AIEthics #DigitalEthics
I think what scares me the most about the “AI” tools that have been hyped over the past few months is that they all appear to emphasise the elimination of thinking.

All while manufacturers falsely claim that this is what is going on inside the machine.

As if the time and effort between information retrieval and making sense of it is inconsequential to the growth of the human mind and the validity of the end result.

As if thinking is a nuisance.

#AIHype #AIEthics #DigitalEthics

A brilliant conversation by @parismarx and @timnitGebru on tech culture: a very accessible ethical discussion of what tech utopias really mean, and why we should never let a bunch of nerdy billionaires decide what the future of the world should look like.

Tech Won't Save Us is a fantastic, accessible podcast on digital literacy for non-engineers that everyone should listen to.

open.spotify.com/episode/3WY9c

Spotify: Tech Won’t Save Us, “What the US-China Divide Means for Tech w/ Louise Matsakis”. Paris Marx is joined by Louise Matsakis to discuss the growing divide between the US and China, the long history of Western concern about the East, and why we should pay attention to who these anti-China narratives benefit.
#AI #tech #culture

One way of getting to know me is my writing and my talks.

Here is a selection of posts within the areas of Digital Ethics, Human Rights, Usability, Accessibility and more – all with a leaning towards design and communication theory.

I teach, consult and write about these issues. If you want to know more, don’t hesitate to contact me.

axbom.com: “The Elements of Digital Ethics”, a chart to help guide moral considerations in the tech space.
#introduction

Hi, everyone! It's been over a week since I dove right into the #Fediverse. Like most, I started an account with a large #Mastodon instance and soon after, created another account for my more niche interests.

Lots of learnings along the way, including leaving the large instance because it was getting really slow, and needing to keep my spiritual persona separate from my mundane one (for example, I thought I could ask CIRA on Twitter about security risks for work but was completely ignored, I think because I had witches.live as my domain 😂).

Now, I'm working on setting up my own personal spiritual instance, but I'd like my more professional one to live here in mastodon.tech and find more techie friends.

So, if you've seen a similar introduction before (from mstdn.ca and witches.live), apologies. For those who haven't, here's what I'm (mostly) about:

* Systems Analyst for a Canadian #environmental #nonprofit
* Main Expertise: #Salesforce implementations
* Invested in tech ethics, Canadian #politics, real #climate solutions and cultural #anthropology
* I do kung fu
* Vanity is my guilty pleasure
* As you can tell, I'm pretty wordy so I'm loving the #Pleroma 5K default character limit.
* For the more witchy, occulty-side of me, you can follow @lunaria (although I'm transitioning from there soon-ish)

#SystemsAnalyst #Canada #BC #DigitalEthics #KungFu

In my case, one way of getting to know me is through my writing, tools and worksheets. Starting this thread to share some key blog posts and perhaps some other tidbits I’ve produced throughout the years.

First out is The Elements of Digital Ethics, which sets the tone. A chart to help guide moral considerations in the tech space.

#DigitalEthics

axbom.com/elements/
