#aigovernance


"You may have noticed in the above language in the bill goes beyond “AI” and also includes “automated decision systems.” That’s likely because there are two California bills currently under consideration in the state legislature that use the term; AB 1018, the Automated Decisions Safety Act and SB7, the No Robo Bosses Act, which would seek to prevent employers from relying on “automated decision-making systems, to make hiring, promotion, discipline, or termination decisions without human oversight.”

The GOP’s new amendments would ban both outright, along with the other 30 proposed bills that address AI in California. Three of the proposed bills are backed by the California Federation of Labor Unions, including AB 1018, which aims to eliminate algorithmic discrimination and to ensure companies are transparent about how they use AI in workplaces. It requires workers to be told if AI is used in the hiring process and allows them to opt out of AI systems and to appeal decisions made by AI. The Labor Fed also backs Bryan’s bill, AB 1221, which seeks to prohibit discriminatory surveillance systems like facial recognition, establish worker data protections, and compel employers to notify workers when they introduce new AI surveillance tools.

It should be getting clearer why Silicon Valley is intent on halting these bills: One of the key markets—if not the key market—for AI is as enterprise and workplace software. A top promise is that companies can automate jobs and labor; restricting surveillance capabilities or carving out worker protections promises to put a dent in the AI companies’ bottom lines. Furthermore, AI products and automation software promise a way for managers to evade accountability—laws that force them to stay accountable defeat the purpose."

bloodinthemachine.com/p/de-dem

Blood in the Machine · De-democratizing AI, by Brian Merchant
#USA #GOP #AI

@elementary tl;dr I support your objectives, and kudos on the goal, but I think you should monitor this new policy for unexpected negative outcomes. I take about 9k characters to explain why, but I’m not criticizing your intent.

While I am much more pragmatic in my stance on #aicoding, this was previously a long-running issue of contention on the #StackExchange network that was never effectively resolved outside of a few clearly egregious cases.

The bottom line is that when it comes to certain parts of software—think of the SCO copyright trials over header files from a few decades back—in many cases, obvious code will be, well…obvious. That “the simplest thing that could possibly work” was produced by an AI instead of a person is difficult to prove using existing tools, and false accusations of plagiarism have caused a number of people real #reputationalharm over the last couple of years.

That said, I don’t disagree with the stance that #vibecoding is not worth the pixels that it takes up on a screen. From a more pragmatic standpoint, though, it may be more useful to address the underlying principle that #plagiarism is unacceptable from a community standards or copyright perspective rather than making it a tool-specific policy issue.

I’m a firm believer that people have the right to run their community projects in whatever way best serves their community members. I’m only pointing out the pragmatic issues of setting forth a policy where the likelihood of false positives is quite high, and the level of pragmatic enforceability may be quite low. That is something that could lead to reputational harm to people and the project, or to community in-fighting down the road, when the real policy you’re promoting (as I understand it) is just a fundamental expectation of “original human contributions” to the project.

Because I work in #riskmanagement and #cybersecurity, I see this issue come up more often than you might think. Again, I fully support your objectives; I just wanted to offer an alternative viewpoint that your project might want to revisit down the road if the current policy doesn’t achieve the results you’re hoping for.

In the meantime, I certainly wish you every possible success! You’re taking a #thoughtleadership stance on an #AIgovernance policy issue that matters to society and to #FOSS right now. I think that’s terrific!

"Powerful actors, governments, and corporations are actively shaping narratives about artificial intelligence (AI) to advance competing visions of society and governance. These narratives help establish what publics believe and what should be considered normal or inevitable about AI deployment in their daily lives — from surveillance to automated decision-making. While public messaging frames AI systems as tools for progress and efficiency, these technologies are increasingly deployed to monitor populations and disempower citizens’ political participation in myriad ways. This AI narrative challenge is made more complex by the many different cultural values, agendas, and concepts that influence how AI is discussed globally. Considering these differences is critical in contexts in which data exacerbates inequities, injustice, or nondemocratic governance. As these systems continue to be adopted by governments with histories of repression, it becomes crucial for civil society organizations to understand and counter AI narratives that legitimize undemocratic applications of these tools.

We built on the groundwork laid by the Unfreedom Monitor to conduct our Data Narratives research into data discourse in five countries that face different threats to democracy: Sudan, El Salvador, India, Brazil, and Turkey. To better understand these countries’ relationships to AI, incentives, and public interest protection strategies, it is helpful to contextualize AI as a data narrative. AI governance inherently involves data governance and vice versa. AI systems rely on vast quantities of data for training and operation, while AI systems gain legibility and value as they are widely integrated into everyday functions that then generate the vast quantities of data they require to work."

#AI #AINarratives #AIHype #AITraining #AIGovernance #DataGovernance

globalvoices.org/2024/12/23/ar

Global Voices · Artificial Intelligence Narratives: A Global Voices Report. Our Data Narratives research delves into discourse about data and artificial intelligence in five countries: Sudan, El Salvador, India, Brazil, and Turkey.

Predictive AI technology has a well-known issue: how do we know a component is reliable, and can we trust the result enough to use it in our work? Learn more about how DW Innovation has found ways to make an AI-powered service more transparent and secure, assisting not only end users but also the process of AI governance.

innovation.dw.com/articles/ai-

innovation.dw.com · AI in Media Tools: How to Increase User Trust and Support AI Governance. Principles like Trustworthy AI are well documented, but what about their implementation?