Eileen Guo<p>New from me, Gabriel Geiger, <br> + Justin-Casimir Braun at Lighthouse Reports. </p><p>Amsterdam believed it could build a <a href="https://journa.host/tags/predictiveAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>predictiveAI</span></a> system for welfare fraud that would ALSO be fair, unbiased, &amp; a positive case study for <a href="https://journa.host/tags/ResponsibleAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ResponsibleAI</span></a>. It didn't work. </p><p>Our deep dive into why: <a href="https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">technologyreview.com/2025/06/1</span><span class="invisible">1/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/</span></a></p>