Jennifer Hamilton, MD PhD

I don't trust large language model (#LLM) AIs: they're trained to sound plausible without regard for accuracy, i.e., to generate bullshit.

If you can handle that "spicy" description, please read this essay by @researchfairy, which describes how LLMs can be used to deliberately weaponize #SystematicReview articles. Want a topic review that plausibly supports your controversial viewpoint? Say you want to promote raw milk or decry #vaccination?

https://blog.bgcarlisle.com/2025/05/16/a-plausible-scalable-and-slightly-wrong-black-box-why-large-language-models-are-a-fascist-technology-that-cannot-be-redeemed/

#AntiVax #AI