shakedown.social is one of the many independent Mastodon servers you can use to participate in the fediverse. A community for live music fans with roots in the jam scene, Shakedown Social is run by a team of volunteers (led by @clifff and @sethadam1) and funded by donations.

#llvm

pancake :radare2:
2025 and clang-format still can't enforce a space before function calls https://releases.llvm.org/20.1.0/tools/clang/docs/ClangFormatStyleOptions.html #llvm #clang #format
David Chisnall (*Now with 50% more sarcasm!*)
One of the reasons I'm still using GitHub for a lot of stuff is the free CI, but I hadn't really realised how little it actually costs. For #CHERIoT #LLVM, we're using Cirrus-CI with a 'bring your own cloud subscription' arrangement. We set up ccache backed by cloud storage, so incremental builds are fast. The bill for last month? £0.31.

We'll probably pay more as we hire more developers, but I doubt it will cost more than £10/month even with an active team and external contributors. Each CI run costs almost a rounding-error amount, and that's doing a clean (+ ccache) build of LLVM and running the test suite. We're using Google's Arm instances, which have amazingly good price:performance (much better than the x86 ones) for all CI, and just building the x86-64 releases on x86-64 hardware (we do x86-64 and AArch64 builds to pull into our dev container).

For personal stuff, I doubt the CI that I use costs more than £0.10/month at this kind of price. There's a real market for a cloud provider that focuses on scaling down more than on scaling up and makes it easy to deploy this kind of thing (we spent *far* more money on the developer time to figure out the nightmare GCE web interface than we've spent on the compute; it's almost as bad as Azure and seems to be designed by the same set of creatures who have never actually met a human).
mattpd
Calculate Throughput with LLVM's Scheduling Model
https://myhsu.xyz/llvm-sched-interval-throughput/
by @mshockwave
#LLVM
Kevin Karhan :verified:
@freya @BrodieOnLinux same with @OS1337 ...

Its #GNUfree-ness is a goal, because it's a #toybox+#musl / #linux distro!

- I just haven't got the time to change it over from #GCC to #LLVM...
Clayton
#postmarketOS now has "office hours"!

https://wiki.postmarketos.org/wiki/Office_hours

Right now it's just me listed there, so if you'd rather talk to someone else then you'll have to wait for them to post hours 😉

Thanks to the #llvm folks for the great idea, and to @adrianyyy for sharing it!
John Regehr
here's a smallish proposal that I'm submitting to the US National Science Foundation today:

https://users.cs.utah.edu/~regehr/fmitf25.pdf

the gist is that I'd like to do substantial upgrades to both the hardware and software side of the online Alive2 web site that the #llvm community uses:

https://alive2.llvm.org/ce/

putting this online in case it catches the interest of anyone working for a company that might have some $$ for this sort of thing, since the situation at NSF is grim
Ramkumar Ramachandra
A very nasty miscompile was reported on one of my patches, and it's not at all obvious what's wrong with the patch! In recent times, quite a few of my patches have been reverted due to miscompiles. #LLVM
systemd-jaded
there should be #llvm #yuri imo

im not sure what this would be like, maybe girls petting a cute dragon girl
रञ्जित (Ranjit Mathew)
#Tilde is an alternative (like #QBE & #Cranelift) to #LLVM for #Compilers:

“Tilde, My LLVM Alternative”, Yasser Arguelles Snape (https://yasserarg.com/tb).

Via Lobsters: https://lobste.rs/s/jvruyj/tilde_my_llvm_alternative

On HN: https://news.ycombinator.com/item?id=42782872

At Handmade Seattle 2023: https://handmadecities.com/media/seattle-2023/tb/
Darryl Pogue
Apple finally published some source code for the Xcode 16 tools. Nice to see cctools slowly getting chipped away at and replaced with upstream LLVM/clang tools (previously `otool`, now `as`). Kinda surprised tapi hasn't managed to make its way upstream yet.

I do wish they would still release CF code as part of the macOS source dumps. Clearly macOS still uses CoreFoundation...

#Apple #Xcode #OpenSource #LLVM
Ramkumar Ramachandra
The gigantic proof term now type-checks! After working almost non-stop on our research project over the vacation, I’m not sure how I feel about returning to #LLVM land tomorrow 🙃
postmodern
Lazy LLVM: can someone link me to where in the LLVM source tree the assembly opcodes logic is defined for various architectures?

#llvm

*Very* damning indictment of #GitHubCopilot as a professional-grade #programming tool from a former core #FreeBSD team member and current #LLVM and #IoT developer.

infosec.exchange/@david_chisna

My favorite bit:

> “It’s great for boilerplate!” No. APIs that require every user to write the same code *are broken*. Fix them, don’t fill the world with more code using them that will need fixing when the APIs change.

Infosec Exchange: David Chisnall (*Now with 50% more sarcasm!*) (@david_chisnall@infosec.exchange)

I finally turned off GitHub Copilot yesterday. I’ve been using it for about a year on the ‘free for open-source maintainers’ tier. I was skeptical but didn’t want to dismiss it without a fair trial.

It has cost me more time than it has saved. It lets me type faster, which has been useful when writing tests where I’m testing a variety of permutations of an API to check error handling for all of the conditions.

I can recall three places where it has introduced bugs that took me more time to debug than the total time saving:

The first was something that initially impressed me. I pasted the prose description of how to communicate with an Ethernet MAC into a comment and then wrote some method prototypes. It autocompleted the bodies. All very plausible looking. Only it managed to flip a bit in the MDIO read and write register commands. MDIO is basically a multiplexing system. You have two device registers exposed: one sets the command (read or write a specific internal register) and the other is the value. It got the read and write the wrong way around, so when I thought I was writing a value, I was actually reading. When I thought I was reading, I was actually seeing the value in the last register I thought I had written. It took two of us over a day to debug this. The fix was simple, but the bug was in the middle of correct-looking code. If I’d manually transcribed the command from the data sheet, I would not have got this wrong because I’d have triple checked it.

In another case, it had inverted the condition in an if statement inside an error-handling path. The error handling was a rare case and was asymmetric. Hitting the if case when you wanted the else case was okay, but the converse was not. Lots of debugging. I learned from this to read the generated code more carefully, but that increased cognitive load and eliminated most of the benefit. Typing code is not the bottleneck, and if I have to think about what I want and then read carefully to check it really is what I want, I am slower.

Most recently, I was writing a simple binary search and insertion-deletion operations for a sorted array. I assumed that this was something that had hundreds of examples in the training data and so would be fine. It had all sorts of corner-case bugs. I eventually gave up fixing them and rewrote the code from scratch.

Last week I did some work on a remote machine where I hadn’t set up Copilot and I felt much more productive. Autocomplete was either correct or not present, so I was spending more time thinking about what to write. I don’t entirely trust this kind of subjective judgement, but it was a data point.

Around the same time I wrote some code without clangd set up and that *really* hurt. It turns out I really rely on AST-aware completion to explore APIs. I had to look up more things in the documentation. Copilot was never good for this because it would just bullshit APIs, so something showing up in autocomplete didn’t mean it was real. This would be improved by using a feedback system to require autocomplete outputs to type check, but then they would take much longer to create (probably at least a 10x increase in LLM compute time) and wouldn’t complete fragments, so I don’t see a good path to being able to do this without tight coupling to the LSP server, and possibly not even then.

Yesterday I was writing bits of the CHERIoT Programmers’ Guide and it kept autocompleting text in a different writing style, some of which was obviously plagiarised (when I’m describing precisely how to implement a specific, and not very common, lock type with a futex and the autocomplete is a paragraph of text with a lot of detail, I’m confident you don’t have more than one or two examples of that in the training set). It was distracting and annoying. I wrote much faster after turning it off.

So, after giving it a fair try, I have concluded that it is both a net decrease in productivity and probably an increase in legal liability.

Discussions I am not interested in having:

- You are holding it wrong. Using Copilot with this magic config setting / prompt tweak makes it better. At its absolute best, it was a small productivity increase; if it needs more effort to use, that will be offset.
- This other LLM is *much* better. I don’t care. The costs of the bullshitting far outweighed the benefits when it worked; to be better it would have to *not bullshit*, and that’s not something LLMs can do.
- It’s great for boilerplate! No. APIs that require every user to write the same code *are broken*. Fix them, don’t fill the world with more code using them that will need fixing when the APIs change.
- Don’t use LLMs for autocomplete, use them for dialogues about the code. Tried that. It’s worse than a rubber duck, which at least knows to stay silent when it doesn’t know what it’s talking about.

The one place Copilot was vaguely useful was hinting at missing abstractions (if it can autocomplete big chunks then my APIs required too much boilerplate and needed better abstractions). The place I thought it might be useful was spotting inconsistent API names and parameter orders, but it was actually very bad at this (presumably because of the way it tokenises identifiers?). With a load of examples with consistent names, it would suggest things that didn't match the convention. After using three APIs that all passed the same parameters in the same order, it would suggest flipping the order for the fourth.

#GitHubCopilot #CHERIoT
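The sorted-array routine the post mentions (a lower-bound binary search, i.e. finding the insertion point that keeps the array sorted) is small enough that the corner cases can be handled explicitly by hand. A sketch for illustration, not code from the post:

```haskell
import Data.Array (Array, listArray, bounds, (!))

-- Lower-bound binary search: index of the first element >= the key,
-- which is also the insertion point that keeps the array sorted.
-- Corner cases covered: empty array, duplicates, insert-at-end.
lowerBound :: Ord a => Array Int a -> a -> Int
lowerBound arr key = go lo0 (hi0 + 1)    -- search half-open range [lo0, hi0+1)
  where
    (lo0, hi0) = bounds arr
    go lo hi
      | lo >= hi        = lo             -- range empty: lo is the insertion point
      | arr ! mid < key = go (mid + 1) hi -- first >= key is strictly right of mid
      | otherwise       = go lo mid       -- first >= key is at mid or left of it
      where mid = lo + (hi - lo) `div` 2  -- midpoint form that avoids overflow
```

For duplicates this returns the leftmost match (e.g. on `[2,4,4,8,10]` the key `4` gives index 1), and a key larger than every element returns one past the last index.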

Hey #Haskell people, have any of you written a compiler with #Binaryen or #LLVM for the backend? Any recommendations on libraries, guides, or gotchas to watch out for?

I've seen the llvm-tf and llvm-ffi libraries, which are reasonably current (each about a year since the last update). I saw Tweag has a binaryen package, but I'm not sure they're updating it now that Asterius is defunct (about four years since the last update).
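One way to sidestep stale bindings altogether is to emit LLVM IR as text and hand the `.ll` file to `clang` or `llc`, so the backend has no FFI dependency at all. A minimal sketch of the idea; the example function is invented for illustration:

```haskell
-- Sketch: a compiler backend that prints a textual LLVM IR module.
-- Compile the output with e.g.: runghc Emit.hs > add.ll && clang add.ll -c
module Main where

-- IR for a function equivalent to: int add(int a, int b) { return a + b; }
addModule :: String
addModule = unlines
  [ "define i32 @add(i32 %a, i32 %b) {"
  , "entry:"
  , "  %sum = add i32 %a, %b"
  , "  ret i32 %sum"
  , "}"
  ]

main :: IO ()
main = putStr addModule
```

The textual route is slower to compile than building IR in memory and you lose the API's type safety, but it is stable across LLVM releases and easy to debug, since you can inspect the generated `.ll` directly.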