
#cuda


Just got my RSS reader YOShInOn building with uv and running under WSL2 with the CUDA libraries, despite a slight version mismatch... All I gotta do now is switch it from ArangoDB (terrible license) to Postgres, and it might have a future... With sentence_transformers running under WSL2 I might even be able to deduplicate the million images in my Fraxinus image sorter.
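Embedding-based deduplication of the kind mentioned here boils down to a nearest-neighbour pass over the vectors. A minimal sketch of just the comparison step, in plain NumPy; it assumes the embeddings have already been computed by something like sentence_transformers, and the function name and threshold are illustrative:

```python
import numpy as np

def near_duplicates(embeddings: np.ndarray, threshold: float = 0.95):
    """Return index pairs whose cosine similarity exceeds `threshold`."""
    # Normalise rows so plain dot products become cosine similarities.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T
    pairs = []
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            if sims[i, j] >= threshold:
                pairs.append((i, j))
    return pairs
```

The O(n²) comparison is fine for a sanity check, but for a million images you would hand the normalised vectors to an approximate-nearest-neighbour index instead.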

Replied in thread

Even now, Thrust as a dependency is one of the main reasons why we have a #CUDA backend, a #HIP / #ROCm backend and a pure #CPU backend in #GPUSPH, but not a #SYCL or #OneAPI backend (which would allow us to extend hardware support to #Intel GPUs). <doi.org/10.1002/cpe.8313>

This is also one of the reasons why we implemented our own #BLAS routines when we introduced the semi-implicit integrator. A side-effect of this choice is that it allowed us to develop the improved #BiCGSTAB that I've had the opportunity to mention before <doi.org/10.1016/j.jcp.2022.111>. Sometimes I do wonder if it would be appropriate to “excorporate” it into its own library for general use, since it's something that would benefit others. OTOH, this one was developed specifically for GPUSPH and it's tightly integrated with the rest of it (including its support for multi-GPU), and refactoring to turn it into a library like cuBLAS is

a. too much effort
b. probably not worth it.

Again, following @eniko's original thread, it's really not that hard to roll your own, and probably less time consuming than trying to wrangle your way through an API that may or may not fit your needs.
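For a sense of scale on "roll your own": textbook unpreconditioned BiCGSTAB fits in a screenful of code. A rough NumPy sketch of the standard algorithm (van der Vorst's formulation) — this is *not* GPUSPH's improved, CUDA-based, multi-GPU implementation, just the baseline it builds on:

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-8, max_iter=200):
    """Unpreconditioned BiCGSTAB for A @ x = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    r_hat = r.copy()          # shadow residual, kept fixed
    rho = alpha = omega = 1.0
    v = np.zeros(n)
    p = np.zeros(n)
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) < tol * b_norm:
            break
    return x
```

The hard part in a production solver isn't this loop — it's the memory layout, the preconditioning, and (in GPUSPH's case) keeping it efficient across multiple GPUs.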

6/

‘#Nvidia offers #Cuda data-processing libraries that can be applied to any sector from drug discovery to fraud detection and self-driving, providing the groundwork for what different companies are trying to accomplish. This is topped off with networking services to establish data centres required to run the GPUs.’ << Key part of the firm’s competitive “moat”
thetimes.com/article/e38441d8-

The Times · Why rivals can’t compete with Nvidia’s chips for everything · By Louisa Clarence-Smith
Replied in thread

@enigmatico @lispi314 @kimapr @bunnybeam case in point:

  • #Bloatedness was the original post topic and yes, due to the #TechBros' "#BuildFastBreakThings" mentality, #Bloatware is increasing, given that a shitty bloated 50+MB "#WebApp" with like nw.js is easier to slap together (and yes, I did so myself!) than to put in way more thought and effort (as you can see in the slow progression of OS/1337)...

  • Yes, #Accessibility is something that needs to be taken more seriously and it's good to see that there are at least some attempts at making #accessibility mandatory (at least in #Germany, where I know from some insider that a big telco is investing a lot in that!) for a growing number of industries and websites...

  • And whilst one can slap an #RTX5090 on any laptop that has a fully-functional #ExpressCard slot (with #PCIe interface, using some janky adaptors!), that'll certainly not make sense beyond some #CUDA or other #GPGPU-style workloads, as it's bottlenecked to a single PCIe lane at 2.0 (500 MB/s) or just 1.0a (250 MB/s) speeds.

Needless to say, there is a need to THIN DOWN things cuz the current speed of #Enshittification and bloatedness, combined with #AntiRepairDesign and overpriced yet worse #tech in general, makes it unsustainable for an ever-increasing population!
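To put numbers on that ExpressCard bottleneck: at single-lane PCIe 1.0a/2.0 speeds, merely staging data on and off the card dominates any GPGPU run. A back-of-the-envelope calculation (the per-lane figures are the ones quoted above; the 16 GB working set and the PCIe 4.0 x16 contrast line are illustrative assumptions):

```python
# Time to move a working set one way over various links (MB/s).
working_set_mb = 16 * 1024          # 16 GB, an illustrative GPGPU payload
links = {
    "PCIe 1.0a x1": 250,            # figures quoted in the post
    "PCIe 2.0 x1": 500,
    "PCIe 4.0 x16": 32_000,         # a normal desktop slot, for contrast
}
transfer_s = {name: working_set_mb / mbps for name, mbps in links.items()}
for name, seconds in transfer_s.items():
    print(f"{name}: {seconds:.1f} s one way")
```

Over a minute at 1.0a speeds versus half a second in a real slot — which is why the janky-adaptor route only pays off for compute-heavy, transfer-light workloads.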

  • Not everyone wants (or even can!) indebt themselves just to have a phone or laptop!

Should we aim for more "#FrugalComputing"?

  • Absolutely!

Is it realistic to expect things to be in a perfectly accessible TUI that every screenreader can handle?

  • No!

That being said the apathy of consumers is real, and very frustrating:

People get nudged into accepting all the bs and it really pisses me off, because they want me to look like an outsider / asshole for not submitting to #consumerism and #unsustainable shite...

ぷにすきーENIGMATICO :flag_bisexual: :flag_nonbinary: (@enigmatico): I get this is a joke, but here is the thing (aside from the joke). People don't use crappy laptops anymore. People move on to phones/tablets, or if they want something more serious, something like a gamer PC. Most people will buy a console if they want to play games, though. In that context, nobody cares anymore about bloat. If you are a developer it's easier for you to use some bloaty framework that gets the job done in a couple of days, because at the end of the day, if you're going to be exploited and crunched to death, you might as well make it as short as possible. And as a consumer, nobody really cares. You buy whatever allows you to do what you want and that's it. Or whatever your pocket allows you. And to be completely honest with you all, this has always been like this. You have to do with what you have. Could the world be better if everyone used pure C and assembly? Maybe... if companies had the intention to spend years developing their products and fixing critical bugs before launch. By the time of the launch they would be obsolete. Kinda what happened to Duke Nukem Forever.
Replied in thread

It's out, if anyone is curious

doi.org/10.1002/cpe.8313

This is a “how to” guide. #GPUSPH, as the name suggests, was designed from the ground up to run on #GPU (w/ #CUDA, for historical reasons). We wrote a CPU version a long time ago for a publication that required a comparison, but it was never maintained. In 2021, I finally took the plunge and, taking inspiration from #SYCL, rewrote the device code in functor form, so that it could be “trivially” compiled for the CPU as well.
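GPUSPH's functor rework lives in C++/CUDA, but the underlying idea — write the computational kernel once against an abstract backend and let the build target pick the device — has a compact Python analogue: parametrise the kernel over an array module, so the same source runs on NumPy (CPU) or CuPy (GPU). A toy sketch under that analogy; the SPH-style weighting function here is made up for illustration and is not GPUSPH code:

```python
import numpy as np

def density_sum(xp, pos, h):
    """Toy SPH-style density kernel written once against array module `xp`.

    Pass numpy for CPU or cupy for GPU: the identical source runs on
    either backend, which is the gist of the single-source approach.
    `pos` is an (n, 3) array of particle positions, `h` a support radius.
    """
    # Pairwise distances between all particles, shape (n, n).
    d = xp.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    # Crude polynomial weight, zero outside the support radius h.
    w = xp.clip(1.0 - d / h, 0.0, None) ** 3
    return w.sum(axis=1)

# CPU backend:
rho = density_sum(np, np.zeros((4, 3)), h=1.0)
# GPU backend would be: import cupy; density_sum(cupy, cupy.asarray(pos), h)
```

In C++ the same single-source trick is done with functors/templates compiled once by nvcc for the device and once by the host compiler for the CPU.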

My #introduction on my new shiny mathstodon.xyz account! I'm slowly deprecating my @alexmath account but I'm kinda bad at fediverse stuff 😅

Hi all! I am Alex (she/her), a #trans mathematician with a PhD in extremal combinatorics, now working as a data scientist. I am a deeply curious experimentalist and I love to learn different topics. My favorite programming languages are #rust and #python but I've had some fun with #cuda GPGPU, too :) I like machine learning as a scientific problem-solving tool, but not the stuff that involves weapons, theft, and violence.

Presently, I live in #Philly with my fluffy orange cat Angus and my partner. I got a new bike and wish I could lose the car forever. Still masking in public. Still getting vaccines. Eternally exhausted, but hopeful and curious.

Fediverse etiquette suggestions welcome!

Continued thread

if anyone wants to know what the fucking deal with #CUDA for #PyTorch is, it's like this:
* You can run multiple versions of CUDA side by side and swap between them using environment variables
* If you're using PyCharm, you want to manually pull in the PyTorch-plus-CUDA variant of torch via pip, not the built-in package manager
* The 12.x versions of CUDA can suck knob; use 11.x instead.
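The "swap via environment variables" trick usually means pointing the CUDA-related variables at one of several side-by-side toolkit installs. A hedged sketch, assuming the common `/usr/local/cuda-<version>` layout (paths on your machine may differ):

```python
import os

def select_cuda(version: str, base: str = "/usr/local") -> str:
    """Point CUDA-related environment variables at one toolkit install.

    Assumes toolkits live at <base>/cuda-<version>, the usual Linux
    layout. Do this before launching Python/PyTorch so compilation and
    library lookup pick up the chosen toolkit.
    """
    home = f"{base}/cuda-{version}"
    os.environ["CUDA_HOME"] = home
    os.environ["PATH"] = f"{home}/bin:" + os.environ.get("PATH", "")
    os.environ["LD_LIBRARY_PATH"] = (
        f"{home}/lib64:" + os.environ.get("LD_LIBRARY_PATH", "")
    )
    return home

# e.g. select_cuda("11.8") before importing torch
```

As for the pip point: the CUDA-enabled torch wheels come from PyTorch's own package index rather than plain PyPI (at the time of writing, something like `pip install torch --index-url https://download.pytorch.org/whl/cu118`), which is why an IDE's built-in package manager tends to fetch the wrong variant.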