#pytorch

if anyone wants to know what the fucking deal with #CUDA for #PyTorch is, it's like this:
* You can run multiple versions of CUDA side by side and swap between them using environment variables (see the sketch after this list)
* If you're using PyCharm, you want to manually pull in the PyTorch + CUDA variant of torch via pip, not the built-in package manager
* The 12.x versions of CUDA can suck knob, use 11.x instead.
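
For the record, here's a quick sanity check to run after that pip install. The CUDA version and toolkit paths below are assumptions; swap in whatever matches your setup:

```python
# Check which CUDA build torch actually picked up. Assumes you installed
# the CUDA 11.8 wheel, e.g.:
#   pip install torch --index-url https://download.pytorch.org/whl/cu118
# and pointed the usual environment variables at a matching toolkit
# (the paths here are examples, not requirements):
#   export CUDA_HOME=/usr/local/cuda-11.8
#   export PATH=$CUDA_HOME/bin:$PATH
#   export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
import torch

print(torch.version.cuda)         # CUDA version this torch build targets
print(torch.cuda.is_available())  # False usually means a driver/toolkit mismatch
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```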

Andrej Karpathy just released a new repo with an implementation of LLM training in pure C/CUDA in a few lines of code 🚀. According to Karpathy, the repo is still a WIP, and the first working example is GPT-2 (aka the grand-daddy of LLMs 😅) 👇🏼

🔗: github.com/karpathy/llm.c

Researchers have discovered three security vulnerabilities in TorchServe, an open-source tool for serving and scaling PyTorch machine learning models. Collectively known as "ShellTorch," they could allow server takeover and remote code execution (RCE). The flaws stem from TorchServe's management-interface API configuration, which can leave it accessible to external requests without authentication. While there is no evidence of exploitation so far, it's essential to update TorchServe to the latest version (0.8.2) and apply additional security measures to protect against potential attacks.
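
If you run TorchServe, the quickest hardening step is making sure the management API only listens on loopback. A minimal config.properties sketch, assuming the stock ports (adjust to your deployment):

```
# config.properties: bind both APIs to loopback instead of 0.0.0.0
inference_address=http://127.0.0.1:8080
management_address=http://127.0.0.1:8081
```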

After speed-running two years’ worth of docs (some) and hacks (mostly) re: #PyTorch workloads running natively on the Apple silicon #MPS (#Metal Performance Shaders) device, the one-liner that takes care of it on any of the current PT 2.0 nightlies is putting this at the top of your code:

👉 torch.set_default_device("mps") 👈

Hopefully that saves someone a lot of frustration.
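
In context, that's roughly this (a minimal sketch, assuming a recent 2.0 nightly on Apple silicon):

```python
import torch

# Guard the one-liner so the same script still runs off Apple silicon.
if torch.backends.mps.is_available():
    torch.set_default_device("mps")  # new tensors/modules default to the GPU

x = torch.randn(3, 3)  # lands on "mps" if the default was set
print(x.device)
```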

Exciting time to be a #pytorch user! PyTorch 2.0 is in the nightly branch and promises much faster training.
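
Much of that promised speedup comes from the new torch.compile API. A minimal sketch (the toy model and shapes here are made up for illustration):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
compiled = torch.compile(model)  # compiles on first call, then reuses the graph

x = torch.randn(32, 128)
print(compiled(x).shape)  # same outputs as model(x), just faster after warmup
```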

Oh yeah, and PyTorch nightly got hit with a supply-chain attack.