#stablediffusion


@alex_p_roe Speaking of fruit fly research, you'd be amused or surprised to learn that the original U-Net architecture (which today powers Stable Diffusion, among many other machine learning techniques), introduced in a paper by Ronneberger et al. (2015; arxiv.org/abs/1505.04597), was developed to perform image segmentation of fly neural tissue imaged with electron microscopy, in order to reconstruct neurons and thereby map the brain connectome.

So all those "wasteful" research funding grants for fruit fly work motivated and led to the biggest discovery fueling the whole of the modern "AI" boom. One never knows where basic research will lead; it's impossible to predict. Hence basic research is not wasteful at all. On the contrary, it's essential: the foundation of a rich, wealthy, creative society. And it's also very cheap, comparatively: albert.rierol.net/tell/2016060

Search also for the returns on the Human Genome Project, or for the humble origins of DNA sequencing, to name just two examples among many.

arXiv.org · U-Net: Convolutional Networks for Biomedical Image Segmentation
There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
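To make the "contracting path / symmetric expanding path" description concrete, here is a minimal sketch of the U-Net idea in PyTorch. The depth, channel counts, and padded convolutions are simplifications chosen for illustration, not the paper's exact configuration (the original implementation was in Caffe).

```python
# Minimal U-Net sketch: a contracting path to capture context, a symmetric
# expanding path with skip connections for precise localization.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic block on both paths.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        # Contracting path: context at progressively lower resolution.
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        # Expanding path: upsample and concatenate skip connections.
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution features
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # segmentation logits

# e.g. a single-channel 128x128 electron-microscopy tile:
logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```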

What if your next funk track or your #Ghibli-style video were generated on a cooperative PC? 🎶✨

With #CopyLaRadio and the #UPlanet network, use the superpowers of the #BRO tag and your personal #IA (AI) to create:

🖼️ Images (#image)

🎵 Music (#music + #parole)

🎥 Videos (#video)

Example:
#BRO #video A robot in an enchanted forest, golden light =>
ipfs.copylaradio.com/ipfs/QmYR

copylaradio.com/blog/blog-1/po
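The #BRO request above is just a set of hashtags followed by a free-text prompt and a trailing "=>" marker. As a purely illustrative sketch (not UPlanet's actual implementation, whose internals are not shown here), such a command could be pulled apart like this before being handed to an image, music, or video generator:

```python
# Illustrative only: a tiny parser for posts of the form
# "#BRO #video <prompt> =>". The media tags come from the post above;
# the parsing logic itself is an assumption, not CopyLaRadio's code.
import re

MEDIA_TAGS = {"image", "music", "parole", "video"}

def parse_bro_command(post: str):
    """Return (media_types, prompt) if the post is a #BRO request, else None."""
    if "#BRO" not in post:
        return None
    # Collect recognised media hashtags, e.g. #video or #music + #parole.
    media = [t for t in re.findall(r"#(\w+)", post) if t.lower() in MEDIA_TAGS]
    # The prompt is whatever remains once tags and the "=>" marker are stripped.
    prompt = re.sub(r"#\w+", "", post).replace("=>", "").strip()
    return (media, prompt) if media and prompt else None

print(parse_bro_command("#BRO #video A robot in an enchanted forest, golden light =>"))
# (['video'], 'A robot in an enchanted forest, golden light')
```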

Casually using generative AI gives its progenitors the numbers they need to justify their mad expansion. Please abandon these tools, and let's return to thinking up our own ideas, assembling the physical and digital materials that allow the greatest possible expression, and making something that tells the world who we are.

→ 'I Loved That AI:' Judge Moved by AI-Generated Avatar of Man Killed in Road Rage Incident
404media.co/i-loved-that-ai-ju

“In a first, the Arizona #court accepted an AI-generated video statement in which an #avatar made to look and sound like [the man who was killed] spoke.”

“[They] used #StableDiffusion fine-tuned with a Low-Rank Adaptation (#LoRA) to craft the #video. ‘And then we used a #generative #AI and deep learning processes to create a voice #clone from his original #voice.’”

404 Media · 'I Loved That AI:' Judge Moved by AI-Generated Avatar of Man Killed in Road Rage Incident
How the sister of Christopher Pelkey made an avatar of him to testify in court.
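For readers unfamiliar with the technique named in the quote: Low-Rank Adaptation fine-tunes a large pretrained model such as Stable Diffusion by learning a small low-rank update to frozen weight matrices instead of retraining them. The sketch below shows the core idea on a single linear layer; it is a generic illustration of LoRA (Hu et al., 2021), not the pipeline the family actually used, and the shapes and hyperparameters are assumptions.

```python
# Minimal LoRA sketch: keep the pretrained weight W frozen and learn a
# low-rank update B @ A, so the effective weight is W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)  # low-rank factor A
        self.B = nn.Parameter(torch.zeros(out_f, r))        # B starts at zero: no change initially
        self.scale = alpha / r

    def forward(self, x):
        # Original projection plus the learned low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only the small A and B matrices train: 2 * 4 * 768 = 6144
```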