#ffmpeg


OK, any #video folks out there who know how to do what I want to do? I don't know what words to search for because I don't know what this technique is called. Boosts welcome, suggestions welcome.

I have a pool cleaning robot. Like a Roomba, but for the bottom of the pool. We call it Poomba. Anyways, I want to shoot an MP4 video with a stationary camera (a GoPro) looking down on the pool while the robot does its work. So I will have this overhead video of like 3-4 hours.

I want to kinda overlay all the frames of the video into a single picture. So the areas where the robot drove will be dark streaks (the robot is black and purple). And any area the robot didn't cover would show the white pool bottom. Areas the robot went over a lot would be darker. Areas it went rarely would be lighter.

I'm just super curious how much coverage I actually get. This thing isn't a Roomba. It has no map and it definitely doesn't have an internet connection at the bottom of the pool. (Finally! A place they can't get AI, yet!) It's just using lidar, motion sensors, attitude sensors, and some kind of randomizing algorithm.

I think of it like taking every frame of the video and compositing it down at like 0.001 opacity. By the end of the video, the things that never changed (the pool itself) would be full brightness and clear, while the robot's paths would be faint, except where it repeated a lot, which would be darker.

I could probably rip it into individual frames using #ffmpeg and then do this compositing with #ImageMagick or something (I'm doing this on #Linux). But 24fps x 3600 seconds/hour x 3 hours == about 260K frames. My laptop will take ages to brute force this. Any more clever ways to do it?

If I knew what this technique/process was called, I'd search for it.
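FWIW, the effect is usually called a long-exposure or temporal-average stack (the darkest-pixel variant is a darken/min stack), which should help the searching. A rough sketch of one way to do it, assuming the GoPro file is pool.mp4 and ImageMagick 7 (the magick binary; IM6 calls it convert):

# 1) sample one frame per second instead of touching all ~260K frames
mkdir -p frames
ffmpeg -i pool.mp4 -vf fps=1 frames/f%05d.png

# 2) darken/min stack: keep the darkest pixel ever seen, constant memory
cp frames/f00001.png coverage.png
for f in frames/f*.png; do
  magick coverage.png "$f" -compose Darken -composite coverage.png
done

That shows everywhere the robot ever went. For the lighter/darker frequency shading, a plain average works, but -evaluate-sequence holds every image in memory, so sample sparsely (e.g. fps=1/30, a few hundred stills): magick frames/f*.png -evaluate-sequence Mean heatmap.png. ffmpeg's tmix filter can also average frames directly, though its window is capped (1024 frames in recent versions).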

🎞️ A developer managed to reverse pixelation in video using FFmpeg, GIMP and edge detection - no AI involved.

By analyzing motion and edges across frames, they could reconstruct original content from blurred areas.

It’s a reminder: pixelation is visual, not secure.

🛠️ Code & demo: github.com/KoKuToru/de-pixelat

GitHub - KoKuToru/de-pixelate_gaV-O6NPWrI: de-pixelate youtube video gaV-O6NPWrI

So I have hundreds of videos of ~1 minute each, recorded on my phone ~10 years ago; they generally don't have great compression, nor are they stored in a modern, efficient video format.

For archiving purposes, I want to take advantage of my workstation's mighty GPU to process them so that the quality stays approximately the same but the file size is strongly reduced.

Nevertheless, compressing video is terribly hard, and way more complex than compressing pictures, so I wouldn't really know how to do this: what format to use, what codec, what bitrate, what parameters to keep an eye on, etc.

I don't care if the compression takes a lot of time; I just want smaller but good-looking videos.

Any tips? (Links to guides and tutorials are ok too)

Also, unfortunately I am forced to use Windows for this (don’t ask me why 🫠), but I know nothing about Windows because I hate it. Practical software suggestions are very much welcome, too!

#ffmpeg #help #askFedi
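One hedged starting point (file names made up): since encode time doesn't matter here, a software HEVC encode will usually beat the GPU encoders on size at the same quality; NVENC/QuickSync/AMF are built for speed, not compression efficiency, so the mighty GPU is arguably better left idle for this job. Something like:

ffmpeg -i old_clip.mp4 -c:v libx265 -preset slow -crf 24 -c:a copy -tag:v hvc1 new_clip.mp4

Lower -crf means better quality and bigger files; try values around 22-26 on a single test clip and look at the result before batch-converting the rest. On Windows, the official ffmpeg builds from gyan.dev or BtbN run fine from a terminal, and HandBrake is a well-regarded GUI around the same x265 encoder.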
Replied in thread

@sotolf @joel

Millennial/Zoomer long-form podcast enjoyers vs. GenX Zero Attention Span dude. XD

(To be fair, I have had some audio things that were over nine hours long, but they were compilations of several sermons concatenated together (and audio-equalized) with #ffmpeg)
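For anyone wanting to replicate that kind of job, a minimal sketch using ffmpeg's concat demuxer plus the loudnorm filter for the level-matching (file names are made up):

# list.txt holds one line per input, in playback order:
#   file 'sermon01.mp3'
#   file 'sermon02.mp3'
ffmpeg -f concat -safe 0 -i list.txt -filter:a loudnorm -c:a libmp3lame -q:a 4 all_sermons.mp3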

Replied in thread

@christianp

Just a thought, from a knuckle-dragging biology scientist. TL;DR: I believe there is scope to make the hosting of a peertube instance even more lightweight in the future.

I read some time ago of people using #webAssembly to transcode video in a user's web-browser. blog.scottlogic.com/2020/11/23

Since then, I believe #WebGPU has done/is doing some clever things to improve the browser's access to the device's GPU.

I have not seen any #peertube capability that offloads video transcoding to the user in this way.

I imagine, though, that this would align well with peertube's agenda of lowering the barrier to entry into web-video hosting, so I cannot help but think that this will come in time.

My own interest is seeing a #Piefed (activitypub) instance whose web-pages could #autotranslate posts into the user's own language using the user's own processing power... One day, maybe!

Thank you again for all your hard work; it is an inspiration.

Scott Logic - In-browser transcoding of video files with FFmpeg and WebAssembly: The WebAssembly build of FFmpeg allows you to run this powerful video processing tool directly within the browser. In this blog post I explore FFmpeg.wasm and create a simple client-side transcoder that streams data into a video element, with a bit of RxJS thrown in for good measure.

Re: okla.social/@johnmoyer/1147381

rsok.com/~jrm/2025May31_birds_

ffmpeg -loop 1 -i IMG_3666cs_gimp.JPG -y \
  -filter_complex "[0]scale=1200:-2,setsar=1:1[out];[out]crop=1200:800[out];[out]scale=8000:-1,zoompan=z='zoom+0.005':x=iw/3.125-(iw/zoom/2):y=ih/2.4-(ih/zoom/2):d=3000:s=1200x800:fps=30[out]" \
  -vcodec libx265 -map "[out]" -map 0:a? -pix_fmt yuv420p -r 30 -t 30 video_IMG_3666c_2.mp4
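Roughly what that filter chain is doing, for anyone who wants to adapt it (the big intermediate upscale is a common trick to smooth out zoompan's integer-pixel jitter):

# scale=1200:-2,setsar=1:1   resize to 1200 px wide (even height), square pixels
# crop=1200:800              cut to a 3:2 frame
# scale=8000:-1              upscale a lot before zooming so the pan moves smoothly
# zoompan:
#   z='zoom+0.005'           zoom in a little more on every output frame
#   x=iw/3.125-(iw/zoom/2)   keep the view centered about a third of the way across...
#   y=ih/2.4-(ih/zoom/2)     ...and a bit above the vertical middle
#   d=3000                   up to 3000 output frames from the one still
#   s=1200x800:fps=30        output size and rate (with -t 30 capping it at 30 s)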

Replied in thread

@snacks that's because #ffmpeg was built by more people and can utilize more acceleration options (from SSE to SSE4.2) beyond vendor- and generation-proprietary code (GPU encoders and decoders expect to be fed a native bitstream, like the hardware in dedicated CD/DVD/Blu-ray players)

  • Add to that parallelization and thousands of micro-optimizations, from how it ingests files, to transcoding, to how it tunes codec settings for the best possible results, and you get a better product.

Not to mention big players from Alibaba to Zoom, from Amazon to Netflix, and from Apple to Sony have a vested interest in its performance, because bandwidth, storage, and computational power are expensive for them.
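An easy way to see that gap first-hand, assuming an NVIDIA card and any test clip (names are placeholders, and NVENC option names shift between ffmpeg versions): encode the same source both ways and compare sizes at similar visual quality.

# CPU: libx264 with all its SIMD micro-optimizations
ffmpeg -i src.mp4 -c:v libx264 -preset slower -crf 20 sw.mp4
# GPU: h264_nvenc, far faster, typically larger at comparable quality
ffmpeg -i src.mp4 -c:v h264_nvenc -preset p7 -cq 20 hw.mp4

(-crf and -cq are not the same scale, so judge by eye rather than by the numbers.)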

Usually, I'm quite happy with #Linux:

It just works, and I don't have to think about it at all.

But sometimes, I reckon: If it sucks, it sucks badly.

I use the command-line video editor #ffmpeg quite a lot, but yesterday it just stopped working, first producing wrong results and then weird error messages.

Internet search for the error messages gave lots of results from the 2010s and Reddit threads where the issue remained unresolved.

"OK", I thought, "Let's do what everyone does":

1/3

Replied in thread

@jcrabapple I had a few stumbles with #JellyFin to start (around it needing a special - but sanctioned - build of #ffmpeg on my platform), but once I figured that out it has been smooth sailing and I'm quite enjoying it. Metadata success does rely on proper media curation, but #Filebot has been super helpful there and is worth its one-time cost.

If only there were #xbox and #ps5 clients for it; right now I use a low-power Linux box for playback on my TV.

Replied in thread

🧵 3/3 - The Making Of

I kept getting #HTTP 413 "Payload Too Large" errors on my instance - it's not actually mine-mine, I'm just a guest here - layer8.space is run by @Sammy8806. (Thanks, mate 🖖)

👉 1st, there was 1080p60 with audio
👉 then I trimmed and rendered it out as 720p30 with audio on the Galaxy S7
👉 on the PC, using #ffmpeg, I've removed the audio and rendered it out at 15 fps
👉 I've trimmed it to 5 seconds

ffmpeg -i src.mp4 -an -c:v libx264 -vf "fps=15" -t 5 out-15fps-noaudio-5s.mp4
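For anyone copying that: -an drops the audio stream, -c:v libx264 re-encodes the video with x264, -vf "fps=15" resamples to 15 frames per second, and -t 5 keeps only the first 5 seconds.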