#ceph

Continued thread

I was in grade school when my mom first set up a RAID box in our house (where she ran her consulting business). It was a relatively small thing, but she was doing consulting work on storage systems and I got to play with hardware RAID cards, which was a lot of fun (I mean, I was ten and I was getting to play with a brand new Macintosh Plus, cutting-edge PCs, and anything else she could convince a customer to buy for her).

The first time we lost a drive, she and I spent hours trying to puzzle out how to recover it. There is a big difference between the theory of how RAIDs work and actually sitting at a table ten minutes before school watching it slowly jump from 3% recovered to 4. I mean, it felt like the slowest thing since she was in the middle of a project and we needed the files.

The first thing I did when I got home was rush over to see that it was only 80-something percent done. That put me in a sour mood. :) It wouldn't be done for another couple of hours, but then it worked! It finished about a half hour after she came home, and we interrupted dinner to check it out.

That was cool.

It wasn't until a few months later that I found out where it didn't work. The house didn't exactly have clean power, and 80s technology wasn't as reliable as today's, so we lost another drive. And then, in the middle of the RAID 5 recovery, we lost a third drive.

And that is when I realized the heartbreak of trying to fix something that couldn't be fixed. Fortunately, it was only a small project then, and we were able to recover most of it from memory and the files we did have.

We ended up upgrading the house to 200 amp service, and then I got the penalty chore of helping my dad run new electrical lines to her office so she could have better power and we stopped losing drives, but that's a different aspect of my childhood.

But it was a good lesson: drives will fail. It doesn't matter how big they are, how much you take care of them, or anything else. It also taught me that RAID is ultimately fragile. It handles "little" failures, but there is always a bigger failure.

Plus, history has strongly suggested that when my mother or I get stressed, computers have a tendency to break around us. Actually, after the derecho and the stunning series of bad luck I had for three years, I'd say high levels of stress around me cause things to break. I have forty years of history to back that up. Hard drives are one of the first things to go around me, which has given me a lot of interest in resilient storage systems, because having the family bitching about Plex not being up is a good way to keep being stressed out. :D

I think that is why I gravitated toward Ceph and SeaweedFS. Yeah, they are fun, but a distributed network is a lot less fragile than a single machine running a RAID. When one of my eight-year-old computers dies, I'm able to shuffle things around and pull it out. When technology improves or I get a few-hundred-dollar windfall, I get a new drive.

It's also my expensive hobby. :D Along with writing.

And yet, cheaper than LEGO.

d.moonfire.us · Entanglement 2021

Guess it's time for a new #introduction, post instance move.

Hi! I'm Crabbypup, or just 'crabby', though only in name most days.

I'm a Linux flavored computer toucher from Kitchener-Waterloo.

I tend to share stuff about the region, open source software in general, and #linux in particular.

I like to tinker in my #homelab, where I run #proxmox, #ceph, and a bunch of other #selfhosted services including #homeassistant.

I'm a rather inconsistent poster, but I'm glad to be here.

New blog post: blog.mei-home.net/posts/k8s-mi

I like to think that many of my blog posts are mildly educational, perhaps even helping someone in a similar situation.

This blog post is the exception. It is a cautionary tale from start to finish. I also imagine that it might be the kind of post someone finds on page 14 of google at 3 am and names their firstborn after me.

ln --help · Nomad to k8s, Part 25: Control Plane Migration — Migrating my control plane to my Pi 4 hosts.

New blog post: blog.mei-home.net/posts/ceph-c

I take a detailed look at the copy operation I recently did on my media collection, moving 1.7 TB from my old Ceph cluster to my Rook one.

Some musings about Ceph and HDDs, as well as a satisfying number of plots, which are sadly not really readable. 😔 I definitely need a different blog theme that allows enlarging figures.

ln --help · Ceph: My Story of Copying 1.7 TB from one Cluster to Another — Lots of plots and metrics, some grumbling. Smartctl makes an appearance as well.

There is no reason at all to entrust your company or personal data to ANY #Cloud service. If you are a company, build your own hardware infrastructure with #Ceph, #Proxmox, #Openstack, or others. IT WILL SAVE YOU MONEY. If you are an individual, back your data up at home on a NAS.
Use and support #OpenSource.
Ditch #Microsoft.
Right now it is the US that is the problem, but no government or megacorporation can be trusted.
osnews.com/story/141794/it-is-

www.osnews.com · It is no longer safe to move our governments and societies to US clouds – OSnews
Replied in thread

Second topic.
We will most likely be offering hosting of virtual machines and #Kubernetes clusters for our members. Among other things, we will have to choose between several hypervisors. Starting from scratch, how do we keep it simple, free/libre if possible, not too expensive, and easy to administer?

For now, we have #proxmox and #ceph. Is it worth moving up in technical complexity with other hypervisors, or is it better to invest in administering k8s on our VMs?

We have published a new newsletter about our current activity: blog.codeberg.org/letter-from-

* Meet us at #FOSDEM in Brussels and get stickers for you and your friends!
* Learn about our infrastructure improvements, networking and #Ceph storage.
* Read about other news from the past months.

blog.codeberg.org · Letter from Codeberg: Looking into 2025 — Codeberg News (This is a stripped-down version of the newsletter sent out to members of...

I'm learning #Proxmox #cluster with #Ceph by virtualizing it on my Proxmox Mini PC. Total inception mode 😀, but for learning purposes. The setup was relatively easy, and it worked very well.

- Manually migrate an LXC container from one node to another with only 10 seconds of downtime.

- Power off a node, and automatically migrate the LXC running on it to another node with one minute and 10 seconds of downtime.

I’m surprised by the last number. Any advice will be welcome.
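
In case it helps anyone reproduce the test, here is roughly what it boils down to on the CLI (the container ID and node names below are made-up placeholders):

  pct migrate 101 pve2 --restart         # restart-mode migration of a running LXC to another node
  ha-manager add ct:101 --state started  # put the container under HA so it gets recovered if its node dies
  ha-manager status                      # watch the cluster move the container after a node is powered off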

An exciting evening ahead of us: We are performing maintenance on our #Ceph storage system and will distribute data across the machines for the first time.

We expect little or no interruption of our services, but performance might degrade while the new nodes are backfilling.
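
For anyone curious, this is roughly what keeping an eye on the backfill looks like from the CLI (exact tuning knobs vary between Ceph releases, so treat the last line as an example rather than a recipe):

  ceph -s                                  # overall health plus recovery/backfill progress
  ceph osd df tree                         # per-OSD utilization as data spreads onto the new nodes
  ceph config set osd osd_max_backfills 1  # example: throttle concurrent backfills if client I/O suffers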

Replied in thread

@uastronomer it's something I did implement in the past (albeit with #KVM + #Proxmox, but the steps are similar enough):

You can separate #Storage and #Compute, given you have a Storage-LAN that is fast enough (and does at least 9k if not 64k Jumbo Frames), make the "Compute Nodes" entirely #diskless (booting via #iPXE from the #SAN), and then mount the storage via #iSCSI or #Ceph.

  • Basically it allows you to scale Compute and Storage independently from each other, as they are transparent layers, and you are not confined to the limits of a single chassis & its I/O options...
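
To make the diskless part concrete, the boot side can be as small as an iPXE script along these lines (the address and target IQN are made-up examples):

  #!ipxe
  dhcp
  # boot the node from an iSCSI LUN exported over the Storage-LAN
  sanboot iscsi:10.0.0.10::::iqn.2025-01.lan.storage:compute01-boot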

Did a bigger project (easily 8 digits in hardware, as per MSRP) where an Employer/Client did a #CloudExit amidst escalating costs, with #ROI being within quarters (if not months at the predicted growth rate)...

@marcan Well, #ZFS and #Ceph have entirely different use-cases and original designs.

  • Ceph, like #HAMMER & #HAMMER2, was specifically designed to be a #cluster #filesystem, whereas ZFS & #btrfs are designed for single-host, local storage.

  • OFC I did see and even set up some "cursed" stuff like Ceph on ZFS myself, and yes, that is a real deployment run by a real corporation in production...

forum.proxmox.com/threads/solu

Still less #cursed than what a predecessor of mine once did: deploy ZFS on a Hardware-#RAID-Controller!

Proxmox Support Forum · [Solution] CEPH on ZFS — Hi, I have many problems to install Ceph OSD on ZFS. I get you complete solution to resolve it: Step 1. (repeat on all machines) Install Ceph - #pveceph install Step 2. (run only on main machine in cluster) Init ceph - #pveceph init --network 10.0.0.0/24 -disable_cephx 1 10.0.0.0/24 - your...