
#ceph

D. Moonfire

I was in grade school when my mom first set up a RAID box in our house (where she ran her business as a consultant). It was a relatively small thing, but she was doing consulting work on storage systems and I got to play with hardware RAID cards, which was a lot of fun (I mean, I was ten and I was getting to play with a brand new Macintosh Plus, cutting-edge PCs, and anything else she could convince a customer to buy for her).

The first time we lost a drive, she and I spent hours trying to puzzle out how to recover it. There is a big difference between the theory of how RAIDs work and actually sitting at a table ten minutes before school, watching it slowly crawl from 3% recovered to 4%. It felt like the slowest thing in the world, since she was in the middle of a project and we needed the files.

The first thing I did when I got home was rush over, only to see that it was at just 80-something percent. That put me in a sour mood. :) It wouldn't be done for another couple of hours, but then it worked! It finished about a half hour after she came home, and we interrupted dinner to check it out.

That was cool.

It wasn't until a few months later that I found out where it didn't work. The house didn't have exactly clean power, and 80s technology wasn't as reliable as today's, so we lost another drive. And in the middle of the RAID 5 recovery, we lost a third drive.

That is when I learned the heartbreak of trying to fix something that couldn't be fixed. Fortunately, it was only a small project then, and we were able to recover most of it from memory and the files we did have.

We ended up upgrading the house to a 200 amp service, and I got some penalty chores helping my dad run new electrical lines to her office so she could have better power. We stopped losing drives after that, but that's a different aspect of my childhood.

But it came out as a good lesson: drives will fail. It doesn't matter how big they are, how much you take care of them, or anything else. It also taught me that RAID is ultimately fragile. It handles "little" failures, but there is always a bigger failure.

Plus, history has strongly suggested that when my mother or I get stressed, computers have a tendency to break around us. After the derecho and the stunning series of bad luck I had for three years (https://d.moonfire.us/tags/entanglement-2021/), high levels of stress around me cause things to break. I have forty years of history to back that up. Hard drives are one of the first things to go around me, which has given me a lot of interest in resilient storage systems, because having the family bitching about Plex not being up is a good way to keep being stressed out. :D

I think that is why I gravitated toward Ceph and SeaweedFS. Yeah, they are fun, but the distributed network is a lot less fragile than a single machine running a RAID. When one of my eight-year-old computers dies, I'm able to shuffle things around and pull it out. When technology improves or I get a few-hundred-dollar windfall, I get a new drive.

It's also my expensive hobby. :D Along with writing.

And yet, cheaper than LEGO.

#SeaweedFS #Ceph
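A toy sketch of the RAID 5 property behind that lesson, assuming a made-up stripe of three data blocks plus one XOR parity block: a single lost block can be rebuilt from the survivors, but a second loss during the rebuild leaves nothing to reconstruct from.

```python
# RAID 5 stores one XOR parity block per stripe; any single missing block
# can be recomputed from the remaining ones. Toy example with three data
# drives and one parity drive (the values are made-up stripe contents).
from functools import reduce

data = [0b1010, 0b0110, 0b1100]            # blocks on drives 0..2
parity = reduce(lambda a, b: a ^ b, data)  # block on the parity drive

# Drive 1 dies: rebuild its block from the survivors plus parity.
survivors = [data[0], data[2], parity]
rebuilt = reduce(lambda a, b: a ^ b, survivors)
assert rebuilt == data[1]

# Lose a second drive mid-rebuild and there is no longer enough
# information left to XOR against -- exactly the failure described above.
```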
crabbypup

Guess it's time for a new #introduction, post instance move.

Hi! I'm Crabbypup, or just 'crabby', though only in name most days.

I'm a Linux flavored computer toucher from Kitchener-Waterloo.

I tend to share stuff about the region, open source software in general and #linux in specific.

I like to tinker in my #homelab, where I run #proxmox, #ceph, and a bunch of other #selfhosted services including #homeassistant.

I'm a rather inconsistent poster, but I'm glad to be here.
Michael

New blog post: https://blog.mei-home.net/posts/k8s-migration-25-controller-migration/

I like to think that many of my blog posts are mildly educational, perhaps even helping someone in a similar situation.

This blog post is the exception. It is a cautionary tale from start to finish. I also imagine that it might be the kind of post someone finds on page 14 of Google at 3 am and names their firstborn after me.

#HomeLab #Ceph #Blog
Ruud

I am testing out some tech I want to learn more about :-)

I have used #Terraform to create some VMs in my #Proxmox server, then with #kubespray installed a Kubernetes cluster on them.

Next I'll install #rook / #ceph so I have some storage, and last but not least I will install #CloudNativePG @CloudNativePG@mastodon.social on it.

If that all works, I'll repeat that in another datacenter to test CloudNativePG replica clusters.

Combining hobby and work..
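A rough sketch of that order of operations in plain Python; the Terraform directory, kubespray inventory path, and manifest names are placeholders borrowed from the standard Terraform, kubespray, and Rook examples rather than anything from this particular setup.

```python
# Orchestrate the workflow described above: Terraform for the Proxmox VMs,
# kubespray for Kubernetes, then Rook/Ceph and CloudNativePG on top.
# All paths and manifest names below are placeholders for this sketch.
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create the VMs on Proxmox (assumes a Terraform config in ./proxmox).
run("terraform", "-chdir=proxmox", "apply", "-auto-approve")

# 2. Install Kubernetes with kubespray (run from the kubespray repo root,
#    against a previously generated inventory).
run("ansible-playbook", "-i", "inventory/mycluster/hosts.yaml",
    "--become", "cluster.yml")

# 3. Deploy the Rook operator and a Ceph cluster (manifests as shipped in
#    Rook's deploy/examples directory).
for manifest in ("crds.yaml", "common.yaml", "operator.yaml", "cluster.yaml"):
    run("kubectl", "apply", "-f", manifest)

# 4. Install the CloudNativePG operator (manifest name is a placeholder).
run("kubectl", "apply", "--server-side", "-f", "cnpg-operator.yaml")
```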
Michael

Ceph cluster goes brrrrrrrrr. 🤓

#Ceph #HomeLab
Michael

New blog post: https://blog.mei-home.net/posts/ceph-copy-latency/

I take a detailed look at the copy operation I recently did on my media collection, moving 1.7 TB from my old Ceph cluster to my Rook one.

Some musings about Ceph and HDDs, as well as a satisfying amount of plots. Which are sadly not really readable. 😔 I definitely need a different blog theme which allows enlargement of figures.

#HomeLab #Ceph #blog
okanogen VerminEnemyFromWithin

There is no reason at all to entrust your company or personal data to ANY #Cloud service. If you are a company, build your own hardware infrastructure with #Ceph, #Proxmox, #Openstack, or others. IT WILL SAVE YOU MONEY. If you are an individual, back your data up at home on a NAS.
Use and support #OpenSource.
Ditch #Microsoft.
Right now it's the US that's the problem, but no government or megacorporation can be trusted.
https://www.osnews.com/story/141794/it-is-no-longer-safe-to-move-our-governments-and-societies-to-us-clouds/
Rachel

Ok, two Ceph questions:

1. Does anyone have a monitoring/alerting rec or example for rook/ceph, or a link to a good article on it?

2. Any recs for a GUI S3 browser? I can see details of the buckets from the dashboard, but nothing about the contents like I could with minio.

#Homelab #Kubernetes #Ceph
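Not a GUI, but as a stopgap for question 2, something like this boto3 sketch can at least list a bucket's contents against the RGW S3 endpoint; the endpoint URL, credentials, and bucket name are placeholders to be filled in from the object store user's secret.

```python
# List the contents of a Ceph RGW (S3-compatible) bucket with boto3.
# The endpoint URL, credentials, and bucket name are placeholders --
# substitute the values from your rook-ceph object store user secret.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rook-ceph-rgw-my-store.rook-ceph.svc:80",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Paginate so buckets with more than 1000 objects are listed completely.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket"):
    for obj in page.get("Contents", []):
        print(f'{obj["Size"]:>12}  {obj["LastModified"]:%Y-%m-%d}  {obj["Key"]}')
```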
Pierre Boudes

Second topic.
We will most likely be offering hosting of virtual machines and #Kubernetes clusters for our members. Among other things, we will have to choose between several hypervisors. Starting from scratch, how do we keep it simple, free/libre if possible, not too expensive, and easy to administer?

For now, we have #proxmox and #ceph. Is it worth building up expertise with other hypervisors, or is it better to invest in k8s administration on our VMs?
Codeberg.org

We have published a new newsletter about our current activity: https://blog.codeberg.org/letter-from-codeberg-looking-into-2025.html

* Meet us at #FOSDEM in Brussels and get stickers for you and your friends!
* Learn about our infrastructure improvements, networking and #Ceph storage.
* Read about other news from the past months.
Lucas Janin 🇨🇦🇫🇷

I'm learning #Proxmox #cluster with #Ceph by virtualizing it on my Proxmox mini PC. Total inception mode 😀, but for learning purposes. The setup was relatively easy, and it worked very well.

- Manually migrating an LXC container from one node to another took only 10 seconds of downtime.

- Powering off a node, so that the LXC running on it automatically migrates to another node, took one minute and 10 seconds of downtime.

I'm surprised by the last number. Any advice would be welcome.

#homelab #linux
makuharigaijin

I got delivery of my new #kubernetes nodes (#minisforum #ms01). Now I'm waiting for RAM and cables. Rather than decide between using the 10Gbit SFP+ ports and the USB-4 ports for the ring network (for #Ceph), I'm just going to set up both. That should give me 30Gbit between each node without using a switch.

Now I need to decide which OS I'll use. Stick with #RHEL 9, or change to #RockyLinux or #Fedora?
Codeberg.org

An exciting evening ahead of us: we are performing maintenance on our #Ceph storage system and will distribute data across the machines for the first time.

We expect little or no interruption of our services, but performance might degrade while the new nodes are backfilling.
Jinna the bureaucracy witch

I have now tried everything in my power short of shutting down the #Ceph cluster to get rid of these log lines:

> cluster [DBG] pgmap v1392: 321 pgs: 321 active+clean; 284 GiB data, 878 GiB used, 6.4 TiB / 7.3 TiB avail; 4.2 KiB/s rd, 976 KiB/s wr, 123 op/s

They get spammed by every mon every 2 seconds, which eats up log storage and SSD life for no goddamn reason. It's supposedly DBG, but even setting every single (and I mean every single) debug_ config option to 0/5 does not stop it. The central config store is also completely ignored by cephadm, which forces either syslog or journal logging to be on; fuck you and your choices.

My remaining options are literally to either nuke journald so there's nowhere for the logs to go, or to move journald into memory and write a VRL parser on top of it to drop the useless goddamn events before they go into log storage.

And what was my unreasonable goal? To send important logs to storage, not debug.
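A last-resort sketch along the lines of that "parse and drop" option: follow the journal with python-systemd and discard the pgmap chatter before anything reaches long-term storage. The match strings and the forward_to_storage() sink are assumptions for illustration, not part of any Ceph or cephadm tooling.

```python
# Follow the journal and drop the periodic "pgmap v..." DBG lines before
# forwarding the rest to long-term log storage. Requires the python-systemd
# package; the substrings matched below are taken from the log line quoted
# above and may need adjusting for other clusters.
import select
from systemd import journal

def forward_to_storage(message: str) -> None:
    # Placeholder sink -- replace with the real log shipper.
    print(message)

reader = journal.Reader()
reader.this_boot()      # only entries from the current boot
reader.seek_tail()      # start at the end of the journal...
reader.get_previous()   # ...positioned just before the newest entry

poller = select.poll()
poller.register(reader, reader.get_events())

while poller.poll():
    if reader.process() != journal.APPEND:
        continue
    for entry in reader:
        message = entry.get("MESSAGE", "")
        # Drop the pgmap status spam, keep everything else.
        if "pgmap v" in message and "active+clean" in message:
            continue
        forward_to_storage(message)
```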
Michael

Fresh blog post: https://blog.mei-home.net/posts/mastodon-media-cache-cleanup-issue/

In this one, I try to fix the failing media cache cleanup for my Mastodon server - with only a very small amount of success in the end. On the positive side: I got to look at some C++ code, recreationally!

#HomeLab #Blog #Ceph #MastoAdmin
okanogen VerminEnemyFromWithin

@wolfensteijn
Ugh. I had this happen with three NVMe drives that had my #Ceph bluestore. A disaster.
Kevin Karhan :verified:

@uastronomer It's something I did implement in the past (albeit with #KVM + #Proxmox, but the steps are similar enough):

You can separate #Storage and #Compute, given you have a Storage-LAN that is fast enough (and does at least 9k if not 64k Jumbo Frames): keep the "Compute Nodes" entirely #diskless (booting via #iPXE from the #SAN) and then mount the storage via #iSCSI or #Ceph.

- Basically it allows you to scale Compute and Storage independently from each other, as they are transparent layers, and not be confined to the limits of a single chassis & its I/O options...

Did a bigger project (easily 8 digits in hardware, as per MSRP) where an Employer/Client did a #CloudExit amidst escalating costs, with the #ROI within quarters (if not months at the predicted growth rate)...
Kevin Karhan :verified:

@uastronomer Now, if you separate the compute and storage layers, with diskless compute nodes accessing the filesystem via #iSCSI or #Ceph, you can even do superfast updates by merely rebooting the jail/host...
Kevin Karhan :verified:

@perry_mitchell I'd avoid not just #SMR but all #Helium-filled drives as a matter of principle.

- Also, isn't #UnRaid that weird #KVM distro?

I mean, I know #trueNAS SCALE & #ProxMox do #ZFS + #Ceph for #clustering and #redundancy...
Kevin Karhan :verified:

@marcan Well, #ZFS and #Ceph have entirely different use cases and original designs.

- Ceph, like #HAMMER & #HAMMER2, was specifically designed to be a #cluster #filesystem, whereas ZFS & #btrfs are designed as local, single-host storage options.

- OFC I did see and even set up some "cursed" stuff like Ceph on ZFS myself, and yes, that is a real deployment run by a real corporation in production...

https://forum.proxmox.com/threads/solution-ceph-on-zfs.98437/

Still less #cursed than what a predecessor of mine once did: deploying ZFS on a hardware #RAID controller!