#zfsonlinux

Sean Fenian:

#ZFS on #Linux observations:

1. ZFS on #Solaris is awesome.
2. My experience with ZFS on Linux has been terrible.

I'm using a Dell #R720 configured as a NAS server, with a Dell PERC H310 controller that natively supports JBOD, running Gentoo Linux. The Dell replaced a succession of two SunFire X4540s, both of which were absolutely rock-solid as NAS servers (until their system controller boards failed) and never once reported a ZFS error except when a drive physically failed. With the R720, I get hot and cold running errors. I'm using all Samsung 870 Evo solid-state drives in two #RAIDZ arrays, one of eight drives and one of six. I am at this very moment in the process of cleaning up the arrays ... again.

What I can't figure out is why.
- Is ZFS on Linux really that terrible?
- Does ZFS on Linux just somehow not work well with SSDs?
- Does the PERC controller in the R720 not work well with SSDs?

I wasn't originally running SSDs in this array; my first attempt used 2.5" spinning-rust drives. I rapidly discovered two things:
1. As far as I can determine, all 2.5" mechanical hard drives of 2TB or larger on the market are SMR drives;
2. OH MY GOD, SMR DRIVES (especially, I am told, under ZFS) ARE UTTERLY FUCKING HORRIBLE except in WORM (write once, read many) applications where you don't really care how slow the original write is. RAIDZ write performance on the Dell with brand-new 2.5" SMR drives was four to six times slower than RAIDZ write performance on the X4540 with older and slower CMR drives on older and slower SCSI/SAS controllers. Despite newer, "faster" drives on a newer, faster controller, the SMR array was utterly unusable.

Now, I'm not experiencing any problems with SSDs in any of my other systems, Windows or Linux, INCLUDING the R720, except with ZFS. The boot drives on the R720 are an mdraid mirror formatted XFS and have never thrown a single error.

So this really leads me to a crucial question:

Is there something I don't know about #ZFSonLinux that causes it to not work well with #SSD drives? Do I need to just forget about running ZFS on my NAS and let the PERC controller create hardware RAID5 volumes?

(And if anyone wonders "why don't you just run a commercial NAS appliance?", well, I tried that route. I tried one of the very latest generation QNAP servers that run ZFS storage on a Linux OS. Oh my god, I can't even begin to describe how horribly bastardized it was. QNAP may well be a good NAS choice if you only care about Windows and SMB, never ever want to look under the hood or accomplish anything except through the web front-end, and don't already have an existing backup solution you want to continue using.)
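
For readers unfamiliar with the "cleaning up the arrays" step described above, here is a minimal sketch of the usual ZFS error-checking workflow; the pool name "tank" and the device path /dev/sdX are placeholders, not details taken from the post.

    # show per-device READ/WRITE/CKSUM error counters and any files ZFS knows are damaged
    zpool status -v tank

    # start a scrub so ZFS re-reads every block and verifies it against its checksums
    zpool scrub tank

    # check scrub progress and results
    zpool status tank

    # once the underlying cause has been addressed, reset the error counters
    zpool clear tank

    # drive-level SMART data (smartmontools) helps separate failing disks from controller or cabling problems
    smartctl -a /dev/sdX

The per-device counters don't explain the root cause by themselves, but they do show whether errors cluster on a few drives or are spread across every device behind the controller.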

Adam ♿:

My new #NAS, aka "scratch" (after an unfortunate incident with one of the case panels), is up and running.

Thanks to anonymous and anonymous(?) and @directhex for all the support, and @jpm for the future support ;)

#HomeLab #ZFS #ZFSOnLinux #UbuntuServer