#ceph

3 posts · 3 participants · 0 posts today
Kevin Karhan :verified:
@lispi314 @memoria Then I guess maybe #ZFS on its own isn't what you want, but #Ceph and its clustering structure?

- Granted, ZFS was designed by Sun because pre-ZFS storage on #Solaris was just a nightmare to set up and scale (pretty sure @ncommander and @catdraoichta can confirm that!), and its design always targeted enterprise-grade storage, where you have entire racks full of SCSI/SAS expanders connecting dozens if not hundreds of HDDs to one huge machine that then distributes the storage transparently to diskless workstations and servers (based on NIS / NIS+ over Ethernet for networking all of that together).

It being able to run on consumer-grade hardware is rather a sign of computational power becoming more affordable downstream.

- Of course it is *not "the correct way"* to do so!
Lars Marowsky-Brée 😷
The position isn't relevant for me personally, but I think its tasks and subject matter are important. Maybe someone in my network is interested, or knows someone who might be?

(And yes, remote is possible.)

https://jobs.allgeier-public.eu/de/search/eyJjdXJyZW50Sm9iSWQiOiJBRVAtMTUxMDcyIn0=/

#Ceph #GetFediHired #JobAngebot #Digitalisierung
Métaphysicien Douteux
Hey #sysadmin friends, great wizards of #proxmox and #ceph, a networking question please:
- 2 × 10 Gb interfaces in bond0
- bond0 bridged as vmbr0 (fault tolerance + spreading the 20 Gb across VLANs (vmbr0.x), used internally by Ceph and by the VMs)

q1: MTU. I want jumbo frames on the Ceph VLAN and the normal MTU on everything else. I'd set:
- 9000 on the bond
- a little less on vmbr0
- whatever I want on the various vmbr0.x, but at most < vmbr0

q2: how do I manage bandwidth limits per VLAN?
@fatalerrors obviously ;)
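For reference, a minimal sketch of how such a layout is often written in /etc/network/interfaces on Proxmox (ifupdown2 syntax); the NIC names, VLAN IDs and addresses below are placeholders, and the key detail for q1 is that a VLAN's MTU can never exceed its parent's, so both the bond and the bridge need to carry 9000 for the Ceph VLAN to get jumbo frames:

```
# /etc/network/interfaces sketch (NIC names, VLAN IDs and addresses are assumptions)
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    mtu 9000                      # parent must be at least as large as any child VLAN

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    mtu 9000                      # keep the bridge at 9000 too; children pick their own MTU

auto vmbr0.40
iface vmbr0.40 inet static        # Ceph VLAN: jumbo frames
    address 10.0.40.11/24
    mtu 9000

auto vmbr0.10
iface vmbr0.10 inet static        # everything else: standard MTU
    address 10.0.10.11/24
    mtu 1500
```

For q2, Proxmox has no per-VLAN limit out of the box; the usual workarounds are per-virtual-NIC rate limits (the rate= option on a VM's netX device) or tc shaping rules on the bridge, both approximations rather than a true per-VLAN guarantee.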

20:56:10 up 514 days, 6:15, 1 user, load average: 0.94, 0.85, 0.64

Just updating a couple of my #Ceph nodes… it's been a long time on Debian 10, but I need to move on. Ansible no longer works with it.

Procedure so far is to remove the old Ceph repositories (as upstream doesn't support anything beyond Debian 10) and lean on the fact that Debian 11 ships Ceph 14.

It's a slightly older release of Ceph 14, but it'll be enough… I can get the OS updated on all the nodes by doing them one-by-one… then when I get them all up to Debian 11… I should be able to jump to the next stable release of Ceph… then do another round of OS updates to move to Debian 12.

Ohhh boy.
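In case it helps anyone following along, a rough sketch of what that per-node procedure can look like (the repository path and release names are assumptions; adjust to your setup):

```
# Per-node sketch: drop the upstream Ceph repo, move to Debian 11, keep OSDs from rebalancing
ceph osd set noout                                  # don't rebalance while this node is down
rm /etc/apt/sources.list.d/ceph.list                # old upstream repo has nothing past Debian 10
sed -i 's/buster/bullseye/g' /etc/apt/sources.list  # switch to Debian 11 sources
apt update && apt full-upgrade                      # pulls Debian 11's own Ceph 14 packages
reboot
# ...once the node is back and its OSDs have rejoined:
ceph osd unset noout
```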

Ok probably another week to go until I get these drives running

I should probably use that to figure out backups for this current pool.....

If the bulk storage pool in Ceph works well enough I'll ship the current file server off to the parents', but for now I'll be able to use the LAN to do initial syncs.

S3-compatible backup via Velero is one option, with MinIO or Garage running in a container backed by ZFS (I am not building a remote Ceph cluster 😅).

Anyone have thoughts/suggestions on backup strategy here? Probably backing up 2-3TB of data total (lots of photos)

I'll end up with local snapshots and remote backups; huge bonus if the backups can be restored without needing a Ceph cluster, in the case of something catastrophic.
#HomeLab #Ceph

Garage · The Garage team: An open-source distributed object storage service tailored for self-hosting
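A hedged sketch of what the Velero option could look like against a self-hosted S3 endpoint (MinIO or Garage); the bucket name, endpoint URL, namespaces and plugin tag are placeholders, and because file-system backups land as plain objects in the bucket, a restore doesn't strictly require a Ceph cluster on the other end:

```
# Sketch only: bucket, endpoint and plugin version are assumptions; match them to your setup
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.10.0 \
  --bucket homelab-backups \
  --secret-file ./s3-credentials \
  --use-node-agent \
  --default-volumes-to-fs-backup \
  --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://backup-box.lan:9000

# Nightly backup of the namespaces holding the photo workloads (placeholder names)
velero schedule create nightly-photos \
  --schedule "0 3 * * *" \
  --include-namespaces media,photos
```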

Guess it's time for a new #introduction, post instance move.

Hi! I'm Crabbypup, or just 'crabby', though only in name most days.

I'm a Linux flavored computer toucher from Kitchener-Waterloo.

I tend to share stuff about the region, open source software in general, and #linux in particular.

I like to tinker in my #homelab, where I run #proxmox, #ceph, and a bunch of other #selfhosted services including #homeassistant.

I'm a rather inconsistent poster, but I'm glad to be here.

New blog post: blog.mei-home.net/posts/ceph-c

I take a detailed look at the copy operation I recently did on my media collection, moving 1.7 TB from my old Ceph cluster to my Rook one.

Some musings about Ceph and HDDs, as well as a satisfying number of plots, which are sadly not really readable. 😔 I definitely need a different blog theme that allows enlarging the figures.

ln --help · Ceph: My Story of Copying 1.7 TB from one Cluster to Another. Lots of plots and metrics, some grumbling. Smartctl makes an appearance as well.
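(For anyone curious before reading the post: a copy like that is often just rsync between two mounts, roughly like the sketch below; the paths are made up and the post itself may well have done it differently.)

```
# Assumes both the old CephFS and the new Rook-provisioned volume are mounted on one host.
# rsync preserves attributes and can simply be re-run if the copy gets interrupted.
rsync -aHAX --info=progress2 --partial \
    /mnt/old-ceph/media/ /mnt/rook-media/
```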

There is no reason at all to entrust your company or personal data to ANY #Cloud service. If you are a company, build your own hardware infrastructure with #Ceph, #Proxmox, #Openstack or others. IT WILL SAVE YOU MONEY. If you are an individual, back your data up at home on a NAS.
Use and support #OpenSource.
Ditch #Microsoft.
Now the US is bad, but no government or megacorporation can be trusted.
osnews.com/story/141794/it-is-

www.osnews.com · It is no longer safe to move our governments and societies to US clouds – OSnews

Second topic.
We will very likely offer hosting of virtual machines and #Kubernetes clusters for our members. Among other things, we will have to choose between several hypervisors. Starting from scratch, how do we keep it simple, libre if possible, not too expensive and easy to administer?

For the moment we have #proxmox and #ceph. Is it worth ramping up on other hypervisors, or is it better to invest in administering k8s on our VMs?

We have published a new newsletter about our current activity: blog.codeberg.org/letter-from-

* Meet us at #FOSDEM in Brussels and get stickers for you and your friends!
* Learn about our infrastructure improvements, networking and #Ceph storage.
* Read about other news from the past months.

blog.codeberg.org · Letter from Codeberg: Looking into 2025 — Codeberg News. (This is a stripped-down version of the newsletter sent out to members of...

I'm learning #Proxmox #cluster with #Ceph by virtualizing it on my Proxmox Mini PC. Total inception mode 😀, but for learning purposes. The setup was relatively easy, and it worked very well.

- Manually migrate an LXC container from one node to another with only 10 seconds of downtime.

- Power off a node, and automatically migrate the LXC running on it to another node with one minute and 10 seconds of downtime.

I'm surprised by the last number. Any advice would be welcome.
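For reference, a minimal sketch of the commands involved (container ID and node names are placeholders). The roughly one-minute failover is expected: the cluster first has to be sure the failed node is really gone (fencing) before it restarts the container elsewhere, so there is only limited room to tune it down.

```
# Placeholder IDs and node names; LXC migration is restart-mode, not live
pct migrate 101 pve2 --restart     # manual container migration to another node
ha-manager add ct:101              # put the container under HA management
ha-manager set ct:101 --state started
ha-manager status                  # watch where the CRM restarts it after a node failure
```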

An exciting evening ahead of us: We are performing maintenance on our #Ceph storage system and will distribute data across the machines for the first time.

We expect little or no interruption to our services, but performance might degrade while the new nodes are backfilling.
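For anyone watching along at home, these are the kinds of knobs typically used to keep client I/O responsive while new nodes backfill (the values are examples only, and on newer releases the mClock scheduler may override them unless told otherwise):

```
# Example values only; tune per cluster
ceph -s                                          # overall health and backfill progress
ceph config set osd osd_max_backfills 1          # concurrent backfills per OSD
ceph config set osd osd_recovery_max_active 1    # concurrent recovery ops per OSD
ceph balancer status                             # if the balancer is driving the redistribution
```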