Physical test fit is good!
(The 3.5in drives are not populated yet) #Homelab #Ceph #Kubernetes #Minilab
Are you running #OpenStack and #Ceph? Share your story at #Cephalocon in Vancouver on October 28! The CFP is currently open and closes this Sunday, July 13 at 11:59pm PT.
20:56:10 up 514 days, 6:15, 1 user, load average: 0.94, 0.85, 0.64
Just updating a couple of my #Ceph nodes… it's been a long time on Debian 10, but I need to move on. Ansible no longer works with it.
Procedure so far is to remove the old upstream Ceph repositories (they don't ship packages for anything beyond Debian 10) and lean on the fact that Debian 11 ships Ceph 14.
It's a slightly older release of Ceph 14, but it'll be enough… I can update the OS on all the nodes one by one… once they're all on Debian 11, I should be able to jump to the next stable Ceph release, then do another round of OS updates to move to Debian 12.
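Roughly, per node, something like this (just a sketch… the repo file names here are assumptions, and noout keeps the cluster from rebalancing while the node is down):

```
ceph osd set noout                                   # don't rebalance while this node reboots
rm /etc/apt/sources.list.d/ceph.list                 # drop the upstream repo; it has nothing past buster
sed -i 's/buster/bullseye/g' /etc/apt/sources.list   # point apt at Debian 11
# (remember the security entry becomes "bullseye-security", not "bullseye/updates")
apt update && apt full-upgrade                       # pulls in Debian's packaged Ceph 14 along with the OS
reboot
ceph osd unset noout                                 # once this node's OSDs are back up and in
```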
Ohhh boy.
OK, probably another week to go until I get these drives running.
I should probably use that time to figure out backups for the current pool…
If the bulk storage pool in Ceph works well enough, I'll ship the current file server off to my parents' place, but for now I'll be able to use the LAN to do the initial syncs.
S3-compatible backup via Velero is one option, with MinIO or Garage running in a container backed by ZFS (I am not building a remote Ceph cluster).
Anyone have thoughts/suggestions on backup strategy here? Probably backing up 2-3TB of data total (lots of photos)
I'll end up with local snapshots and remote backups; huge bonus if they can be recovered without needing a Ceph cluster to restore to if something catastrophic happens. #HomeLab #Ceph
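For the Velero + S3 option, a minimal sketch of what the install could look like (bucket name, endpoint, credentials file, and plugin version are all placeholders):

```
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket homelab-backups \
  --secret-file ./minio-credentials \
  --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://minio.example.lan:9000 \
  --use-node-agent   # file-system backups of volume data, not storage-level snapshots
```

The node-agent path copies file data into the bucket itself, which is what would let a catastrophic restore land on plain disks instead of requiring another Ceph cluster.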
Guess it's time for a new #introduction, post instance move.
Hi! I'm Crabbypup, or just 'crabby', though only in name most days.
I'm a Linux flavored computer toucher from Kitchener-Waterloo.
I tend to share stuff about the region, open source software in general, and #linux in particular.
I like to tinker in my #homelab, where I run #proxmox, #ceph, and a bunch of other #selfhosted services including #homeassistant.
I'm a rather inconsistent poster, but I'm glad to be here.
New blog post: https://blog.mei-home.net/posts/ceph-copy-latency/
I take a detailed look at the copy operation I recently did on my media collection, moving 1.7 TB from my old Ceph clusters to my Rook one.
Some musings about Ceph and HDDs, as well as a satisfying number of plots, which are sadly not really readable. I definitely need a different blog theme that allows enlarging figures.
There is no reason at all to entrust your company or personal data to ANY #Cloud service. If you are a company, build your own hardware infrastructure with #Ceph, #Proxmox, #Openstack or others. IT WILL SAVE YOU MONEY. If you are an individual, back your data up at home on a NAS.
Use and support #OpenSource.
Ditch #Microsoft.
Right now it's the US that is the problem, but no government or megacorporation can be trusted.
https://www.osnews.com/story/141794/it-is-no-longer-safe-to-move-our-governments-and-societies-to-us-clouds/
Second topic.
We will most likely be offering virtual machine and #Kubernetes cluster hosting for our members. Among other things, we'll need to choose between several hypervisors. Starting from scratch, how do we keep it simple, free/libre if possible, not too expensive, and easy to administer?
For now, we have #proxmox and #ceph. Is it worth ramping up our expertise on other hypervisors, or is it better to invest in administering k8s on our VMs?
We have published a new newsletter about our current activity: https://blog.codeberg.org/letter-from-codeberg-looking-into-2025.html
* Meet us at #FOSDEM in Brussels and get stickers for you and your friends!
* Learn about our infrastructure improvements, networking and #Ceph storage.
* Read about other news from the past months.
I'm learning #Proxmox #cluster with #Ceph by virtualizing it on my Proxmox Mini PC. Total inception mode, but for learning purposes. The setup was relatively easy and it worked very well. So far I've been able to:
- Manually migrate an LXC container from one node to another with only 10 seconds of downtime.
- Power off a node and have the LXC running on it automatically migrate to another node, with one minute and 10 seconds of downtime.
I’m surprised by the last number. Any advice will be welcome.
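For reference, the two tests boil down to roughly this (the VMID and node name are made up):

```
pct migrate 101 pve2 --restart          # manual move: stop, copy, start on the target node
ha-manager add ct:101 --state started   # put the container under HA so a failed node triggers recovery
ha-manager status                       # watch where it gets restarted after pulling the plug
```

My current understanding is that the minute-plus in the power-off case is mostly the HA stack waiting until the dead node is definitely fenced before it restarts the container elsewhere, but corrections welcome.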
I got the delivery of my new #kubernetes nodes (#minisforum #ms01). Now I'm waiting for RAM and cables. Rather than decide between using the 10Gbit SFP+ ports and the USB4 ports for the ring network (for #Ceph), I'm just going to set up both. That should give me 30Gbit between each node without using a switch.
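The switchless part should just be point-to-point links plus routing. A sketch of one node's side, with made-up interface names and addresses since nothing is cabled yet:

```
# each neighbour gets its own /31 point-to-point link
nmcli con add type ethernet ifname enp2s0f0 con-name ring-sfp \
    ipv4.method manual ipv4.addresses 10.10.1.0/31 ipv6.method disabled
nmcli con add type ethernet ifname thunderbolt0 con-name ring-usb4 \
    ipv4.method manual ipv4.addresses 10.10.2.0/31 ipv6.method disabled
# a routing daemon (e.g. FRR with OSPF) on every node then routes Ceph traffic
# over whichever link is up, the way the Proxmox full-mesh write-ups do it
```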
Now I need to decide which OS I'll use. Stick with #RHEL 9, or change to #RockyLinux or #Fedora?
An exciting evening ahead of us: We are performing maintenance on our #Ceph storage system and will distribute data across the machines for the first time.
We expect little or no interruption to our services, but performance might degrade while the new nodes are backfilling.
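A few standard Ceph knobs that are handy during a window like this (the values shown are illustrative, not what we are running):

```
ceph -s                                         # overall health and how many PGs are backfilling
ceph osd pool stats                             # per-pool recovery / backfill throughput
ceph config set osd osd_max_backfills 1         # throttle concurrent backfills if client I/O suffers
ceph config set osd osd_recovery_max_active 1   # same idea for recovery ops per OSD
```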
Yet another Ceph rant/vent with frustrated language
@wolfensteijn
Ugh. I had this happen with three NVMe drives that held my #Ceph BlueStore. A disaster.
@uastronomer now if you separate the compute and storage layers with diskless compute nodes accessing the filesystem via #iSCSI or #Ceph, you can even do superfast updates by merely rebooting the jail/host...