Kubernetes

So I finally started doing it! With all my past dislike for Docker, my past failure with OpenShift, and my long reluctance toward Kubernetes and containers in general, I've broken down and started really learning Kubernetes.

First Looks

To start, I investigated the manual way of getting into Kubernetes. See, the one problem I always had with Kubernetes was storage. How does one handle storage in a clustered system and actually get stuff done? I've tried a number of solutions for this over the years, and I've even implemented a few new approaches to some of them in this very project alone!

Ceph and Rook-Ceph

The first time I really got into it, Ceph was poorly supported in Kubernetes. Now it seems much better supported; however, I still have some of the age-old problems I had with Ceph, partly because I'm still running everything under Proxmox VE. Either way, I tried it with Rook-Ceph, and even put in real effort to get the external cluster mode working well, before I ultimately pulled it all out and ended up going with Longhorn. Another can of worms there.
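For reference, that external-cluster mode boils down to a CephCluster resource telling Rook to consume a Ceph cluster it doesn't manage, rather than deploying its own. A minimal sketch, assuming the usual rook-ceph-external namespace and that the connection secrets have already been imported with Rook's import script:

```yaml
# Sketch of Rook's external-cluster mode: Rook consumes an existing
# Ceph cluster instead of deploying its own mons and OSDs.
# Assumes the connection ConfigMap/Secrets were already imported
# into this namespace via Rook's import script.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph-external
  namespace: rook-ceph-external
spec:
  external:
    enable: true        # don't deploy Ceph daemons; use the external cluster
  crashCollector:
    disable: true       # no local daemons to collect crashes from
```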

Ceph can be very nice; in fact, it was, for the many good long years I used it. Around the time BlueStore came out, though, it started becoming more of a beast than my systems could really handle, than any system could that wasn't dedicated solely to running Ceph as a storage cluster. Don't get me wrong, Ceph is amazing, Ceph is great. It's just a lot more demanding, and not very forgiving if you don't have the hardware it needs.

Longhorn

A very interesting approach to storage. It uses iSCSI under the hood as the equivalent of Ceph's RBD, bridging block devices directly to a node for a disk mount. That, at least, is for the ReadWriteOnce type. If you go with ReadWriteMany instead, it actually wraps that block device in NFS. Versatile, for sure, and curious, but very noisy when it comes to kernel logs about disk I/O errors and such. I'm still using Longhorn today, but it's definitely not without its own issues as well.
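To make that concrete, here's a minimal sketch of the two claim types against Longhorn's default StorageClass (the PVC names and sizes are hypothetical, not from my actual cluster):

```yaml
# RWO: Longhorn attaches the volume to one node as an iSCSI-backed block device.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
---
# RWX: Longhorn wraps the same block volume in an NFS export
# (served by its share-manager) so multiple pods can mount it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data         # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
```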

GlusterFS and Ganesha

A very unusual choice, I know, but I'd long since switched my Proxmox VE systems from Ceph to GlusterFS, and it's been a surprisingly reliable choice for what I've been using it for. I do say, it's been quite the interesting ride, though, trying to use this in Kubernetes. Kubernetes has fully backed out its in-tree support for it, and the third-party CSI drivers are in the same boat, not exactly looking promising at all.

I did find one way forward with it, though. Using NFS-Ganesha as a frontend for accessing GlusterFS, I was able to come up with some interesting ways of using GlusterFS with Kubernetes. Not the most ideal, mind you, but functional. I'll be exploring more about this over time; it's not without its own issues either. With lots of small files, as was needed for a Nextcloud Kubernetes setup, it was so slow that the deployment could never actually finish without killing itself.
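The Kubernetes side of that arrangement can be as simple as a static PersistentVolume backed by the built-in NFS volume type, pointed at the Ganesha server. A minimal sketch, where the server address, export path, and names are all assumptions rather than my actual setup:

```yaml
# Hypothetical static PV/PVC pair pointing at an NFS-Ganesha frontend
# that exports a GlusterFS volume (server and path are assumed values).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-via-ganesha
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10    # assumed Ganesha host
    path: /myvolume         # assumed export path for the Gluster volume
  mountOptions:
    - nfsvers=4.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-via-ganesha
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # bind to the static PV above, not a provisioner
  volumeName: gluster-via-ganesha
  resources:
    requests:
      storage: 50Gi
```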

The Flux Template

After I got my feet wet enough, I chose to start working with a template built around Kubernetes, GitOps, Flux, and k3s. It comes from the Kubernetes at Home (now named Home Operations) Discord community, and it's rather impressive. I still use it to this day, and working with it has been one of the best setups I've seen for Kubernetes. Sure, I hit some bumps in the road here and there, but the overall setup is quite reliable, and for the most part I could nuke my whole cluster and rebuild it back up to what it is now: websites, applications, backups restored, everything. So, with this, I'd definitely have to say: purely amazing concept!

You can look into this template here: https://github.com/onedr0p/flux-cluster-template
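If you haven't seen Flux-style GitOps before, the heart of it is just two resources: one telling Flux where the Git repository lives, and one telling it which path in that repository to reconcile into the cluster. A minimal sketch (the repository URL and path are hypothetical, not the template's actual layout):

```yaml
# Flux watches this Git repository for changes...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: home-kubernetes
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/example/home-ops   # hypothetical repo
  ref:
    branch: main
---
# ...and continuously applies the manifests under this path,
# pruning anything removed from Git. This is what makes the
# nuke-and-rebuild workflow possible.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cluster-apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./kubernetes/apps                    # hypothetical path
  prune: true
  sourceRef:
    kind: GitRepository
    name: home-kubernetes
```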

More to come on this adventure, I'm sure! Until then…

-- Psi-Jack
