Hello again! I wanted to chime in and talk about all the cool things I'm doing with this project, and where it actually stands compared to the blog posts I'm just starting. Ultimately, I hope all of this inspires others to get out there and get going.
As I've alluded to, and as the front page image shows, I've settled on a newer hardware build. Presently there are five Pis in the cluster: three Raspberry Pis and two Orange Pis. I'm definitely liking the Orange Pis. The rack I purchased will eventually fit four additional Pis in the same 2U of space, and on top of the higher density, the Orange Pi 5 Pro has twice the memory and twice the CPU cores.
The software tutorials that I'm going to write will follow a very similar vein to the hardware builds. What I wanted was a container-based system, to avoid losing system resources to paravirtualization. I didn't necessarily want a turnkey solution, although ultimately I'm very familiar with the hypervisor solution I ended up with.
I set out to install Kubernetes, and while I succeeded, I kept running up against the networking side of things. I wanted a simple bridged network into my cluster's VLAN, but Kubernetes is geared more toward cloud networking.
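For anyone hitting the same wall, here's a rough sketch of one common workaround: MetalLB in L2 mode, which answers ARP for a block of addresses on your existing VLAN so that LoadBalancer services get "normal" LAN IPs. To be clear, this isn't what I ended up running; the pool name and address range below are made-up examples, and it assumes MetalLB is already installed in the cluster.

```bash
# A minimal MetalLB L2 sketch (assumes MetalLB is already installed).
# The pool name and address range are hypothetical examples; use a
# range on your VLAN that your DHCP server won't hand out.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: vlan-pool            # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.240-192.168.10.250   # example range on the cluster VLAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: vlan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - vlan-pool
EOF
```

With that in place, any Service of type LoadBalancer gets an IP from the pool and is reachable from the rest of the VLAN, which gets you most of the way to "bridged" behavior without fighting the CNI.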
I can sleep better at night because I did install the base components of the hypervisor I went with, learned how to install Ceph from scratch and get it set up, and got a Kubernetes cluster up and running. In the end, I switched to Proxmox via the Proxmox Port, which I think is important to point out before anyone jumps the gun on my how-tos! I ran into interesting pitfalls along the way and had to overcome them. Fortunately I'm a Linux systems administrator with many years of experience, so none of them have been roadblocks so far. That experience is what I really want to share with my audience. Some problems can feel insurmountable, but with determination, knowledge, experience, and a little Google-fu, we'll overcome them together.
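To give a flavor of what "Ceph from scratch" looks like before the full how-to lands, here's a minimal sketch using cephadm. The hostnames and IPs are hypothetical stand-ins, not my actual nodes, and on Pis you'd want to confirm the ARM container images work for your release.

```bash
# Minimal cephadm bootstrap sketch; hostnames and IPs are hypothetical.
# This stands up the first monitor/manager, joins two more nodes, and
# lets the orchestrator claim any spare disks as OSDs.
cephadm bootstrap --mon-ip 192.168.10.11

# Distribute the cluster's SSH key so the orchestrator can reach peers.
ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.10.12
ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.10.13
ceph orch host add pi-node2 192.168.10.12
ceph orch host add pi-node3 192.168.10.13

# Create OSDs on every unused, eligible device across the cluster.
ceph orch apply osd --all-available-devices
```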
As I mentioned, the cluster currently has five nodes. There is also one AWS t4g.nano instance providing some critical DNS help, since the cluster sits behind a DHCP connection. I've stood up load balancers with VRRP and configured a LEMP (Linux, Nginx, MariaDB, PHP) stack; this blog is actually being hosted out of the cluster now. I've only just gained the ability to write everything down here, so I have a lot of catching up to do.
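For the curious, a VRRP failover pair can be as simple as a short keepalived configuration like the one below. The interface name, router ID, password, and floating IP are all hypothetical values for illustration, not my actual setup.

```bash
# Minimal keepalived VRRP sketch: two load balancers share a floating
# IP, and the backup takes over if the master stops advertising. The
# interface, router ID, password, and VIP here are all hypothetical.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER            # the peer node would use "state BACKUP"
    interface eth0
    virtual_router_id 51
    priority 100            # give the backup a lower value, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        192.168.10.200/24   # floating IP the Nginx frontends listen on
    }
}
EOF
systemctl enable --now keepalived
```

Point your DNS at the floating IP, and whichever balancer currently holds it serves the traffic.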
I also have two Active Directory domain controllers, and I'm standing up a GitLab instance as I write this post. I need to export some storage in a rational way, and I believe I'll just be setting up NFS. I also want to create a multi-master Galera MariaDB cluster so that each Pi can run a web server and the workload can be spread horizontally. When I'm done, I want the cluster to have 16 nodes altogether.
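As a preview of the Galera piece, the core of a multi-master setup is a short MariaDB config dropped on each node. Everything below is a placeholder sketch: the cluster name and node IPs are invented, and the library path varies by distro and Galera version.

```bash
# Minimal three-node Galera sketch for MariaDB; cluster name, IPs, and
# paths are placeholders. The same file goes on every node, with the
# node-specific values adjusted per host.
cat > /etc/mysql/mariadb.conf.d/60-galera.cnf <<'EOF'
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so   # path varies by distro
wsrep_cluster_name       = armada-galera
wsrep_cluster_address    = gcomm://192.168.10.21,192.168.10.22,192.168.10.23
wsrep_node_address       = 192.168.10.21   # this node's own IP
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
EOF

# The very first node bootstraps the cluster; the rest just start up.
galera_new_cluster          # first node only
systemctl start mariadb     # subsequent nodes
```

Once every node has joined, writes land on any member and replicate synchronously to the others, which is what lets each Pi run its own web server against a local database.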
There is so much to talk about and share, and I promise there will be some light kernel work ahead of us (well, technically behind me!). I'm absolutely looking forward to the posts I'll be making. Right now, what I need to do is get Ansible up and going, because it will 'force' me to document what I've done through the creation of various roles and plays as I bring new nodes online. Ultimately, I'll likely bring in a new node and container, destroy an old handcrafted node, and then rebuild it with the scripts. I want to have some more redundancy in place before I do that, though. We'll be learning Ansible together, so look forward to those tutorials!
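To make that concrete, here's the smallest possible shape of that workflow. The inventory host, group name, and package list are hypothetical placeholders, not my actual roles; a real setup would split the tasks out into proper roles.

```bash
# Minimal Ansible sketch for bringing a node online; the inventory
# entries, group name, and package list are hypothetical.
cat > inventory.ini <<'EOF'
[armada_nodes]
pi-node6 ansible_host=192.168.10.26
EOF

cat > site.yml <<'EOF'
- name: Bring a new cluster node online
  hosts: armada_nodes
  become: true
  tasks:
    - name: Install the base packages every node needs
      ansible.builtin.apt:
        name: [nginx, mariadb-server, nfs-common]
        state: present
        update_cache: true
EOF

ansible-playbook -i inventory.ini site.yml
```

The nice side effect is exactly the one I mentioned: the playbook *is* the documentation, and rebuilding a handcrafted node becomes a one-liner.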
As always, I'd like to appeal to anyone who's enjoying this and might want to donate to the cause. Donations will all go toward the hardware needed to finish off the ARMada Cluster, and they'll keep me engaged and possibly focused, especially if you include items you'd like to see more spotlight on! Regardless, just consuming my material and engaging is a positive for me. Well, back to the grindstone!