Hyper-Converged Infrastructure with Proxmox

After Broadcom acquired VMware, I was once again introduced to a name that had already been floating around home-lab circles: Proxmox. At the time I still had no idea what it even was, but as my little experiment began to grow and take on a life of its own, I found myself searching for something more powerful than containerization alone: virtual machines. I had previously avoided them because of the hardware they demand (hardware that was in short supply until recently, mind you). But now that the growing workload calls for a more flexible setup, this seems like the right time to get started.

Virtual machines are wonderful on a conceptual level, but there are some kinks that need to be addressed in the real world. Where does a virtual machine actually live? How does compute get allocated? What software needs a dedicated VM in the first place? Spinning up VirtualBox for a single machine to test things is great, but what if I want to network multiple VMs together?

It turns out the answer is a hypervisor, a scheduler, and a hundred little goblins running around with tiny hammers. A hypervisor with a dedicated OS can split its resources into many different virtual machines that each enjoy their own “isolated” environment for the goblins to run around in. This isn’t a replacement for containers; it’s a way to supplement them with software that can’t easily be run in a container. In my case, the realization was that I can essentially spin up multiple instances of a server without worrying about the physical allocation of hardware, which makes it possible to run experiments with Kubernetes. Or JupyterHub, which is the only way to get Jupyter Notebook onto a phone and doesn’t containerize easily. Even Home Assistant, which runs on-host despite being run with Docker.

Setting this up might seem like a lot of work, but I promise you: being able to manage every machine from a single computer makes up for all of it. Thus begins the journey of migrating the system onto a higher plane of abstraction. But before I begin, there is one thing to take care of first: backups. I’ve been burned twice before; I shall not get burned again. I now keep two copies of every volume I was using, stored in a separate location in case the machine decides to brick itself.
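As a rough sketch of how one might copy a Docker volume out of harm’s way (the volume name app_data and the destination /mnt/external are placeholder assumptions, not my exact setup):

```
# Archive a named Docker volume into a timestamped tarball (volume name is a placeholder)
mkdir -p backups
docker run --rm \
  -v app_data:/data:ro \
  -v "$(pwd)/backups":/backup \
  alpine tar czf "/backup/app_data-$(date +%F).tar.gz" -C /data .

# Keep a second copy somewhere that isn't this machine (path is a placeholder)
rsync -av backups/ /mnt/external/backups/
```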

Setting Up Janky Hardware

To start off, the base is a refurbished HP EliteDesk 800 G3 with an i5-6500, sold by an e-waste company for around $70. I am essentially building this server out of whatever components I have lying around: an old NVMe drive as the primary boot drive, a 4TB HDD, and 32GB of 2133MHz RAM. In total, the rough cost of the build comes out to ~$300, partly thanks to cheaper flash prices from the recent NAND overproduction.

Most of my hardware didn’t complain when it went into the server, save for the RAM. The BIOS on HP prebuilt machines includes no overclocking or XMP profiles. The sticks were also mismatched, which caused the computer to default to the lowest supported frequency, hence the 2133MHz. One of the sticks was causing errors too, but cleaning the contacts with some isopropyl alcohol and ten seconds with a hair dryer fixed it.

Installing Proxmox and Configuring IPs

The next step was the initial Proxmox install. Simple enough: flash a drive with Rufus… and the installer refused to recognize my older USB drives, claiming that no disk was inserted, even though it had just booted from that very drive. After a bit of searching, I found that the installer would only detect my faster USB 3.0 drives. Strange, but not a dealbreaker.

There was also a moment where I noticed that the machine’s IP address was missing from the router: the server simply wasn’t showing up in the router’s records after installation. I eventually assigned the server a static IP manually by reserving its MAC address on the router.
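For reference, the other route would have been hard-coding the address on the Proxmox host itself. A minimal sketch of /etc/network/interfaces for that approach, assuming a vmbr0 bridge over an enp1s0 NIC on a 192.168.1.0/24 network (all of these names and addresses are assumptions):

```
# /etc/network/interfaces (sketch; interface names and addresses are placeholders)
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
```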

Setting Up Storage and VMs

While the “correct” way of doing this probably involves some version of Ansible, I am only setting up three machines – a storage server, and two VMs for hosting containers.

The storage server is a simple install of TrueNAS Core. I set up the VM with the disks passed through directly, installed TrueNAS, added the disks to a pool, created a dataset in the pool, added a user with read/write permissions, shared the dataset, and voilà: networked storage, shared and accessible. This configuration might make it a bit of a hassle to add more storage later, but if it ever comes to that I may as well set up a dedicated rackmount NAS.
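Passing raw disks through to a VM is done with Proxmox’s qm tool rather than the web UI. A minimal sketch, assuming the TrueNAS VM has ID 100 and using a placeholder disk identifier:

```
# List stable device paths so the disk can be referenced by serial rather than /dev/sdX
ls -l /dev/disk/by-id/

# Attach the whole disk to VM 100 as an extra SCSI device (VM ID and disk name are placeholders)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```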

The other two VMs run Ubuntu Server. After the initial install, I set up SSH keys and Docker. The only hiccup was mounting the networked storage: apparently the proper way to mount a share permanently is by editing /etc/fstab, since the mount command does not persist between reboots.
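A minimal sketch of what that ends up looking like, assuming an SMB share from TrueNAS (the server address, share name, mount point, and credentials file are all placeholders; an NFS export would use a slightly different line):

```
# Install the SMB mount helpers on the Ubuntu VM
sudo apt install cifs-utils

# Add a line like this to /etc/fstab (placeholder address, share, and credentials path):
# //192.168.1.60/tank  /mnt/nas  cifs  credentials=/etc/nas-credentials,uid=1000,gid=1000,_netdev  0  0

# Mount everything declared in fstab without rebooting
sudo mount -a
```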

To Cluster or Not?

All in all, I am happy with the progress that I’ve made with this server. The fact that tinkering with this project has taken me this far makes me wonder what it will look like in another five years. I know clustering seems like the next step, but I don’t think high availability is going on my resume anytime soon. I’ll feel the urge to tinker again eventually, but for now, this is done and done!

