Thoughts on starting a self-hosted homelab after 5 years

This is going to be a bit of an overview of the topics you need to look into when setting up your own homelab.

Since the post got quite long, here is the table of contents:

- Hardware
- Software
- ISP
- Domains
- Done?

Hardware

First questions

I wanted to start with hardware because, if you landed here, this is likely the section you're most interested in: it's the first thing you need in order to get started.

There are several topics to touch upon regarding hardware, but let's start by asking ourselves a question.

What do I want to self-host?

If you don't know what, then I recommend you check the Awesome self-hosted repository. There you'll find plenty of options, categorized and with a short description for each item. It's also a good resource if you do know what you want to host but don't know which tool to use to get there.

It's important to ask yourself this question because it will drive the need for different hardware. It's also good to think a bit into the future, so that you can get hardware that allows for potential expansion should you need it later on.

SBCs (Single Board Computers)

Since this can vary a lot depending on what you decided in the previous section, I will try to offer some general directions so you can go and find out more about your specific setup.

My first self-hosted service ran on a Raspberry Pi. If you just need to host a static site, or even a couple of services, a Pi might be enough for you. If you can get hold of a Pi 4, that is even better. However, since the component shortage I haven't found a single store that has them available.

Getting a Pi comes with a caveat, though. It runs on ARM, so some packages you want to use might not be available. I'm not sure how much of an issue this is nowadays, but it's important to be aware of it.

If that's not a problem, a Pi will give you super low power consumption, and you can do cool stuff like running it from a power bank for extra resilience during a temporary power outage.

(old) PCs

Let's say that you need a more powerful option, though. Here your options multiply, so let's ask ourselves some more questions to refine the search.

Do you care about space? When that's not an issue, you can go for an old tower PC that you may have lying around or buy second hand for very cheap. Is space a concern? You could go for a laptop too! But you might need to tinker with it so that it stays on when the lid is closed.
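On most modern Linux distributions, that tinkering is a small config change. A minimal sketch, assuming a systemd-based distro where systemd-logind handles power events:

```bash
# Make systemd-logind ignore the lid switch so the laptop keeps running
# with the lid closed. In /etc/systemd/logind.conf, set:
#
#   HandleLidSwitch=ignore
#   HandleLidSwitchExternalPower=ignore
#
# then restart the service for the change to take effect:
sudo systemctl restart systemd-logind
```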

For instance, Dell Small Form Factor (SFF) PCs are widely available second hand on eBay or your local used-hardware market. Currently, one of the machines in my homelab is a Dell Optiplex 9020 SFF, and I can totally recommend it. It's also smaller than your regular midi tower PC, so it takes up a bit less space.

If you take this path, you get more options when it comes to expanding later. You can upgrade memory or storage without bleeding your wallet, since these machines use regular PC parts.

But let's get back to more options to consider.

HTPCs

HTPCs are also very useful when starting your homelab. NUCs, for example, pack a decent amount of power with a very small footprint and low power consumption. My first piece of equipment after moving on from my Pi was an Intel NUC7PJYH.

This machine was especially nice: it had a quad-core CPU, cost under 200 CHF new, and had a TDP of 10 W. There's a bit of a catch, as NUCs come without storage or memory, so the total price will be a bit higher. But 2.5" SSDs are not that expensive now that NVMe is all the rage, and RAM was not outrageously pricey either.

However, with HTPCs you will encounter some restrictions. If you're thinking of building a NAS, you will not have space for your typical 3.5" HDDs. Even if you want to use 2.5" SSDs, you are likely restricted to 2 internal SATA ports. Considering you will already need one drive for the OS, RAID options fade away quickly.

As an option for storage, you could do what I did for a while, which was using a USB hard drive enclosure with a couple of 2.5" HDDs. However, speeds will be capped at USB 3 and the bandwidth will be shared between all the HDDs connected to the enclosure. An example would be this enclosure.

If you're brave, you could also try to use NFS mounts hosted on your NAS (if you have one) in combination with an HTPC, but I wouldn't choose that path for your first adventures, since it's likely going to be a bit more painful than having everything on one machine.
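For reference, consuming such a share from the HTPC looks roughly like this; the IP address and export path are made-up examples, and the package name assumes a Debian-based system:

```bash
# Install the NFS client tools (Debian/Ubuntu package name).
sudo apt install nfs-common

# Mount a share exported by a NAS at 192.168.1.10 (hypothetical address
# and export path).
sudo mkdir -p /mnt/nas-data
sudo mount -t nfs 192.168.1.10:/mnt/tank/data /mnt/nas-data

# To survive reboots, add a line like this to /etc/fstab:
#   192.168.1.10:/mnt/tank/data  /mnt/nas-data  nfs  defaults,_netdev  0  0
```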

DIY

Here I want to share a bit of my personal journey. As I mentioned, I had a NUC with my services running on it, but eventually I wanted to run Nextcloud with a drive dedicated to it, and my NUC couldn't handle that since it only supported one drive.

I really liked the specs of the CPU I had: extremely low consumption while still rocking 4 cores, and it was a very recently released CPU at the time, which meant it supported all the latest instructions. I found that ASRock was selling a Mini-ITX motherboard with the J4105 CPU on it, and it had 4 SATA ports. The J4105 is very similar to the J5005 in terms of specs, so it was good enough. With it, I could build my NAS with low consumption and the storage I/O a NAS needs.

You can check the full NAS specs here.

Since then, I have doubled the RAM because I wanted to potentially run other services on it. But to be honest, I haven't needed the 16 GB so far, mainly because I'm only running TrueNAS on it and no containers yet. My whole services fleet lives on the Optiplex 9020, which sports 32 GB of RAM (DDR3, though).

Other considerations regarding hardware

Maintenance cost is definitely a factor when you own the servers your applications run on. Hard drive failures are a thing, and so are other component failures. That being said, I haven't experienced much of that yet, if any at all.

Nevertheless, you should take it into consideration. Not only because you might need to invest more money into parts after the initial costs, but also because it means backups are pretty important.

Software

The next big thing once the hardware has been decided is software. Naturally, this will depend on your answer to our first question. I am not going to go into specifics about the best applications for this or that. Instead, I will go a bit meta and talk about how to run the applications you need.

I will focus mainly on 3 options to accomplish that:

- Hypervisor + VM and/or containers
- Docker containers on bare metal
- Services directly running on bare metal

Hypervisor + VM and/or containers

This is the most recent approach I have taken to running stuff self-hosted, but probably also the one I like most. I am currently using Proxmox's free edition to run most of my services. I'm not here to convince you to use one or the other, but I can tell you Proxmox provides a Web UI and several convenience tools to handle your fleet of applications. This makes managing everything way more pleasant, and that's coming from someone who quite enjoys using the terminal.

Proxmox has a myriad of features that I will not explain, because I am no expert either. If you're interested in learning how to use it properly, I can recommend Learn Linux TV's Full Proxmox Course on YouTube. I've watched some of his videos, and he's a YouTuber who doesn't speak too fast, doesn't overhype stuff, and provides great content that is really helpful :)

Let's talk about some of the things I like about Proxmox. One very convenient feature is the ability to access all your services' shells directly through the UI. Another very important one is backups, again just a button away in Proxmox's UI. There are actually two types of "backups": backups and snapshots. Roughly, snapshots capture the state of a VM at a point in time and live alongside the VM itself, while backups are full archives you can store elsewhere and restore from scratch.
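Those same backups can also be driven from the shell with Proxmox's vzdump tool. A minimal sketch, where the VM ID 100 and the storage name "backups" are my assumptions:

```bash
# Back up VM 100 while it keeps running, using a snapshot for consistency,
# and store the compressed archive on the storage named "backups".
vzdump 100 --mode snapshot --storage backups --compress zstd
```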

You can also create templates of VMs/containers. You can, for instance, create a machine/configuration with the specs you want and then generate a template from it, which you can use as a starting point in the future.
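From the command line, that workflow looks roughly like this; the VM IDs and the name are made up:

```bash
# Convert a prepared VM (ID 9000) into a template...
qm template 9000

# ...and later spin up new VMs from it by cloning.
qm clone 9000 101 --name my-new-service
```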

A good thing about using a hypervisor and different VMs or containers is that if one of them gets compromised, it's much harder for the attacker to pivot from it and gain access to your whole group of services.

There is way more to it, but I'd recommend watching the Proxmox course I linked above if you're interested in learning more about it.

Docker containers on bare metal

This was my first approach to running services, and a valid one as well. To be fair, this option can also include a Web UI if you use something like Portainer. I haven't used it yet, although I plan to try it. So maybe in an upcoming blogpost I will talk about it!

With the containers-on-bare-metal option, we reduce the overhead on the machine, since it all runs on one host. However, we could have issues if there is a container escape. On the other hand, managing everything is more convenient for the same reason. In the end, it's all about which trade-offs you are willing to make.

Because we are using containers, we still get the benefit of being able to run the same thing by using the same docker command on another machine. This is different from installing on bare metal, where we have to install all the dependencies and the app itself on the host ourselves, step by step.
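To make that concrete, here is what spinning up a service can look like. The image is the official Nextcloud image from Docker Hub; the port mapping and volume name are my own choices:

```bash
# Run Nextcloud in the background, expose it on port 8080 of the host, and
# keep its data in a named volume so it survives container recreation.
docker run -d \
  --name nextcloud \
  -p 8080:80 \
  -v nextcloud-data:/var/www/html \
  --restart unless-stopped \
  nextcloud
```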

Services directly running on bare metal

Lastly, we have the option of installing everything directly on the host. This is the option I like the least because I think it brings the most downsides with it. Port conflicts can be an issue, as can different dependency requirements clashing with each other.

Think of one service needing Node v16 and another needing Node v18. While we could work around some of these issues, it just makes everything messier and harder to maintain.
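Incidentally, this is exactly the kind of clash containers sidestep, since each container pins its own runtime:

```bash
# Two Node.js versions coexisting on the same host, each in its own
# container (both images are official ones from Docker Hub).
docker run --rm node:16 node --version   # prints v16.x.x
docker run --rm node:18 node --version   # prints v18.x.x
```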

I guess a benefit could be that backing up one machine backs up all services, but I'd still prefer to have separate backups for separate services. This way, I can selectively roll back, upgrade, or migrate an individual service without having to bring the whole machine with it.

ISP

The next thing we need to look into is what our ISP, or Internet Service Provider, allows us to do with our home connection. Some ISPs restrict which ports you can open towards the internet, limiting the range of services we can self-host. Another very important thing to check is whether we are assigned a static IP or a dynamic one.
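A quick way to find out is to note your public IP and check it again after a few days, or after rebooting your router. Both endpoints below are real public "what is my IP" services:

```bash
# Print the public IPv4 address your connection currently has.
curl -4 https://icanhazip.com
# or
curl https://ifconfig.me
```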

In my case, my ISP provides me with a static IPv4 because I have a fiber connection, but depending on the company, you can request a static IP regardless of your kind of connection. A static IP will make our life easier, since we only need to point our domains at it once.

But don't worry, if you have a dynamic IP there are also ways to solve this problem. One would be to manually update your domain record to point to your new IP every single time, but that would be quite painful. Thankfully, dynamic DNS providers offer tools that auto-update the record every time your IP changes. I used noip.com a very long time ago, and I believe they still offer their free dynDNS service together with a tool that updates the record automatically.
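Under the hood, most of those update tools boil down to something like the sketch below: fetch the current public IP and, if it changed, hit the provider's HTTP update endpoint. The URL, hostname, and credentials are placeholders, so check your provider's docs for the real format:

```bash
#!/bin/sh
# Hypothetical dynDNS updater; run it from cron every few minutes.
CURRENT_IP=$(curl -s https://icanhazip.com)
LAST_IP=$(cat /tmp/last_ip 2>/dev/null)

if [ "$CURRENT_IP" != "$LAST_IP" ]; then
  # Placeholder endpoint and credentials; the general shape matches most
  # dynDNS providers' update APIs.
  curl -s -u "username:password" \
    "https://dyndns.example.com/update?hostname=myhome.example.com&myip=${CURRENT_IP}"
  echo "$CURRENT_IP" > /tmp/last_ip
fi
```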

Domains

Once the hardware has been acquired and the software has been installed, we need to expose our server to the world.

Or do we?

The answer is, it depends. If you need many people (especially if they are not tech savvy) to access your services, then your best bet is to use a domain and share that with them.

If they're tech savvy and not many, you could instead gate all your services behind a VPN and send them credentials to connect to it. This option appears to be more "secure", since it keeps all the bots continuously scanning the internet for vulnerable hosts out of the way. But it's pretty inconvenient if you want to casually access one of your services from an unknown machine. We'll get back to the VPN option, but let's look at domains now.

Domains: free or paid?

We can decide whether we want an FQDN (Fully Qualified Domain Name) of our choice, so we can have something like mybestwebsite.com, or we can go for a free option where we cannot completely choose the domain. That could look like myfreedomain.hostingproviderdomain.com, or something less human-friendly, depending on the provider.

Since we already talked a bit about the free dynDNS route in the previous section when discussing dynamic IPs, let's have a look at the paid way. It's good to know that the price of a domain usually depends on the TLD, or Top-Level Domain. A domain with a .com TLD will usually be more expensive than one with .me.

I have been using Namecheap as my domain name registrar for years, and I haven't really had complaints. But it is not the only one, and I encourage you, as with any other topic, to do your own research and find the one that suits your needs.

Note that once you have bought your domain and pointed it at your IP, it's very likely that you won't be able to reach it for some time. That is because of DNS propagation. It will also affect us every time we change the IP in the record, so it's important to keep in mind, especially when using dynDNS.
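You can watch propagation happen from the terminal by querying public resolvers directly; the domain is a placeholder:

```bash
# Ask Cloudflare's and Google's resolvers what they currently return for
# the domain. Once both show your new IP, propagation has mostly caught up.
dig +short mybestwebsite.com @1.1.1.1
dig +short mybestwebsite.com @8.8.8.8
```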

Done?

Hooray! It seems like we finally have conquered Mount self-host!

If you made it this far, congratulations :)

But that's far from the end, really. What you went through is probably the most fun part about self-hosting. Now comes the tedious part for most: maintenance.

However, since this post is already quite long, I am going to wrap up here for now. In an upcoming post, I will touch on that topic, including backups, migrations, and security.

Until then, enjoy your tinkering!