Homelab

In the IT sector it is not unusual to have a homelab. The motivation for this might vary: some like hosting their own content, others focus on achieving certifications and need the appropriate infrastructure for learning purposes.

I’m heavily interested in hardware-related topics – on the other hand, I also focus on continuously rounding off my skills. For me, the best way of learning is building systems from scratch and – if required – doing low-level troubleshooting. In addition to that, I also host some applications in my homelab that are used externally.

Currently, my homelab consists of:

High-level overview

  • a D.I.Y. 2-node vSphere cluster with vSAN enabled
  • an HP ProLiant MicroServer NAS with 12 TB
  • a backup DAS with 12 TB
  • a Gigabit Ethernet/10G switch as the central networking component
  • an APU.1D appliance running IPFire for separating networks
  • a FRITZ!Box from my internet provider
  • an APC Smart-UPS 750 for avoiding data loss caused by power outages
  • some NETIO devices for controlling power consumption

But – why? Below you will find some explanations and answers to typical questions about my homelab.

vSphere cluster

My homelab rack, 2017

The heart of my homelab is a vSphere cluster consisting of two ESXi hosts, on which I run all my workloads.

Currently, each of those ESXi hosts consists of:

  • an Intel Xeon D-1518 CPU (8×2.2 GHz, 8M cache, 35W TDP)
  • 128 GB ECC DDR4 memory
  • 250 GB vSAN cache layer (NVMe)
  • 480 GB vSAN capacity layer (SATA)
  • 10G interconnect for vMotion and NFS/iSCSI

Additional hardware details can be found in this blog post.

Mainly, this cluster is used for evaluating and learning different tools and products, e.g.:

Homelab software stacks

  • Operating systems
    • CentOS / Red Hat Enterprise Linux
    • SUSE Linux Enterprise
    • VMware Photon OS / Docker
  • Management / infrastructure tools
    • Red Hat Satellite / Foreman / Katello
    • Red Hat Identity Management / FreeIPA
    • OMD / Icinga / Icinga2 / Grafana
  • Development tools
    • GitLab / GitLab CI
  • VMware products
    • vSphere
    • vCenter Server Appliance
    • vRealize Suite
    • NSX

Why don’t you just attend trainings or use online labs?

I do! But I also figured out over the last couple of years that my learning success is higher if I actually use the tools I’m learning about. 🙂

NAS & DAS

HP ProLiant MicroServer Gen8

An HP ProLiant MicroServer Gen8 with the following specifications is used as central storage for user data and virtual machines:

  • Intel Xeon E3-1220Lv2 CPU (4×2.3 GHz, 3M cache, 17W TDP)
  • 16 GB ECC DDR3 memory
  • 4×4 TB Enterprise SATA hard drives (HGST Ultrastar 7K6000) in RAID-5 mode
  • 10G connection for the vSphere cluster

By default, the server ships with an Intel Celeron G1610T CPU (2×2.3 GHz, 1M cache, 35W TDP) – which lacks the Intel AES New Instructions (AES-NI) extension that is required for hardware-accelerated encryption and decryption. I need this in my setup for protecting my confidential data via LUKS (Linux Unified Key Setup). Because the MicroServer Gen8 is already several years old, getting a suitable socket 1155 CPU with this extension was not easy. Eventually, I got hold of a Xeon E3-1220Lv2, which has a pretty decent performance/efficiency ratio.
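
Whether a CPU offers AES-NI can be checked on any Linux system before buying – it shows up as the aes flag in /proc/cpuinfo (lscpu prints the same information). Here is a minimal sketch in Python; running cryptsetup benchmark on the target system afterwards shows the actual AES-XTS throughput:

    #!/usr/bin/env python3
    # Minimal sketch: check whether the CPU advertises AES-NI
    # ('aes' flag) on a Linux system.

    def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # AES-NI shows up as the 'aes' CPU flag
                    return "aes" in line.split(":", 1)[1].split()
        return False

    if __name__ == "__main__":
        if has_aes_ni():
            print("AES-NI available - hardware-accelerated LUKS possible")
        else:
            print("No AES-NI - LUKS encryption will be CPU-bound")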

NAS networks

Data is exchanged with clients via Samba; the vSphere cluster leverages NFS and iSCSI via a dedicated 10G network. This is required in order to satisfy the VMs’ I/O demands – a conventional Gigabit Ethernet network is not fast enough. Thanks to the CPU upgrade, exchanging encrypted data via the 1G network can reach rates of up to approximately 125 MB/s – which is also the physical limit of that network (1 Gbit/s ÷ 8 bits per byte = 125 MB/s). Data exchange rates between the NAS and the vSphere cluster within the 10G network can reach up to approximately 800 MB/s:

vMotion via 10G

Before buying hard drives, I documented some thoughts about possible products in a blog post.

I’m using CentOS 7 as the NAS operating system – basically, I use it for all other Linux workloads in my homelab as well. Data is backed up weekly to a DAS (Lian-Li EX-503), which is also equipped with 12 TB in RAID-5 mode. An offsite backup is done using an external hard drive at sporadic intervals.
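
The weekly backup is essentially a mirroring job from the NAS volume to the DAS volume. A minimal sketch of such a job is shown below – the mount points are placeholders, not my actual paths, and the rsync options may need adjusting to your data:

    #!/usr/bin/env python3
    # Minimal sketch of a weekly NAS-to-DAS mirror job.
    # Both paths are placeholders - adjust to your mount points.
    import subprocess
    import sys

    SOURCE = "/mnt/nas/"         # hypothetical NAS mount point
    TARGET = "/mnt/das/backup/"  # hypothetical DAS mount point

    def backup():
        # -a  archive mode (permissions, timestamps, symlinks)
        # -AX preserve ACLs and extended attributes
        # --delete mirrors deletions so the DAS matches the NAS
        result = subprocess.run(["rsync", "-aAX", "--delete",
                                 SOURCE, TARGET])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(backup())

Scheduled via cron or a systemd timer, a job like this keeps the DAS in sync once a week.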

Who the heck needs 12 TB of storage? And also mirrors it?

I really like creating backups – and storing loads of pictures and photos (going back to my childhood). I do not like deleting. 🙂

Network

Switching

When I started playing around with VMware vSAN, I quickly figured out that my network was a limiting factor. At that point, I still had a Cisco SG300 switch – a pretty decent device. While doing research, I saw that buying a successor variant with 4 SFP+ ports (a refurbished SG500-28X) would have exhausted my budget by far. Therefore, I finally decided to go for the D-Link DGS-1510-28X, which is quite comparable but somewhat cheaper (see details in the blog post above).

Why don’t you connect your two vSphere nodes via an SFP+ Direct Attach Copper (DAC) cable?

The primary cluster storage is vSAN – and I pretty often screw it up with experiments. Therefore, I need a fallback storage: NFS via 10G – and this requires a third participant in the dedicated network. In addition to that, I still wanted to have the option of adding a third vSphere node.

Routing

AX206 display using lcd4linux

Since I started building homelabs, I have been using IPCop and later IPFire. After using an ALIX.2D13 appliance with IPCop 1.x/2.x for years, I needed new hardware back in 2016. I decided to go for the APU.1D appliance and installed IPFire on it.

My appliance handles the following networking tasks:

  • Encapsulating a DMZ for public workloads
  • Remote access via OpenVPN
  • WLAN access point
  • Providing a dashboard with current network events on a USB display (the AX206 shown above)

Dedicated hardware seems pointless to me – new FRITZ!Box devices also offer DMZ setups.

Well, you might be right – but I prefer having a fully-featured Linux system that I can customize without losing warranty. In addition to that, I’m able to create extensions for IPFire – but not for a FRITZ!Box device. 😉

Network overview

I received a FRITZ!Box from my internet provider – the only device connected to it is my IPFire, which is reachable from the outside via some forwarded ports.

Facility

All servers and appliances are located in a more or less tidy server rack. To avoid data loss caused by power outages, I’m using a small APC Smart-UPS 750. This UPS supplies electricity to all connected devices for up to 20 minutes – that’s enough to shut down everything in a controlled manner.
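
On Linux, apcupsd usually takes care of the controlled shutdown automatically. As an illustration, the following minimal Python sketch polls apcaccess (part of apcupsd) and powers the host off once the remaining runtime falls below a threshold – the threshold value and shutdown command are assumptions, not my exact configuration:

    #!/usr/bin/env python3
    # Minimal sketch: poll apcupsd via 'apcaccess' and shut down
    # the host when the UPS runtime drops below a threshold.
    # Threshold and shutdown command are illustrative assumptions.
    import subprocess

    RUNTIME_THRESHOLD_MIN = 5.0  # shut down below 5 minutes left

    def ups_status():
        # 'apcaccess' prints 'KEY : value' lines for the local UPS
        out = subprocess.run(["apcaccess"], capture_output=True,
                             text=True, check=True).stdout
        status = {}
        for line in out.splitlines():
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
        return status

    if __name__ == "__main__":
        s = ups_status()
        on_battery = "ONBATT" in s.get("STATUS", "")
        # TIMELEFT looks like '20.0 Minutes'
        minutes_left = float(s.get("TIMELEFT", "0 Minutes").split()[0])
        if on_battery and minutes_left < RUNTIME_THRESHOLD_MIN:
            subprocess.run(["systemctl", "poweroff"])

In practice, the BATTERYLEVEL and MINUTES directives in apcupsd.conf achieve the same without any extra scripting.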

Devices that are used infrequently are connected to power supplies that can be controlled via network – enabling me to turn them on and off remotely to save electricity. I’m using the NETIO 230A and 230B products by the Czech vendor Koukaam. To be honest, those devices are kind of outdated – but they fulfill their purpose, and I also invested a lot of time in developing a Nagios/Icinga plugin and an experimental Android app, which binds me to these products for some time. 😉
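
Nagios/Icinga plugins follow a simple contract: one line of output plus an exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). The following minimal sketch – explicitly not my actual NETIO plugin – only checks whether such a device answers on the network at all; host and port are placeholders:

    #!/usr/bin/env python3
    # Minimal sketch of a Nagios/Icinga-style reachability check.
    # Host and port are placeholders - NOT the actual NETIO plugin.
    import socket
    import sys

    HOST = "netio.example.lan"  # hypothetical device address
    PORT = 80                   # hypothetical service port

    def main():
        try:
            with socket.create_connection((HOST, PORT), timeout=5):
                print(f"OK - {HOST}:{PORT} is reachable")
                return 0  # OK
        except OSError as err:
            print(f"CRITICAL - {HOST}:{PORT} unreachable: {err}")
            return 2  # CRITICAL

    if __name__ == "__main__":
        sys.exit(main())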

Grafana facility dashboard

Is it really necessary to have a UPS for private stuff?

Well, it depends on where you live and how good or bad a job your electricity supplier does. There have already been a few times when I was really glad to have a UPS. 🙂

Outlook

This could be even faster…

A homelab is a living infrastructure – over the last couple of years, I have upgraded and replaced hosts on a regular basis. Here are some things that could be optimized in my environment:

  • Bigger and faster (SAS instead of SATA) capacity SSDs for the vSphere cluster
  • A third vSphere node if resources become exhausted
  • Implementing vSphere Replication for storing critical VMs on a standalone ESXi host
  • Network virtualization using NSX
  • SSD caching on the NAS

It is easy to expand this list – there are a lot of things that could be improved. So – stay tuned… 😉
