Emc2Net HomeLab - Linux KVM server
The Linux KVM server is one of the central components in the HomeLab. The server was going to run KVM[1] and all the virtual machines in my HomeLab, so I spent some time finding a good balance between the number of CPU cores, memory size and fast storage before I decided what I needed.
I wanted to be able to run several virtual machines under load with good performance without spending thousands of kroner on new enterprise-level hardware; a new enterprise server was out of scope because of the cost.
I considered buying an old second-hand server for a while, but in the end I decided against it, mainly because enterprise servers usually consume a lot of power and produce a lot of noise, and two of the main requirements for my HomeLab were power consumption and noise levels as low as possible.
I decided to build the “server” myself with quality components, especially when it came to the motherboard, memory and storage. This is what I put together as my “HomeLab server”.
Server rack case
I wanted a server case that used around 3-4U of the 18U I had available in my rack cabinet. I also wanted as many external hot-swap drive bays as possible and enough internal space for several components and fans, so I could get sufficient cooling.
Finally I decided to get a case from Inter-Tech, the Inter-Tech IPC 4U-4408 4U Storage Chassis[2]. This case had decent quality, it was not too expensive, and it had enough internal space and 8 external hot-swap drive bays. The case came with a hot-swap backplane with SFF-8087 connectors supporting SATA and SAS hard drives.
My rack cabinet has a depth of 80cm, so a 52cm-deep server chassis was perfect, leaving enough space in the back for connections and airflow. The 3-4U height also gave me the opportunity to use bigger fans, so I could have higher airflow inside the case with lower RPMs and less noise.
Motherboard
To build this server I wanted a stable and reliable motherboard with support for AMD Ryzen CPUs, at least 64GB of memory and, if possible, ECC support. I didn’t need WiFi, many extra USB ports or fancy colors. After multiple searches I found the ASUS Pro WS X570-Ace ATX motherboard[3], aimed mainly at professional users and workstation use.
This is a high-end motherboard, and it had the main requirements I needed for my “HomeLab server”. Efficient heatsinks and an enhanced power solution with alloy chokes and durable capacitors for stable power delivery were also important, as this machine was going to be powered on almost 24/7.
CPU
I had two main requirements for the CPU: it had to consume as little power as possible, and it had to have as many cores as possible without spending a fortune.
I checked some Intel Xeon CPUs, but they were too expensive for my budget and needed server motherboards. I also checked some Intel Core i9 CPUs, but at the time none of them were cheaper, faster or had more cores than the CPU I finally chose for the server, the AMD Ryzen 9 3900X 3.8GHz Socket AM4 processor[4].
The AMD Ryzen 9 3900X is a CPU with a TDP of 105W and 12 cores / 24 threads, aimed at the desktop market but with plenty of power for my HomeLab server.
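Once the OS is installed, the CPU topology and the virtualization support KVM depends on are easy to confirm from the shell. A quick sketch with generic Linux commands (the values mentioned in the comments are what the 3900X should report):

```shell
# CPU model, core and thread counts as the kernel sees them
lscpu | grep -E '^(Model name|CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket)'
# Total logical CPUs - 24 on the 3900X (12 cores with SMT)
nproc
# KVM on AMD needs the 'svm' CPU flag (AMD-V)
grep -q svm /proc/cpuinfo && echo "AMD-V supported" || echo "AMD-V not found"
```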
Memory
Choosing the memory was not difficult: I just read the motherboard documentation and chose ECC memory from a supported vendor with DIMMs available at the moment, in total 96GB of memory installed in a dual-channel memory architecture.
- 2 x Kingston 32GB 3200MHz DDR4 ECC CL22 DIMM 2Rx8 Micron E [KSM32ED8/32ME][5]
- 2 x Kingston 16GB 3200MHz DDR4 ECC CL22 DIMM [KSM32ED8/16HD][6]
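On Linux it is easy to verify that the DIMMs are detected and that ECC is actually active. A sketch using dmidecode and the kernel's EDAC counters (dmidecode needs root, and the exact strings vary between BIOS vendors):

```shell
# Does the memory controller report ECC? Look for e.g. "Multi-bit ECC"
sudo dmidecode -t memory | grep -iE 'error correction'
# Per-DIMM part numbers, sizes and speeds as the BIOS reports them
sudo dmidecode -t memory | grep -iE 'part number|size|speed' | grep -v 'No Module'
# If the EDAC drivers are loaded, corrected-error counters appear here
grep -H . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
```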
Power supply
Again, I wanted a stable and reliable power supply: silent, modular, efficient and able to deliver power as clean as possible to the components in the server. I chose the Corsair HX750i 750W 80 PLUS Platinum[7], a 750W / 80 PLUS Platinum power supply with a 10-year warranty and enough capacity to power all the components in the server and to grow in the future.
At full CPU and storage load the server consumes around 240-250W, only 32-33% of the total capacity of this power supply. Under normal load the server consumes between 80W and 100W.
Disk controller
The controller I chose for the server was the Broadcom MegaRAID SAS 9341-8i[8], with 2 ports and the possibility of connecting all 8 drive bays to it with two Mini-SAS-HD (SFF-8643) to Mini-SAS (SFF-8087) cables between the controller and the backplane.
The controller does a decent job running all the disks as JBOD devices, so they are ready to be used with the ZFS filesystem running on the system.
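Exposing the drives as JBOD so ZFS sees them directly can be done with Broadcom's storcli utility. A minimal sketch, not the exact commands from my setup (the controller ID /c0 is an assumption, and behavior varies between firmware versions, so check your own controller first):

```shell
# Show controller status and the current drive configuration
storcli64 /c0 show
# Enable JBOD mode on the controller
storcli64 /c0 set jbod=on
# Put every attached drive into JBOD state (all enclosures, all slots)
storcli64 /c0/eall/sall set jbod
# Verify: each drive should now appear to the OS as a plain /dev/sdX device
lsblk -o NAME,SIZE,MODEL
```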
More information about this controller and what I did with it when I installed it in the server can be found in an article I wrote some time ago: “Megaraid SAS 9341-8i on Linux - Cooling and initialization issues”.
Storage
The server case has 8 external hot-swap drive bays and a backplane with 2 x SFF-8087 connectors (1 per 4 disks) that can be used to connect 8 disks to a controller.
I installed 7 x HPE 1.92TB SAS 12G VO1920JEUQQ SSD[9] second-hand enterprise disks: one for the operating system and six in a ZFS pool[10] configured as raidz1 with one extra spare disk, for a total of 6.7TB of net disk space.
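A pool like that can be created with a single zpool command. This is only an illustration, not the exact commands from my setup: the pool name tank and the sdX device names are placeholders (assuming five disks in the raidz1 vdev plus one hot spare), and on real hardware stable /dev/disk/by-id/ names are a better choice than sdX:

```shell
# raidz1 vdev over five disks plus one hot spare (placeholder device names)
zpool create tank \
    raidz1 sdb sdc sdd sde sdf \
    spare sdg
# Check redundancy, spare status and capacity
zpool status tank
zpool list tank
```

With this layout one member disk can fail without data loss, and if the ZFS event daemon is running the hot spare can be resilvered in automatically when a disk dies.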
The last drive bay is used by 1 x WD Ultrastar DC HC310 6TB 3.5" Serial ATA-600[11] disk used for local backups.
In addition, the server has 1 x Samsung 980 Pro / PCIe 4.0 / 1TB / NVMe M.2 2280[12] used as cache for the ZFS pool and as a fast disk for virtual machines that need high-speed/IO storage.
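Attaching an NVMe drive as read cache (L2ARC) for a pool is a one-liner. Again a sketch with placeholder names (pool tank, partition nvme0n1p2 — in practice a partition is set aside for the cache so the rest of the drive can hold VM images):

```shell
# Attach an NVMe partition as L2ARC read cache for the pool
zpool add tank cache nvme0n1p2
# The cache device shows up in its own section of the pool status
zpool status tank
# Cache devices can be detached again at any time without data loss
zpool remove tank nvme0n1p2
```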
Fans / cooling
The server case has space for 2 x 80mm fans on the back and 4 x 80mm fans on the front. The case came with 2 fans installed on the back and none in the front. I decided to upgrade the 2 fans on the back and install 4 new fans on the front, all of them from Noctua:
The 4 x Noctua NF-A8 ULN 80mm were installed on the front of the case (they deliver an intake airflow of around 101m³/h at the lowest speed and around 138m³/h at full speed), and the 2 x Noctua NF-A8 PWM 80mm were installed on the back (they deliver an exhaust airflow of around 88m³/h at the lowest speed and around 111m³/h at full speed).
This configuration gives between 73.8 and 100.8 LFM through the cross-sectional area of the case. The idea was to have a server case with positive air pressure: with filters on the intake fans, dust is kept out of the chassis, and air is forced out through the unfiltered vents and gaps.
For cooling the CPU I used a Noctua NH-C14S[15] CPU cooler with an extra Noctua NF-A14 PWM 140mm[16] fan, both fans blowing the air away from the CPU. The idle Tctl temperature of the CPU is around 31°C. When using the CPU at 100% with all 12 cores / 24 threads at full speed, the Tctl temperature goes up to 70-71°C.
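Those Tctl readings can be monitored from the shell. A quick sketch, assuming the lm-sensors package is installed (on Ryzen CPUs the k10temp kernel driver exposes the Tctl value):

```shell
# Current CPU temperature; Ryzen exposes Tctl via the k10temp driver
sensors | grep -i 'tctl'
# Watch it live while running a load test in another terminal
watch -n 2 "sensors | grep -i tctl"
```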
For cooling the MegaRAID SAS 9341-8i disk controller I used a Noctua NF-A4x10 FLX 40mm[17] fan; I wrote an article about this some time ago: “Megaraid SAS 9341-8i on Linux - Cooling and initialization issues”. The idle ROC temperature of the LSISAS3008 chip on this controller is around 52°C, and 54-55°C under load.
Video card
What can I say? For the use this server was going to get, an MSI GeForce GT 710 1GB[18] video card was more than enough. It has low power consumption and low cooling needs.
Footnotes
[2] Inter-Tech IPC 4U-4408 4U Storage Chassis
[3] ASUS Pro WS X570-ACE
[4] AMD Ryzen 9 3900X 3.8GHz Socket AM4 Processor
[7] Corsair HX750i 750W 80 PLUS Platinum
[8] Broadcom MegaRAID SAS 9341-8i
[9] HPE 1.92TB SAS 12G VO1920JEUQQ SSD https://support.hpe.com/connect/s/product?language=en_US&l5oid=3802118&kmpmoid=1010908600
[10] Running OpenZFS raidz on Linux instead of hardware RAID5
[11] WD Ultrastar DC HC310 6TB 3.5" Serial ATA-600
[12] Samsung 980 Pro PCIe 4.0 1TB NVMe M.2 2280
[13] Noctua NF-A8 PWM 80x80mm
[14] Noctua NF-A8 ULN 80x80mm
[15] Noctua NH-C14S CPU cooler
[16] Noctua NF-A14 PWM 140mm
[17] Noctua NF-A4x10 FLX 40x40mm
[18] MSI GeForce GT 710 1GB