I’ve referenced my home server many times, but I never had the time to go into the details of how it was built or how it works. Recently I decided to completely rebuild it, replacing the hardware and basing it on top of a completely new operating system. I thought it would be a good idea to keep a build log, tracking what I did, my design decisions, and the constraints you should consider if you want to follow in my footsteps.
This series will be broken up into multiple parts:
- Part 1 - Hardware
- Part 2 - Build Log
- Part 3 - MediaDepot/CoreOS Configuration
- Part 4 - Application Docker Containers
This is Part 1, where we’ll be talking about the hardware: specifically, the hardware I chose to build my server, the alternatives I explored, and the compromises I had to consider.
Since most of you just care about the parts list and the price, let’s get that out of the way first (note: affiliate links):
Yeah, you read that right: ~$3k for my ultimate media server, and that’s using server-grade hardware that’s already 2 years old. This is an expensive hobby.
Let’s break it down and discuss each item, and why it was chosen over the alternatives.
Though it’s not traditionally important, in a home server the case you choose sets a lot more limitations than in a traditional PC or even a rack-mounted server.
In my case, I wanted the following:
- support for at least 8 hot-swappable hard drives
- room for a dedicated SSD boot disk
- room (and adequate ventilation) for a micro-ATX or mini-ITX motherboard
- looks more like an appliance than a computer tower
- the smallest footprint possible
I decided to go with the NSC-810A by U-NAS. It’s a bit on the expensive side when it comes to a case, but it provides me with the room to use a micro-ATX motherboard, while still supporting 8 hot-swappable hard drives in a small footprint. And it doesn’t look that bad either, which is important for a server that’s sitting on my shelf, rather than a server rack in the basement.
| Spec | Value |
|---|---|
| Hot Swap Drive Bays | 8x 3.5” |
| Internal Drive Bays | 1x 2.5” |
| Power Supply Form Factor | 1U Flex |
| Dimensions (L x W x H) | 31.5cm x 27.5cm x 19.7cm |
- Cable extensions are required for PSU
- Cable extensions are required for Front-Panel headers
- Riser cables/cards are required for PCIe expansion cards
When choosing a CPU, there were a few requirements I had to consider:
- My server would be running 24x7 so the CPU should be fairly efficient
- There would be lots of applications running at the same time, so a high core count would be preferable.
- A higher clock speed would ensure that video transcodes complete faster (though since my plan includes a dedicated video card for hardware transcodes, this does not really apply)
- Uptime and stability are almost more important than raw performance, so ECC memory would be preferred.
- I want a modern (but cost effective) CPU that will be able to handle my workload for years
Here’s a helpful table I put together so I could quickly compare CPUs, as they are referenced in different ways.
| Generation | Year | Xeon Family Number | Core Family Name | Socket |
|---|---|---|---|---|
| 7 | 2017-Jan | V6 | Kaby Lake | 1151, 2066 |
Given these requirements I decided to go with a Xeon V6 processor. Specifically the Xeon E3-1275V6.
You might think that the Xeon E3-1275V6 is probably overkill for a simple NAS, and you’re not wrong. The reason I chose it is that my server is not a simple NAS: it’ll be running a bunch of applications in parallel 24x7, and I wanted the highest multi-core performance and CPU clock I could get without breaking the bank.
If you’re not going to be running as many workloads on your home server as I am, feel free to dial back the power (and cost) of your CPU. However, I would stay away from the E3-1220V6 or below, as it only has 2 cores vs 4 cores for the E3-1230V6 and above.
| Spec | Value |
|---|---|
| Code Name | Kaby Lake |
| Base Clock | 3.80 GHz |
| Max Clock | 4.20 GHz |
You’ll want to consider the applications you’re running on your server:
- The rule of thumb for ZFS is 1GB of RAM for every 1TB of storage.
- With 8 storage drives you could potentially have 96TB of storage (8 x 12TB), which calls for more RAM than the 64GB platform maximum.
- Socket 1151 has been replaced by Socket 2066, so if you want to eventually upgrade to a newer CPU, you won’t be able to.
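The ZFS sizing rule above is easy to sanity-check with a few lines. A sketch, where the 12TB-per-drive figure is just an illustrative worst case:

```python
# Rule of thumb: ~1GB of RAM per 1TB of ZFS pool storage.
# Drive count comes from the case (8 bays); 12TB per drive is an
# illustrative worst case, not a recommendation.
DRIVE_BAYS = 8
TB_PER_DRIVE = 12
PLATFORM_MAX_RAM_GB = 64  # max RAM for this CPU/motherboard combination

pool_tb = DRIVE_BAYS * TB_PER_DRIVE   # 96 TB of raw storage
suggested_ram_gb = pool_tb * 1        # 1GB per TB rule of thumb

print(f"Pool size: {pool_tb}TB, suggested RAM: {suggested_ram_gb}GB")
print(f"Exceeds the {PLATFORM_MAX_RAM_GB}GB platform max: "
      f"{suggested_ram_gb > PLATFORM_MAX_RAM_GB}")
```

In other words, a fully-populated case with large drives outgrows the RAM ceiling, which is fine for JBOD but worth knowing if you plan a ZFS pool.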
Given that we’ve selected a small form factor case and a powerful CPU, ventilation and cooling become very important. Noctua is well known in the PC market for their quiet but powerful cooling solutions. They have a CPU cooler that’s targeted specifically at small form factor cases and is compatible with our LGA1151 CPU socket: the Noctua NH-L9i
That’s not enough though. We’ll need to verify that the Thermal Design Power (TDP) of our chosen CPU is within what the Noctua cooler can dissipate.
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often a CPU, GPU or system on a chip) that the cooling system in a computer is designed to dissipate under any workload. https://en.wikipedia.org/wiki/Thermal_design_power
Thankfully Noctua publishes TDP guidelines for all their coolers, including the NH-L9i. Though our exact CPU model is not listed as compatible, we can see that the cooler can handle ~91W TDP from Kaby Lake processors, which is higher than our expected TDP of 71W.
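The headroom check is trivial, but worth writing down explicitly (figures taken from the Noctua guidelines and the CPU TDP quoted above):

```python
# Cooler headroom check: the NH-L9i's ~91W guideline for Kaby Lake
# (per Noctua's published TDP guidelines) vs. our CPU's expected TDP.
COOLER_TDP_LIMIT_W = 91  # ~91W per the Noctua guidelines
CPU_TDP_W = 71           # expected TDP of our CPU

headroom_w = COOLER_TDP_LIMIT_W - CPU_TDP_W
print(f"Cooler headroom: {headroom_w}W")
assert CPU_TDP_W <= COOLER_TDP_LIMIT_W, "cooler cannot dissipate CPU heat"
```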
Now that we’ve chosen our Case and CPU, it’s time to find a compatible motherboard. Given the size constraints of our case and socket constraints of our CPU, we’re looking for something that matches the following requirements:
- Socket LGA1151 compatible, specifically the Xeon V6 family.
- Can support 9+ SATA drives (8 storage drives + 1 OS drive)
- At least one 16x PCIe slot (we want to use a dedicated video card for transcoding, see Video Card)
- Support for 64GB of RAM
- Remote management capability (low priority)
For “enthusiast” server-grade hardware, there are a couple of trusted names:
- ASRock Rack
- Supermicro
I ended up focusing on Supermicro as I was able to find a broader range of server motherboards.
If you’re looking at Supermicro motherboards I highly recommend the following resources:
- Supermicro Motherboard Matrix - compare/filter all Supermicro motherboards
- Supermicro Product Naming Conventions - helped me decode motherboard features just by glancing at model numbers
After comparing features and looking at prices, I settled on the X11SSL-CF motherboard by Supermicro.
Before we dive into why, let’s take a look at some other motherboards I considered, but ultimately decided against:
| Model | Price | Reason Rejected |
|---|---|---|
| X11SSZ-TLN4F | $357.41 | Not enough SATA/No SAS expansion |
| X11SSZ-F | $230.72 | Not enough SATA/No SAS expansion |
| X11SSM-F | $197 | Not enough SATA/No SAS expansion |
| X11SSL-F | $197 | Not enough SATA/No SAS expansion |
| X11SSi-LN4F | $229.69 | Not enough SATA/No SAS expansion |
| X11SSH-LN4F | $227.63 | Not enough SATA/No SAS expansion |
| X11SSH-F | $213.21 | Not enough SATA/No SAS expansion |
| X11SSH-CTF | $408 | No x16 PCIe slot, expensive |
My 2 reasons for filtering out motherboards were:
- Not enough SATA ports/no SAS expansion (solvable via an HBA controller card, but that takes up a PCIe slot)
- No x16 PCIe slot (solvable via a powered x8-to-x16 PCIe riser card, but I’m uncomfortable with how those are powered)
While there are solutions for each of the problems above, I went with a no-compromises motherboard that gave me everything I wanted out of the box.
| Spec | Value |
|---|---|
| Processor Support | Xeon E3-1200 v6/v5 or 7th/6th Gen. Core i3 |
| RAM Slots | 4 DIMM |
| RAM Type | Unbuffered ECC UDIMM DDR4 2400MHz |
| PCIe Slots | 1 PCI-E 3.0 x8 (in x16), 1 PCI-E 3.0 x4 (in x8), 1 PCI-E 3.0 x1 |
| SAS | 2x mini-SAS HD (SFF-8643) SAS3 |
While this motherboard works great for my requirements, you should pay attention to the following compromises:
- The chipset is C232, which does not support Intel iGPU/onboard video (the C236 chipset does support the Intel iGPU, but is focused on desktop use)
- DDR4 ECC UDIMM is expensive and hard to find
- Supermicro motherboards are notorious for RAM incompatibility (here’s the official compatibility list)
- v1.x of the BIOS does not work with Xeon V6 CPUs; you’ll need to upgrade to v2.x using one of the following methods:
- boot using a Xeon v5 and then flash the BIOS
- RMA the board and get the manufacturer to update the BIOS
- buy a license for IPMI (~$20) and update the BIOS using the management console.
- 1 Gigabit Ethernet only (no 10 Gigabit Ethernet)
- Supermicro motherboards with an onboard SAS controller are more expensive; look at LSI HBA controller cards if you have a PCIe slot to spare.
Our options for RAM are limited by our motherboard and CPU:
- 64GB max
- 16GB max per DIMM slot
- 4 DIMM slots
- must be DDR4
- must be ECC
- must be Unbuffered
- should be on the official compatibility list
- must not cost an arm and a leg
When it comes to ECC UDIMM DDR4 RAM, it’s that last point that’s the problem. DDR4 RAM is expensive, and RAM for our chosen motherboard/CPU even more so.
I ended up going with Crucial RAM that was compatible on paper, even though it wasn’t on the official compatibility list, because I wanted to save money and it had been tested working on the same model of motherboard.
Power Supply (PSU)
We’re looking for a PSU that is:
- 1U Flex form factor
- able to power 8 storage disks at ~10W each (~80W)
- able to power the video card (see Video Card)
- able to supply 65-250W for the motherboard/CPU
- modular if possible (saves space)
- 80 PLUS certified - power efficiency matters since we’re running 24x7
- quiet (server PSUs are known to be jet-engine loud)
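As a rough sanity check, here’s the power budget I worked from. This is a steady-state sketch: the ~40W motherboard/RAM/fan overhead is my own assumption, and disk spin-up surges (not modelled here) will briefly draw more.

```python
# Rough steady-state power budget for the build (all figures in watts).
# The motherboard/RAM/fan overhead is an assumption; spin-up surges are
# not modelled and can briefly double per-disk draw.
DISKS = 8
W_PER_DISK = 10          # ~10W per 3.5" drive under load
GPU_W = 75               # video card max power consumption
CPU_W = 71               # CPU TDP
BOARD_AND_RAM_W = 40     # assumed overhead for motherboard, RAM, fans

total_w = DISKS * W_PER_DISK + GPU_W + CPU_W + BOARD_AND_RAM_W
print(f"Estimated draw: {total_w}W")
print(f"Fits a 350W PSU: {total_w <= 350}")
```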
The only real PSUs that match these requirements are the Seasonic SS-300M1U and the Seasonic SS-350M1U. I decided to go with the SS-350M1U, as I felt the extra power was necessary with my dedicated video card. With just storage drives you may be fine with the SS-300M1U.
- It seems that the SS-300M1U and SS-350M1U PSUs are end-of-life and no longer manufactured. Unfortunately I was unable to find a replacement, so I purchased mine used on eBay.
- It does not have an 8-pin power connector, only a 4-pin. That was seemingly ok with my motherboard, but YMMV.
While a video card is optional for most servers, I’m building a dedicated streaming/transcoding server for Plex and the iGPU just isn’t enough.
I need a video card that:
- can handle a large number of simultaneous transcodes
- supports a large number of codecs
- only takes up 1 PCIe slot
- doesn’t have significant power requirements
- can handle heavy, long duration usage
- can fit in a small form factor case
Again, there wasn’t much to choose from. The Nvidia Quadro P2000 (released 2017) is the first enterprise video card that supports unlimited concurrent transcodes. Later versions of the video card add additional features while costing significantly more. It’s the best cost-effective solution for a transcode heavy server like mine.
| Spec | Value |
|---|---|
| Model | Nvidia Quadro P2000 |
| Max Power Consumption | 75W |
| Form Factor | 4.40” H x 7.90” L, Single Slot |
- requires a PCIe x16 slot (uncommon in micro-ATX motherboards)
- additional power consumption
- iGPU is more cost effective for infrequent transcoding usage
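For reference, the kind of hardware transcode this card accelerates is an ffmpeg NVDEC/NVENC job. Below is an illustrative command line built in Python (Plex drives ffmpeg internally with its own flags; the file names here are placeholders):

```python
# Illustrative ffmpeg invocation using NVDEC/NVENC hardware offload.
# This is a sketch of the kind of command the card accelerates, not
# the exact invocation Plex uses. File names are placeholders.
import shlex

cmd = [
    "ffmpeg",
    "-hwaccel", "cuda",     # decode on the GPU
    "-i", "input.mkv",      # placeholder source file
    "-c:v", "h264_nvenc",   # encode with NVENC
    "-b:v", "8M",           # target bitrate for the transcode
    "-c:a", "copy",         # pass audio through untouched
    "output.mp4",           # placeholder destination file
]
print(shlex.join(cmd))
```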
Here’s what I considered when choosing my boot drive:
- a 2.5” drive that mounts in the NSC-810A boot drive bay
- has fast I/O as I’ll be running multiple docker containers & applications on my server concurrently
- should be at least 300GB, as the boot drive will act as a cache drive for all my applications (some of which are media heavy, like Plex) and will be the primary drive for new downloads (until the downloads are complete and moved to a storage drive automatically)
- S.M.A.R.T capable so that I can monitor the health of the drive using automated tools.
- Low power usage
I ended up going with a Samsung 500GB 860 EVO 2.5” SSD, as it checked off all of the boxes.
| Spec | Value |
|---|---|
| Model | Samsung 500GB 860 EVO |
| Max Seq Read | Up to 550 MB/s |
| Max Seq Write | Up to 520 MB/s |
| NAND Type | Samsung 64-Layer V-NAND |
| Max Power Consumption | 4.0W |
The Samsung 860 EVO is a TLC NAND type drive:
Storing 3 bits per cell, TLC flash is the cheapest form of flash to manufacture. The biggest disadvantage to this type of flash is that it is only suitable for consumer usage, and would not be able to meet the standards for industrial use. Read/write life cycles are considerably shorter at 3,000 to 5,000 cycles per cell. SLC, MLC, TLC
While this is concerning, I’ve mitigated the issue in the following ways:
- The boot drive contains very little important data; all completed downloads and persistent data are automatically moved off onto the storage drives
- Application data is backed up to the cloud
- S.M.A.R.T will notify me when my drive begins to fail
- The drive is incredibly cheap for the performance it provides.
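The S.M.A.R.T. checks themselves come from smartmontools’ `smartctl -H`. Here’s a small sketch of extracting the health verdict from its output; the sample string below reflects typical smartctl wording for ATA drives, but treat the exact phrasing as an assumption:

```python
# Sketch: parse the overall health verdict out of `smartctl -H` output.
# In practice you'd feed this the stdout of
# subprocess.run(["smartctl", "-H", "/dev/sda"], ...); the sample below
# is typical smartmontools wording, included so the sketch is
# self-contained.
SAMPLE_OUTPUT = """\
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
"""

def smart_verdict(output: str) -> str:
    """Return the overall-health verdict (e.g. PASSED/FAILED)."""
    for line in output.splitlines():
        if "overall-health self-assessment test result:" in line:
            return line.rsplit(":", 1)[1].strip()
    raise ValueError("no health verdict found in smartctl output")

print(smart_verdict(SAMPLE_OUTPUT))
```

A cron job running something like this against each drive is enough to catch a failing disk early.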
Storage is one of the most important areas of our build, but it’s also one of the most flexible. Our server is designed to use JBOD (just a bunch of disks), meaning we can add storage as necessary, and our disks can be of varying sizes (unlike RAID).
However, even with this flexibility, there are a couple of things we’re looking for:
- S.M.A.R.T support to ensure that we can monitor the health of our drives
- Read speed performance
- A good price-to-GB ratio
- NAS-type storage drives, which are power efficient
- Large cache
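Price-to-GB is the easiest of these criteria to script. A sketch, where the prices and candidate models are hypothetical placeholders you should replace with current figures:

```python
# Compare candidate drives by $/TB. Prices and models are hypothetical
# placeholders for illustration, not real quotes; update them before
# comparing.
candidates = {
    "WD Red 8TB":  {"tb": 8,  "usd": 250.0},
    "WD Red 10TB": {"tb": 10, "usd": 330.0},
}

# Sort cheapest-per-TB first.
for name, d in sorted(candidates.items(),
                      key=lambda kv: kv[1]["usd"] / kv[1]["tb"]):
    print(f"{name}: ${d['usd'] / d['tb']:.2f}/TB")
```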
I went with the Western Digital - Red 8 TB 3.5” 5400RPM Internal Hard Drive.
| Spec | Value |
|---|---|
| Model | Western Digital - Red |