Scaling down the home lab, slightly

Since my last post about my new Zion 4U storage server, I have been increasingly impressed by Unraid and have started using Docker and KVM virtualization on it.

Since then I have moved some VMs over to KVM on Unraid, which left just my Domain Controllers, IPAM, and VCSA on the VMware environment. My secondary DC was also broken, so I was running with a single DC. This made me rethink how the home lab was configured, and I decided to simplify some of the many moving parts.

The main hassle is recovering from a power outage: typically I have to intervene manually to get most things back online. Consolidating systems will allow a faster and more hands-off recovery.

Continue reading…

ZION – 4U Unraid Storage Server

I am excited to reveal my latest 2020 custom server build – ZION. See my previous planning post for Zion here.

Built from the ground up as a custom rack-mountable storage server and virtual machine host with 15+ drive bays, Zion uses AMD's Ryzen Zen+ architecture on an X370 chipset and was designed to be powerful enough to run KVM virtual machines and Docker containers.

Zion runs Unraid Pro with SSD caching and dual parity. It has capacity for 25 drives total: 24 HDDs (16 via HBA + 8 via onboard SATA) and one M.2 SSD/NVMe slot.
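
As a quick illustration of how Unraid's parity model affects capacity (a minimal sketch; the drive sizes below are hypothetical placeholders, not the actual build): parity drives contribute no usable space, and each one must be at least as large as the largest data drive, so usable capacity is simply the sum of the data drives.

```python
# Minimal sketch of Unraid-style dual-parity capacity math.
# Drive sizes are hypothetical placeholders, not the actual build.

def unraid_usable_tb(data_tb, parity_tb):
    """Usable capacity in TB: parity drives add no space, but each
    must be at least as large as the largest data drive."""
    assert all(p >= max(data_tb) for p in parity_tb), \
        "each parity drive must be >= the largest data drive"
    return sum(data_tb)

data = [8] * 10 + [4] * 4   # e.g. ten 8TB and four 4TB data drives
parity = [8, 8]             # dual parity, both 8TB
print(unraid_usable_tb(data, parity))  # -> 96 TB usable
```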

Named after Zion from The Matrix movies.

Zion is the last human city on the planet Earth after a cataclysmic nuclear war between mankind and sentient machines, which resulted in artificial lifeforms dominating the world.

https://matrix.fandom.com/wiki/Zion
Continue reading…

2020 Home Lab Upgrade (Unraid 4U)

It’s a new year, and that means it’s time for some home lab upgrades!

The last few upgrades I did were some hardware hacks to my old Thecus N5550s to support 6 drives and Unraid. Before that I did an SSD upgrade on NAS3 for my VMware VM storage. And finally, almost exactly two years ago, I installed a Synology DS1817+.

After using Unraid more and more in my home environment and coming to understand how it works, I have become more confident with it and can say I have pretty much drunk the Unraid Kool-Aid.

So my goal for 2020 is to build a large-capacity storage server and more or less retire most of the Thecus N5550s that have served me so well.

So the planned 2020 additions are:

  • New 4U Unraid 15+ Drive Storage Server
  • Replace 4-Tier Shelf with a 5-Tier Shelf
  • 10Gbps Pre-Planning
Continue reading…

NAS3 SSD Upgrade

This week NAS3 got a storage upgrade from 128GB SSDs to 500GB SSDs. NAS3 is my SSD NAS, which is used for hosting virtual machines.

NAS3 has been running with the 128GB SSDs for many years now. In fact, I paid more for a 128GB SSD back then than I did for a 500GB SSD today, and that was exactly my reasoning for waiting so long to upgrade. NAS3 is not intended to be a performance beast since it's limited to 1Gbps networking, which tops out around 125 MB/s, well below what even a single budget SATA SSD can sustain. It would literally be a waste to use high-end SSDs in this machine.

Choosing an SSD is sometimes a difficult decision. You have to weigh the performance, cost, and endurance (quality) of the drive, especially in a NAS or RAID environment where an SSD's "total bytes written" (TBW) endurance rating becomes a factor.
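
To make that concrete, here is a back-of-the-envelope endurance estimate (a rough sketch; the TBW rating, daily write volume, and write-amplification factor below are illustrative assumptions, so check the actual spec sheet for your drive):

```python
# Back-of-the-envelope SSD endurance estimate. All numbers here are
# illustrative assumptions; check the drive's actual spec sheet.

def years_until_tbw(tbw_rating_tb, daily_writes_gb, write_amp=1.0):
    """Years until the rated TBW is consumed at a given write rate.
    write_amp accounts for extra writes from parity RAID, etc."""
    effective_tb_per_day = daily_writes_gb / 1000 * write_amp
    return tbw_rating_tb / (effective_tb_per_day * 365)

# e.g. a 180 TBW rating, 50 GB/day of VM writes, 2x parity amplification
print(round(years_until_tbw(180, 50, write_amp=2.0), 1))  # -> ~4.9 years
```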

I paid $90 CAD each for five Crucial MX500 500GB 3D NAND SATA 2.5-inch internal SSDs – CT500MX500SSD1(Z).

I don't like using parity RAID with SSDs because the extra parity writes wear them out faster, but in this case I simply don't care. I would rather have a little redundancy at the cost of a faster wear rate. Considering the old 128GB SSDs ran in RAID5 for many years as well, in my experience it's not a big issue.

In RAID5 the five disks give me 1.81 TiB of usable SSD space, compared to about 477 GiB before. A big upgrade in comparison.
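
The arithmetic behind those numbers: RAID5 sacrifices one drive to parity, and the operating system reports capacity in binary units (TiB/GiB) while vendors sell drives in decimal GB. A quick sketch:

```python
# RAID5 usable capacity: N drives minus one for parity, converted
# from the vendor's decimal GB to the binary units the OS reports.

def raid5_usable_bytes(num_drives, drive_gb):
    return (num_drives - 1) * drive_gb * 1000**3

print(raid5_usable_bytes(5, 500) / 1024**4)  # -> ~1.82 TiB (new array)
print(raid5_usable_bytes(5, 128) / 1024**3)  # -> ~477 GiB (old array)
```

The raw math comes out a hair above what the array actually reports; filesystem and formatting overhead account for the small difference.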

This upgrade should last a few years at least. The next storage upgrade on the SSD side will be getting rid of the ancient Thecus N5550s and replacing them with a Synology NAS, but that's on the future wish-list.

Hope you enjoyed reading. If so please drop a like or share.

New Synology DS1817+ NAS & ISP Switch

By the end of 2017 nearly all my NAS servers were close to full capacity. I had already decided on getting a Synology DS1817+; it was just a matter of when. I wanted something with more than five bays that would be upgradable to 10G networking in the future. The DS1817+ seemed to match all of my needs and my budget.

Continue reading…

Home Lab Rebuild

It's been long overdue for some changes to my home lab. The latest full outage, on Sept 4, 2017, due to a power brown-out, had me realizing that some improvements could be made. There have not been any major changes to the lab since 2015. In 2016 I upgraded the storage in NAS1, upgraded the memory in VMH02, and added Ubiquiti UAP-AC-LITE access points and a security camera.

Now I’m going back to the drawing board and doing a fresh rebuild. The goal this time around is to be simple and redundant.

  1. Hardware firewall: I have custom-built a 1U Supermicro server that will be used as the new firewall. It has an Intel Xeon X3470 CPU, 8GB RAM, quad gigabit LAN ports, and a low-power 200W power supply. I've also replaced the stock passive CPU heat-sink with the Thermaltake Engine 27 low-profile heat-sink. It's a well-balanced combination of performance, power, and noise. In the old lab design the virtualized firewall introduced too many dependencies and greatly increased the complexity of the network. In a power-outage scenario it also required a VM host and its storage to be online, which do not last long on UPS batteries. Having a low-power hardware firewall gives me more flexibility and a faster recovery from a total lab black-out.
  2. Additional UPS backup power: There will now be a third UPS for the home lab. I will dedicate one UPS to the core networking equipment and try to keep its load under 25% to maximize battery runtime. The rest of the gear will be balanced across the other two UPS units.
  3. Standard Virtual Switches: I will be removing the Virtual Distributed Switch and LACP on the ESXi hosts. This is a tough call, but I have weighed the options. The VDS in my environment is overkill: I have two hosts, with only one of them on at a time, so in my scenario the VDS's only purpose is configuration sync. I don't use traffic shaping, private VLANs, LLDP, etc. The only losses in moving down to a VSS are LACP and having to manually keep the port groups identical on each host (a chore that can be scripted; see the sketch after this list). That doesn't concern me because the port groups hardly ever change.
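
On that last point, keeping standard-switch port groups in sync by hand is scriptable. Below is a minimal pyVmomi sketch, assuming vCenter is still reachable and the standard vSwitch already exists on each host; the hostname, credentials, and port group list are hypothetical placeholders:

```python
# Minimal pyVmomi sketch: create the same standard-switch port groups
# on every host. The hostname, credentials, and port groups below are
# hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

PORT_GROUPS = [("VM Network 10", 10), ("VM Network 20", 20)]  # (name, VLAN)

ctx = ssl._create_unverified_context()  # lab only: skip cert validation
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net_sys = host.configManager.networkSystem
        existing = {pg.spec.name for pg in net_sys.networkInfo.portgroup}
        for name, vlan in PORT_GROUPS:
            if name in existing:
                continue  # already in sync on this host
            spec = vim.host.PortGroup.Specification()
            spec.name = name
            spec.vlanId = vlan
            spec.vswitchName = "vSwitch0"   # assumes this VSS exists
            spec.policy = vim.host.NetworkPolicy()
            net_sys.AddPortGroup(portgrp=spec)
            print(f"{host.name}: added {name} (VLAN {vlan})")
finally:
    Disconnect(si)
```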

Continue reading…