Home Lab

This is an overview of my vSkilled home lab environment. A home lab is a great learning tool for any tech-savvy geek.

My home lab runs 24x7x365 and draws approximately 600W of power continuously. For years now my home lab has been a critical part of my home network; it is quite literally the backbone that runs the infrastructure for the rest of the electronics in the house. I run multiple AD/DNS domain controllers, a virtual firewall, Plex, Observium, Confluence, IPAM, Veeam, vCenter, and others; all virtualized on VMware.

I am always making tweaks and changes so I will do my best to keep this page updated.

Last Updated: April 28, 2016

The vSkilled Home Lab includes the following components broken down into categories.

  • Rack Hardware
    • 4-Tier Black Wire Shelf
  • Power / Temperature
    • APC Back-UPS Pro 1000 (1000VA / 600W) x 2
  • Cooling
    • Industrial Floor Fan x 2
    • Ceiling Intake Fan (to Outdoors)
  • Security
  • Networking
    • Sophos Unified Threat Management 9 Firewall (Virtual)
    • Netgear GS748Tv3 48-Port 1GbE Switch (CSW 1)
    • Netgear GS724Tv3 24-Port 1GbE Switch (CSW 2)
    • Netgear GS108T-200NAS Prosafe 8 Port 1GbE Switch (CSW 3)
    • Linksys WRT1900AC Wireless Router
  • Compute
    • Dell PowerEdge C1100 CS24-TY LFF, 2 x Xeon L5520, 72GB ECC RAM (VMH01)
    • Supermicro Whitebox – 2 x Xeon X5570, 40GB RAM (VMH02)
  • NAS/SAN Storage
    • Thecus N5550 5-Bay (NAS 1)
    • Thecus N5550 5-Bay (NAS 2)
    • Thecus N5550 5-Bay (NAS 3)
    • Whitebox – 8-bay, AMD, 16GB RAM, LSI 9260-8i (NAS 4)
  • Storage Drives (approx.)
    • Seagate ST4000VN000 4TB 64MB NAS x 3
    • Western Digital WD WD40EFRX 4TB Red x 2
    • Western Digital Green 2TB x 4
    • Seagate Desktop 1TB x 2
    • Seagate Desktop 2TB x 3
    • Seagate Desktop 4TB x 1
    • Kingston HyperX 3K SSD 120GB x 8
  • Other Gear

Rack Hardware

I use a 4-tier wire shelf as a rack for the vSkilled Lab. This allows me to have a more “open” design and keeps the power, network, and compute hardware in close proximity. In terms of environmental needs, installing hardware in an open-air, mount-less setup like this arguably does not help with proper airflow or with keeping system and component temperatures down… but that’s just part of the challenge of having a home lab: power and cooling. You’ve got to work with what you’ve got.

In my current layout, I dedicate one shelf to compute infrastructure and the others to networking, storage, and power gear, as well as more transient hardware that stays only for the length of a product review, for example. This layout lets me keep the cable routing neatly organized.

Everything on the rack is connected to one of the two uninterruptible power supplies (UPS) in case of a power failure. The lab lasts approximately 15-20 minutes on battery power before the batteries run dry. The security camera records all motion activity in the room; it saves video directly to the NAS servers and writes a picture to a local SD card every 2 seconds (just in case the NAS is down for whatever reason).
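That 15-20 minute figure lines up with simple back-of-the-envelope math. The sketch below is only an estimate: the usable battery capacity per UPS is an assumption, not a measured spec.

```python
# Rough UPS runtime estimate (sketch only; the ~100 Wh of usable battery per
# APC Back-UPS Pro 1000 is an assumption, not a measured figure).

def runtime_minutes(load_w: float, battery_wh: float, inverter_eff: float = 0.85) -> float:
    """Minutes of runtime for a given load, derated for inverter losses."""
    return battery_wh * inverter_eff / load_w * 60

total_battery_wh = 2 * 100  # two UPS units sharing the load
print(f"Estimated runtime at 600 W: {runtime_minutes(600, total_battery_wh):.0f} min")
# Prints roughly 17 min, in line with the observed 15-20 minutes.
```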

On the top-left portion of the top rack there is a smoke detector (white round thing). Right beside the door is a fire extinguisher. You know… Just in case. I would hate to burn down the building, or cause injury because of my home lab. Safety first!


Below is a quick video tour of the lab – in “disco” mode! This is really just for demonstration; I normally don’t have it cycling through colors like this because it’s a bit too much like having a nightclub in the house. I usually keep it solid blue. 🙂

Network

[Network diagram – 2016]

Network: Overview

I try to keep my network as clean and simple as possible. I currently use no VLANs in my network except on VMware vSwitches for isolated testing. I plan to add VLANs later on, but I just haven’t gotten around to it yet.

I use a Sophos UTM virtual appliance for my firewall and router. The firewall VM can run from any of my VM hosts, so the Internet stays up even while I’m constantly playing around with or rebooting things. This means I need a connection from the modem to each of the VM hosts. There is HA/clustering functionality you can set up within the Sophos UTM itself, but instead I simply rely on vSphere HA / vMotion. This saves resources because I only need one firewall VM running.
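For anyone curious how that looks in practice: each host gets a small vSwitch dedicated to the WAN uplink, carrying a single “WAN Uplink” port group that only the firewall VM attaches to (the vSwitch1 / “WAN uplink” naming comes up again in the comments below). Here is a minimal pyVmomi sketch of creating such a vSwitch and port group on one host; the hostname, credentials, and physical NIC name are placeholders, not my exact configuration.

```python
# Sketch: dedicated "WAN" vSwitch + port group on an ESXi host via pyVmomi.
# Host address, credentials, and the uplink NIC name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; don't skip cert checks in production
si = SmartConnect(host="vmh01.lab.local", user="root", pwd="password", sslContext=ctx)

host = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True).view[0]
net_sys = host.configManager.networkSystem

# vSwitch1 is bridged to the physical NIC that is cabled to the (bridged) modem.
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"]))
net_sys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# A single port group on that vSwitch; only the firewall VM's WAN vNIC uses it.
pg_spec = vim.host.PortGroup.Specification(
    name="WAN Uplink", vswitchName="vSwitch1", vlanId=0,
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)

Disconnect(si)
```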

Each VM host has 2 x MGMT and 2 x VM traffic uplinks. In the real world these connections would be split across two stacked switches for proper redundancy, but I only have one switch, so just try to imagine. This gives me link redundancy on both the management and VM traffic adapters.

 

Network: CSW1 (Core Switch 1)

The core / aggregation layer of my network runs on a very reliable Netgear GS748Tv3 48-Port 1GbE switch. Almost all network and storage traffic passes through CSW1; on average it moves about 15,000GB of data per month. The stock fans in this switch are very loud, so I have swapped them out for much quieter fans that still provide adequate airflow.
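For perspective, 15,000GB per month is a fairly modest sustained average once you spread it out; the quick calculation below puts it around 46 Mbps averaged over a 30-day month, although the real traffic is of course very bursty.

```python
# Back-of-the-envelope: average rate implied by ~15,000 GB/month through CSW1.
gb_per_month = 15_000
seconds_per_month = 30 * 24 * 3600

avg_gbps = gb_per_month * 8 / seconds_per_month  # GB -> gigabits, spread over the month
print(f"Average sustained rate: {avg_gbps * 1000:.0f} Mbps")  # ~46 Mbps
```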

Network: CSW2 (Core Switch 2)

The living room’s primary connectivity hub, with lots of room for expansion. It mainly connects my primary desktop PC, the wireless router, game consoles, the TV, and my Media PC. This switch is fanless.

Network: CSW3 (Core Switch 3)

My apartment has CAT5e run in the walls to various rooms, which means my server room essentially just needs to plug into a CAT5e wall socket for connectivity to the rest of the house. That is where CSW3 comes in. It’s located in the laundry room in a small network closet, which is also where the ISP’s coaxial feed enters the apartment. CSW3 is a rather robust Netgear GS108T-200NAS ProSafe 8-port 1GbE switch. It is basically the interconnect between the server room and the rest of the house; home devices, wireless clients, and everything else from the other rooms has to pass through here.

Network: Wireless

For primary wireless communication I use a Linksys WRT1900AC Wireless Router. I’ve replaced the two rear “ears” with a pair of high-gain omnidirectional RP-SMA antennas for added signal strength and range. For cooling and stability it has its own dedicated Thermaltake cooling pad, complete with blue LEDs! (Because why not, right?!)

Compute / VM Hosts

The virtual machine hosts are what power all my virtual machines, which at this point means 100% of my servers besides the VM hosts themselves. They run VMware ESXi and boot from a micro-USB flash drive. I use compatible Intel CPUs so that I can enable VMware EVC mode for vMotion compatibility between the hosts. VMH01 is my primary workhorse: a dual-socket CPU configuration loaded with 72GB of RAM. VMH02 is a Supermicro whitebox server I built myself, also with a dual-socket CPU configuration.
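Since both hosts run Nehalem-generation Xeons (L5520 and X5570), the cluster’s EVC baseline can be pinned to that generation. Below is a minimal pyVmomi sketch of setting an EVC mode on a cluster; the vCenter address, credentials, and cluster name are placeholders, and this illustrates the general API call rather than my exact setup.

```python
# Sketch: set an EVC baseline on a cluster so vMotion keeps working across hosts.
# vCenter address, credentials, and the cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

clusters = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True).view
cluster = next(c for c in clusters if c.name == "Lab-Cluster")

# L5520 and X5570 are both Nehalem parts, so "intel-nehalem" is the matching baseline.
task = cluster.EvcManager().ConfigureEvcMode_Task(evcModeKey="intel-nehalem")

Disconnect(si)
```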

  • VMH01: Dell PowerEdge C1100 CS24-TY LFF
    • CPU: 2 x Intel L5520 @ 2.27GHz
    • RAM: 72GB DDR3 ECC
    • 7 Network Ports
      • 1 x Intel 82576 Dual-Port 1Gb Ethernet (2 Ports)
      • 1 x Intel I340-T4 1Gb Ethernet (4 Ports)
      • 1 x BMC (IPMI) Management @ 100Mbps
  • VMH02: Supermicro X8DT3/X8DTi Whitebox
    • CPU: 2 x Intel Xeon X5570 @ 2.93GHz
    • RAM: 40GB DDR3 non-ECC
    • 7 x Network Ports
      • 1 x Intel 82576 Dual-Port Gigabit (2 Ports)
      • 1 x Intel I340-T4 1Gb Ethernet (4 Ports)
      • 1 x IPMI Management @ 100Mbps

 

Storage

I currently use 3 x Thecus N5550 storage appliances exclusively for my storage needs. I started off with one and have been so impressed that I just kept buying them. The Thecus N5550 is, in my opinion, the most cost-effective 5-bay NAS on the market today; no other 5-bay NAS comes close to its price point. They are extremely reliable, perform well, and use very little power. I would have preferred to go the Synology route, but I just couldn’t justify their premium price point.

Each of the storage appliances is connected to the CSW1 core switch using 2 x 1Gbps copper links in an LACP LAG group. This essentially combines the two links for added throughput (up to 2Gbps!) and adds link redundancy. All storage network cables are yellow for easy visual identification.
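One caveat worth spelling out: LACP gives 2Gbps of aggregate headroom across multiple clients or sessions, but any single flow (one NFS stream, for example) still hashes onto one 1Gbps member link. The illustrative numbers, ignoring protocol overhead:

```python
# Illustrative only: aggregate vs. per-flow throughput on a 2 x 1 Gbps LACP LAG.
link_gbps, links = 1.0, 2

aggregate_gbps = link_gbps * links   # headroom across many flows/clients
per_flow_gbps = link_gbps            # a single flow hashes onto one member link

to_mb_per_s = lambda gbps: gbps * 1000 / 8  # line rate, ignoring protocol overhead
print(f"Aggregate: ~{to_mb_per_s(aggregate_gbps):.0f} MB/s, "
      f"single flow: ~{to_mb_per_s(per_flow_gbps):.0f} MB/s")
# ~250 MB/s aggregate, ~125 MB/s for any one NFS/SMB stream.
```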

  • NAS 1 – 14.54TB – RAID5, 5 x 4TB HDD. Used for primary file storage. RAM upgraded to 4GB. Com-link to UPS A for automatic graceful shutdown if the battery drops below 20%. (The capacity figures here are worked out in the sketch after this list.)
  • NAS 2 – 9.08TB – RAID0, 5 x 2TB HDD. Used for backup/alternate storage: Veeam backups, a copy of some of NAS 1’s data, decent IOPS for running a few VMs, etc.
  • NAS 3 – 546GB – RAID0, 5 x 120GB SSD. High-performance NFS storage used only for VM storage. RAM upgraded to 4GB. Com-link to UPS B for automatic graceful shutdown if the battery drops below 20%.
  • NAS 4 – RAID5, 8 x 2TB disks. Used for backups and misc data storage. RIP April 2016; catastrophic LSI RAID card failure.
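The headline capacities follow directly from the RAID layout plus the decimal-TB vs. binary-TiB difference; the small gaps from the listed figures are filesystem overhead. A quick sketch of the arithmetic:

```python
# Why 5 x 4 TB in RAID5 shows up as ~14.5 "TB": one disk goes to parity and the
# NAS reports binary units (TiB). Sketch only; filesystem overhead is ignored.
def usable_tib(disks: int, disk_tb: float, raid_level: int) -> float:
    data_disks = disks - 1 if raid_level == 5 else disks  # RAID5 loses one disk to parity
    return data_disks * disk_tb * 1e12 / 2**40            # vendor TB are decimal

print(f"NAS 1 (RAID5, 5 x 4TB): {usable_tib(5, 4, 5):.2f} TiB")  # ~14.55
print(f"NAS 2 (RAID0, 5 x 2TB): {usable_tib(5, 2, 0):.2f} TiB")  # ~9.09
```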

Power & Cooling

Power and cooling are arguably the biggest obstacles to running a larger home lab environment, besides licensing. Virtualization, however, has vastly helped with this problem. The fewer pieces of physical equipment you need to keep turned on, using power and generating heat, the better! I used to run my firewall on a physical box but switched it over to a VM to save power and reduce heat output.

During the summer months my lab keeps the air conditioner working at maximum capacity, and during the winter months the home lab helps keep the house a nice, cozy, warm temperature. Obviously this translates into higher power usage during the summer months, but surprisingly not by much – only about a 100kWh difference.

[Power usage chart – updated January 8, 2016]

 

My home lab uses approximately 600W of power on average, and this fluctuates largely with server load; at peak load it runs more like 650-800W+. I am lucky to live in western Canada, where power is relatively cheap compared to other places in the world. I pay $0.0797 per kWh for the first 1,350 kWh, then $0.1195 per kWh over that Step 1 threshold, plus a basic charge of $0.1764 per day. I get charged mostly at Step 2 pricing, obviously. With all that said, I pay about $220 – $290 CAD per 60-day billing period, so roughly $110 – $145 CAD per month, which is pretty good all things considered!
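The billing math is easy to reproduce. The sketch below applies the two-step rate and the daily basic charge to a 60-day period; the household total it assumes (lab plus everything else) is an illustrative figure, not a number from my actual bill.

```python
# Reproducing a 60-day bill under the two-step residential rate.
# The household total below is an assumption for illustration only.
STEP1_RATE, STEP2_RATE = 0.0797, 0.1195  # $/kWh: first 1,350 kWh, then the rest
STEP1_LIMIT_KWH = 1_350
BASIC_CHARGE_PER_DAY = 0.1764            # $/day
DAYS = 60

lab_kwh = 0.600 * 24 * DAYS              # ~600 W continuous -> ~864 kWh per period
household_kwh = 2_200                    # assumed total usage, lab included

step1 = min(household_kwh, STEP1_LIMIT_KWH) * STEP1_RATE
step2 = max(household_kwh - STEP1_LIMIT_KWH, 0) * STEP2_RATE
total = step1 + step2 + BASIC_CHARGE_PER_DAY * DAYS

print(f"Lab's share of usage: {lab_kwh:.0f} kWh")
print(f"Estimated bill: ${total:.2f} CAD")  # ~$220 for this assumed total
```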

Cooling is another problem I struggle with. In my current setup I do not have a window available to hook up a portable AC unit. Luckily there is a ceiling vent fan that pulls air from the room and vents it outside, but its contribution to keeping the room cool is trivial at best. I have looked into something like a swamp cooler but decided that was a bad idea: swamp coolers use evaporative cooling, which requires water, so you would just end up blowing moist, humid air into an area sensitive to moisture (which could be disastrous for server equipment) and having to refill a water tank every couple of hours. No thanks!

My only option for cooling at this point is fans, which circulate the air and push it out of the room. The door to the server room is cracked open slightly to let cool air from the house flow in, and the fan pushes the hot exhaust from the servers out. Ideally I would like to get a portable AC unit, but without a window to vent the hot air it just won’t work. I might have to rig up some kind of contraption that lets it exhaust out the ceiling vent; the problem is I would like my damage deposit back. I really don’t know, and I am open to suggestions on this! I am sure I am not the only one who struggles with it.

Have a comment or question? Leave a comment below!

Changelog

04/28/2016:

  • Updates, Editing, Formatting, and Clean-up

04/18/2016: 

  • NAS4 died due to LSI 9260-8i catastrophic failure. HDDs moved into NAS2. CPU/Mobo/RAM swapped to MediaPC. Other hardware will be re-used in another project.
  • NAS2 was JBOD w/ mixed disks. NAS2 is now RAID0 w/ 5x2TB HDD (WD Green and Seagate Barracuda), for non-redundant backup storage.

02/04/2016:

  • Updated network diagram
  • Updated network overview

01/08/2016:

  • Updated power usage chart, includes 2014-2015 usage

12/19/2015:

  • Added lots of new pictures (and replaced some old pics)
  • Added NAS 4
  • Updated VMH02

10/30/2015:

  • Now using a VDS instead of VSS
  • VMH02 replaced with newly built Supermicro Whitebox. Pictures coming soon once build is finished.
  • NAS1 upgraded to 5x4TB RAID5, details updated.
  • NAS2 upgraded to 5x2TB RAID5, details updated.
  • Updated Power monthly consumption graph for January – October 2015

Comments

  1. Posted by Jack on December 19th, 2015, 09:50

    Hi,

    That was a good read. A lot of compute and a lot of storage. What most impressed me is your professional approach to your home lab.

    I don’t have nearly the same amount of equipment as you do, but I have also taken a professional approach to both documenting and setting up the logical and physical parts of the lab. It’s one of the parts that I think most homelab owners neglect. Networking is, in my opinion, the part most homelab owners struggle with, and I believe this has to do with overcomplicating the problem and a lack of documentation.

    So just wanted to give you a thumbs up on a job well done :).

  2. Posted by Amon on April 16th, 2016, 23:38

    Seeing this kind of homelab inspires me to build my own. Now I’m planning to build my own NAS, but I could not find a suitable case to build it in.
    I saw you found a computer case with plenty of space to host 8 disks. Could you provide us with the model and its brand?

    Congratulations and thanks for your lab.

    • Posted by Karl Nyen on April 17th, 2016, 10:09

      I used the Fractal Design Define R4 case for the 8-bay custom build. There are many interesting 8×3.5″ cases to choose from however.

  3. Posted by David on April 18th, 2016, 13:25

    Do you have the nitty-gritty “HowTo” of setting up your ESXi hosts’ networking, the physical networking, storage subnets, etc.? Just wondering, as you have an awesome physical setup and it looks so well organized.

    • Posted by Karl Nyen on April 18th, 2016, 16:35

      Hi David. I currently don’t have an in-depth guide, but I do plan on making one as a blog post at some point. I am constantly changing and tweaking things in my lab, so it’s hard to put that kind of detailed info on this page since it would constantly be outdated. 🙂

  4. Posted by David on April 20th, 2016, 13:09

    Awesome, looking forward to it! I have a somewhat similar home lab, all physical, and it “works” as a home lab, but I’m in way over my head on getting all the networking done “right”.

  5. Posted by Homelabber on April 22nd, 2016, 05:00

    Hi Karl, how do you split out your WAN connection from the cable modem to both ESXi hosts without VLANs?

    • Posted by Karl Nyen on May 3rd, 2016, 12:51

      Sorry for the delay. Each host has a private vSwitch dedicated to the “WAN” uplink NIC. The vSwitch hosts only the “WAN uplink” VM portgroup, which is assigned only to the firewall VM. This allows me to run my firewall VM from any of my ESXi hosts without any interruption whatsoever. I can even live migrate (vMotion) the firewall between hosts while streaming Netflix (for example) without a single dropped ping.

    • Posted by Homelabber on May 13th, 2016, 04:31

      So the physical WAN link out of your cable modem goes into a switch and from there connects to both hosts? Is it on a VLAN or is the switch just a small ‘dumb’ unmanaged switch? It doesn’t seem to be any of the three switches you detail, unless I missed something.
      Thanks!

    • Posted by Karl Nyen on May 17th, 2016, 07:50

      The ISP’s modem is in bridged mode so that it’s no longer functioning as a router/gateway. Each ESXi host is physically connected directly to the modem (to vSwitch1) for the firewall WAN uplink. There is no physical switch between the modem and the hosts, besides the vSwitch. I could probably use a switch and/or VLANs in the future if I start to run out of ports from adding more hosts.

  6. Posted by Noaman Khan on May 28th, 2016, 21:21

    Karl,
    Would you be kind enough to tell me what motherboard you ended up using for “VMH02 Supermicro X8DT3/X8DTi Whitebox”?

    • Posted by Karl Nyen on May 30th, 2016, 07:55

      It’s right in the name, the motherboard is the Supermicro X8DT3/X8DTi. 😀

    • Posted by Noaman Khan on May 30th, 2016, 21:52

      Karl, firstly thank you for clarifying the motherboard model. Secondly, I’m confused by your other post, where you mention replacing VMH02 with an i7 chip and a socket 1366 motherboard, as this post was updated in 2016.

      I’m looking to build an ESXi server that will need to run 8 VMs 24/7 with a total of 12-13 vCPUs allocated. Any thoughts?

    • Posted by Karl Nyen on May 31st, 2016, 07:49

      That’s an old post now. The LGA 1366 / i7 board at the time of this writing is used for my MediaPC and VMH02 is the VMware host.
      You could probably get away with any single or dual socket configuration with a half decent CPU. Really depends on your VM load and your back-end storage capabilities.
