Storage Refresh 2016 – Time to Build! (Part 2)


Part 1: http://www.vskilled.com/2015/07/storage-refresh-2016-the-plan/

The hard drives arrived today from NCIX, and it's now time to build everything out and finally increase the storage capacity in my home lab. I've made only minor changes to the original plan: I ended up shying away from the Seagate 8TB archive hard drives I had originally planned to buy strictly for backup purposes. Much like 3TB drives, I just don't have any confidence in them long-term.

| Device | Current Drive Layout | Current Capacity | Desired Drive Layout | Desired Capacity |
| --- | --- | --- | --- | --- |
| NAS 1 (Thecus N5550) | 5 x 2TB (RAID 5) | 7.21TB | 5 x 4TB (RAID 5) | 16TB~ |
| NAS 2 (Thecus N5550) | 1x4TB, 2x2TB, 1x1TB, 1x640GB (JBOD) | 8.9TB | 5 x 2TB (RAID 5) | 7.25TB~ |
| VMH01 (Dell C1100) | 1 x 1TB | 1TB | 2 x 8TB, 2 x 2TB (JBOD) | 20TB~ |

The end result stays the same: I'm looking to end up with two large data/media storage pools with about 22TB of usable storage, a considerable increase from my existing data capacity of 7.21TB.

The challenge now is performing a safe and successful data migration to the new storage. Normally I use NAS2 as my backup/archive NAS. I am going to remove its drives, move some of them into VMH01 (Dell C1100), and create a temporary datastore to back up all of the data. That way I can safely create a backup of the data and still be semi-protected by RAID.

After carefully scavenging through some documentation, I found that I should be able to swap the disks from NAS2 into NAS1 without losing any configuration or data. This is risky, however, so I will store a third copy of my data on a JBOD on "BackupSrv". The risk is worth the reward: if it pays off, I save myself from having to copy the data from the BackupSrv JBOD again, and in the worst-case scenario the data is still on the drives, so I can simply swap them back if I need to roll back the change.

The Step-by-Step Action Plan:

  1. NAS2: Destroy JBOD, power-off, remove drives
  2. VMH01: Add 1x4TB, 1x2TB, 1x1TB. (Add to BackupSrv VM)
  3. BackupSrv: Create JBOD datastore for backup
  5. NAS2: Add 5x4TB NAS drives, build RAID, re-configure NFS, rsync, FTP, etc.
  5. NAS1: Full rsync backup to BackupSrv & NAS2, verify data
  6. Power-off both NAS1 and NAS2.
  7. Swap disks from NAS2 into NAS1, NAS1 into NAS2. Power on, cross fingers.
  8. Verify data and shares. It works!
  9. Data Migration Completed
  10. Cleanup: Reconfigure Rsync Backup Schedules
  11. Cleanup: Update Home Lab page, CMDB, Wiki
  12. Cleanup: Permissions on shares

* – Veeam Backup Repository moved temporarily to NAS1 (approx 600GB~)
* – NFS datastores + permissions will be lost during a RAID rebuild
* – Printer Scan-to-FTP Setup

 

Let's take a look at our storage now:

[Screenshot: storage overview after the rebuild, 2015-11-09]

 

Excellent. Now I have a large 14.5TB RAID5 share for media/data storage, another 7.26TB RAID5 share for more data storage, and another 7TB of disks in JBOD for archive/backups. I have an LSI MegaRAID SAS 9260-8i 8-port RAID card on the way so that I can present the archive/backup JBOD disks more cleanly to a backup VM.

New ESXi Server Build – VMH02 Replacement


This build was originally meant to be a remote ESXi server for my parents' place, but I've ended up liking it so much that I'm going to have to keep it for myself. So I'll be finishing up this build for my lab, swapping my current 2nd ESXi host (VMH02) to be my MediaPC, and finally re-purposing the MediaPC hardware as an ESXi host for the original remote lab plan.

I sort of figured at the beginning of this remote lab project that I could end up falling in love with the build and deciding to keep it, and well… here we are. I really like the new case (Cooler Master HAF XB EVO ATX) and I'll be buying another one for the remote ESXi lab. It's big and open, has lots of fan slots, and is easy to work in and cable-manage. And now that I know how to work with the case properly, the next build will be super easy to plan out and execute.

| Component | Part Name | Cost (CAD) |
| --- | --- | --- |
| CPU | Intel® Core™ i7-950 Processor | $50 (used) |
| Motherboard | ASUS Rampage III Extreme LGA 1366 Intel X58 SATA 6Gb/s USB 3.0 ATX Intel Motherboard | $50 (used) |
| RAM | Kingston HyperX Fury Black 16GB (2x8GB) DDR3-1866 CL10 and Corsair Vengeance 16GB (2x8GB) DDR3-1866 | $120 |
| Power Supply | Thermaltake TR2 500W ATX12V V2.3 24-pin with 120mm fan, cable management | $50 (have) |
| Case | Cooler Master HAF XB EVO ATX | $110 |
| Network | Intel I350-T4 PCI-Express four-port RJ45 Gigabit Server Adapter NIC | $60 |
| Fans / Misc | NZXT Hue 3 RGB Color Changing LED Controller, 2 x 80mm (buy), 1 x 200mm (have), thermal compound | $50 |
| CPU Cooler | Corsair Cooling Hydro Series H60 | $70 |
| **Total** | | ~$560 |

Once replaced, the new VMH02 will be an Intel i7-950 with 32GB of RAM, a small upgrade from the previous i7-920 with 20GB of RAM. I was able to get the used motherboard, CPU, and 16GB of Corsair RAM (see table above) from a buddy for $120 total. That alone easily saved me about $600 compared to buying new.

Build Progress:

I’ll post another update in the coming days on build progress. 🙂

Storage Refresh 2016 – The Plan


The time has come to increase storage capacity in the home lab. I expect that before the end of this year I will have less than 1TB of free space left on my primary data NAS. That is a problem, and an expensive one at that. At the time of this writing I have 1.66TB of 7.21TB free (77% used). My data growth rate currently averages between 3% and 5% per month. That gives me about 2-3 months before I'm in a critical state.
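As a quick sanity check on that runway estimate, here is a back-of-the-envelope calculation using the figures above (7.21TB total, 1.66TB free) and assuming roughly 4% monthly growth, with "critical" meaning less than 1TB free:

```shell
# Months until free space drops below 1 TB, assuming ~4% monthly growth
# of used data. Figures from the post: 7.21 TB total, 1.66 TB free now.
awk 'BEGIN {
  total = 7.21
  used  = total - 1.66   # 5.55 TB used today (77%)
  rate  = 0.04           # assumed average monthly growth
  months = 0
  while (total - used >= 1.0) { used *= (1 + rate); months++ }
  print months           # prints 3
}'
```

At 3% growth the same loop gives about 4 months and at 5% about 3, which lines up with the 2-3 month estimate.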

Adding storage to the primary data/media pools also means adding storage to the backup pools. You won't catch me without a backup – you only need to be burned by that once before you learn that harsh lesson. Seagate has come out with an 8TB drive meant for backups only, which will help with backup capacity. Overall I have been pretty skeptical of these 8TB drives. It is strongly advised not to use them in a RAID setup because they use SMR (shingled magnetic recording), which layers the tracks on the platter on top of each other to increase platter density, or tracks per inch (TPI). With that said, they seem to be fairly robust. While one could argue that I could (should?) delete some stuff, I strongly disagree. I am a data hoarder. Do you throw out all your books after you're done reading them? Probably not. Same goes for data.

Upgrading the primary NAS means I'll need to rebuild RAID arrays, use NAS 2 as "swing" storage, move data onto the upgraded NAS 1, rebuild NAS 2, and so on. This will take a couple of days of just moving data around while ensuring I have a backup at all times. During the swing process I am particularly vulnerable to drive failure. Currently my backup NAS 2 is in a JBOD configuration; if any one of the drives fails during this read/write-intensive transfer process – game over. For that reason I will be making a second backup onto the 8TB Seagate drive, just in case.

The plan is to switch NAS 1 into 5 x 4TB RAID 5, NAS 2 into what NAS 1 is currently (5 x 2TB RAID 5). I’ll then be leveraging my VMH01 (Dell C1100) for the backup pool drives (2 x 8TB, 2 x 2TB in JBOD) served up by a NAS4Free virtual machine. To help wrap my head around what I am doing I like to draw things out on my whiteboard. Here is my “draft” design. Apologies for the chicken scratch.

[Image: whiteboard draft of the Storage Refresh 2016 design]

I'll be re-purposing an existing 4TB drive from NAS 2 and moving it into the NAS 1 RAID pool (hence only purchasing 4 x 4TB drives, as seen below). This saves me the cost of buying another 4TB drive.

I will be using a multi-vendor setup with a mix of Seagate and Western Digital drives, which will make things a little more robust in the long term. Currently I just have desktop-rated drives in the primary NAS which, by manufacturer guidelines, are only rated for a maximum of two drives in RAID 0/1 and for 8×5 use. The WD Red and Seagate NAS series drives are designed for use in home NAS units and servers. They offer a good price-to-performance ratio and possess a few features that make them more suitable for RAID arrays: TLER, higher vibration tolerance (which should result in a longer lifespan), lower power consumption, and a 24/7 duty rating.


| Drive | Quantity | Cost (CAD) |
| --- | --- | --- |
| Seagate ST8000AS0002 8TB 5900RPM 128MB Cache SATA3 Archive Hard Drive OEM (for backup data only) | 2 x $319.99 ea | $639.98 |
| Western Digital WD40EFRX 4TB Red SATA3 6Gb/s 64MB Cache 3.5in Hard Drive | 2 x $209.99 ea | $419.98 |
| Seagate ST4000VN000 4TB 64MB SATA 6Gb/s 3.5in Internal NAS Hard Drive | 2 x $199.99 ea | $399.98 |
| **Total** | | $1,459.94 |

All said and done I will end up with two large data/media storage pools with 22TB~ of usable combined storage.

A considerable increase from my existing data capacity of only 7.21TB; the idea being for this to last at least 3+ years. NAS 2, currently a JBOD for backups only, will become another usable RAID5-protected data pool. Each NAS will be backed up to VMH01's backup storage JBOD.

| Device | Current Drive Layout | Current Capacity | Desired Drive Layout | Desired Capacity |
| --- | --- | --- | --- | --- |
| NAS 1 (Thecus N5550) | 5 x 2TB (RAID 5) | 7.21TB | 5 x 4TB (RAID 5) | 16TB~ |
| NAS 2 (Thecus N5550) | 1x4TB, 2x2TB, 1x1TB, 1x640GB (JBOD) | 8.9TB | 5 x 2TB (RAID 5) | 7.25TB~ |
| VMH01 (Dell C1100) | 1 x 1TB | 1TB | 2 x 8TB, 2 x 2TB (JBOD) | 20TB~ |

I'll be sure to post updates with pictures of the build and upgrade process when the time comes. For now I'll be trying to save up some cash to make this plan come together.

Home Lab Upgrades, Network Standardization

A quick update here for May 2015:

Yesterday I upgraded from a 24-port Netgear GS724Tv3 switch to a 48-port Netgear GS748Tv3 switch. Along with the switch upgrade I completely re-cabled everything with new CAT6 cables, now color-coded by function for my new network design. I really like the idea of knowing what might be plugged into the other end just from visual inspection, despite having all cables labeled at each end.


I've added a new LED lighting system to the rack. I normally use one fixed color at a time, but for the video I had it cycling through colors for demonstration; otherwise it's a bit too much like having a night club in the house. I also replaced many power cables with shorter 1ft / 3ft cables to reduce excess cable mess. Not everything needs a 6ft power cable!


It took me about 3 hours all said and done. My home lab needed a good cleaning anyway, so it was a good opportunity to fix some things. The recent addition of NAS3 had left me scrambling for ports: previously it was temporarily running on a single link, yet it was my main VM storage. It is now set up properly in an LACP port channel.

The video above shows a quick tour of the changes.

SSD Emulated Virtual Disks for Nested ESXi

I came across a gotcha scenario when trying to deploy vSAN in my home lab. When adding disks to a nested ESXi server, all of the disks are detected as regular ol' spindle disks regardless of the actual underlying storage, so I needed a method to emulate an SSD device. Truth be told, there are many other reasons why you might want to emulate an SSD.

The solution is easy! It’s just one simple edit to the virtual machine’s configuration file (VMX). As long as you’re running virtual machine hardware version 8 or later you can configure a specific virtual disk to appear as an SSD.

scsiX:Y.virtualSSD = 1

X represents the controller ID and Y is the disk ID of the virtual disk.
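For example, to flag the second disk on the first SCSI controller as an SSD (the controller and disk IDs here are just an illustration; match them to the disk you actually want to flag):

```
scsi0:1.virtualSSD = 1
```

Edit the VMX with the virtual machine powered off (or add the entry via the VM's advanced configuration parameters), and on the next boot the guest ESXi should report that virtual disk as flash.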