Since the swap over to my new 4U storage server ZION, it has already had a number of upgrades and I have used it quite heavily. My old N5550s now all sit unused. In this post I will cover the updates to the Zion 4U storage server and my Synology DS1817+.
I am excited to reveal my latest 2020 custom server build – ZION. See my previous planning post for Zion here.
Built from the ground up as a custom rack-mountable storage server and virtual machine host with 15+ drive bays, using the AMD Ryzen Zen+ architecture and X370 chipset, Zion was designed to be powerful enough to run KVM virtual machines and Docker containers.
Zion runs Unraid Pro with SSD caching and dual parity. It has capacity for 25 drives in total: 24 HDDs (16 via HBA + 8 via on-board motherboard SATA) and one M.2 SSD/NVMe slot.
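To put the array layout in perspective, here is a minimal sketch of how Unraid capacity works: each parity drive must be at least as large as the largest data drive, and usable space is simply the sum of the data drives. The drive sizes below are illustrative, not Zion's actual mix.

```python
def unraid_capacity(drive_sizes_tb, parity_count=2):
    """Split a pool of drives into parity and data the way Unraid would.

    Unraid requires each parity drive to be at least as large as the
    largest data drive, so the largest drives are assigned to parity.
    Usable space is the plain sum of the data drives.
    """
    drives = sorted(drive_sizes_tb, reverse=True)
    parity = drives[:parity_count]
    data = drives[parity_count:]
    return sum(data), parity

# Illustrative mix of drives with dual parity (not Zion's real drives):
usable, parity = unraid_capacity([8, 8, 8, 8, 4, 4])
print(usable, parity)  # 24 TB usable, two 8 TB parity drives
```

The nice part of this scheme is that dual parity survives any two simultaneous drive failures, at the cost of the two largest drives in the pool.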
Named after Zion from The Matrix movies.
Zion is the last human city on the planet Earth after a cataclysmic nuclear war between mankind and sentient machines, which resulted in artificial lifeforms dominating the world.
— https://matrix.fandom.com/wiki/Zion
This post is specific to owners of the Thecus N5550 but will apply to almost all of the Thecus NAS lineup as well. Anyone with a Thecus NAS knows that the operating system, ThecusOS, has been practically abandoned. The vendor came out with "OS7", but it was never fully finished and development died somewhere along the way. It also did not fully support the N5550 and many other models, leaving owners with unsupported devices.
This was fine with me for a while, since I never really used the management interface except when configuring or troubleshooting. However, after a couple of years the interface is looking very outdated and I wanted something more modern.
I did upgrade one of my N5550s to a beta version of ThecusOS7. It installed and appeared to be functional; however, I was unable to create a RAID volume, which meant I could not use it for much of anything. Not to mention it was clunky, and the older UI actually seemed more functional. It was clear that OS7 was just trying to mimic Synology's DSM but lacked all the polish. My attempts to downgrade were unsuccessful, and I had pretty much bricked the NAS.
So now what? I have three N5550s and don't want the hardware to go to waste.
This week NAS3 got a storage upgrade from 128GB SSDs to 500GB SSDs. NAS3 is my SSD NAS, which is used for hosting virtual machines.
NAS3 has been running with the 128GB SSDs for many years now. In fact, I paid more for a 128GB SSD back then than I did for a 500GB SSD now, and that was exactly my reasoning for waiting so long to upgrade. NAS3 is not intended to be a performance beast since it's limited to 1Gbps networking, so it would be a waste to use high-end SSDs in this machine.
Choosing an SSD is sometimes a difficult decision. You have to weigh the performance, cost, and endurance (quality) of the drive, especially in a NAS or RAID environment, where an SSD's "total bytes written" (TBW) endurance rating becomes a factor.
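As a rough illustration of how TBW plays out, here is a back-of-the-envelope estimate. The 180 TBW figure matches Crucial's published rating for the 500GB MX500; the daily write volume and write-amplification factor below are assumptions for illustration, not measurements from NAS3.

```python
def years_until_tbw(tbw_tb, daily_writes_gb, write_amplification=1.0):
    """Estimate how long an SSD lasts before hitting its TBW rating.

    write_amplification accounts for extra writes from parity RAID
    and the controller itself (an assumption, not a measurement).
    """
    daily_tb = daily_writes_gb * write_amplification / 1000
    return tbw_tb / (daily_tb * 365)

# A 180 TBW drive, an assumed 50 GB/day of host writes, 2x amplification:
print(round(years_until_tbw(180, 50, 2.0), 1))  # ~4.9 years
```

Even with pessimistic amplification, moderate VM workloads take years to burn through a mainstream drive's rating, which is why mid-range SSDs are a reasonable fit here.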
I paid $90 CDN each for five Crucial MX500 500GB 3D NAND SATA 2.5-inch internal SSDs (CT500MX500SSD1(Z)).
I don't like using parity RAID with SSDs because the extra parity writes wear them out faster, but in this case I simply don't care. I would rather have a little redundancy with the trade-off of a faster wear rate. And considering the old 128GB SSDs have also been running in RAID5 for many years, in my experience it's not a big issue.
In RAID5, the five disks give me 1.81 TB of usable SSD space, compared to about 477 GB before. A big upgrade in comparison.
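Those figures follow directly from how RAID5 capacity works: one disk's worth of space goes to parity, and marketing gigabytes (10^9 bytes) shrink once expressed in binary units. A quick sketch of the arithmetic:

```python
def raid5_usable_gib(n_disks, disk_bytes):
    """RAID5 keeps (n - 1) disks of data; one disk's worth goes to parity."""
    return (n_disks - 1) * disk_bytes / 2**30

print(round(raid5_usable_gib(5, 500 * 10**9)))  # ~1863 GiB, i.e. ~1.81 TiB
print(round(raid5_usable_gib(5, 128 * 10**9)))  # ~477 GiB, the old array
```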
This upgrade should last a few years at least. The next storage upgrade on the SSD side will be getting rid of the ancient Thecus N5550s and replacing them with a Synology NAS, but that's a future wish-list item.
Hope you enjoyed reading. If so please drop a like or share.
By the end of 2017 nearly all my NAS servers were close to reaching full capacity. I had already decided on getting a Synology DS1817+; it was just a matter of when. I wanted something with more than five bays that would be upgradable to 10G networking in the future. The DS1817+ seemed to match all of my needs and my budget.
This past weekend we had a power brownout for about four hours, which caused my servers to fail over to battery power. The batteries don't last long with the servers running. Something apparently went sour with the automatic shutdown of NAS3, which is used only for my VMware virtual machines, and it shut down improperly. The RAID has crashed.
I don't have anyone to blame other than myself, and I knew this day would eventually come. NAS3 was in RAID-0, meaning striping with no redundancy; a failed array on RAID-0 typically means total data loss. I take nightly backups of this entire NAS, so I was aware of and prepared for the risk of using striping. That does not mean it's a fun time recovering from it.
Adding additional redundancy for blackouts
One of the hardest things to recover from in my current home-lab environment is a total power blackout. Everything right now is planned and designed around losing individual components: one disk, one switch or network cable, and so on. However, when everything is off and I need to bring it all back online, it's a painstaking and very manual process, and over time my environment has become more and more complex. This latest outage has me scratching my head about how to recover faster and more simply from a power blackout.
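One way to make the bring-up less manual is to script the power-on order with Wake-on-LAN. Below is a minimal sketch, assuming the machines support WoL on the LAN; the host names, MAC addresses, and delays are placeholders, not my actual environment.

```python
import socket
import time

# Placeholder boot order: (name, MAC address, seconds to wait after waking).
BOOT_ORDER = [
    ("nas3",  "00:11:22:33:44:55", 180),  # storage comes up first
    ("esxi1", "00:11:22:33:44:66", 300),  # then the hypervisor
]

def magic_packet(mac: str) -> bytes:
    """A WoL magic packet: six 0xFF bytes, then the MAC repeated 16 times."""
    raw = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + raw * 16

def wake(mac: str) -> None:
    """Broadcast the magic packet over UDP port 9."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), ("255.255.255.255", 9))

def bring_up():
    """Wake each machine in order, pausing so dependencies boot first."""
    for name, mac, delay in BOOT_ORDER:
        print(f"waking {name}")
        wake(mac)
        time.sleep(delay)
```

Calling bring_up() from a laptop or a small always-on device would replace the manual walk around the rack, though anything that depends on DHCP, DNS, or shared storage being up still needs the delays tuned to the actual boot times.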