

Upgrading the Home Lab

Home labs are always a hotly contested topic. Do you need one or not? Should it be hosted or physical? Can you make do with hand-me-down gear or should you buy new? Will it stay powered on, or only be powered on for lab work? And this goes on and on until you’re absolutely tired of thinking about it.

In my case, I’ve had a home lab for around 4 or 5 years now. I went the new physical route. I found a Dell tower (T110) with a Xeon processor and 16GB RAM for pretty cheap and made it work. It was good enough for multiple VCAPs, a bunch of blog posts, and enough learning to earn a couple new social designations.

However, slowly over the years, my home lab was being shut down less and less. There were things running in it that had an uptime requirement (mostly self-imposed), and that 16GB of RAM was almost always over-subscribed.

This all means that it was time for a home lab upgrade! There were lots of characteristics to take into account, such as the lab’s footprint, cooling, power, connectivity, noise, and so forth. After a couple months battling back and forth between the new Intel NUCs and SuperMicro Xeon-D systems, I ended up going the SuperMicro route. I also decided to go with the rackmount option. The rackmount cases have slightly less depth than most closet shelves and plenty of space to add in a bunch of extra drives or PCIe devices.

Next was choosing the board. This offers its own complexity because the X10SDV boards have gained popularity over the years and there are a bunch of different combinations. I ended up landing on the board with the Intel Xeon D-1528 processor. It has 6 cores (12 threads), supports up to 128GB RAM, and only requires 35W of power. I skipped the sticker shock of going straight to 128GB and went with 64GB instead.

The Build

Primary Components:
SuperMicro CSE-505-203B Case – LINK
SuperMicro X10SDV-6C+-TLN4F Motherboard – LINK
Samsung 32GB DDR4 RAM x 2 – LINK

Accessories:
I/O Shield – LINK
40x28mm fan x 2 – LINK
Fan Holder x 2 – LINK
Hard Drive Retention Bracket – LINK

As the links indicate, I used WiredZone for the purchase. They’re an authorized reseller and had everything in stock. I would definitely order from them again.

Assembly

A couple of days later the parts started arriving and it was time for assembly! This was a little more frustrating than I anticipated. The pain points were the power connection and the fans.

The power problem was due to the case PSU having a 20-pin connector while the motherboard has a 24-pin power connection. I had read in a couple of places that these boards can be powered in multiple ways: through the 24-pin connection or through a 4-pin connection. However, they should NOT be used at the same time. After some discussion on the OpenHomeLab Slack group with Mark Brookfield, we sorted out that the 20-pin connector should sit across pins 1 through 10 and 13 through 22 of the 24-pin header. Big thanks to the OpenHomeLab folks and Mark especially. I was also able to confirm this configuration through WiredZone, who responded in less than 24 hours.

PSU Connection
Motherboard Power Connected using 20 of 24 Pins

The other issue was with the fans. I ordered extra fans since the server would be sitting on a shelf somewhere in my house; basically, the more fans moving the hot air out the better. The fun with the fans was that there aren’t really any instructions on what to do with them or the fan holders. The fan holders do end up connecting together and then basically sit on top of a pair of pins in the case. From that point, the fans literally just sit in the holders and are connected to the fan headers on the motherboard. The case top is what helps hold everything together. Certainly not what I expected, but it’s moderately better than zip-tying the fans to the case.

SuperMicro Dual Fan Holder Assembled

Fan Holder Pins Inserted
Fan Holder Pins Inserted in the Case

Holder and Fans Installed
Fans and Holder Installed in the Case

Everything else was extremely straightforward. My only other wish is that the case had some extra room for better cable management. You’ll see why in the assembly images below.

505-203B Case
SuperMicro 505-203B Case

Case and Motherboard Installed
SuperMicro 505-203B Case and X10SDV-6C+-TLN4F Motherboard

IO Shield Installed

Install Completed
Installation Completed

In Action

With everything cabled up, I could finally hit the power button and watch it spring to life, which it did with no fuss.

First thing I did afterwards was configure the BIOS settings as per Paul Braren’s advice over at TinkerTry: Recommended BIOS Settings for Supermicro SuperServer SYS-5028D-TN4T

Next up, install ESXi to an attached USB thumb drive. I did this by way of the motherboard’s KVM and it was quite easy, just like it would be via an HP iLO or Dell DRAC.

Lastly, install the VIB to make the 10GbE ports work properly. Paul has a great write-up on that as well over at TinkerTry: How to download and install the Intel Xeon D 10GbE X552/X557 driver/VIB for VMware ESXi 6.x
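For reference, once the driver bundle from that post has been copied up to a datastore, the install itself is a one-liner from the host’s shell. Here’s a minimal sketch; the datastore path and VIB filename are placeholders for whatever version you actually download, so adjust accordingly.

  # Install the downloaded 10GbE driver VIB (path and filename are placeholders)
  esxcli software vib install -v /vmfs/volumes/datastore1/net-ixgbe-<version>.vib

  # Reboot so the new NIC driver loads
  reboot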

I had the system running for 90 days straight before I shut it down to start the upgrade to vSphere 6.5. No hiccups or any issues whatsoever. I’ve been extremely happy with the purchase, almost to the point of wondering why I hadn’t done it sooner.

Server Running Status via ESXi
Home Lab Server running for 88 days!

Server Installed
Current Living Quarters for my Home Lab

If you have any questions as you’re reading this, feel free to use the comments or find me on Twitter. Also check out the OpenHomeLab site and their Slack channel, and definitely browse through Paul Braren’s blog TinkerTry. Paul has done an immense amount of research and countless hours of hands-on work with these SuperMicro motherboards.

Now to start thinking through the next home lab upgrade: whether I should do vSAN or upgrade my Synology…

Nutanix CE Now Available on Ravello!

Two companies that have been doing some pretty cool things on their own have now joined up to offer something very awesome. As of November 17th, Ravello is offering a preconfigured Nutanix Blueprint for use with its HVX nested hypervisor platform.

Things to know:

  • The blueprint being offered is the Community Edition (CE) and has all the existing limitations therein. More information can be found on Nutanix’s site
  • A Nutanix NEXT account is required to use the appliance on Ravello. Sign-up is quick and easy; my account took under 5 minutes to create and gain CE access. You can create a NEXT account on Nutanix’s site
  • A Nutanix NEXT account requires a “business email”, i.e. non-Gmail/Live/Hotmail/etc.
  • The appliance asks for 4 CPUs and 16GB RAM. In my account, it was estimated at roughly $1.13/hour to run out of Amazon’s cloud.
  • If you happen to be a vExpert with a Ravello account, you may have noticed there’s an 8GB/VM limit. With this release, that limit has been bumped to 16GB… on all VMs!

You may be asking “why is this a big deal?” at this point. Nutanix has offered CE for a fair amount of time, and its minimum requirements are pretty relaxed, right? While both are true, it’s never been this simple. You don’t need any hardware at all: just a Ravello and a Nutanix NEXT account plus 15 to 20 minutes, and you have an operational single-node cluster available whenever you need it, for whatever reason you need it.

Let’s walk through getting started:
First, log in to your Ravello account and select the Ravello Repo in the top right-hand corner. A new window will open; searching for “Nutanix” will bring up the “Nutanix Community Edition” Blueprint, which can be added to your Library by simply clicking “Add to Library”.
Ravello Repo

Once complete, head to the “Applications” area and select “Create Application”.
Give the new application a meaningful name, check the “From Blueprint” option, select the “Nutanix CE…” Blueprint, and click “Create”.
Create Nutanix Application

When the application creation completes, you’ll be brought to the “Canvas” page for that application. Let’s take a second to review the specs on the newly created VM:
Nutanix VM Specs

Once you’ve reviewed the VM (I wouldn’t recommend changing anything at this point), click the Publish button to push the system out to the desired cloud and get it started.
Note: it’s configured to auto-start, so don’t walk away. It doesn’t take long at all to deploy, and leaving it running just unnecessarily uses resources, since you’re paying for its use or, as a vExpert, only given a specific allotment.
Publish Nutanix Application

Once it’s deployed, how do you get to it? Click on the Nutanix VM again, and toward the bottom right-hand side you’ll see the status for the VM. Within that box, look in the “External access for” section for the NIC that lists “9440 (https)” in the “Ports” area. The DNS name and IP listed there are where the PRISM management interface can be loaded. Example:
VM Status Information
Nutanix PRISM Interface

An item to note if you’re new to Ravello: the IPs are only persistent while the Application is running. Once the Application is shut down, the IPs are recycled quite quickly. However, the DNS names are persistent and are easily copied out of the VM status area. I used the IP in these screenshots for simplicity and to hide the public name of my instance.

Logging in is simple: the username and password are both “admin”. You’ll then be prompted to change the password. Once that’s complete, it will ask you to log in with your NEXT credentials. Once authenticated, you’re all done! Here’s a peek at what it looked like for me:
Nutanix Web Console

Definitely want to say thanks to both Ravello and Nutanix for making this happen, and also another thanks for offering a special briefing and early access to the vExperts!

Testing out OpenFiler iSCSI Targets… Are separate targets worth it?

So I’ve been playing around with OpenFiler in the dev environment I’ve cobbled together, made up of systems all nearing or already out of warranty support.

What I wanted to test was whether there’s a difference in disk performance for an ESXi 5 VM between giving each volume its own iSCSI target and mapping all the volumes to one iSCSI target and calling it good.

In this situation, I have a Dell PowerEdge 2950 with OpenFiler installed along with 6 SATA drives in a hardware-controlled RAID 5 (PERC 6/i, I think).

Summary of the settings:
PowerEdge 2950
RAID 5 – 6 SATA 7.2k drives
2 iSCSI connections from Broadcom NetXtreme 5708 NIC
2 iSCSI connections from Intel 82576 NIC
OpenFiler installed (latest distro)

Then I have a Dell PowerEdge 2950 with ESXi 5 installed.

Summary of the settings:
PowerEdge 2950
3 iSCSI connections from Intel 82576 NIC
ESXi 5 installed

So I added the Volume Group and added 2 separate volumes of 500GB each. Then, after starting the iSCSI Target Service, I added a single iSCSI target and mapped both volumes to it.
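For those who haven’t poked around OpenFiler, the single-target layout conceptually comes down to an IET-style definition like the sketch below: one target exposing both volumes as separate LUNs. This is purely illustrative; the IQN and logical volume paths are placeholders, and OpenFiler generates its own names when you map volumes through the web UI.

  # One iSCSI target exposing both 500GB volumes as LUN 0 and LUN 1
  # (IQN and volume paths are illustrative placeholders)
  Target iqn.2006-01.com.openfiler:tsn.single-target
      Lun 0 Path=/dev/vg0/vol1,Type=blockio
      Lun 1 Path=/dev/vg0/vol2,Type=blockio

  # The second test further down splits this into two targets with one LUN each:
  # Target iqn.2006-01.com.openfiler:tsn.target1  ->  Lun 0 /dev/vg0/vol1
  # Target iqn.2006-01.com.openfiler:tsn.target2  ->  Lun 0 /dev/vg0/vol2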

Now I get into the ESXi host, add the IPs from the OpenFiler connections to the iSCSI software initiator, and rescan the HBA. I configured all 3 iSCSI connections on the host to use Round Robin to the devices.
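From the ESXi shell, that boils down to something like the sketch below. The adapter name, OpenFiler IP, and device identifier are placeholders; use whatever your host actually reports.

  # Add the OpenFiler IP as a dynamic discovery (Send Targets) address
  # (vmhba33 and 192.168.1.50 are placeholders for the software iSCSI adapter and the OpenFiler IP)
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50

  # Rescan so the newly mapped volumes show up as devices
  esxcli storage core adapter rescan --adapter=vmhba33

  # Set the path selection policy to Round Robin on each discovered device
  # (naa.xxxx is a placeholder for the real device identifier)
  esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR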

Part of the fun I found was that the switch I’ve obtained (an out-of-warranty 24-port HP 10/100/1000) does not happen to support jumbo frames, so all of this is run with the MTU set at 1500.

So I took a VM already running on the local storage of the ESXi host and added one 2GB drive from each of the newly added volumes.

I opened up IOMeter, configured 2 workers (one for each volume) to run all the tests for 5 minutes and let them rip.

Surprisingly, I found that it performed at 1158.879 IOPS and 14.87 MBps total. Volume 1 was at 580.365 IOPS and 7.437 MBps, and Volume 2 was at 578.514 IOPS and 7.432 MBps.

Now I removed the drives from the VM, unmapped both volumes from OpenFiler and deleted the iSCSI Target. I created 2 new iSCSI targets and assigned one volume to each target and rescanned the HBAs on the ESXi host. Then I added 2GB drives (1 from each volume) to the VM again and ran the same test.

Another surprise: it was slower. The separate targets together performed at 1101.91 IOPS and 14.156 MBps. Volume 1 was at 547.439 IOPS and 7.039 MBps, and Volume 2 was at 554.471 IOPS and 7.118 MBps.

While the difference isn’t drastic (roughly 5% fewer IOPS with separate targets), it’s enough that I won’t bother configuring separate iSCSI targets when dealing with OpenFiler environments.