Tuesday, May 18, 2021

EUC Lab Build

As an EUC professional I require a lab in order to stay current with new features, prepare for presentations, test new versions, help customers with questions, perform demos, replicate issues, etc. Initially I used a Type 2 hypervisor on my laptop for my lab needs; however, my lab scenarios and use cases became more and more complex, to the point that I could no longer run them on my laptop's hypervisor.

I therefore decided to invest in a lab comprised of a 3-host vSphere cluster built on 3 Intel NUC PCs. I also leveraged the NUCs' internal disks to create a vSAN cluster for the storage tier, and I used Distributed Switches for networking. On top of this SDDC stack I installed my Horizon lab.

This post presents the lab specs, the build process, gotchas / unforeseen issues, and the final result.

Before building the lab I came up with a list of requirements to drive the lab's logical and physical designs. These were the requirements:

  • Cost effective
  • Scalable
  • Support a Software-Defined Datacenter (SDDC)
  • Capable of servicing all required EUC components

I decided to go with Intel NUCs for the compute tier because NUCs are currently the de facto hardware for home labs, and I wanted to ensure my investment was going to pay dividends. Once I settled on NUCs I did some research to identify a specific NUC model that has been used successfully with vSphere. I also had to purchase SSDs for the vSAN cache tier. I have to admit I did not invest much (if anything) in networking: I used a regular gigabit switch for intra-host communication and the uplink to my router.

These are the current lab specs:

  • 3 USB flash drives
  • 3 Intel NUCs - NUC8i7BEH
    • 8th Gen Intel Quad-Core i7-8559U, up to 4.50 GHz
    • 64GB DDR4 RAM
    • 2TB SATA HDD
  • 3 120GB SSDs
  • 4 Cat 8 Ethernet cables
  • 1 5-port gigabit switch
 
SSD Physical Installation

 

The NUCs I acquired already had the amount of memory I needed, yet I still had to install the 3 SSDs to be able to leverage vSAN in hybrid mode. These were the main steps of the installation:

  • Unscrew the four screws at the bottom of the NUC
  • Turn the NUC face up and remove the cover
  • Unscrew the SSD slot screw
  • Install the SSD card
  • Put the SSD slot screw back on
  • Put the four screws back on the bottom of the NUC
  
NUC BIOS Optimization

I did not test different permutations of the settings for my lab. Instead I leveraged this useful blog post, which had already gone through that effort and suggested the best NUC BIOS settings for vSphere:

Link

vSphere (ESXi Host) Installation

When the time came to install my hypervisors, I chose to install ESXi on USB flash drives, since the NUCs' internal HDDs were going to be used for vSAN.

Besides one USB flash drive per NUC for the ESXi OS, I also used a fourth USB flash drive for the ESXi install media.

  • I used Rufus to format the USB install media as a bootable flash drive and copy the ESXi installer onto it.

I also used Rufus to create a bootable Windows 10 USB flash drive to remove the existing partitions from the NUCs' internal HDDs. The NUCs originally came with Windows 10 installed.

  • I used Rufus to create a Windows 10 bootable USB flash drive
  • I booted up each of my 3 NUCs using this new Windows 10 USB flash drive
  • Selected "Repair your computer"
  • Started diskpart
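
The diskpart session on each NUC looked roughly like this. A minimal sketch: the disk number below is an assumption, so always confirm which disk is the internal HDD with list disk before running clean.

```
DISKPART> list disk       # identify the internal 2TB HDD (number may differ)
DISKPART> select disk 0   # assumption: the internal HDD is disk 0
DISKPART> clean           # remove all partitions from the selected disk
DISKPART> exit
```

clean wipes every partition on the selected disk, so double-check the selection before running it.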

vCenter Installation

I had a chicken-or-egg situation with the vCenter installation...
  • vSAN needs the NUCs' internal HDDs to be clean of partitions
  • vSAN also needs vCenter to be installed
  • A VMFS datastore is necessary to store the vCenter appliance
  • What can I do?
    • USB datastores to the rescue
    • I used a USB datastore to host the vCenter appliance
    • Once the vSAN datastore was up and running, I performed a Storage vMotion to move the vCenter appliance onto it
    • Use the following link to read more on how to mount USB datastores in vSphere
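
At a high level, mounting a USB datastore means letting the host itself, rather than VM passthrough, claim the USB device. A sketch of the approach from the linked article, run in the ESXi shell:

```
# Stop the USB arbitrator so ESXi (not VM passthrough) claims the USB device
/etc/init.d/usbarbitrator stop
# Keep the arbitrator disabled across reboots
chkconfig usbarbitrator off
# The USB disk can then be partitioned and formatted as VMFS
# (e.g. with partedUtil and vmkfstools), as the linked article describes
```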

vSAN Configuration

Once I had vCenter installed and had created my 3-node cluster, I took these steps to set up vSAN:

  • I started the vSAN configuration wizard
  • I selected the 3 HDDs for the capacity (data) tier and the 3 SSDs for the cache tier
  • I created 3 fault domains, one for each ESXi host (NUC)
  • I selected my vSAN storage policy
    • The default is RAID 1 - Mirroring
  • Encryption and deduplication, as well as RAID 5, are not available in hybrid environments such as this one, only in all-flash environments.
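
For reference, the disks can also be claimed from the ESXi shell instead of the wizard. A minimal sketch with placeholder device IDs (one cache SSD plus one capacity HDD per host, as in this lab):

```
# Find the real device IDs first:
#   esxcli storage core device list
# Placeholder IDs below; run on each host to build its disk group
esxcli vsan storage add --ssd naa.5001111111111111 --disks naa.5002222222222222
```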
 
Once the SDDC was up and running...

Once I had my vSphere and vSAN clusters deployed, I made one last change to optimize the environment: I enabled Transparent Page Sharing (TPS) for better memory efficiency. This feature used to be enabled by default until ESXi 6.0. You can read more on why it is no longer enabled by default here, and on how to re-enable it here.
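
Re-enabling inter-VM TPS comes down to the Mem.ShareForceSalting advanced setting (a salting value of 0 restores page sharing between VMs). A sketch from the ESXi shell; the same setting can also be changed per host in the vSphere Client:

```
# A ShareForceSalting value of 0 re-enables inter-VM page sharing
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
# Confirm the new value
esxcli system settings advanced list -o /Mem/ShareForceSalting
```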

Initial Impressions, Final Thoughts...

 

The end result of this journey is that I now have a lab on which to work with all the EUC products I need.

  • I have deployed 2 PODs on it, one on Horizon 7.13 and the other on Horizon 8, each POD on a different AD domain with a two-way transitive trust between them.
  • I deployed a UAG for external access.
  • I created a Horizon Composer server for linked-clone pools.
  • I have instant-clone pools deployed for both VDI pools and RDSH farms.
  • I have 2 Workspace ONE Access connectors to integrate my Horizon PODs with my Workspace ONE Access tenant.
  • I also have a Workspace ONE UEM Cloud Connector deployed for Workspace ONE testing using AD accounts and certificate authority integration.
  • I have one Horizon Cloud Connector deployed to connect one of my PODs to the Horizon Cloud control plane.
  • I have deployed an App Volumes Manager and Dynamic Environment Manager in the lab.
  • In total I have over 40 VMs deployed in this environment.
  • I am even using the lab to test other solutions such as Avi (NSX Advanced Load Balancer) and NSX-T, although these VMs generally remain off in the environment.
 
In hindsight, one aspect I would like to have considered more carefully was the use of SSDs instead of HDDs in my cluster.
  • To be clear, resiliency and affordability were part of my list of requirements; performance was not. Having said that, the clear point of resource contention in this home lab is storage performance. For example, when pushing a new snapshot to an instant-clone pool it may take several minutes for the process to complete, but as long as I am patient the pool gets updated and I can continue with my lab tasks.
 
The vSAN cluster works flawlessly and I have already tested its resiliency.
  • One of the ESXi USB flash drives failed and I effectively lost a host in my vSAN cluster. But since the vSAN policy I used tolerated the loss of one host without losing VMs, I was able to build another ESXi USB boot drive and rejoin the host to the vSAN cluster.
 
While each NUC PC has only one NIC, once I got the lab up and running I also segregated the production traffic from the vSAN traffic and the vSphere management/vMotion traffic, using a VMware Fling that allows USB NIC adapters to be used with vSphere. I will write a separate article on this topic in the future.
 
 
I may also write other small follow-up articles showing how to use Rufus to create a bootable ESXi installation USB flash drive and an ESXi OS bootable USB flash drive. Stay tuned.
