My #HomeLab: Q4 '19 Update

Opening

Most of you might have noticed the hiatus I’ve taken from the WS1 + Azure AD series; not to worry, this is only in anticipation of several interesting product changes that are relevant to the series. We will resume once those are in place later this year. In the interim, I’m taking advantage of the downtime to do some infrastructure and architecture housekeeping. You might recall from my #homelab post that I mentioned a few tasks I’d like to see through to completion before the end of the year. Namely,

  1. Add a fourth host to the vSAN cluster in order to support automatic rebuild in the event of a host-level failure
  2. Migrate the NSX-T 2.4 install base to the new Corfu structure – in effect, migrating the configuration to the Simplified UI
  3. Upgrade vSphere from 6.7U1 to 6.7U3

An item that I hadn’t originally planned, but ultimately implemented, was collapsing most of each host’s uplinks onto a single consolidated N-VDS hosting vSAN, vMotion, the host TEPs, and the Edge VLAN uplinks. So let’s jump into these changes and talk about how they were implemented, why they were important, and maybe a few lessons learned that you can avoid repeating.

vSAN Cluster

We’ll start with the simplest of the changes: adding a fourth host to the vSAN cluster. In Duncan’s “4 nodes is the minimum for a vSAN cluster” post, he talks about why the magic number is 4 (2-node ROBO use cases aside). Simply put, a 3-node cluster creates an at-risk situation in the event of a hardware failure or network isolation. I’ve found that in lab deployments the most common at-risk scenario is a cluster-level upgrade. Rolling hosts in and out of maintenance mode is routine when using VUM, but you’d better hope you don’t experience a critical failure during that exercise. The fact that I only have one disk group per host exacerbates the threat, so I quickly began looking at options after the initial purchase a year ago.

So how does adding a fourth host to the cluster mitigate this concern? Automatic rebuild and data evacuation. Continuing with the VUM example, a fourth host allows me to choose automatic host maintenance mode with full data evacuation to the remaining three hosts in the cluster; FTT=1 can now be satisfied while the host is upgraded. This rolling procedure takes a bit longer given the lead time to evacuate a disk group, but it provides peace of mind in the somewhat unlikely event that a disk crashes or a PSOD craters one of the remaining hosts.
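
For the scripting-inclined, here’s a minimal sketch of what that maintenance-mode-with-evacuation step looks like via pyVmomi instead of the VUM/vSphere Client workflow. The vCenter address, host name, and credentials are placeholders from my lab, not anything you should copy verbatim; the piece that matters is the decommission mode (evacuateAllData vs. ensureObjectAccessibility).

```python
# Minimal pyVmomi sketch: place a vSAN host into maintenance mode with
# full data evacuation (the "Full data migration" option in the UI).
# Hostnames and credentials below are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip cert validation
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.lab.local")

# evacuateAllData keeps FTT=1 satisfied on the remaining three hosts;
# ensureObjectAccessibility is the faster (but riskier) alternative.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="evacuateAllData"))

task = host.EnterMaintenanceMode_Task(timeout=0, maintenanceSpec=spec)
print("Entering maintenance mode, task state:", task.info.state)

Disconnect(si)
```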

Another example that will likely resonate with my fellow #home-labbers: you might not log into vCenter all that often. If a host is down for some unknown reason, it might be days or weeks before you’re aware of the failure. A fourth node doesn’t relieve you of your ongoing maintenance duties, but it might save you from an irrevocable loss of data when a failure does occur. I have my iDRACs set up to send an email when hardware fails, but it might still be days before I’m made aware of a PSOD.

On that note, I should mention that a four-node cluster does not provide FTT=2. It provides FTT=1, and, once an automatic rebuild has completed after a failure, the ability to survive another single failure at FTT=1. To me personally, two unrelated failures spread over a longer period of time seem far more likely than two unrelated failures within an hour or two of each other.

I won’t bore you with the ‘how’ behind adding a fourth host to an existing vSAN cluster. It’s the same process you followed for the hosts before it, and it’s well documented both in VMware Docs and across various blog posts.

NSX Simplified UI

Ah, the ‘Simplified UI’. Under the hood, the NSX-T 2.4 Manager UI enhancement is powered by the Corfu data store. What I didn’t realize while upgrading from 2.3.1 to 2.4 was that the ‘Advanced’ settings (which, in my opinion, is where the configuration you had in place prior to the upgrade most likely lives) would not migrate to this new structure automatically. For those already using Policy Manager in 2.3.1, you’re in good shape; that configuration set will come over automatically. All in all, as documented here, be prepared to recreate your configuration in the new UI if that matters to you. Otherwise, your configuration will remain under the ‘Advanced Settings’ tab.
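
One way to see this split for yourself is to compare the two API trees on the Manager: objects created through the Advanced UI live under the MP API, while Simplified UI objects show up under the Policy API. A quick sketch, with the manager address and credentials as placeholders:

```python
# Sketch: compare what lives in the "Advanced" (MP API) tree vs. the Policy
# tree after an upgrade to 2.4. Manager address/credentials are placeholders;
# verify=False is lab-only.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "********")

mp = requests.get(f"{NSX}/api/v1/logical-switches",
                  auth=AUTH, verify=False).json()
policy = requests.get(f"{NSX}/policy/api/v1/infra/segments",
                      auth=AUTH, verify=False).json()

print("MP API logical switches:", mp.get("result_count", 0))
print("Policy API segments    :", policy.get("result_count", 0))
# Config created through the Advanced UI appears only under /api/v1; it won't
# show up as Policy segments until it is recreated in the Simplified UI.
```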

One of the great things in v2.4+ is the consolidation of the management and control planes into one cluster. The migration utility steps you through this sequence, and as a result you’re left with a single 3-node cluster that handles all things NSX Manager and Controller; highly conducive to distributed configuration and native high availability. The stability of the cluster has been impeccable, but the node resource view on the Manager dashboard seemed to skew the resource consumption reported for each manager node. I can confirm that 2.4.2 depicts resource consumption more accurately, and that the upgrade from 2.4 to 2.4.2 was smooth.
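
If you’d rather sanity-check the converged cluster from outside the dashboard, the cluster status API returns the management and control plane health in one call. A minimal sketch, with the same placeholder manager and credentials as above (field names may differ slightly between 2.4.x builds):

```python
# Sketch: poll the converged manager/controller cluster health rather than
# relying on the dashboard widget. Address and credentials are placeholders.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "********")

status = requests.get(f"{NSX}/api/v1/cluster/status",
                      auth=AUTH, verify=False).json()

print("Management cluster:", status["mgmt_cluster_status"]["status"])
print("Control cluster   :", status["control_cluster_status"]["status"])
```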

vSphere 6.7U3

Upgrading vSphere was fairly straightforward and didn’t warrant much content to blog about. One thing I will note: if you’re running Dell OpenManage, you should be aware that the 9.3.0 VIBs are unsupported on 6.7U2 and 6.7U3 (documented here at Dell and here at VMware). I encountered this issue and initially determined that I had an incompatible NIC driver; as it turns out, it was OpenManage all along. Dell has released 9.3.1, which addresses the issue, so if you haven’t upgraded vSphere yet, be sure to upgrade OpenManage first.

Consolidated Underlay

Now that I’ve been running the #homelab v2 for about a year, I’ve begun considering the design aspects that I hadn’t accounted for when building out the core functions. One of those aspects is power consumption. I’m fortunate to live in an area that receives power on co-op grids at a fraction of market rates, but nonetheless, running two HPE 3500s got me thinking: is there an architecture that gets me closer to enterprise hyper-converged and saves power? The answer is a resounding yes, and NSX-T would be the key to making it happen.

If you recall from my initial #homelab post, each host had several uplinks to the underlay in support of isolating unauthenticated vs. authenticated traffic patterns. The move to zero trust no longer really warrants that concern, which has since been mitigated by several IDS appliances running at the global underlay uplinks to my ISP. My sights were set on collapsing all of the HCI and data paths onto the BCM57800 NDC in each R720, and fortunately, my friends in the NSBU have a collapsed architecture documented here that outlines exactly how to achieve this with one N-VDS on each host (note: I chose to leave management connectivity on a DVS using the two remaining 1Gb uplinks). vSAN, vMotion, and all TEP/off-ramp traffic now rides the 10G pair of uplinks. Moving to this model eliminated the need for the HPE 3500yl-24G, which had augmented its sister 48G core switch for all things home, lab management, and the data layer. It would, however, warrant an investment to maintain a level of high availability in the event that an underlay switch went offline (planned or unplanned). That meant a new Brocade VDX was about to find its way into the rack…

To no one’s surprise, I purchased an additional Brocade VDX 6720-24 and configured a VCS fabric (Brocade’s equivalent of a “stack”) with the pair so that one logical chassis could be presented to both the hosts and the remaining HPE core switch. I must say, the physical side of this configuration was dead simple: a VCS ID and RBridge ID were assigned to each chassis, dual ISLs were plugged in, and some STP was applied on the HPE. I then extended the transport and off-ramp VLANs into the fabric so that highly available connectivity could be plumbed to each host over the 10G uplinks (one per physical chassis for redundancy), all viewed as a single logical switch. An HPE J8694A was also purchased for the 3500yl so that I could uplink the VCS fabric to the core switch over redundant 10G vLAGs.

With the new physical underlay plumbing in place, I then turned to NSX for a major overhaul of the host N-VDS. Three distinct things needed to happen for the new N-VDS to host vSAN, vMotion, and the NSX Edges:

  1. An overlay-backed transport zone and a VLAN-backed transport zone needed to be created on the same host N-VDS
  2. The host uplink profiles were modified to include one default policy, a named vSAN policy, and a named vMotion policy (see the sketch after this list)
    • For good measure, I also included a NIOC policy to limit vMotion bursts to 20% of total NIC bandwidth
  3. The new host N-VDS was attached to the BCM57800 10G NICs
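
To make step 2 concrete, here’s a rough sketch of what an uplink profile with a default teaming policy plus named vSAN/vMotion teamings looks like when pushed through the NSX-T Manager API. The profile name, uplink labels, transport VLAN, and manager address are all illustrative placeholders, not my exact configuration; double-check the payload against the API guide for your version.

```python
# Sketch: create an uplink profile with a default teaming policy plus named
# vSAN/vMotion teamings via the NSX-T MP API. Names, uplinks, VLAN, and the
# manager address are illustrative placeholders.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "********")

def failover(active, standby):
    """FAILOVER_ORDER teaming with one active and one standby uplink."""
    return {
        "policy": "FAILOVER_ORDER",
        "active_list":  [{"uplink_name": active,  "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": standby, "uplink_type": "PNIC"}],
    }

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "host-uplink-profile-collapsed",
    "transport_vlan": 0,                             # host TEP VLAN (illustrative)
    "teaming": failover("uplink-1", "uplink-2"),     # default policy
    "named_teamings": [
        # Opposite active/standby order keeps vMotion bursts off the uplink
        # carrying the steady vSAN replication traffic.
        {"name": "vsan-teaming",    **failover("uplink-1", "uplink-2")},
        {"name": "vmotion-teaming", **failover("uplink-2", "uplink-1")},
    ],
}

r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  json=profile, auth=AUTH, verify=False)
r.raise_for_status()
print("Created uplink profile:", r.json()["id"])
```

The NIOC limit mentioned in step 2 lives in a separate NIOC host-switch profile that gets attached to the transport node alongside the uplink profile above.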

With these components in place on the hosts, I could then migrate vmk1 (the vSAN kernel NIC) and vmk2 (the vMotion kernel NIC) to VLAN-backed segments in the VLAN transport zone. I chose opposite active/standby teaming policies for these two services so that I could isolate vMotion bursts from the steady vSAN replication traffic. Additionally, I migrated the host TEPs to the overlay-backed transport zone. Take note that because the N-VDS will also host the NSX Edge TEPs, you need to choose separate transport VLANs for your host and Edge TEPs. A brief visit to vCenter to swap out the Edge NIC portgroups (moved to the new VLAN transport zone and segments), and I had the HCI and data planes both on one N-VDS. It’s worth noting that, as a matter of preference, I reconstructed the host and Edge N-VDS, which required swapping out portgroups on each VM. I already had the hosts in maintenance mode with every VM powered off, so this wasn’t a major ordeal, and it provided an opportunity to clean up old configuration that was no longer needed in this architecture.
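
For reference, a vSAN or vMotion VLAN-backed segment looks roughly like this through the Policy API; the uplink_teaming_policy_name is what pins the segment to its named teaming from the uplink profile. The segment ID, VLAN, and transport zone path are placeholders, and depending on your 2.4.x build this knob may only be exposed on the MP API logical switch (which carries an uplink_teaming_policy_name field as well).

```python
# Sketch: VLAN-backed segment for vSAN traffic, pinned to the named
# "vsan-teaming" policy from the uplink profile. Segment ID, VLAN, and
# transport zone path are placeholders from my lab.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "********")

segment = {
    "display_name": "vsan-segment",
    "vlan_ids": ["40"],                              # vSAN VLAN (illustrative)
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/vlan-tz-collapsed",
    "advanced_config": {
        # Ties this segment to the named teaming policy so vSAN traffic
        # stays on its preferred active uplink.
        "uplink_teaming_policy_name": "vsan-teaming"
    },
}

r = requests.put(f"{NSX}/policy/api/v1/infra/segments/vsan-segment",
                 json=segment, auth=AUTH, verify=False)
r.raise_for_status()
print("Segment realized at:", r.json()["path"])
```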

Speaking of the new architecture, what was accomplished with this exercise? I eliminated 16 physical uplinks to the core switch across a four-host cluster, consolidated a campus switch into a common core in favor of a redundant fabric switch, and consequently saved a few bucks by not running an extra HPE 3500 at near idle 24/7. Very cool stuff, and straightforward to manage from a common management plane (NSX).

For reference, the “how” of this exercise can be followed step by step in this blog post.

Things I’m Looking Forward To for Q1 2020

The wonderful thing about all of the changes above is the “set it and forget it” mentality. This infrastructure design is very stable, easy to manage, and saves some coin on the light bill. It also frees up time to focus on the exciting things yet to come. My first wave of “new-ness” in the lab is one of our latest acquisitions: Avi Networks. I’m very excited about Avi being a part of the VMware family and wasted no time getting in contact with the Product Management team so that I could deploy the bits in my environment. That has been completed, and in an upcoming post we will cover how to load balance Workspace ONE with Avi Vantage! Stay tuned for further updates…
