#HomeLab Networking

Preamble

I mentioned in my inaugural post that attempting to cover the network design for the #homelab in one post wasn’t going to cut it. Alas, here we are; I’ve successfully migrated to NSX-T 2.4 Policy Manager (the Corfu structure) and made a few tweaks to the underlay, so it’s time to dive deep. Be prepared: this is a lengthy post and stands as evidence of my previous statement about “being sensitive to security architecture”. I don’t claim to be a networking expert, and much of what I’ve picked up outside of NSX has been self-taught using the wonderful #homelab.

Overview

Truer words have never been spoken than those in the image above. What started as a single layer 2 boundary has grown into various layer 3 zones, all peered over a mix of eBGP and iBGP. Before I go into the gory details of rails, uplinks and overlays, a quick look at what we’re building up to:


The top half of this view shows physical uplinks and VLAN rails within the physical boundaries, while the lower half shows the overlay and on-ramp/off-ramp from a host perspective. So let’s define some high-level elements:

Physical Host Uplinks

This is where many are left scratching their heads. I fully appreciate the fact that the number of uplinks is overkill, but it provides the necessary isolation that I’m after when mixing unauthenticated and authenticated traffic on a collapsed compute/edge cluster:

  1. iDRAC Uplink (1) – Provides IPMI and DRAC access to the host
  2. Management Uplinks (2) – Active/Standby pair servicing the vSphere, monitoring, services, management and NFS/iSCSI rails
  3. SAN 10G Fiber Uplinks (2) – Parallel Active/Standby pair servicing vSAN and vMotion rails
  4. TEP Uplinks (2) – Active/Active LACP pair servicing the GENEVE encapsulated traffic
  5. Edge Uplinks (2) – Active/Active LACP pair servicing the on-ramp/off-ramp to/from the overlay, as well as edge services such as load balancing

For provisioning, each host is configured to PXE boot on initial deployment, which lays down the latest ESXi image. The host is then joined to the NSX management plane after being added to vSphere, which yields a short runway for on-boarding new compute capacity into the existing cluster and fabric; just plug the right cables into the right ports!

Logical Zones

Zones in my lab consist of VLAN rails that live within an autonomous system. The zones communicate with one another over dynamically routed layer 3 boundaries, and each is locally fault tolerant with redundant LACP uplinks. As much as I would love to, I can’t afford the power and noise that come with a full spine/leaf architecture (by “I”, of course, I mean my better half…). Here is a breakdown of the four primary zones, with a quick aside on the AS numbers after the list:

  1. Core Datacenter: AS 65000 – Provides networking and storage services to the rack, home and in limited cases, remote sites. It is the primary link point for the overlay, campus underlay and edge firewall.
  2. ATL Campus: AS 65002 – Provides simple DHCP and mDNS to several layer 3 subnets within the home network. It also hosts home surveillance and provides management access to all of the above.
  3. Global Overlay: AS 65001 – This is where all virtual workloads reside and where internet sourced traffic is initially routed. Classification VNIs provide layer 2 boundaries that attach to the vNIC of each workload.
  4. Storage Fabric: AS N/A – The fiber network providing storage and vMotion bandwidth is layer 2 only, and as such, doesn’t peer with any of the aforementioned.
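
A quick aside on those AS numbers: all three come from the 16-bit private range (64512–65534, per RFC 6996), which is why they can be used freely behind the edge without any registry coordination. A trivial check, purely for illustration:

```python
# The lab ASNs all fall in the 16-bit private range defined by RFC 6996
# (64512-65534), so they never need to be globally unique.
PRIVATE_16BIT_ASNS = range(64512, 65535)  # range() excludes the stop value, so 65534 is included

for asn in (65000, 65001, 65002):
    print(asn, asn in PRIVATE_16BIT_ASNS)  # prints True for all three
```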

Policy

Each of the routing appliances (both HPEs and the Netgate) operates using a whitelist connectivity strategy; that is, everything is dropped to the bit bucket unless explicitly allowed. The DFW running on the overlay operates in the same fashion, but I’ll save those details for a future how-to on the NSX Distributed Firewall. Lastly, connecting to or moving across the network requires some form of AuthN. That requirement is fulfilled by functionality such as EAP-TLS, captive portals, reverse proxies and good ol’ fashioned VPN.
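
For intuition, here’s a minimal sketch of what that first-match-allow, default-deny evaluation looks like conceptually. It is not pfSense, HP or NSX DFW configuration, and the subnets and ports are made up purely for illustration:

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow list: (source network, destination network, destination port).
# Anything that doesn't match an entry is dropped -- the implicit default-deny.
ALLOW_RULES = [
    (ip_network("10.0.96.0/24"), ip_network("10.0.98.0/24"), 22),  # bastions -> isolated mgmt, SSH
    (ip_network("10.0.92.0/24"), ip_network("0.0.0.0/0"), 53),     # net services -> DNS anywhere
]

def permitted(src: str, dst: str, port: int) -> bool:
    """Return True only when an explicit allow rule matches."""
    for src_net, dst_net, allowed_port in ALLOW_RULES:
        if ip_address(src) in src_net and ip_address(dst) in dst_net and port == allowed_port:
            return True
    return False  # no match: straight to the bit bucket

print(permitted("10.0.96.10", "10.0.98.5", 22))   # True  -- explicitly allowed
print(permitted("10.0.96.10", "10.0.98.5", 443))  # False -- dropped by default
```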

Details

So let’s get into the weeds. I’ll walk through each of the VLANs and add some context around why each exists. Every segment in this network has a specific purpose, and in some cases a highly specialized one driven by security requirements. The dynamic routing protocols are also a mixed bag, partially due to intent and partially due to limitation. I caught some serious flak on Reddit when I teased a v1 of the above diagram, so I’ll do my best to incorporate the feedback I received and explain why certain things are the way they are.

Routing and Edge Networks

What all do we have here? Working from the top down:

  1. AS 65000: The Netgate and 3500yl-48G are peered over iBGP using OSPF Area 0 (backbone) to exchange loopback interfaces used as BGP neighbor addresses.*
    • VLAN 999 – Uplink interface between the 3500yl-48G and the Netgate
  2. AS 65002: Both 3500yl switches are peered over eBGP using a directly connected LACP uplink in a /30 subnet to exchange summary routes
    • VLAN 1499 – Uplink interface between both 3500yl switches
  3. AS 65001: NSX Edge Nodes on ECMP uplinks peering summary routes over eBGP to the 3500yl-48G
    • VLAN 2551 – First ECMP uplink rail provided to NSX Edge Node Cluster
    • VLAN 2552 – Second ECMP uplink rail provided to NSX Edge Node Cluster

* I received quite a few questions along the lines of “why would you run OSPF and BGP at the same time? That makes no sense.” Actually, it makes perfect sense. iBGP leans on an IGP (e.g. OSPF, IS-IS) because the BGP session has to be anchored to a reachable neighbor address. Unlike OSPF, which simply floods around a failure, a BGP session tied to a connected interface dies the moment that link goes down; bye bye BGP routes. Redundancy is achieved by using the IGP to flood the loopback interface as the neighbor address, which is always ‘up’, so the session survives a physical link failure as long as any other path exists (a toy illustration follows below). Furthermore, remember that iBGP requires a full mesh (or route reflectors) because routes learned from one iBGP peer are never re-advertised to other iBGP peers. This is why you’ll find that most spine/leaf architectures run eBGP, with all spine switches in the same AS.
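
To make the loopback argument concrete, here’s a toy reachability model in Python; it is not a BGP implementation, and the addresses are invented. The point it demonstrates: a session pinned to a physical interface address dies with its link, while a session pinned to an IGP-advertised loopback survives as long as any link remains.

```python
# Toy model: two routers joined by two parallel links. A BGP session stays up
# only while its configured neighbor address remains reachable.
links = {
    "link1": {"r1": "10.0.0.1", "r2": "10.0.0.2"},  # interface addresses on link1
    "link2": {"r1": "10.0.1.1", "r2": "10.0.1.2"},  # interface addresses on link2
}
loopbacks = {"r1": "192.0.2.1", "r2": "192.0.2.2"}  # /32s flooded by the IGP


def session_up(neighbor_addr: str, failed_link: str) -> bool:
    live = [name for name in links if name != failed_link]
    for name, ifaces in links.items():
        if neighbor_addr in ifaces.values():
            return name in live        # an interface address dies with its own link
    if neighbor_addr in loopbacks.values():
        return len(live) > 0           # a loopback survives while any link is up
    return False


print(session_up("10.0.0.2", failed_link="link1"))       # False: session was pinned to the failed link
print(session_up(loopbacks["r2"], failed_link="link1"))  # True: the IGP re-routes to the loopback
```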

Management and Storage Networks

The management VLANs in the lab have likely received the most attention. Again, security for me is paramount, so certain configuration items are in place to limit traffic patterns:

  1. All Autonomous Systems: Let’s start with the CoreInfra VLAN, which is configured as a ‘Secure Management VLAN’ on the 3500yl switches. TL;DR, when a VLAN is configured in this fashion, it disables all layer 3 functionality in and out of said VLAN. You must use management stations on the same layer 2 segment to access the management interface of the switch, hence why ‘Management Stations’ and ‘Net Services’ are dual homed
    • VLAN 998 – Secure Management VLAN allowing management access to the HP switches, pfSense WebGUI, iDRACs, Brocade VDX, APC NMCs, as well as dual homed bastion hosts and net services provider hosts. Serves as a great glass-break homelab safety net in the event of an ID-10-T error, while also limiting the management blast radius to the protection of bastion hosts
  2. AS 65000: The Datacenter core management and storage rails are split into purpose-oriented VLANs
    • VLAN 92 – Critical network services such as DNS, NTP, TFTP, RADIUS, etc are all running here
    • VLAN 96 – Dual homed bastion hosts reside here to allow layer 3 connectivity to the isolated CoreInfra 998 zone
    • VLAN 100 – As the description states, this is for all things VMware that need a management interface. Largely vSphere and NSX
    • VLAN 104 – This is the SIEM network hosting Splunk and two IDS hosts. Port mirroring from just about every subnet is dumped here, ingested, inspected and actioned accordingly
    • VLAN 108 – External NAS storage over NFS and iSCSI. Mounts are mostly isolated to ESXi hosts, but some NFS mounts extend into overlay workloads (e.g. Plex, NVR)
  3. AS 65002: Much simpler architecture, and only extends management capability to access points across the house
    • VLAN 8 – UniFi management network for the access points to use when calling back to the UniFi Controller
  4. Other: As stated earlier, the storage fabric is layer 2 only and doesn’t peer with the core network
    • VLAN 802 – vSAN replication
    • VLAN 803 – vMotions for all clusters

Data Networks

Last, but most certainly not least, are the data path rails. This is a combination of VLANs and VNIs depending on the underlay/overlay context.

  1. AS 65000: Layer 2/3 connectivity between hosts in the transport zone requires a transport VLAN to carry the GENEVE-encapsulated traffic (see the MTU sketch after this list)
    • VLAN 200 – This is the transport network where host and edge TEPs reside
  2. AS 65002: This is the home network and is characteristic of a typical topology deployed by the IT enthusiast/professional
    • VLAN 16 – This is where all of the scary, yet so convenient, IoT devices reside along with the printer. mDNS is bound to this interface to support AirPrint
    • VLAN 24 – UniFi Video surveillance cameras, all powered by PoE, reside here
    • VLAN 32 – Trusted domain and MDM devices, forced to use EAP-TLS. mDNS is bound to this interface to support AirPrint, AirPlay and other Bonjour related protocols
    • VLAN 40 – Guest and work devices, or any other device that is otherwise untrusted but not serving a smart home purpose. mDNS is bound to this interface to support AirPrint for visiting guests.
  3. AS 65001: The segments in the global overlay simply break up IP space by workload classification. The security boundary is not tied to a layer 3 boundary, so the VNIs below are purely cosmetic
    • VNI x1 – LBaaS VIPs for the F5 LTM VEs and NSX LBaaS CSPs
    • VNI x2 – Web tier for all ‘internet facing’ workloads. None of these virtual hosts actually face the internet, but they are the first processing stop after LBaaS from an internet sourced request
    • VNI x3 – App tier houses workloads that either execute code-based processing (Python, PHP, etc) or serve as SaaS enterprise integration connectors (AD LDAP, etc)
    • VNI x4 – Database tier is self-explanatory, hosting both MS SQL and MySQL shared clusters
    • VNI x5 – VDI and published app hosts all reside in this /23. Mostly instant clones and RDS hosts for testing.
    • VNI x6 – General services largely provides Active Directory and its ancillary components, e.g. RW/RO DCs, Certificate Services, NPS, etc
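
One practical consequence of the GENEVE transport rail (VLAN 200 above) is MTU. The back-of-the-envelope math below uses the standard outer header sizes plus an assumed headroom for variable-length GENEVE options, and lands on the 1600-byte minimum commonly cited for NSX-T transport networks:

```python
# Back-of-the-envelope MTU check for the GENEVE transport VLAN.
# Header sizes are the standard fixed values; the options headroom is an assumption.
INNER_MTU       = 1500  # default MTU of the overlay workloads
OUTER_IPV4      = 20    # outer IPv4 header added by the TEP
OUTER_UDP       = 8     # outer UDP header (GENEVE rides on UDP)
GENEVE_BASE     = 8     # fixed portion of the GENEVE header
OPTION_HEADROOM = 64    # assumed room for variable-length GENEVE options

required = INNER_MTU + OUTER_IPV4 + OUTER_UDP + GENEVE_BASE + OPTION_HEADROOM
print(required)  # 1600 -- hence jumbo(ish) frames on the transport VLAN and its uplinks
```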

Wrapping Up

As promised, a lengthy post, but a complete top-down view of the networking architecture, design and rationale for why things are the way they are. My colleagues love to give me grief over the complexity, but I like to think that picking up what I have in the networking space along the way wouldn’t have been possible without it. Plus, I now deeply empathize with my customers when deploying a solution such as Workspace ONE that touches so many enterprise components across various network boundaries. If you find yourself nodding your head ‘yes’ right about now, I challenge you to explore the possibilities of virtualizing your network ecosystem. Despite the complexity, on-boarding a new workload or stack in this environment is as simple as attaching a vNIC to the correct VNI; no underlay modifications, no VLAN maintenance, no switch stacking.

Go forth and prosper. As always, I welcome and encourage feedback. Don’t hesitate to reach out directly with questions and/or comments you might have!
