HP Virtual Connect Multiple Tunneled VLANs with Active/Active Uplinks and 802.3ad (LACP) Ethernet and FCoE

12 April 2017

Overview

This scenario implements two VLAN tunnels per Virtual Connect module to provide support for multiple VLANs. The upstream network switches connect each VLAN tunnel to two ports on each FlexFabric 20/40 F8 module, and LACP is used to aggregate those links. By using VLAN tunnels for MGMT and Production traffic, we remove the need for separate VLAN administration on the VC Domain.

One VLAN tunnel will be used to provide connectivity for the Management and VMotion networks; the other will service the virtual machines on the blades. As multiple VLANs will be supported in this configuration, the upstream switch ports will be configured for VLAN trunking/tagging. The upstream switches will also provide a native VLAN to support PXE boot for the ESXi hosts, which will be deployed with VMware Auto Deploy.

When configuring Virtual Connect, there are several ways to implement network fail-over or redundancy. One option is to connect TWO uplinks to a single Virtual Connect network; those two uplinks would connect from different Virtual Connect modules within the enclosure and could then connect to the same upstream switch or to two different upstream switches, depending on your redundancy needs. An alternative is to configure TWO separate Virtual Connect networks, each with one or more uplinks. Each option has its advantages and disadvantages. For example, an Active/Standby configuration places the redundancy at the VC level, whereas Active/Active places it at the OS NIC teaming or bonding level.

We will review the second option in this scenario and build a configuration with four tunneled vNet links to the individual blades. In addition, several Virtual Connect networks can be configured to support the required networks to the servers within the BladeSystem enclosure. These networks could be used to separate the various network traffic types, such as iSCSI, backup and VMotion, from production network traffic. This scenario will also leverage the Fibre Channel over Ethernet (FCoE) capabilities of the FlexFabric 20/40 F8 modules and will connect two fabrics, one to each of the FlexFabric 20/40 F8 modules, using two uplinks per fabric.

Requirements

This scenario must support both Ethernet and Fibre Channel connectivity. To implement it, an HP BladeSystem c7000 enclosure with one or more server blades and TWO Virtual Connect FlexFabric 20/40 F8 modules, installed in I/O Bays 1 and 2, is required. In addition, we require ONE or TWO external network switches, which in our case are two Cisco Nexus 9k switches configured as a single vPC domain. The Fibre Channel uplinks will connect to the existing FC SAN fabrics; the SAN switch ports will need to be configured to support NPIV logins. Two uplinks from each FlexFabric 20/40 F8 module will be connected to the existing SAN fabrics.

Figure 1 – Physical View; The image shows two Ethernet uplinks from ports X5 and X6 on Module 1 to Port 1 on the Nexus switches and two Ethernet uplinks from ports X5 and X6 on Module 2 to Port 2 on the Nexus switches. It also shows two Ethernet uplinks from ports X7 and X8 on Module 1 to Port 3 on the Nexus switches and two Ethernet uplinks from ports X7 and X8 on Module 2 to Port 4 on the Nexus switches; all Ethernet uplinks use 10Gb SFPs to connect to the Cisco network. The SAN fabrics are also connected redundantly, with TWO uplinks per fabric, from ports X1 and X2 on Module 1 to Fabric A and ports X1 and X2 on Module 2 to Fabric B; all FC uplinks use 8Gb SFPs to connect to the SAN fabrics.

Figure 2 – Logical View; The server blade profile is configured with four FlexNICs and two FlexHBAs. NICs 1 and 2 are connected to the MGMT networks, which are part of VLAN-Tunnel-1, to support ESXi management and vMotion; these connections are configured for 1-10Gb port speeds. NICs 3 and 4 are connected to VLAN-Tunnel-2, which carries the VM guest VLANs; these connections are configured for 10-20Gb port speeds. Two FlexHBAs provide access to the storage platform and are configured for 4-8Gb port speeds.

Installation and configuration

Nexus Switch configuration
As the Virtual Connect module acts as an edge switch, Virtual Connect can connect to the network at either the distribution level or directly to the core switch. In this situation, Virtual Connect is connected at the distribution level. For more information on how to configure a vPC domain on the Nexus 9k, consult the Cisco Nexus 9000 vPC configuration guide.

Whether connecting to a Shared Uplink Set or a tunnel, the switch ports are configured as VLAN trunk ports (tagging) to support several VLANs. All frames will be forwarded to Virtual Connect with VLAN tags. One VLAN on the MGMT tunnel will be configured as native on the Nexus switch.

Note: When adding additional uplinks to the tunnel from the same FlexFabric 20/40 F8 module to the same switch, the switch ports connected to each tunnel need to be configured for LACP within the same link aggregation group to ensure all uplinks are active. The network switch ports should be configured for Spanning Tree edge, as Virtual Connect appears to the switch as an access device and not another switch. Configuring the port as Spanning Tree edge allows the switch to place the port into a forwarding state much more quickly, so a newly connected port comes online and begins forwarding sooner.

The following port configurations are an example. On the MGMT ports, VLAN 6 will be used for ESXi MGMT and VLAN 7 will be used for VMotion. For the production network the following VLANs will be defined: VLAN 2-11, VLAN 16-43, VLAN 46, VLAN 76, VLAN 100-220 and VLAN 360-425.

interface Ethernet1/1
description VC1_Bay1_MGMT_X5
switchport mode trunk
switchport trunk native vlan 6
switchport trunk allowed vlan 6,7
spanning-tree port type edge trunk
storm-control broadcast level 1.00
channel-group 1001 mode active

interface Ethernet1/3
description VC1_Bay1_PROD_X7
switchport mode trunk
switchport trunk allowed vlan 2-11,16-43,46,76,100-220,360-425
spanning-tree port type edge trunk
storm-control broadcast level 1.00
channel-group 1003 mode active
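Each LACP channel-group on the Nexus side also needs a matching port-channel interface carrying the same trunk settings, and in a vPC domain the port-channel is bound to a vPC ID. A minimal sketch for the two channel-groups above follows; the vPC IDs are assumptions chosen to match the port-channel numbers, and the trunk settings simply mirror the member ports:

interface port-channel1001
description VC1_Bay1_MGMT
switchport mode trunk
switchport trunk native vlan 6
switchport trunk allowed vlan 6,7
spanning-tree port type edge trunk
vpc 1001

interface port-channel1003
description VC1_Bay1_PROD
switchport mode trunk
switchport trunk allowed vlan 2-11,16-43,46,76,100-220,360-425
spanning-tree port type edge trunk
vpc 1003

The same port-channel configuration must exist on both vPC peers for the vPC to come up.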

Configuring the VC module

– Physically connect Port 1 of Nexus 1 to Port X5 of the VC module in Bay 1
– Physically connect Port 1 of Nexus 2 to Port X6 of the VC module in Bay 1
– Physically connect Port 3 of Nexus 1 to Port X7 of the VC module in Bay 1
– Physically connect Port 3 of Nexus 2 to Port X8 of the VC module in Bay 1
– Physically connect Port 2 of Nexus 1 to Port X5 of the VC module in Bay 2
– Physically connect Port 2 of Nexus 2 to Port X6 of the VC module in Bay 2
– Physically connect Port 4 of Nexus 1 to Port X7 of the VC module in Bay 2
– Physically connect Port 4 of Nexus 2 to Port X8 of the VC module in Bay 2

Note: If you have only one network switch, connect VC ports X5, X6, X7 & X8 (Bay 2) to an alternate port on the same switch. This will NOT create a network loop and Spanning Tree is not required.
– Physically connect Ports X1/X2 on the FlexFabric in module Bay 1 to switch ports in SAN Fabric A
– Physically connect Ports X1/X2 on the FlexFabric in module Bay 2 to switch ports in SAN Fabric B

Since we use only FlexFabric 20/40 F8 modules and no other 1Gb Virtual Connect modules are present, there is no need to expand the VLAN capacity in the Virtual Connect domain. A total of 4096 VLANs can be handled by the Virtual Connect modules; this is the default for new VC domains as of version 4.30.

We will create four Ethernet networks, MGMT_IB1, MGMT_IB2, Prod_IB1 and Prod_IB2, using the connected interfaces as described below.
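As a sketch, the four tunneled networks could also be created from the Virtual Connect Manager CLI instead of the GUI. The port assignments follow the cabling above; treat the exact property names (such as VLanTunnel) as assumptions to verify against the VC CLI reference for your firmware version:

add network MGMT_IB1 VLanTunnel=Enabled
add uplinkport enc0:1:X5 Network=MGMT_IB1 Speed=Auto
add uplinkport enc0:1:X6 Network=MGMT_IB1 Speed=Auto
add network MGMT_IB2 VLanTunnel=Enabled
add uplinkport enc0:2:X5 Network=MGMT_IB2 Speed=Auto
add uplinkport enc0:2:X6 Network=MGMT_IB2 Speed=Auto
add network Prod_IB1 VLanTunnel=Enabled
add uplinkport enc0:1:X7 Network=Prod_IB1 Speed=Auto
add uplinkport enc0:1:X8 Network=Prod_IB1 Speed=Auto
add network Prod_IB2 VLanTunnel=Enabled
add uplinkport enc0:2:X7 Network=Prod_IB2 Speed=Auto
add uplinkport enc0:2:X8 Network=Prod_IB2 Speed=Auto

Because both uplinks of each network terminate on the same module and the same vPC, Virtual Connect will aggregate them with LACP against the corresponding channel-group on the Nexus side.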

After creating the four networks, the list of networks will look like this.

The SAN connection will be made with redundant connections to each Fabric. SAN switch ports connecting to the FlexFabric 20/40 F8 module must be configured to accept NPIV logins.
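For illustration, the two FC fabrics could be defined from the VCM CLI along these lines; the fabric names are assumptions, while the Bay and Ports values follow the cabling described earlier (ports X1/X2 on each module):

add fabric Fabric_A Bay=1 Ports=1,2
add fabric Fabric_B Bay=2 Ports=1,2

Each fabric stays within a single module, preserving the air gap between SAN Fabric A and Fabric B.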

Now that the enclosure is connected to the rest of the infrastructure, we must create the server profiles.

All server profiles will look somewhat like this. The values for MGMT_IB1 and MGMT_IB2 in the picture are set to 1Gb-6Gb; in the real-life scenario this is set to 1Gb-10Gb. SAN connections are set to a static 8Gb.
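A server profile along these lines could also be built from the VCM CLI. The profile name ESX01 is hypothetical, and Ethernet connections are assigned to adapter ports in the order they are added; verify the exact parameters against the VC CLI guide for your firmware:

add profile ESX01
add enet-connection ESX01 Network=MGMT_IB1
add enet-connection ESX01 Network=MGMT_IB2
add enet-connection ESX01 Network=Prod_IB1
add enet-connection ESX01 Network=Prod_IB2
add fc-connection ESX01 Fabric=Fabric_A Speed=8Gb
add fc-connection ESX01 Fabric=Fabric_B Speed=8Gb
assign profile ESX01 enc0:1

Assigning the profile to a device bay (enc0:1 here) applies the connections to whichever blade is, or will be, installed in that bay.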

When reviewing the server bay connections, you can see there is an additional adapter available on each Ethernet adapter port.

