Welcome back to my short series on our journey of testing Nutanix and Mellanox.  Following up on Part 1 of the Nutanix and Mellanox Series, I’m going to dive deeper into the Nutanix network configuration for use with the Mellanox SX1012.

Nutanix Network Configuration

Following the Nutanix AHV Best Practices Guide for networking (image below), we want to make sure that the CVM and VMs are using the 10GbE networking (Nutanix also recommends leaving the 1Gb connections unplugged if not needed).

If you read Part 1 of this series, you'll remember that the dedicated IPMI port is 100Mb only and the Mellanox switch does not support 100Mb on its ports, so I'm going to use the shared 1Gb port for IPMI and to simulate User VM traffic.  To get as close to Nutanix best practices as I can, I am going to do the following steps (watch the video by Jason Burns of Nutanix here to see what we're doing – in fact, check out his entire Light Board Series!):

  1. Create a second bridge for the 1Gb interfaces
  2. Update bridge br0 to use ONLY 10GbE interfaces
  3. Update the second bridge to use ONLY 1Gb interfaces
  4. Change the bond configuration to use balance-slb for load balancing
  5. Check MAC address learning to make sure traffic is using the expected interfaces
One item of note:  AHV uses Open vSwitch (OVS) for all networking components, so other than creating new networks, nearly all AHV network configuration is done from the command line.

With AHV, by default all interfaces are placed in the same bond, which uses an active-backup configuration where only a single NIC is active at a given time – so out of the box you can't guarantee that CVM/VM traffic will only use the 10GbE adapters.  This is why we want to split the 10GbE interfaces from the 1Gb interfaces – a pretty simple concept.

In addition to the separate bonds, we want to make sure we're actually taking advantage of our dual 10GbE links upstream.  This is why the active-backup bonding configuration isn't optimal: we're wasting 50% of our potential upstream bandwidth.  AHV bonds support both balance-slb and LACP; in our case we're going to stick with balance-slb, with a possible update in the near future.

Running Commands against Nutanix CVM and AHV

Nutanix makes a lot of the configuration items very simple via the CVM and the allssh command.  This command takes any command after it and runs it against each CVM in the cluster.
Looking at our already configured Nutanix block, here are the results of the command allssh manage_ovs show_uplinks.
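If you're reading along without the screenshots, the output from each CVM looks roughly like this (a sketch only – the exact format varies by AOS version, and your interface names may differ):

  Bridge br0:
    uplink ports: bond0
    uplink ifaces: eth3 eth2 eth1 eth0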

As we can see, the allssh command runs our command and outputs the results from each of the CVMs.  We can also use the allssh command to run commands against the AHV hosts directly, as shown below.  Notice the command uses root@192.168.5.1, which, based on our image above, is the direct connection between eth1 (CVM) and vnet1 (AHV).
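For example, to dump the full OVS configuration from every AHV host in one shot, you could run something like the following from a CVM (ovs-vsctl show is a standard OVS command; the nested quoting here is illustrative):

allssh "ssh root@192.168.5.1 'ovs-vsctl show'"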

So let’s get started with the reconfiguration of the 2nd host!

Creating Second Bridge for 1Gb Interfaces

To create a second bridge on each AHV host, you can either use the allssh command via the CVM as previously covered, or log in directly to each host – the easier method is via the CVM, of course.
We're going to create the secondary bridge, named br-1gb, on each host with the command below:

allssh "ssh root@192.168.5.1 'ovs-vsctl add-br br-1gb'"

As you can see, our results don't provide a lot of information, other than that the command was run against each of our CVMs without error.  Let's take a look using the manage_ovs show_uplinks command to see what actually happened.  Notice we now have our second bridge, named br-1gb, configured on each host, but there aren't any interfaces associated with it.
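In text form, each host should now report something along these lines (again just a sketch of the show_uplinks output – note the new bridge with no bond or uplinks yet):

  Bridge br0:
    uplink ports: bond0
    uplink ifaces: eth3 eth2 eth1 eth0
  Bridge br-1gb:
    uplink ports:
    uplink ifaces: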

Modifying Bridges to Separate Interfaces

Now that we’ve created our second bridge specifically for the 1Gb interfaces, we need to modify br0 to use only the 10GbE interfaces.  As we can see from the last image, eth0, eth1, eth2, and eth3 are all associated with bridge br0.  Our goal is to have the 10GbE interfaces associated with the first bridge, and the 1Gb interfaces associated with the second.

If you didn’t know which interfaces were which, you could use the command manage_ovs show_interfaces to see the interfaces and their link speeds.
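The output should look roughly like the following (a sketch – the exact columns vary by AOS version), which makes it easy to spot the 1Gb versus 10GbE ports:

  name  mode   link  speed
  eth0  1000   True  1000
  eth1  1000   True  1000
  eth2  10000  True  10000
  eth3  10000  True  10000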
The manage_ovs command has a parameter named interfaces, which allows you to specify 10g or 1g.  First, we want to force ONLY the 10GbE interfaces to be associated with bridge br0.  To do this, run the command allssh manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks.
 
As we can see in the image below from one of the CVMs, eth0 and eth1 are no longer associated with the first bridge.
Now we need to add eth0 and eth1 to our newly created bridge.  Just copy the command we used for the 10g networking, change the bridge and bond names, and replace the 10g value with 1g, like this: allssh manage_ovs --bridge_name br-1gb --bond_name bond1 --interfaces 1g update_uplinks.
 
Running the show_uplinks command shows how our uplinks are now broken out by bridge: the 10GbE interfaces are on the first bridge, and the 1Gb interfaces are on the second.
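At this point each host should report both bridges with their respective uplinks, roughly like this (sketch):

  Bridge br0:
    uplink ports: bond0
    uplink ifaces: eth3 eth2
  Bridge br-1gb:
    uplink ports: bond1
    uplink ifaces: eth1 eth0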

AHV Bond Configuration

As I stated before, we’re going to use the balance-slb bonding mode for now, since it does not require any switch-side modification like LACP does, and it gives us the full upstream bandwidth from our hosts.
To validate that our bond configuration is using the default of active-backup, run the following command:

allssh "ssh root@192.168.5.1 'ovs-appctl bond/show bond0'"
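Before any changes, each host should return output similar to the following (trimmed for brevity – this is standard ovs-appctl output, so field names may vary slightly with the OVS version; the key line is bond_mode):

  ---- bond0 ----
  bond_mode: active-backup
  updelay: 0 ms
  downdelay: 0 ms
  lacp_status: off
  slave eth2: enabled
    active slave
    may_enable: true
  slave eth3: enabled
    may_enable: true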

Using balance-slb doesn’t require any modification on the upstream switches, whether that’s a single switch or multiple switches – think Nexus vPC pair, Catalyst VSS, or even Mellanox MLAG.  OVS looks at the load on each interface and balances traffic evenly based on the source MAC address.
To set both of our bonds to use balance-slb instead of the default active-backup, run the following command on a CVM:

allssh "ssh root@192.168.5.1 'ovs-vsctl set port bond0 bond_mode=balance-slb'"

Validating our bond configuration is as simple as running the bond/show bond0 command again.  Repeat the previous command for bond1 (shown below) to change the load balancing mode there as well.
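For reference, the bond1 version is identical apart from the port name:

allssh "ssh root@192.168.5.1 'ovs-vsctl set port bond1 bond_mode=balance-slb'"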

One other Nutanix recommendation for the OVS load balancing configuration is to change the rebalance interval from the default of 10 seconds to 60 seconds, which keeps MAC addresses from bouncing between the uplink interfaces.

To set the rebalance interval on the bonds (the value is specified in milliseconds), run the following command on a CVM for each bond:

allssh "ssh root@192.168.5.1 'ovs-vsctl set port bond0 other_config:bond-rebalance-interval=60000'"
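And again for bond1.  If you want to confirm the setting took, ovs-vsctl list port dumps a port’s attributes, including other_config (the grep just trims the output):

allssh "ssh root@192.168.5.1 'ovs-vsctl set port bond1 other_config:bond-rebalance-interval=60000'"
allssh "ssh root@192.168.5.1 'ovs-vsctl list port bond0 | grep other_config'"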

Validating Interface Configuration via MAC Addresses

To make sure our changes are working as we’d expect, I ran the following tests:
  1. Ping IPMI address of 10.0.101.11
  2. Ping AHV address of 10.0.101.21
For test 1, I expect to see that the SX1012 switch has learned the ARP entry for 10.0.101.11 on VLAN 101, with its MAC address learned via interface E1/11, which is my trunk link to the LabMGMT3560 switch.
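On the Mellanox side, the MLNX-OS commands to check this look roughly like the following (syntax can vary slightly between MLNX-OS releases, so treat these as illustrative):

show ip arp
show mac-address-table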
Results look good!
So to recap, we modified the Nutanix AHV host networking to segregate the 10GbE and 1Gb interfaces, and changed our load balancing method to take advantage of both 10GbE links upstream to our Mellanox switch.
One more validation that our configuration changes worked: we should no longer see the warning message below in Prism about the CVM being on a slow network interface.
Hope this post was informative, thanks for reading!  Again, thanks to Mellanox and Nutanix for providing the demo gear for use with this project!

 
