Nutanix CE Version 5.6 is out, and it’s hot!!!


With the release of Nutanix Community Edition version 5.6, Nutanix has also provided a new installation mechanism as an alternative to the previous dd imaging method: a .iso installer.

Previous CE Installs

Previous Nutanix CE installs required the use of the dd utility to take the .img file and write it out to either a USB drive or another boot drive.

Per the Nutanix CE documentation, the following methods were supported:

Imaging software

  1. Linux: use the included dd utility
  2. Mac OS X: use the included dd utility
  3. Windows: ImageUSB from PassMark Software (freeware)

The use of the dd utility wasn't too complex; it was as simple as using the command below to write the image to a drive, where X is the letter of the USB or other target drive.

dd if=ce.img of=/dev/sdX

Using this method with another type of drive took some tinkering with a drive caddy or some other workaround. Not effortless, but not too bad in the long run.
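For reference, here's a slightly fuller version of that same write that I'd run on a Linux box today. The lsblk check and the bs/status flags are my own additions rather than anything from the CE docs, so adjust for your system and triple-check the target device before hitting enter.

lsblk -d -o NAME,SIZE,MODEL                            # confirm which device is the target drive
sudo dd if=ce.img of=/dev/sdX bs=4M status=progress    # X is still a placeholder for your drive
sync                                                   # flush the write cache before pulling the drive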

Rebuilding CE Lab

With the release of CE 5.6, I wanted to go ahead and rebuild my 3-node CE cluster, and once I saw the new installation method I was super excited that I could get rid of USB drives as my boot drives – for some reason either I buy really crappy USB drives for boot drives, or I'm just unlucky. I have a bad track record of them failing on me.

Using the .iso file with CE is more akin to what we’re used to when installing Windows, ESXi or another OS.

My AHV (CE) lab is a 3-node SuperMicro WhiteBox cluster, so I figured I'd give the .iso installer a test run and see how it goes thru IPMI connectivity.  I removed the USB boot drive from each of the hosts, and since I had three 120GB SSDs lying around, I added those to the hosts.  That provided me with 4 drives per host:

  • Boot Drive:  120GB SSD
  • SSD: 256GB SSD
  • Capacity:  500GB and 1TB drives

 

Installing CE using .iso installer

I mounted the .iso file thru the IPMI virtual media, rebooted the host, and happily I was presented with the below image upon reboot.  I chose the CE Installer, and it was time to get down to business!

[Screenshot: CE installer boot menu]

Watching the host boot, we can see in the below image that the installer is discovering the available drives in the host.

[Screenshot: the installer discovering the available drives]

There is a note that the current state of the .iso installer will automatically choose the smallest drive to install CE on, which in my case was exactly what I wanted: the newly installed 120GB SSD.  YMMV – my ‘boot’ drive was listed as sdd, which maps to the 4th drive in the system.  I could have moved the drives around, but laziness struck.
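If you want to map those sdX names and sizes to physical drives ahead of time, booting any live Linux environment on the host and running something like the below works. The column list is just my preference, nothing CE-specific.

lsblk -d -o NAME,SIZE,MODEL    # the smallest disk listed is the one the CE installer will grab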

Once all the drives are discovered, we're presented with the familiar CE setup screen.  I repeated this process on each of my 3 nodes, choosing not to create a single-node cluster.

[Screenshot: CE setup screen]

Once you start, the Phoenix process kicks off to install AHV and configure the CVM – pretty standard stuff here.

[Screenshot: Phoenix install and configuration in progress]

Finally, we’re presented that the installer has completed the imaging process, with the needed reboot.

[Screenshot: imaging complete, reboot prompt]

Validation!!!

The first validation that the .iso installer worked is that we're sitting at the login prompt after the reboot – so far so good!

[Screenshot: AHV login prompt after reboot]

Next, we can validate that the boot drive is indeed the drive we expected, thru the df -h and fdisk -l commands.

Using df -h, I wanted to ensure that the expected drive is the one mounted and in use for / – looking good here.

[Screenshot: df -h output showing / on the boot SSD]
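If you want to run the same check yourself, pointing df at the root mount keeps the output short. The device name will vary per host; mine happened to live on sdd.

df -h /    # the Filesystem column should point back at the boot SSD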

Next, I want to validate using fdisk -l that the drive is marked as the boot drive – again, looks good!

[Screenshot: fdisk -l output with the boot flag set]
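For anyone following along at the console, the check is just the below, assuming your boot SSD also came up as sdd and shows an MBR-style partition table like mine appeared to.

fdisk -l /dev/sdd    # sdd is my host's boot SSD; look for the * under the Boot column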

Next Steps

So where do we go from here?  Well, we need to create the cluster from one of the CVMs, using the command cluster -s <cvmIPs> create.  In my case, I used the command below.

cluster -s 10.10.200.200,10.10.200.205,10.10.200.210 create
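Once the create finishes, a quick status check from any CVM confirms that all the services came up. This is just the standard command, nothing specific to the .iso install.

cluster status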

With the cluster up and running, we have a working CE environment, ready for all the gloriousness that is Nutanix!  One item I did notice (and maybe missed from earlier installs) was that when I went to log into Prism and provide my Nutanix credentials to activate CE, it wouldn't connect because no DNS name servers were configured.  Easy fix though, thru an ncli command:

ncli cluster add-to-name-servers servers=10.10.20.254,10.10.201.254
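To confirm the name servers stuck, the matching get command should echo them back. I'm going from memory on the exact verb here, so check the ncli help if it complains.

ncli cluster get-name-servers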

Final Thoughts

The new CE installation process is awesome, and I look forward to future refinements, such as being able to choose the install drive – though since the boot drive is probably the smallest drive in the system, that may not be all that necessary.

I love the new CE process, and especially for those systems with IPMI or OOB connections, this makes the rebuild or reinstall that much easier!

Thanks for stopping by!
