Nimble has released version 2.0.6, which includes compatibility with the newly released NCM toolkit for Windows and the ESXi installable tool. This is a leap forward, both in terms of leveling the playing field by eliminating a flaw exposed by limitations in VMware, and in terms of rapid expandability.
The new release supports auto-targeting and connection of volumes (LUNs) via the VIP, which was previously (and continues to be) the discovery IP you assign your Nimble array. Auto-targeting allows you, as the admin, to force VMware and Windows (with the tools installed) to reduce the number of connections each uses and to ensure that those connections are redundant (as all system connections should be when dealing with iSCSI).
As previously noted on my blog, Nimble has a flaw in its network connection design. While the flaw does not hurt most small businesses, those who want to ensure the most bandwidth, redundancy, and protection flexibility will quickly hit the VMware limitation of 1024 iSCSI connections per ESXi host. Granted, you need to be set up to share all the connections for all the volumes, so mainly shops running high availability, DRS, and vMotion are affected. Nimble's one-to-one VMkernel requirement ends up costing you a multiplier on the connections you use: if you use 4 IPs with 4 cards to connect one-to-one with their 4 cards, you consume 16 connections per volume and are thus limited to 64 connected volumes. The only workaround for this issue was to connect your volumes to the guest OS directly via iSCSI, and Microsoft doesn't support that configuration on VMware. There is also a flaw in Windows Server 2012 guests with the e1000e network driver that will cause data corruption if it is used as an iSCSI connector; the workaround is to use the e1000 network card instead. While I didn't discover that fix, I did identify the problem to Nimble, who after a deeper dive worked with VMware on a solution (you're welcome).
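To make the multiplier concrete, here is a minimal sketch of the arithmetic behind that 64-volume ceiling. This is plain math, not any Nimble or VMware API, and the session cap of 4 in the second example simply mirrors the NCM setting described later in this post:

```python
# Rough arithmetic behind the per-host ESXi iSCSI session limit.
# These numbers mirror the example in the post; adjust for your own layout.

ESXI_SESSION_LIMIT = 1024  # iSCSI sessions per ESXi host (VMware limit)

def max_volumes(host_ips: int, array_ips: int,
                limit: int = ESXI_SESSION_LIMIT) -> int:
    """With one-to-one pathing, every host VMkernel IP connects to every
    array data IP, so each volume consumes host_ips * array_ips sessions."""
    sessions_per_volume = host_ips * array_ips
    return limit // sessions_per_volume

# 4 VMkernel IPs against 4 array data IPs -> 16 sessions per volume
print(max_volumes(4, 4))          # 1024 // 16 = 64 connected volumes max

# With auto-targeting capping each volume at 4 sessions,
# the ceiling rises considerably:
print(ESXI_SESSION_LIMIT // 4)    # 256 volumes
```

The point of the sketch is simply that the limit scales inversely with the per-volume session count, which is why capping sessions via auto-targeting matters so much.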
With this new offering, you can now limit the connections on the ESXi hosts and thus eliminate the need to connect to volumes directly via iSCSI over VMware connections. And if you still need direct connections for clustering purposes, the new Windows toolkit makes them much easier to manage. The installation is quick and the configuration is a cinch; there is no longer any need to mess with the Windows iSCSI configuration, as the tool does all the work for you. You can also upgrade to 2.0.6 without first upgrading your Windows toolkit (Nimble Connection Manager, as it was previously called).
If you want to take advantage of this in VMware, you will need to install the tool on each ESXi host before enabling automatic connection auto-targeting. The installation requires each host to go into maintenance mode first. When we did our installation, we installed VMware 5.5 as well, so all the work would accomplish two project goals with one task. Once everything was in place, we turned auto-targeting on. After a few minutes the connections per volume dropped to 4 (which is what we set NCM to; the minimum setting is 2 and the maximum is 8 by default). After all five hosts were down to the set 4, we found that two of our hosts did not properly disconnect one path. When this happens, you can either leave it alone or reboot the host to clear out the dead connection. We rebooted the two hosts to prevent any incidents with VMware (it doesn't like letting go easily, and a mass failure can crash a host).
I should note that you are NOT required to upgrade the Windows client prior to upgrading to 2.0.6 or enabling the auto-targeting feature. You ARE required to verify that all of your Windows iSCSI connections point to the discovery IP instead of directly to the data IPs. There is also a current problem for those of you who have Xen host machines. Nimble is trying to fix the issue from their side, but based on what they told me, it seems to be something that must be fixed in Xen first. If you have any Xen hosts, don't upgrade to 2.0.6 until you verify with them. They are also actively blacklisting arrays that have anything with "Xen" in the name. We use XenApp, and two of our volumes have XenApp in the name, so they blacklisted our array, which is how I know about this. We don't use Xen hosts, though; we are of course on VMware.
So while I celebrate the conclusion of an ongoing problem for my organization and the ability to now proceed with some projects I had on hold, this upgrade has another benefit: you can now point a single volume at multiple Nimble array units. This means you can cluster these fast little arrays to further increase the speed they offer. It also provides more redundancy, as the units can now be mirrored, giving you immediate, hot failover.
So my congratulations to Nimble Storage for solving the issue and taking the opportunity to expand the functionality of their units! Perhaps next year when our lease is up, the replacement solution will be a simpler choice. Keep innovating; the big guys can't keep up.