How to get massive cache for Cisco UCS blades

Quite a few customers have been asking how they could get massive cache on their Cisco UCS blades.  It is really quite simple – just three steps to follow in UCSM:

1.  Configure “Local Storage Configuration Policy”

The B200 M4 blades are equipped with “FlexFlash”, dual SD card slots for installing ESXi.  Ensure “FlexFlash” is enabled in the local storage configuration policy.  Two SD cards are recommended for redundancy.  Leveraging FlexFlash to install and boot ESXi eliminates the need to configure RAID for the local SSDs, so “No RAID” can be selected as the mode of operation for the local SCSI controller.  There is absolutely no need to log into the SCSI controller BIOS during blade boot-up – just configure the policy once, and use it across the board with each blade server in the chassis.
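If you would rather script the policy than click through UCSM, here is a minimal sketch using Cisco’s ucsmsdk Python library – the policy name, credentials and exact property values are placeholders to verify against your UCSM version:

```python
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.storage.StorageLocalDiskConfigPolicy import \
    StorageLocalDiskConfigPolicy

# Placeholder UCSM address and credentials.
handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# Local storage policy: FlexFlash enabled, "No RAID" for the local controller.
policy = StorageLocalDiskConfigPolicy(
    parent_mo_or_dn="org-root",
    name="flexflash-no-raid",   # placeholder policy name
    mode="no-raid",
    flex_flash_state="enable",
)
handle.add_mo(policy, modify_present=True)
handle.commit()
handle.logout()
```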


2.  Attach policy to service profile template

Two simple settings to apply:

a) Ensure the template is created as an “updating template”: any updates to the template will automatically roll out changes to all blades attached to the service profile template


b) Ensure the local storage configuration policy defined in step 1 above is selected within the service profile template


3.  Power on the blades and enjoy massive cache for your VMs

Two drive slots with a 1.6TB SSD each will give your VMs ~3.2TB of raw flash capacity.  With DVX data reduction (inline dedupe and compression) of 5x, each ESXi host can enjoy over 15TB of effective cache.  Don’t worry about setting application policies to enable/disable cache, or pinning certain application volumes/LUNs to flash – just use more, and think less.
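Spelled out, the math looks like this (the 5x reduction factor is the assumption from above):

```python
ssds_per_blade = 2        # two drive slots per B200 M4 blade
ssd_capacity_tb = 1.6     # 1.6TB SSD in each slot
data_reduction = 5        # assumed 5x inline dedupe + compression

raw_tb = ssds_per_blade * ssd_capacity_tb     # 3.2TB raw flash per host
effective_tb = raw_tb * data_reduction        # 16TB effective cache
print("%.1fTB raw -> %.0fTB effective per ESXi host" % (raw_tb, effective_tb))
```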

Want to learn more?  Come visit our booth @VMworld!


OpenStack with Shared Storage Series: No-dupe > dedupe

I have gotten lots of questions about OpenStack lately, and interestingly, most of them revolve around the value of using shared storage.  Instead of doing a boring checkbox comparison between shared storage and software on top of local DAS (I avoid using the term ‘software defined storage’ because shared storage could very well be classified as ‘software defined’), I decided to start a series on “OpenStack with Shared Storage”.  First up on the list is my favorite topic: no-dupe is always better than dedupe.   Read on if you want to find out why…

One of the top use cases for the OpenStack platform is private cloud for dev/test environments.  In such environments, VM instance cloning happens very frequently.  For example, the base software build is captured as an image in Glance, and each developer on the team can get his/her own instance based on that image – this can easily be done through Horizon by provisioning an instance from the image.  To ensure code modifications are not lost when an instance is destroyed or rebuilt, it is best to boot the instance from a volume through Cinder.  To do that, one can simply select the “Boot from Image (creates a new volume)” option in the instance launch wizard:
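For reference, the same “boot from image, create a new volume” flow can be driven outside Horizon too.  Below is a rough sketch using python-novaclient – the auth details, image UUID, flavor and volume size are all placeholders:

```python
from novaclient import client as nova_client

# Placeholder Keystone credentials and endpoint.
nova = nova_client.Client("2", "demo", "secret", "demo-tenant",
                          "http://keystone.example.com:5000/v2.0")

# Ask Nova to build a new Cinder volume from the Glance image and boot
# from it -- the API equivalent of "Boot from Image (creates a new volume)".
bdm = [{
    "uuid": "GLANCE-IMAGE-UUID",     # placeholder image ID
    "source_type": "image",
    "destination_type": "volume",
    "volume_size": 20,               # GB, placeholder
    "boot_index": 0,
    "delete_on_termination": False,  # keep code changes if instance is deleted
}]
server = nova.servers.create(name="dev-instance-01",
                             image=None,
                             flavor=nova.flavors.find(name="m1.small"),
                             block_device_mapping_v2=bdm)
```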
Continue reading OpenStack with Shared Storage Series: No-dupe > dedupe

Practical Tips to Get Started with OpenStack (Part 1)

The OpenStack Design Summit 2014 ended successfully last month in Atlanta – Nimble was very fortunate to be invited to speak in the block storage design considerations breakout session.  In this blog post, I’ll share key observations and learnings from the summit, as well as some quick practical tips to get started on the OpenStack journey.  If you missed the summit or the breakout session, no problem – the OpenStack Foundation has already posted all the session videos on its YouTube channel.   Below is the session recording featuring Nimble’s Jay Wang (our OpenStack R&D lead) and myself:

Continue reading Practical Tips to Get Started with OpenStack (Part 1)

Helping Customers Automate Infrastructure Deployment with Cisco UCS Director

Cisco is holding its 25th Live! conference this week in SF – I am fortunate to be part of the team behind our UCS Director integration. Unlike certain flash-only vendors that announce UCS certification as “integration” in a press release, we showcase how we solve real customer challenges jointly with the Cisco UCS Director team. This is a teaser post for now, and I promise to post a demo video on the Nimble Storage YouTube channel after the show ends! As always, let’s start with the “why”, followed by the “how”:

Why UCS Director:
• Tired of toggling between different UIs for device info/status in your environment? UCS-D provides a unified view of your converged infrastructure across the compute, network, storage and virtualization layers
• Tired of writing custom scripts over SSH/PowerShell/APIs to automate repetitive tasks? UCS-D simplifies complex operations by letting you drag & drop tasks to create end-to-end workflows spanning physical compute such as UCS, network devices such as access layer switches/routers/firewalls/load balancers, storage such as Nimble Storage, and hypervisors such as ESXi
• Tired of responding to endless cloud consumer requests for VMs, VLANs, storage, etc.? UCS-D enables self-service by exporting orchestration workflows as catalog items

Common storage tasks to automate:
From in-depth discussions with our customers, we discovered that the following storage-side tasks could really use automation (a sketch of the first one follows the list):
• volume provisioning
• volume growth
• snapshot creation
• cloning
• obtaining the ESXi boot volume UUID from Nimble for UCS service profile cloning
• removal of snapshots, clones or even volumes
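To give a feel for what such a task does under the hood, here is a rough sketch of volume provisioning against the Nimble OS REST API – the array address, credentials, volume name and size are placeholders, and the endpoint shapes should be verified against the Nimble REST API documentation:

```python
import requests

ARRAY = "https://nimble-array.example.com:5392"  # placeholder management address

# Obtain a session token.
resp = requests.post(ARRAY + "/v1/tokens",
                     json={"data": {"username": "admin", "password": "secret"}},
                     verify=False)
token = resp.json()["data"]["session_token"]

# Provision a volume -- sizes are specified in MiB on the Nimble REST API.
resp = requests.post(ARRAY + "/v1/volumes",
                     headers={"X-Auth-Token": token},
                     json={"data": {"name": "esx-boot-01", "size": 40960}},
                     verify=False)
print(resp.json())
```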

Unified View of SmartStack Converged Infrastructure
Let’s take a look at how Nimble + Cisco SmartStack looks under a single pane of glass:


Continue reading Helping Customers Automate Infrastructure Deployment with Cisco UCS Director

Space Reclamation in vSphere 5.5 with Nimble Storage

Several customers have inquired about space reclamation in vSphere environments – let’s dive into this interesting topic for a bit.

Why does space reclamation matter?

The reason is fairly straightforward: after a VM gets deleted or storage vMotioned from one datastore to another, it’d be a good idea to free up the space on the storage array so it can be used by others.

How does one perform space reclamation and keep track of the status?
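As a quick preview before the full walkthrough: in vSphere 5.5 the old vmkfstools -y approach was replaced by esxcli storage vmfs unmap.  Since ESXi ships a Python interpreter, a minimal sketch from the ESXi shell could look like this (the datastore label and reclaim unit are placeholders):

```python
import subprocess

# Issue VAAI UNMAP against a datastore (vSphere 5.5+); equivalent to
# running "esxcli storage vmfs unmap -l <label> -n <blocks>" by hand.
subprocess.check_call([
    "esxcli", "storage", "vmfs", "unmap",
    "--volume-label=Nimble-DS01",   # placeholder datastore label
    "--reclaim-unit=200",           # blocks reclaimed per iteration
])
```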
Continue reading Space Reclamation in vSphere 5.5 with Nimble Storage

vSphere + Nimble: Standard vSwitch migration to vDS

More and more customers are asking about using the VMware vDS (vNetwork Distributed Switch) with Nimble Storage – let’s spend a few minutes going over some common questions as well as usage best practices. This info will also fold into the next version of our VMware best practices guide.

Continue reading vSphere + Nimble: Standard vSwitch migration to vDS

Nimble 2.0 PSP Integration with VMware vSphere Part II (NCS + PSP deep dive)

Now that we have seen the Nimble PSP VIB get installed on the ESXi host, let’s peel the onion on what happens behind the scenes. When you list the VIB for more information, you will notice two components installed on ESXi:
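If you want to spot those components quickly, a small sketch like the one below works from the ESXi shell (the assumption that the VIB names contain “nimble” is mine; esxcli software vib get -n <name> prints the full details):

```python
import subprocess

# List all installed VIBs and pick out the Nimble ones.
out = subprocess.check_output(["esxcli", "software", "vib", "list"]).decode()
for line in out.splitlines():
    if "nimble" in line.lower():    # assumed naming convention
        print(line)
```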

Continue reading Nimble 2.0 PSP Integration with VMware vSphere Part II (NCS + PSP deep dive)

Nimble 2.0 PSP Integration with VMware vSphere Part I (VIB Install)

I finally got a chance to sit down and play with ESXi 5.5 and a Nimble 2.0 array – instead of installing yet another Windows VM and SQL Server for vCenter, I decided to try out the Linux vCenter VA.  It was actually quite fast and easy.  Total deployment time for vCenter was 10 minutes (including the time to download the VA!).  I simply did the following:

- install ESXi 5.5 + the vSphere 5.5 Client (you’ll need an ESXi host to place your vCenter VA, and you’ll need the VI Client to import the OVF)

- import the Linux vCenter VA via the vSphere Client connected to the ESXi 5.5 host

The only odd thing I ran into was the inability to log in to the vCenter server via the web client – it couldn’t authenticate the root login.  It turned out I had to manually enter a password for the single sign-on administrator account.  To do that, remember to stop the vCenter Server service (vpxd) first, then enter the new password and click on “Save Settings” in the SSO tab.
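For the record, stopping and restarting vpxd on the Linux vCenter appliance is just a service call – a small sketch, run on the appliance itself (the service name is the one used by the 5.5-era VA):

```python
import subprocess

# Stop vCenter Server (vpxd) before setting the SSO administrator password...
subprocess.check_call(["service", "vmware-vpxd", "stop"])
# ...set the password and click "Save Settings" in the SSO tab, then restart.
subprocess.check_call(["service", "vmware-vpxd", "start"])
```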

Continue reading Nimble 2.0 PSP Integration with VMware vSphere Part I (VIB Install)

VMworld 2013: New Stuff, Technical Nuggets

After a week of geeking out at VMworld, it’s time to do a quick recap of key learnings from the show, as well as a detailed look at what Nimble showed off.

Let’s start with new stuff + useful links:

Software Defined ‘X’

Starting with ‘Networking’: Carl and Kit put on a very good show; they made SDN quite simple to understand.  Just like the ESXi hypervisor creates virtual entities such as vCPU, vNIC, vSCSI emulation and VMDK, NSX creates an abstraction layer to provide L2-L7 services for anything networking.  Decoupling the network identity, and the services tied to that identity, from the physical hardware means policies follow the virtual machine, within the same datacenter or across datacenters.  More importantly, this abstraction allows for policy-based provisioning and QoS enforcement, all with automation.  I can already imagine how much easier it becomes to deploy a multi-tier application spanning multiple VLANs, plus the need for firewalls, load balancers and NAT’ing (of course, that is after you have made the right level of investment in hardware, software and a thorough design of the SDN infrastructure).  Simply put, I can’t wait to try this out in the datacenter!

Now let’s talk more about storage – Software Defined Storage (SDS).  This is an area where you need to put in more thought.  There are so many storage vendors touting their solutions as ‘software defined’.  I can summarize the legacy vendors’ pitch with the following picture:

Continue reading VMworld 2013: New Stuff, Technical Nuggets

“KISS RAS” = Software Defined Storage – Come see Nimble Storage’s tech previews @ VMworld next week!

If you are in the virtualization space, you must have heard the term “Software Defined Storage” – probably more so from storage vendors than from VMware.  Why?  Because it is a hot marketing theme for promoting storage technologies.   More and more customers & partners are asking what Nimble’s take is on “Software Defined Storage”.  Here’s my unofficial response – “K.I.S.S – R.A.S = Software Defined Storage”:

Keep It Simple Storage

Reliability Availability Supportability

Continue reading “KISS RAS” = Software Defined Storage – Come see Nimble Storage’s tech previews @ VMworld next week!