SDDC: VMware announces EVO:RAIL

With VMworld 2014 just kicking off, VMware has already made an awesome, and somewhat unexpected, announcement: VMware EVO:RAIL!
EVO:RAIL seems to be the first product in the EVO family, so VMware probably has more surprises up its sleeve for the near future.

Let’s take a quick look at this new product range and what its first family member, RAIL, is all about…

[image: evorail]

What is EVO:RAIL?

A nice read about this new product from VMware’s SDDC division is Duncan Epping’s “Daily dispatches VMworld 2014” post. As Duncan states, EVO:RAIL is a Hyper-Converged Infrastructure Appliance (HCIA) offering by VMware and qualified EVO:RAIL partners.

The partner list already contains some major hardware vendors:

  • Dell
  • EMC
  • Fujitsu
  • Supermicro

VMware bundled its already proven software pieces together in one easy-to-deploy solution. Hardware vendors deliver the physical side, and with an easy installation procedure customers will have a Hyper-Converged Infrastructure up and running in the blink of an eye! The VMware software bundle contains:

  • vSphere (Enterprise+ licensing)
  • vCenter
  • vCenter Log Insight
  • VSAN

I, for one, am very curious how this major step by VMware into the HCIA market will stand up to the Nutanix and SimpliVity solutions. How will they respond to VMware’s EVO family? Especially Nutanix: after signing an OEM deal with Nutanix, Dell is now one of the first hardware vendors to support RAIL!

You will be able to buy specific hardware ‘appliances’ to run your EVO:RAIL environment. As stated by VMware, the minimal hardware specs for one node are as follows:

  • 2x Intel E5-2620 v2 CPU
  • 192GB memory
  • VMware VSAN compatible hardware:
    • VSAN certified pass-through controller
    • 3x SAS 10K 1.2TB hard disks
    • 1x 400GB MLC SSD
  • ESXi boot device
  • 2x 10Gbit NIC interfaces
  • 1x 1Gbit remote management interface

The hardware appliance will contain at least 4 nodes in a 2U or 4U housing, making it fault tolerant from the beginning. I’m very curious how the hardware vendors will differentiate their RAIL appliances from each other in their specifications. Note that version 1.0 will ‘only’ support putting 4 appliances together, resulting in a 16-node environment.
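To put those minimums in perspective, here is a quick back-of-the-envelope sketch of what a minimum-spec appliance and a full v1.0 cluster add up to. The 6-cores-per-CPU figure for the E5-2620 v2 is my own addition; everything else derives from the specs above, so treat these as rough totals, not official VMware sizing numbers.

```python
# Back-of-the-envelope totals for minimum-spec EVO:RAIL hardware.
# Derived from the per-node specs above; not official VMware numbers.
CORES_PER_CPU = 6                 # Intel E5-2620 v2: 6 physical cores

node = {
    "cores": 2 * CORES_PER_CPU,   # 2 CPUs per node
    "memory_gb": 192,
    "hdd_gb": 3 * 1200,           # 3x 1.2TB SAS capacity disks
    "ssd_gb": 400,                # MLC SSD (VSAN cache tier)
}

NODES_PER_APPLIANCE = 4
MAX_APPLIANCES_V1 = 4             # v1.0 scales out to 4 appliances

def totals(node_count):
    """Sum per-node resources over node_count nodes."""
    return {k: v * node_count for k, v in node.items()}

appliance = totals(NODES_PER_APPLIANCE)
cluster = totals(NODES_PER_APPLIANCE * MAX_APPLIANCES_V1)

print("per appliance:", appliance)   # 48 cores, 768GB RAM
print("16-node max:  ", cluster)     # 192 cores, 3TB RAM
```

Even the minimum configuration pools a respectable amount of resources once you stack four appliances together.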

 

Compute

The minimal required specs for an EVO:RAIL appliance are sized to run 100 average-sized datacenter VMs, as stated by VMware, depending of course on the resources assigned to the VMs. A general VM profile would be a redundant VM with 2 vCPU, 4GB memory and a 60GB vDisk. EVO:RAIL is also optimized for VMware Horizon View, which has its own VDI sizing profile.

Note that VMware shares some averages; real-life environments will have their own sizing and workload requirements.

 

Network

Each EVO:RAIL node will contain at least two 10Gbit network interfaces, and each node must be connected to your Top-of-Rack (ToR) switches. One ToR switch won’t do in most cases, as it would be a single point of failure. Because your network is of essential importance, you should always create a highly available network topology. You could, for example, use Cisco vPC (virtual Port Channel) on the Nexus family to provide highly available 10Gbit connections for your RAIL nodes.

A quick drawing of an EVO:RAIL architecture would look like this:

[image: evorail-network]

Note: Of course each node within the appliance has two 10Gbit interfaces and one 1Gbit management interface.

When configuring your appliance network, only four types of traffic are supported:

  • Management
  • vMotion
  • Virtual SAN
  • Virtual Machine

Makes you wonder whether Fault Tolerance is supported, doesn’t it? Especially with FT set to support multi-vCPU VMs in the near future.
Update: My bad, my bad… of course FT is not supported, because VSAN does not support FT!

Unsurprisingly, IPv6 is supported, and logical (VLAN) separation for vMotion, VSAN and Virtual Machine traffic is recommended. It should be a requirement imho!
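To illustrate that separation, a hypothetical VLAN plan for the four supported traffic types could look like this. The VLAN IDs are purely made up for the example; pick whatever fits your environment.

```python
# Hypothetical VLAN separation for the four supported traffic types.
# IDs are illustrative only; choose values that fit your environment.
vlan_plan = {
    "Management":      {"vlan": 10, "tagged": False},  # often untagged/native
    "vMotion":         {"vlan": 20, "tagged": True},
    "Virtual SAN":     {"vlan": 30, "tagged": True},
    "Virtual Machine": {"vlan": 40, "tagged": True},
}

for traffic, cfg in vlan_plan.items():
    tag = "tagged" if cfg["tagged"] else "untagged"
    print(f"{traffic:<16} VLAN {cfg['vlan']:>3} ({tag})")
```

Keeping VSAN and vMotion on their own VLANs keeps storage and migration bursts from stepping on VM traffic.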

Storage

As stated in the datasheet, EVO:RAIL creates a single VSAN datastore from all local disks on each vSphere node. VSAN will utilize the SSD capacity for read caching and write buffering.
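A rough sketch of what that single datastore amounts to for one minimum-spec appliance. The default FTT=1 policy (each object mirrored once) is my own assumption here, and the result is an approximation, not an official capacity figure.

```python
# Approximate VSAN capacity for one 4-node minimum-spec appliance.
# SSDs serve the cache tier only and add no datastore capacity.
NODES = 4
HDD_PER_NODE_GB = 3 * 1200            # 3x 1.2TB SAS capacity disks

raw_gb = NODES * HDD_PER_NODE_GB      # raw pooled capacity

FTT = 1                               # failures to tolerate -> FTT+1 copies
usable_gb = raw_gb / (FTT + 1)        # effective capacity with mirroring

print(f"raw: {raw_gb} GB, usable at FTT={FTT}: {usable_gb:.0f} GB")
```

So one appliance pools roughly 14.4TB raw, of which about half is effectively usable once every object is mirrored for availability.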

Support

A big advantage for customers running EVO:RAIL is support. Customers will have a single point of entry for customer support: EVO:RAIL is licensed under a single SKU for hardware, software and support. This sounds a bit like the FlexPod concept, but an enhanced equivalent of it, because fewer vendors are involved! 🙂

Resources

Be sure to check out VMware’s EVO:RAIL resources, a great source that I used to learn more about the product. They can be found right here: http://www.vmware.com/products/evorail/resources.html.
My favorite is this video, providing you with the ‘full end-to-end experience’:

 

[video: evorail-video]

Other great write-ups on EVO:RAIL:

http://www.yellow-bricks.com/2014/08/25/introducing-vmware-evo-rail-new-hyper-converged-offering/
http://wahlnetwork.com/2014/08/25/evo-rail/

 

Hooray

I can’t wait to test-drive an EVO:RAIL implementation, and I am looking forward to being able to demonstrate this technology. Besides being easy to deploy and maintain, it should be easy to sell!
Hooray for VMware for their SDDC vision and innovation!!

[image: evorail-hooray]

 

I am a virtualization enthusiast with a love for virtual datacenters! About 15 years of experience in IT. VMware VCDX #212. Working at HIC (Hagoort ICT Consultancy) as a fully independent consultant/architect!

Latest posts by Niels Hagoort
