containers

Containers, VMs and unikernels

Last week I had an interesting discussion with a colleague about containers (mostly Docker), VMs, and a more recent development in this space called unikernels. Regular geek speak. I’ve mashed up the most interesting parts of the discussion, together with some background information.

 

Containerization

Containerization is lightweight OS virtualization that groups and isolates certain processes and resources from the host operating system and other containers. Containers share the operating system kernel and may share binaries and libraries.
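
To make the kernel-sharing point concrete, here is a minimal sketch. It assumes a local Docker daemon and the docker Python SDK (neither is part of the discussion above, just my choice for a quick demo): a container reports the host’s kernel release, while a VM boots its own kernel.

# Quick illustration of kernel sharing: a container reports the same kernel
# release as the host, because containers virtualize the OS, not the hardware.
# Assumes a local Docker daemon and the 'docker' Python SDK (pip install docker).
import platform

import docker

client = docker.from_env()

host_kernel = platform.release()
container_kernel = client.containers.run(
    "alpine:3.18",          # any small image with uname will do
    "uname -r",             # ask the container for its kernel release
    remove=True,            # clean up the container afterwards
).decode().strip()

print(f"host kernel:      {host_kernel}")
print(f"container kernel: {container_kernel}")
# On Linux both lines print the same release string; a VM would show
# whatever kernel its own guest OS runs.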

The following image depicts the difference between VMs and containers.
VMs versus containers

(more…)

Read More

POODLE SSLv3

Host disconnect after ESXi 5.5 U3b (SSLv3 POODLE)

Today I was preparing a new blade chassis in an existing vCenter environment. After applying the predefined Critical Host Patches baseline (default task for new hosts), the hosts would not reconnect to vCenter.

It turns out VMware decided to disable SSLv3 in ESXi 5.5 Update 3b and higher because of the POODLE vulnerability. The dependency is clearly stated in the release notes and in the VMware Product Interoperability Matrix below. (more…)
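
If you want to verify what a host (or vCenter) negotiates on port 443, a quick check along these lines works. This is just a generic Python sketch, the host name is made up, and modern Python won’t speak SSLv3 at all, so a successful handshake here already implies TLS:

# Rough sanity check: report which protocol a host's management interface
# negotiates on 443. Host name below is hypothetical.
import socket
import ssl

def negotiated_protocol(host: str, port: int = 443) -> str:
    context = ssl.create_default_context()
    context.check_hostname = False          # lab hosts often use self-signed certs
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()            # e.g. 'TLSv1.2'

if __name__ == "__main__":
    print(negotiated_protocol("esxi01.lab.local"))   # hypothetical host name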

Read More

VMware Virtual SAN

VMware Virtual SAN 6.0 benchmark

Last week I was going through ‘What’s New: VMware Virtual SAN 6.0’, and it seems VSAN 6.0 is bigger, better, and faster. The latest installment of VMware’s distributed storage platform provides a significant IOPS boost, up to twice the performance in hybrid mode. The new VirstoFS on-disk format is capable of high-performance snapshots and clones. Time to put it to the test.

 

Disclaimer: this benchmark was performed on a home lab setup; the components used are not listed in the VSAN HCL. My goal is to confirm the overall IOPS and snapshot performance increase by comparing VSAN 5.5 with 6.0. I did so by running a synthetic IOmeter workload.

VMware has a really nice blog post on more advanced VSAN performance testing with IOmeter.
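
For context, this is roughly what a synthetic 4K random-read test boils down to. This is not IOmeter and not the exact workload profile I used, just an illustrative Python sketch; the test file path is made up and OS caching will flatter the numbers unless the file is large or you use direct I/O:

# Crude sketch of a synthetic 4K random-read test: issue N random reads
# against a pre-created test file and report IOPS.
import os
import random
import time

TEST_FILE = "/vmfs-test/testfile.bin"   # hypothetical path on the VSAN datastore
BLOCK_SIZE = 4096
IO_COUNT = 100_000

size = os.path.getsize(TEST_FILE)
blocks = size // BLOCK_SIZE

with open(TEST_FILE, "rb", buffering=0) as f:
    start = time.perf_counter()
    for _ in range(IO_COUNT):
        f.seek(random.randrange(blocks) * BLOCK_SIZE)
        f.read(BLOCK_SIZE)
    elapsed = time.perf_counter() - start

print(f"{IO_COUNT / elapsed:,.0f} IOPS (4K random read)")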

 

Hardware

My lab consists of three Shuttle SZ87R6 nodes, connected by a Cisco SG300 switch.

  • Chipset: Z87
  • Processor: Intel Core i5-4590S
  • Memory: 32 GB
  • NIC 1: 1 GbE (management)
  • NIC 2: 1 GbE (VSAN)
  • HDD 1: Samsung 840 EVO (120 GB)
  • HDD 2: HGST Travelstar 7K1000 (1 TB)

 
 

ESXi/VSAN versions

  • ESXi 5.5 Update 2 (build 2068190)
  • ESXi 6.0 (build 2494585)

(more…)

Read More

IBM SVC

Stretched Cluster on IBM SVC (Part 3)

This is part 3 of the VMware Stretched Cluster on IBM SVC blogpost series.

PART 1     (intro, SVC cluster, I/O group, nodes)
PART 2     (split I/O group, deployment, quorum, config node)
PART 3    (HA, PDL, APD)

 

I explained how an SVC Split Cluster reacts to certain failure conditions in part 2. Now that we know how the storage layer behaves, let’s take a closer look at how this all ties in with the VMware layer. This is by no means a complete guide to every setting and configuration option involved; it is more an overview of the ones I consider important. This post is based on vSphere 5.5.

VMware Stretched Cluster isn’t a feature you enable by ticking some boxes; it’s a design built around the workings of HA, DRS, and a couple of other mechanisms.

First, I would like to briefly explain the concepts APD (All Paths Down) and PDL (Permanent Device Loss).

 

APD

In an All Paths Down scenario, the ESXi host loses all paths to the storage device and is unable to communicate with the storage array. Unlike PDL, the array cannot return SCSI sense codes, so the host doesn’t know whether the loss is temporary or permanent. Examples of failures that can trigger APD are a failing HBA or a failing SAN.

figure 1. APD (All Paths Down)
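
As a side note, you can spot an APD-like condition from the outside by looking at the path state per device. Here is a rough pyVmomi sketch (vCenter address and credentials are made up); a device whose paths are all ‘dead’ has, from the host’s point of view, lost every path:

# Sketch: list the path state per storage device on each host via pyVmomi.
# A device with every path 'dead' is in an APD-like condition.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="secret",
                  disableSslCertValidation=True)  # recent pyVmomi; older versions need an ssl context
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], recursive=True)
    for host in view.view:
        mpath = host.configManager.storageSystem.storageDeviceInfo.multipathInfo
        for lun in mpath.lun:
            states = [p.pathState for p in lun.path]
            flag = "ALL PATHS DOWN" if all(s == "dead" for s in states) else "ok"
            print(f"{host.name} {lun.id} {states} -> {flag}")
finally:
    Disconnect(si)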

(more…)

Read More

Queue

vSphere 6: mClock scheduler & reservations

“Storage IO Controls – New support for per Virtual Machine storage reservations to guarantee minimum service levels” is listed as one of the new features of vSphere 6.

The new mClock scheduler was introduced with vSphere 5.5, and, as you might have guessed, it remains the default IO scheduler in vSphere 6 (don’t mind the typo in the description).

 

mClock advanced setting

 

Besides limits and shares, the scheduler now supports reservations. Let’s do a quick recap on resource management.
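
Before the recap, here is a minimal pyVmomi sketch showing where the reservation lives in the API: it sets an IOPS reservation on a VM’s first virtual disk. The VM lookup is omitted and the values are arbitrary; treat it as an illustration, not a recommendation.

# Sketch: give a virtual disk an IOPS reservation alongside the familiar
# shares/limit. The reservation field on StorageIOAllocationInfo is what the
# mClock scheduler consumes in vSphere 6.
from pyVmomi import vim

def reserve_iops(vm, reservation=500, limit=-1, shares=1000):
    """Reconfigure the first virtual disk of 'vm' with an IOPS reservation."""
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            alloc = vim.StorageResourceManager.IOAllocationInfo()
            alloc.reservation = reservation                  # guaranteed IOPS
            alloc.limit = limit                              # -1 = unlimited
            alloc.shares = vim.SharesInfo(level="custom", shares=shares)
            device.storageIOAllocation = alloc

            change = vim.vm.device.VirtualDeviceSpec()
            change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
            change.device = device
            spec = vim.vm.ConfigSpec(deviceChange=[change])
            return vm.ReconfigVM_Task(spec)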

(more…)

Read More

DataGravity

Let DataGravity shed some light on your data

By now you’ve probably heard of Paula Long’s new startup, DataGravity. You may know her from that other storage company she co-founded, EqualLogic.

One of EqualLogic’s goals was to put a storage administrator in every box: you shouldn’t have to pay extra for management tools. DataGravity takes this further: you bought the storage, so you should know what’s in it, all for the price of primary storage.

DataGravity array

(more…)

Read More

Affinity

VMware HA & DRS affinity rules

Last week I was studying for the VCAP5-DCA exam and came across an advanced DRS setting I just can’t seem to wrap my head around: ForceAffinePoweron. I’m hoping one of our readers (yes, that’s you) can help me understand.

VMware recommends you configure this option when you are using Microsoft Cluster Service (MSCS) in combination with DRS.

To ensure that affinity and anti-affinity rules are strictly applied, set an advanced option for vSphere DRS. Setting the advanced option ForceAffinePoweron to 1 will enable strict enforcement of the affinity and anti-affinity rules that you created.
Source (more…)
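
For what it’s worth, the option from the quote above can also be pushed into a cluster programmatically. A minimal pyVmomi sketch (the cluster lookup is left out) would look something like this:

# Sketch: set the DRS advanced option ForceAffinePoweron = 1 on a cluster,
# equivalent to adding it under the cluster's DRS advanced options in the client.
from pyVmomi import vim

def set_force_affine_poweron(cluster):
    """Push ForceAffinePoweron=1 into the cluster's DRS advanced options."""
    drs = vim.cluster.DrsConfigInfo()
    drs.option = [vim.option.OptionValue(key="ForceAffinePoweron", value="1")]
    spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
    # modify=True merges this change with the existing cluster configuration
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)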

Read More

IBM SVC

Stretched Cluster on IBM SVC (Part 2)

This is part 2 of the VMware Stretched Cluster on IBM SVC blogpost series.

PART 1     (intro, SVC cluster, I/O group, nodes)
PART 2    (split I/O group, deployment, quorum, config node)
PART 3     (HA, PDL, APD)


SVC split I/O group
It’s time to split our SVC nodes between failure domains (sites). While the SVC technically supports a maximum round-trip time (RTT) of 80 ms, Metro vMotion supports an RTT of up to 10 ms (Enterprise Plus license), so in practice vMotion is the limiting factor.
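
If you want a quick ballpark of the inter-site latency before committing to a split, something like the Python sketch below will do. The host name is made up, and a TCP connect over IP is of course no substitute for a proper measurement of the FC and vMotion links:

# Rough sanity check: median TCP round-trip time to a host in the remote site,
# compared against the 10 ms Metro vMotion budget.
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 10) -> float:
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass                              # connect() completing is roughly one round trip
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

print(f"median RTT: {tcp_rtt_ms('esxi-siteb.lab.local'):.2f} ms")   # hypothetical host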

You can split nodes in two ways: with or without the use of ISLs (Inter-Switch Links). Both deployment methods are covered in detail in this document.


Deployment without ISL
Nodes are directly connected to the FC switches in both the local and the remote site, without traversing an ISL. Passive WDM devices (red line) can be used to reduce the number of links. You’ll need to equip the nodes with “colored” long-distance SFPs.

SVC no ISL
Source

(more…)

Read More

IBM SVC

Stretched Cluster on IBM SVC (Part 1)

This is part 1 of the VMware Stretched Cluster on IBM SVC blogpost series.

PART 1     (intro, SVC cluster, I/O group, nodes)
PART 2     (split I/O group, deployment, quorum, config node)
PART 3     (HA, PDL, APD)

 

ibm-pc

Last year I was the primary person responsible for implementing a new storage environment based on IBM SVC and V7000, and for building a VMware Stretched Cluster (a.k.a. vSphere Metro Storage Cluster) on top of that. I would like to share some of the experience I gathered, caveats I encountered, and other points of interest. This is by no means a complete implementation guide (go read the Redbook 😉 ). I’ll discuss some of the implementation options as well as failure scenarios, advanced settings, and some other things I find interesting. Based on the content, this will be a multi-part (probably three-part) blog post.

Stretched Cluster versus Site Recovery Manager
If you’re unfamiliar with the concepts Stretched Cluster and SRM, I suggest you read the excellent whitepaper “Stretched Clusters and VMware vCenter Site Recovery Manager”, which explains which solution best suits your business needs. Another good resource is VMworld 2012 session INF-BCO2982, with the catchy title “Stretched Clusters and VMware vCenter Site Recovery Manager: How and When to Choose One, the Other, or Both”; however, you’ll only be able to access this content if you’ve attended VMworld (or simply paid for a subscription).

(more…)

Read More

Backing up the vCenter Server Appliance (vCSA)

vcsa

<update 13 november 2014>

VMware does not support backing up the individual components and restoring them to a newly deployed appliance. Use of image-based backup and restore is the only solution supported for performing a full, secondary appliance restore (KB2034505).

Thank you, Feidhlim O’Leary (VMware), for pointing this out in the comments.

</update 13 november 2014>

 
Last week I was upgrading an old ESX 4.1 environment to ESXi 5.5, and the vCenter server needed to be replaced by the vCenter Server Appliance (vCSA). The new limits of the 5.5 release should suffice for a lot of environments; the only downsides are the lack of Heartbeat, Linked Mode, VUM, and IPv6 support.

After deploying and configuring the appliance, I soon realized there wasn’t a single article that listed all the necessary post-installation backup tasks. Someone not as smart as you will eventually break your appliance, so make sure to perform all of the following tasks.

1. Back up vPostgres database
Reference: KB2034505
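
The KB provides its own backup scripts; the sketch below is not those scripts, just a generic illustration of what a vPostgres dump boils down to. The database name, user, pg_dump path, and output location are assumptions for a 5.5 appliance, so verify them on your own box before relying on it:

# Generic illustration only: dump the embedded vPostgres database with pg_dump.
# Paths, user, and database name below are assumptions, not taken from the KB.
import datetime
import subprocess

stamp = datetime.date.today().isoformat()
outfile = f"/storage/backups/VCDB_{stamp}.dump"       # hypothetical backup location

subprocess.run(
    [
        "/opt/vmware/vpostgres/current/bin/pg_dump",  # assumed vPostgres install path
        "--format=custom",                            # compressed, pg_restore-friendly
        "--username=postgres",
        "--file", outfile,
        "VCDB",                                       # assumed embedded DB name
    ],
    check=True,
)
print(f"vPostgres dump written to {outfile}")
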
(more…)

Read More