
Stretched Cluster on IBM SVC (Part 3)

This is part 3 of the VMware Stretched Cluster on IBM SVC blogpost series.

PART 1    (intro, SVC cluster, I/O group, nodes)
PART 2    (split I/O group, deployment, quorum, config node)
PART 3    (HA, PDL, APD)

 

I explained how an SVC Split Cluster reacts to certain failure conditions in part 2. Now that we know how the storage layer behaves, let’s take a closer look at how this all ties in with the VMware layer. This is by no means a complete guide to every setting and configuration option involved; rather, it covers the ones I consider most important. This post is based on vSphere 5.5.

VMware Stretched Cluster isn’t a feature you enable by ticking some boxes; it’s a design built around the workings of HA, DRS and a couple of other mechanisms.

First, I would like to briefly explain the concepts APD (All Paths Down) and PDL (Permanent Device Loss).

 

APD

In an All Paths Down scenario, the ESXi host loses all paths to the storage device and is unable to communicate with the storage array. Examples of failures that can trigger APD are a failing HBA or a failing SAN.

figure 1. APD
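The practical difference between APD and PDL comes down to whether the array still answers at all: in a PDL the array explicitly tells the host the device is permanently gone via a SCSI sense code, while in an APD there is simply no response on any path. Here is a minimal Python sketch of that decision logic; the path-status structure is made up for illustration and is not the actual VMkernel implementation:

```python
# Illustrative sketch: how a host could distinguish APD from PDL.
# A PDL is signaled explicitly by the array via a SCSI sense code
# (e.g. ILLEGAL REQUEST / LOGICAL UNIT NOT SUPPORTED), whereas an
# APD is the absence of any response on every path.

PDL_SENSE_CODES = {(0x05, 0x25, 0x00)}  # LOGICAL UNIT NOT SUPPORTED

def classify_device_state(paths):
    """paths: list of dicts like {'alive': bool, 'sense': tuple or None}."""
    if any(p["alive"] for p in paths):
        return "OK"        # at least one working path remains
    if any(p["sense"] in PDL_SENSE_CODES for p in paths):
        return "PDL"       # array explicitly reports the LUN as gone
    return "APD"           # no paths, no answer: all paths down

# Dead paths with no sense data at all -> APD
print(classify_device_state([{"alive": False, "sense": None},
                             {"alive": False, "sense": None}]))  # APD
# Array still answers, but with 'LUN not supported' -> PDL
print(classify_device_state([{"alive": False,
                              "sense": (0x05, 0x25, 0x00)}]))    # PDL
```

The key takeaway: PDL is a definitive answer the host can act on immediately, while APD is ambiguous (the paths might come back), which is why the two are handled so differently.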


Read More

Blog growth and VMware vExpert 2015!

Today all three of us achieved VMware vExpert 2015 status!! We are very happy to be recognized as contributors to the VMware community.


It is funny to see how enthusiastic we have become about blogging! We used to kinda make fun of bloggers… not sure why exactly, probably jealousy speaking back then. 😉
However, since we started Cloudfix in April 2014, we rapidly became more and more involved with each other as members of Cloudfix and the VMware community. Our three-way Hangouts chat is in constant use! We discuss tech and plenty of other stuff with each other. On top of that, we track each other’s professional progress in a not-so-healthy competitive spirit. I think we complement each other. 🙂

So, setting up Cloudfix was a great idea and it’s really fun to do. It is time consuming, but worth it. We saw growth in visitor numbers we did not expect in the beginning. Heck, it got even better at the start of 2015!! Pretty awesome for a young blog like ours…


Read More

vSphere 6: vMotion enhancements

Don’t we all remember witnessing our first vMotion and realizing what awesome things it made possible in virtualization?! In vSphere 6, vMotion gets even better!

vMotion version history

First, I’d like to give a short overview of what was achieved in previous versions of vSphere.

vSphere 5.0:

  • Multi-NIC vMotion
    Reduced migration times by allowing vMotion traffic to be spread across multiple NICs.
  • Stun During Page Send
    Briefly stuns the source VM when needed during the pre-copy phase of vMotion, so the memory modification rate stays below the pre-copy transfer rate and the pre-copy eventually converges.

vSphere 5.1

  • vMotion without shared storage
    Allows a VM to be moved to another compute and storage resource simultaneously, without requiring shared storage.
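The Stun During Page Send behavior above can be pictured as a simple convergence problem: each pre-copy round sends the currently-dirty memory, but the guest keeps dirtying pages while the copy runs. If the dirty rate stays below the transfer rate, the remaining set shrinks each round; if not, pre-copy never converges and the source VM must be slowed down. A toy Python model of that loop (all rates, sizes and thresholds are made-up numbers for intuition, not VMware internals):

```python
def precopy_rounds(dirty_mb_s, transfer_mb_s, memory_mb,
                   threshold_mb=64, max_rounds=50):
    """Toy model of vMotion pre-copy convergence.

    Each round copies the currently-dirty memory while the guest keeps
    dirtying pages at dirty_mb_s. Returns the number of rounds needed to
    shrink the dirty set below threshold_mb, or None if it never
    converges (i.e. where Stun During Page Send would step in).
    """
    remaining = memory_mb
    for round_no in range(1, max_rounds + 1):
        copy_time = remaining / transfer_mb_s   # seconds to send current dirty set
        remaining = dirty_mb_s * copy_time      # memory dirtied meanwhile
        if remaining <= threshold_mb:
            return round_no
        if dirty_mb_s >= transfer_mb_s:
            return None                         # diverges: dirty set never shrinks
    return None

# A well-behaved VM converges in a few rounds...
print(precopy_rounds(dirty_mb_s=100, transfer_mb_s=1000, memory_mb=8192))   # → 3
# ...a write-heavy VM does not, which is exactly where SDPS slows the guest.
print(precopy_rounds(dirty_mb_s=1200, transfer_mb_s=1000, memory_mb=8192))  # → None
```

This also makes it obvious why Multi-NIC vMotion helps: raising the transfer rate widens the set of workloads that converge without any stunning at all.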


Read More


vSphere 6: mClock scheduler & reservations

“Storage IO Controls – new support for per virtual machine storage reservations to guarantee minimum service levels” is listed as one of the new features of vSphere 6.

The mClock scheduler was introduced with vSphere 5.5 and, as you might have guessed, it remains the default IO scheduler in vSphere 6 (don’t mind the typo in the description).

 

mClock advanced setting

 

Besides limits and shares, the scheduler now supports reservations. Let’s do a quick recap on resource management.
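As a quick intuition for how the three controls interact: a reservation guarantees a minimum, a limit caps the maximum, and shares divide whatever capacity is left in between. The following small Python sketch models that allocation; it is a deliberate simplification for building intuition, not the actual mClock algorithm:

```python
def allocate_iops(capacity, vms):
    """Simplified reservation/limit/shares allocation (not real mClock).

    vms: dict of name -> {'reservation': int, 'limit': int, 'shares': int}
    Every VM first receives its reservation; the leftover capacity is
    then divided proportionally by shares, without any VM ever
    exceeding its limit.
    """
    alloc = {name: vm["reservation"] for name, vm in vms.items()}
    leftover = capacity - sum(alloc.values())
    while leftover > 0:
        # Only VMs still below their limit compete for the leftover.
        eligible = {n: vm for n, vm in vms.items() if alloc[n] < vm["limit"]}
        if not eligible:
            break
        total_shares = sum(vm["shares"] for vm in eligible.values())
        progressed = False
        for name, vm in eligible.items():
            grant = min(leftover * vm["shares"] / total_shares,
                        vm["limit"] - alloc[name])
            if grant > 0:
                alloc[name] += grant
                progressed = True
        leftover = capacity - sum(alloc.values())
        if not progressed or leftover < 1e-9:
            break
    return alloc

vms = {
    "db":  {"reservation": 500, "limit": 2000, "shares": 2000},
    "web": {"reservation": 100, "limit": 1000, "shares": 1000},
}
print(allocate_iops(2000, vms))
```

With 2000 IOPS of capacity, "db" and "web" first get their 500 and 100 reserved IOPS, and the remaining 1400 is split 2:1 by shares. Note the reservations are honored even when the shares ratio alone would starve a VM.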


Read More

vSphere 6: Multi-Processor Fault Tolerance (SMP-FT)

At VMworld 2008, VMware announced Fault Tolerance (FT) for ESX 4: a new feature that provides continuous availability for selected virtual machines (VMs). FT allows a VM to keep running with literally zero downtime and zero data loss, even surviving server failures, while staying completely transparent to the guest software stack.

While it was a great new feature, FT-enabled VMs were not a very common sight in datacenter environments.

Legacy FT

FT’s rarity in datacenters was mostly due to the restriction of only one vCPU per FT virtual machine, which severely limited its usability. Most business-critical VMs, the ones that could benefit from FT the most, needed multiple vCPUs to meet their performance requirements. A further challenge was the limited set of backup options for FT-enabled VMs, since creating VMware snapshots was not possible.

Other cluster and host requirements for legacy FT, or UP-FT (uniprocessor FT), were:

  • An HA-enabled cluster is required.
  • Shared storage is required.
  • VMDKs must be eager-zeroed thick provisioned.
  • Host CPUs must be VMware FT capable and belong to the same processor model family.
  • All ESX hosts in the VMware HA cluster must run identical ESX versions and patch levels.

SMP-FT


Read More

vSphere 6: New features!

Updated: as of today (12th of March 2015) vSphere 6 is available for download! Log in to the VMware portal to download the following newly released products:

  • vSphere 6
  • vSOM 6
  • vCloud Suite 6
  • SRM 6
  • VSAN 6
  • VMware Integrated OpenStack 1.0

 

Hear, hear!!!  VMware vSphere 6 is here! 🙂

After a period of extensive testing, including a ‘public’ beta, VMware vSphere 6 was launched today (February 2nd)!!!


It’s been a while since the last major version release, as shown in the table below. vSphere 6 should provide the next step of innovation in server virtualization.

Version        Release date
vSphere 6      12 Mar 2015
vSphere 5.5    22 Sep 2013
vSphere 5.1    10 Sep 2012
vSphere 5.0    24 Aug 2011
vSphere 4.1    13 Jul 2010

New features

Check out all the new features (as listed on the VMware website) below:


Read More