Part 1: My take on PernixData FVP

Having posted an article on Software Defined Storage a short while ago, I want to follow up with some posts on the vendors/products I mentioned.


First of all, we’ll have a closer look at PernixData. Their product FVP (Flash Virtualization Platform) is a flash virtualization layer that enables read and write caching using server-side SSDs or PCIe flash devices. That almost sounds like the other caching products out there, doesn’t it? Well, PernixData FVP has features that are real, distinctive advantages over other vendors/products. With a new (2.0) version of FVP coming up, I decided to do a dual post. Version 2.0 should be released very soon.

What will FVP do for you? PernixData states:

Decouple storage performance from capacity

So what does that mean? Well, it means we no longer have to fulfill storage performance requirements by adding more spindles just to reach the demanded IOPS. Next to that, we must keep latency as low as possible. To do so, what better place for flash to reside than on the server! Keeping the I/O path as short as possible is key!
When storage performance is no longer an issue, capacity requirements are easily met.
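To see why decoupling matters, here is a back-of-the-envelope sketch of the claim above: meeting an IOPS target with spinning disks alone forces you to buy spindles regardless of how much capacity you actually need. The per-disk IOPS figure below is a common rule-of-thumb value for a 15K RPM drive, not a vendor spec.

```python
import math

def spindles_needed(target_iops: int, iops_per_disk: int = 175) -> int:
    """Disks required to hit an IOPS target (~175 IOPS per 15K RPM spindle)."""
    return math.ceil(target_iops / iops_per_disk)

# A modest 20,000 IOPS target already demands over a hundred spindles,
# even if you only need a fraction of their combined capacity.
print(spindles_needed(20_000))  # 115
```

A single decent server-side flash device delivers that many IOPS on its own, at a fraction of the latency, which is exactly the gap FVP exploits.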

PernixData is a young company that emerged from stealth early in 2013. The company was started by Satyam Vaghani and Poojan Kumar, who both worked at VMware prior to PernixData and were closely involved with data services there. Since 2013, PernixData has been making a pretty decent name for itself by delivering a revolutionary product combined with decent marketing, including persuading Frank Denneman to become its technology evangelist.

I won’t bother you with any installation or configuration procedures as there are many blogs describing these parts already.

Why FVP?

So why FVP? What makes this product so special?

  • FVP runs within the VMware vSphere kernel; the FVP VIB installs as a kernel module on each vSphere host.
  • Fully compatible with VMware services such as (Storage) vMotion, (Storage) DRS and HA.
  • Facilitates read and write server-side caching.
  • Write-through and fault-tolerant write-back options; one of my favorites!! Write-back caching can replicate writes to 1 or 2 additional flash devices before they are destaged to the storage array itself, providing data protection.
  • Vendor-independent backend storage, supporting the iSCSI, FC and FCoE protocols.
  • Scale-out architecture by adding hosts and flash devices.
  • Very easy implementation: a VIB per vSphere host and management software running on an MSSQL backend.

Future for FVP

Although FVP is an outstanding product, I wonder what the next level of development will bring. As said earlier, version 2.0 is about to go GA, and I will dedicate part 2 of this post to FVP 2.0. Version 2.0 in short: NFS is supported, and, more importantly, RAM can be assigned as a cache repository! Further enhancements include the possibility to create multiple flash replication groups.

But what about VMware and their development of vSphere APIs for I/O Filters, in short VAIO? VMware’s intention with VAIO is to make it easy for partners to hook their products directly into the I/O path of VMs. With this becoming easier to implement, isn’t one of the key features of FVP (running in the vSphere kernel) disappearing in the near future? VMware has already partnered with SanDisk’s FlashSoft team to provide a distributed flash cache filter offering read and write acceleration of virtual machine I/O.

I am curious to see how PernixData will develop their software post version 2.0. I hope to find out on VMworld EMEA 2014!!

See for myself

As always: writing and reading about something is nice, but it is an absolute necessity to try a product for yourself. PernixData provided me, as a PernixPrime, with NFR licenses. I’ve got a nano lab at my disposal, which is perfect for testing server-side caching solutions.

The nano lab contains 3 Intel NUC hosts, each with a single Intel i5-4250U CPU and 16GB RAM. Each NUC is equipped with an Intel DC S3700 series SSD. The NUCs are connected by a Cisco SG300 Gbit switch. A Synology NAS is used as backend storage, providing multiple iSCSI LUNs.

I’m running vSphere 5.5 (build 2068190) and the PernixData FVP host extension for vSphere 5.5, version 1.5.0.5 build 30449. The FVP write policy is configured for write-back with 1 network flash device for data protection, as shown in this picture.

pernixdata-config

With this setup I did the benchmark testing using the VMware IO Analyzer fling, running the max write IOPS workload which is predefined in IO Analyzer. Running just one instance, the performance enhancements are amazing!

First, notice a warmed-up cache repository with a hit rate of 100% during the write tests:

pernixdata-test-hitrate


During the write test we experienced a constant rate of ~56,000 IOPS and a latency of 0.22 ms on the local flash and ~11 ms on the network flash.


pernixdata-test-IOPS

pernixdata-test-lat
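To put the IOPS figure above in perspective, here is a quick conversion from IOPS to throughput. The 512-byte block size is an assumption on my part (the max write IOPS workload in IO Analyzer uses small blocks to maximize the IOPS count; check your workload definition).

```python
def iops_to_throughput_mbs(iops: int, block_size_bytes: int = 512) -> float:
    """Return throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size_bytes / 1_000_000

# The observed ~56,000 IOPS at an assumed 512-byte block size:
print(iops_to_throughput_mbs(56_000))        # 28.672 MB/s
# The same IOPS rate would mean far more throughput at 4 KiB blocks:
print(iops_to_throughput_mbs(56_000, 4096))  # 229.376 MB/s
```

In other words, a small-block benchmark like this stresses the I/O path and latency rather than raw bandwidth, which is exactly what a server-side caching layer is supposed to improve.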

Considering this is a ‘nano’ lab, that is still a pretty awesome performance gain!!

Other sources

While reading about PernixData, some blog posts stood out that are definitely worth reading:

http://willemterharmsel.nl/interview-poojan-kumar-ceo-of-pernix-data/
http://frankdenneman.nl/pernixdata/
http://vmwarepro.wordpress.com/2014/09/17/pernixdata-top-20-frequently-asked-questions/


Enthusiastic

I became an instant fan of FVP: its easy deployment and awesome performance gains… If you’re a PernixData enthusiast yourself, consider becoming a PernixPrime or PernixPro!!!
We will do a write-up on FVP 2.0 as soon as possible.


I am a virtualization enthusiast with a love for virtual datacenters! About 15 years of experience in IT. VMware VCDX #212. Working at HIC (Hagoort ICT Consultancy) as a fully independent consultant/architect!

Niels Hagoort
