Today I want to talk a bit about VMFS versus NFS datastores. There are more options available to choose from, such as vVols, on which I wrote a blog post a while back, but today we will focus on VMFS and NFS.

Both VMFS and NFS are file systems, but with some major differences. First of all, VMFS sits on top of block-based storage (iSCSI, FC, but also ESXi local storage), whereas NFS is a network file system commonly used in Unix/Linux environments.

VMFS: the quick version

VMFS, or Virtual Machine File System, was created by VMware and is a high-performance cluster file system that provides virtualization-optimised storage for virtual machines. Using VMFS, you can share your datastores across multiple hosts, enabling you to leverage features like Storage vMotion and vSphere HA.

With VMFS you can massively scale your environments across storage systems and protocols like FC or iSCSI. There is also no need to manage VMFS; vSphere does this for you. The only thing you need to do is mount the LUN, and vSphere will format it as a VMFS datastore. So all you need to do is keep an eye on capacity and performance needs, and VMFS will take care of the rest.
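To make that workflow concrete, here is a rough sketch of what it looks like from the ESXi shell; the device path and datastore name are placeholders, and in practice most people do this through the vSphere Client wizard instead:

```shell
# List the block devices (LUNs) the host can currently see
esxcli storage core device list

# Format the LUN as a VMFS 6 datastore
# (the naa.* device path below is a placeholder for your own LUN)
vmkfstools -C vmfs6 -S my_vmfs_datastore /vmfs/devices/disks/naa.600000000000000000000001:1

# Verify the new datastore shows up as a mounted file system
esxcli storage filesystem list
```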

VMFS supports snapshot technologies, and DR and backup integration from a wide range of vendors such as Veeam, NetApp, Commvault and Rubrik.

Some more in-depth info on VMFS can be found here.

NFS: the quick version

NFS was created back in 1984 by Sun Microsystems as a shared network file system. NFS works over TCP/IP and can be mounted as a file system within vSphere; the NFS exports are hosted on NAS storage systems. Similar to VMFS, NFS can be used as a shared file system to enable vSphere features like HA and Storage vMotion. It also supports snapshotting, and DR and backup integrations similar to VMFS (but with some differences).

Because NFS is already a file system, a datastore mounted in vSphere over NFS will not be formatted with VMFS.
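Mounting an NFS export as a datastore is a one-liner from the ESXi shell; as a sketch, with the NAS hostname, export path and datastore name being placeholders for your own environment:

```shell
# Mount an NFSv3 export from a NAS as a vSphere datastore
esxcli storage nfs add --host=nas01.example.com --share=/vol/datastore1 --volume-name=nfs_ds1

# List the NFS mounts on this host to confirm the datastore is there
esxcli storage nfs list
```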

Best practices for using NFS with vSphere can be found here.

VMFS versus NFS: the differences

As stated before, VMFS and NFS are two different types of file systems. VMFS is native to vSphere, while NFS is its own file system. This results in some major differences, but also in overlapping features.


NFS:

  • External, non-vSphere managed
  • File storage
  • Can be scaled up and down (resize directly visible)
  • Storage savings (deduplication) aware
  • HA and DRS support
  • Does not support boot from SAN (I know, sounds obvious)
  • Storage multipathing support


VMFS:

  • HA and DRS support
  • Block storage
  • Scale up
  • Boot from SAN
  • Storage multipathing support
  • MSCS support
  • Site Recovery Manager support
  • RDM mapping to VMs
Source: VMware

Main benefits of both technologies


NFS:

  • Deduplication and compression aware
  • Up and down scaling
  • Thin provisioning by default
  • Easy restore from storage snapshots (e.g. granular restore capabilities)
  • No HBAs or other special adapters needed


VMFS:

  • Optimised Storage DRS capabilities
  • I/O control
  • Easier disaster recovery support
  • Automated CFS capabilities
  • Intelligent cluster volume management

So with this in mind you might be asking yourself: which solution should I pick? Well, the answer is not that simple; you need to weigh the pros and cons and see what makes the most sense in your environment.

I myself have a slight preference for NFS, because back in my sysadmin days we had nice NetApp boxes sitting in our datacenter. This made provisioning NFS datastores easy, and as we also had good automation tooling in place, storage growth and shrinkage was quick and easy. Next to that, the directly visible impact of storage efficiency helped a lot with usage reporting.

And finally, not having to deal with FC networking equipment was an added bonus, as was simply using TCP/IP, especially because a big part of our infrastructure was Ethernet-based.

So in the end it is up to you to decide which way to go, and I hope you choose wisely.


By Arjen Kloosterman

Sr. Solutions Engineer @ NetApp
