I've run into this issue with a couple of customers over the past couple of weeks, and there isn't anything definitive out there that I could find, so I thought I'd write about it. These customers need disks in a VM larger than the 2TB - 512 bytes that vSphere currently supports. They have a couple of options to work around this limitation, but each adds a bit more complexity to their environments.
One nugget of knowledge that I confirmed with @VMwareStorage last week: when presented to a vSphere host, an NFS volume's size is restricted only by the disk array itself. For example, if my NetApp running Data ONTAP 8.x can do a 50TB volume, I can present that volume to vSphere.
The customer is using an NFS datastore, running Windows VMs on that datastore, and they require a 6TB volume. They have a requirement to install an application (a poorly written one) that needs local disk. (The application cannot map to an NFS or CIFS export/share; it must use a disk the OS sees as local.) Currently with NFS we cannot use RDMs as a solution, so we are restricted to the maximum VM disk size of 2TB - 512 bytes. (@VMwareStorage has indicated to me that RDMs on NFS are coming in the future.)
NFS = we can create multiple virtual disks (VMDKs) for that VM and present them to the OS as local disks. Once they're added, we can convert them to Dynamic Disks within Windows and span (concatenate) them into a single volume on a GPT partition, giving us the one large contiguous volume we need.
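To make the sizing concrete, here's a quick back-of-the-envelope sketch (plain Python, nothing vSphere-specific) showing how many maximum-size VMDKs it takes to span a 6TB volume. The gotcha: because each disk tops out at 2TB minus 512 bytes, three max-size disks fall just short of 6TB, so you need a fourth small member (or slightly smaller members across the board).

```python
import math

TB = 2 ** 40                       # 1 TB in bytes (binary, as vSphere counts it)
MAX_VMDK = 2 * TB - 512            # per-VMDK limit: 2TB minus 512 bytes
TARGET = 6 * TB                    # the customer's 6TB volume requirement

disks_needed = math.ceil(TARGET / MAX_VMDK)
shortfall = TARGET - 3 * MAX_VMDK  # how far three max-size disks fall short

print(disks_needed)   # 4
print(shortfall)      # 1536 (bytes short with only three disks)
```

So plan on four VMDKs (or three sized a hair under the limit plus headroom), then span them with a Dynamic Disk in Windows.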
iSCSI = if available, we can create a large LUN and present it to the VM as a virtual RDM. Once it's added to the OS, we simply format it with a GPT partition and away we go. You could also use a software iSCSI initiator inside the VM itself to bypass the hypervisor altogether, but that has other implications when it comes to backup/recovery.
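If you prefer the command line over the vSphere Client's Add Hardware wizard, the virtual-compatibility RDM mapping file can be created from the ESXi shell with vmkfstools. A sketch, where the NAA identifier, datastore, and VM folder names are placeholders for your environment (note the mapping file itself must live on a VMFS datastore):

```shell
# List the LUNs the host can see, to find the big LUN's NAA identifier
ls /vmfs/devices/disks/

# Create a virtual-compatibility RDM pointer file on an existing VMFS datastore
# (-r = virtual RDM; -z would create a physical/pass-through RDM instead)
vmkfstools -r /vmfs/devices/disks/naa.<LUN-identifier> \
    /vmfs/volumes/<vmfs-datastore>/<vm-name>/bigdisk-rdm.vmdk
```

Virtual compatibility mode is what keeps the disk eligible for vSphere snapshots, which matters for the backup discussion below.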
There are two obvious risks with these large volumes: backup and recovery. The primary reason we use a virtual RDM in the iSCSI solution is so that we can continue to use the vStorage APIs for backup, enabling the advanced backup technologies that ensure we can get the job done within the prescribed backup window. Some sort of backend disk-array snapshot with an offload to tape would obviously be best for backups here, but it would complicate recovery. For the NFS solution, vCenter snapshots of the VM could take quite a while, so you would have to tune your timeout values accordingly. Depending on the backup product you're using, putting an agent inside the VM may be the better choice for this solution. You have to weigh the good with the bad, as always.
I hope this short article has helped a few people out there in the internets. Let me know what you think in the comments below. Have I missed anything?