VMware ESXi Using Netgear ReadyNAS NFS Connections
I successfully used a $400 NAS from Netgear as shared network storage for an ESXi server. The key is NFS shares and a simple command-line utility. The solution scales well across their product line, which stays very affordable even for the large-capacity units.
I love VMware and virtual servers. They make life so much easier in the data center, saving space and making new servers a much quicker and more cost-effective proposition than buying new hardware for everything. But the longer I use the technology, the more I realize the big benefit requires good shared storage. Virtual machine files are large, and they are held open during normal use. A backup copy of these files is, in effect, your essential “hardware” backup. And if the host machine dies, it takes a long time to copy the virtual machine files over to another host, assuming you have the disk space for them in the first place.
However, most shared storage solutions are expensive. Looking at the recommended setups was eye-popping for a cheap guy like me. There had to be another way. At the local VMware users conference in Pittsburgh I learned that many large VMware shops were using NAS with standard NFS shares to store and run machines. These did not necessarily require a large Fibre Channel or iSCSI SAN to work, and some of the names mentioned were large financial institutions. So I started looking around for a cheap NAS with NFS support.
Netgear’s ReadyNAS
I’m now in love with the ReadyNAS from Netgear. They make a whole product line, starting with mirrored 500 GB storage and running up to 12-drive arrays, and all are really priced to move. I had been thinking about replacing my old home server (NT 4.0, so I do mean OLD). I had less than 90 GB of data and maybe 150 GB in drives. My other problem was that the DAT drive on the backup system had failed, so I needed a new place to save backups from the workstations and server. Dell had a sale in early June with $100 off (instant savings, not a rebate) on the low-end ReadyNAS Duo, so I got the 500 GB model and a second drive from CDW for $60. They don’t sell the mirrored dual-drive configuration by default, probably to keep the price down. I then got a cheap $50 USB 320 GB drive to attach for the backups.
This rig supports NFS, CIFS, AFP, FTP, and even HTTP access to the shares. A built-in backup utility can run scheduled backups to a USB drive attached to one of the built-in ports, and rsync is available to sync with another ReadyNAS at an alternate site. The same USB connections (3 ports; 1 front, 2 back) can be used to share a printer on the network to both Windows and Bonjour clients. There are a bunch of niceties for the home network as well, such as auto-copy to a directory when you insert a USB key, which gets those pictures public fast. There is an application for sharing photos that can be published to the web, though real web services are probably better for this. A bunch of streaming servers are included (iTunes and BitTorrent among them).
For this device you have to manage your own users and groups; it essentially is its own server. It runs a Linux distribution specifically set up for this hardware, and the open-source components are available for download. Obviously, messing with it voids the warranty. The higher end of the product line offers full Active Directory integration and support for the iSCSI protocol with the initiators built into the OS on Mac, Windows, and Linux.
ESXi Connecting to NFS
But my real question was not whether this would replace my aging home server and its backups, but whether I could make ESXi use it to store and run virtual machines. The answer is yes. The key is NFS shares and some command-line mounting on the ESXi side of the equation. I created a share without security that only advertised NFS. However, the Create Datastore wizard in VMware Infrastructure Client would not attach it and gave the error “Mount request was denied by the NFS Server.” I was disappointed, to say the least, but did not give up. Anything you run in Infrastructure Client is really executing commands under the hood, so with a little help from my friend Google I found the command in question: esxcfg-nas.
- Open a command prompt on the server console or an SSH session to the server.
- esxcfg-nas -a -o 192.168.1.10 -s /VolumeName DataStoreName
- Where:
192.168.1.10 is your NAS IP address
VolumeName is the NFS volume name on the NAS
DataStoreName is the ESXi name created by the command that will show up in your datastore list
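Putting the pieces together, a typical console session might look like the sketch below. The IP address, volume name, and datastore name are the same placeholders as above; substitute your own values.

```shell
# Mount the NFS export as a new datastore (placeholder address and names)
esxcfg-nas -a -o 192.168.1.10 -s /VolumeName DataStoreName

# List the currently mounted NAS datastores to verify the mount
esxcfg-nas -l

# Unmount and delete the datastore again if needed
esxcfg-nas -d DataStoreName
```

After the -a command succeeds, the datastore appears in the Infrastructure Client datastore list just like one created through the wizard.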
The IP address of the NAS must be in the same subnet or VLAN as an active IP address on one of your ESXi VMkernel ports. This will not work across VLANs or subnets, even if they are on the same switch fabric; they simply must be on the same network. I did not test stretching the same VLAN across a network, but I’ve read that this will work for remote connections, though it may require that multicast is supported across the VLAN.
This can be on a different network card and port than what you normally use to connect to your ESXi server. My test server has two NICs, one on the main production LAN where I have both a VMkernel port and the main VM network port. I connected the second NIC to another LAN and put a second VMkernel port and the NAS there. This connects fine to the NAS and runs the machines out of the other NIC. That matters here because I left the NFS share wide open so I wouldn’t have to deal with security; since that LAN is completely separated from all the others, it is secured off the back end of the network without any direct routes in. The datastore itself is still visible to your SCP transfer utility for copying files on and off the NAS.
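For reference, that second VMkernel port can be created from the same console with the esxcfg family of commands. This is only a sketch; the vSwitch name, port group name, NIC, and addresses are assumptions for illustration, not values from the original setup.

```shell
# Build a second vSwitch on the storage NIC (names and addresses are examples)
esxcfg-vswitch -a vSwitch1            # create a new virtual switch
esxcfg-vswitch -L vmnic1 vSwitch1     # uplink the second physical NIC
esxcfg-vswitch -A Storage vSwitch1    # add a port group for storage traffic

# Put a VMkernel port on that port group, in the same subnet as the NAS
esxcfg-vmknic -a -i 192.168.2.5 -n 255.255.255.0 Storage
```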
Help for esxcfg-nas
This is just a snapshot of the options you get from the help request at the console.
esxcfg-nas <options> [<label>]
-a|--add Add a new NAS filesystem to /vmfs volumes.
Requires --host and --share options.
Use --readonly option only for readonly access.
-o|--host <host> Set the host name or ip address for a NAS mount.
-s|--share <share> Set the name of the NAS share on the remote system.
-y|--readonly Add the new NAS filesystem with readonly access.
-d|--delete Unmount and delete a filesystem.
-l|--list List the currently mounted NAS file systems.
-r|--restore Restore all NAS mounts from the configuration file.
(FOR INTERNAL USE ONLY).
-h|--help Show this message.
Originally Posted June 24, 2009
Last Revised on November 20, 2010
Wed September 30, 2009, 23:46:05
Hi. I was very interested in your comments re using a Netgear NAS instead of a SAN.
Our company is buying a new server and I am planning to virtualise with ESXi; however, I would prefer not to have the storage in the same box so one can set up another server later for failover.
Speed will be the big issue here. Do you have any recommendations, advice, or insight you can offer?
Many thanks
Dave
Sun October 04, 2009, 04:19:57
NAS speeds are limited by two things, the NIC connection and the number of disk spindles that can serve data. The setup I describe here has only two mirrored disks and would be the slowest in performance.
Basically, if a mirrored disk system is sufficient locally on the VM host then this will also work. But if you would require RAID 5 then you would need more. In my setup I use this in my home network and in a test bed situation at work. Both run only a few servers at a time that are not disk intensive.
Netgear also has higher end devices now like the 2100 that support iSCSI and are fully VMware certified for version 4. I am in the process of testing that over the next month.
This system has RAID 5 and four drive bays. There are also two Ethernet connections that support link aggregation to double the network bandwidth, or you can separate management and SAN traffic by VLAN.
The even newer 3200 has 12 drives for still more performance.
In short, you need to measure disk IO on the servers to be virtualized to be sure how many drives in a system you need and go up the chain from there.
Last issue is support. The system I describe above uses the unsupported Duo connection. In production I’ll be running iSCSI on the 2100, since it is VMware certified and we can get help if there are issues.
Mon January 18, 2010, 18:37:08
Hello, thank you for your great article; it was very helpful. I am wondering what your opinion is: we are currently running 3 VMware ESXi 4 hosts with local storage and 6 production VMs on each host. Do you think we would get at least the local-storage speeds we are currently experiencing if we got a ReadyNAS 3200 with 12 drives, put all 18 VMs on it, and ran them from there? Thank you very much for your thoughts.
Fri January 22, 2010, 15:51:43
Tim, I think the setup you describe would work fine. I have two hosts connecting to a 2100 with a three disk RAID now and sharing six servers.
Make sure to setup the iSCSI connections on a separate VLAN and NIC on the VMware host servers. This gives the traffic a free path with no competition from the production traffic.
I don’t see any performance difference between those machines and the ones on the direct storage with the same hosts.
If you want to be certain you can measure the Disk IO on your VMs and see if the total is too high for the system capacity.
For a Windows box create a perfmon counter for these parameters:
\PhysicalDisk(_Total)\Disk Reads/sec
\PhysicalDisk(_Total)\Disk Writes/sec
\PhysicalDisk(_Total)\Disk Read Bytes/sec
\PhysicalDisk(_Total)\Disk Write Bytes/sec
\PhysicalDisk(_Total)\Avg. Disk Bytes/Read
\PhysicalDisk(_Total)\Avg. Disk Bytes/Write
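If you would rather capture those counters unattended from the command line, a perfmon Data Collector Set can be created with logman. This is a sketch; the set name, 60-second interval, and output path are illustrative, not from the original setup.

```shell
logman create counter DiskIO -si 60 -o C:\PerfLogs\DiskIO -c ^
  "\PhysicalDisk(_Total)\Disk Reads/sec" ^
  "\PhysicalDisk(_Total)\Disk Writes/sec" ^
  "\PhysicalDisk(_Total)\Disk Read Bytes/sec" ^
  "\PhysicalDisk(_Total)\Disk Write Bytes/sec"
logman start DiskIO
```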
On a Linux server, use this command to dump iostat output to a text file for one day (one sample per minute):
LINUX> nohup iostat -dkxt 60 1440 > iostat-for-a-day.servername &
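Once you have that file, a short awk pass can reduce it to an average IOPS figure for a given device. This is a sketch under the assumption that r/s and w/s are the 4th and 5th fields of the extended iostat layout; column order varies between sysstat versions, so check the header line and adjust the field numbers and device name.

```shell
# Average total IOPS (reads + writes) for device sda from an iostat log.
# Assumes r/s is field 4 and w/s is field 5; verify against your header.
awk '$1 == "sda" { total += $4 + $5; n++ }
     END { if (n) printf "avg IOPS: %.1f\n", total / n }' iostat-for-a-day.servername
```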
Mon January 25, 2010, 13:32:28
Any suggestions for a Windows 2008 Hyper-V certified or compatible “iSCSI” NAS? Not exactly VMware, but similar technology.
Tue January 26, 2010, 19:21:22
Well, I don’t see a Hyper-V certification program, but the ReadyNAS did achieve VMware certification for their iSCSI-capable models. What I describe here is only NFS on the low end, but I’ve also now used the 2100 and its iSCSI connection with both VMware and Windows Server 2003.
The iSCSI targets support both CHAP and persistent reservations. They have full support for Windows Server 2008 too, and that initiator is supposed to be much better; the Windows 2003 one kind of sucks.
http://www.readynas.com/?page_id=77
Part of the key with iSCSI is to segment the traffic. This works best if you setup an independent VLAN for the connections and a dedicated NIC on the server to that VLAN. This way production traffic never gets in the way of your LUN connections.
The only server I have problems with is the one where I didn’t have a spare NIC to connect to the iSCSI VLAN and had to make the connection over the production NIC.
The ReadyNAS has two NICs built in. I connect one to production for management access and the second to the iSCSI VLAN just for the LUN connections.
Mon November 23, 2015, 08:00:56
> No, it does exist, for example on the NetApp FAS2020, which is quite an entry-level unit, priced from about 7 thousand. Roman, for some reason I thought VAAI was only in ONTAP 8+ and the 2020 doesn’t have it.
Sat November 28, 2015, 06:47:24
Yes, these ReadyNAS units are not on the level of a NetApp solution; they are an entry-level product, and much less expensive as a result.