
Playing with ESXi and Synology in the home lab…

So I finally bit the bullet and bought a NAS for my home lab. Ended up with a Synology DS411 due to some good timing from a NewEgg sale.

I went with Synology for two reasons:

1. I could supply my own disks (I don’t want the “Green” drives that came with most other NAS units).
2. I always hear REALLY good things about them; they’re always highly recommended.

I happened to already have a set of 4 Hitachi Deskstars sitting around, so I tossed them in and started running some tests with IOMeter (the “Run All Tests” option). I tested a RAID 5 group of all 4 disks presented to the ESXi host as an iSCSI target, then ran the same test with the same RAID 5 group presented to the ESXi host via NFS. I was amazed at the results.

From a personal standpoint, I fully expected iSCSI to blow away NFS. Wow, was I wrong. iSCSI pulled a whopping 270 IOPS from the IOMeter test. Then I ran the exact same test, only via an NFS share… NFS blew iSCSI out of the water with 1,070 IOPS. My jaw is still on the floor from that result. Here are some of the rest of the results:

I’m also seeing excellent everyday performance: 45 MBps of write throughput during a Storage vMotion (SvMotion).
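To put the IOPS numbers and the SvMotion figure on the same scale, here’s a quick back-of-the-envelope conversion. This is just a sketch: the block size is an assumption (IOMeter’s “All” run mixes access sizes), and 4 KB is used purely for illustration.

```python
# Convert an IOPS figure to approximate throughput at a fixed block size.
# NOTE: the 4 KB block size is an assumption for illustration only; the
# IOMeter "All" run actually mixes several access sizes.

def iops_to_mb_per_s(iops: float, block_size_kb: float = 4.0) -> float:
    """Approximate throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size_kb / 1024.0

print(f"iSCSI: {iops_to_mb_per_s(270):.1f} MB/s")   # ~1.1 MB/s at 4 KB
print(f"NFS:   {iops_to_mb_per_s(1070):.1f} MB/s")  # ~4.2 MB/s at 4 KB
```

Small random I/O at 4 KB naturally yields far lower MB/s than the 45 MBps seen during the SvMotion, which is dominated by large sequential writes, so the two figures aren’t in conflict.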

So to say the least, I’m quite impressed at what this little guy can do with 4 SATA 7200RPM drives and a single NIC.

One other unrelated tidbit: how much cooler the new Hitachi Deskstar runs compared to the older ones. A full 12°F cooler. That was another surprise…

Published in VMware


  1. Hey Kyle,

    Nice post – may I ask what RAID level your 4 Hitachi disks were in? Also, it looks like you used IOmeter for your testing – which test specifically did you use, if you remember?


    • Kyle Ruddy

      I used a RAID5 with a single NFS volume.
      I ran the “All” test.

  2. Jako Bergson

    If iSCSI is slower, maybe you created one big volume first and then the iSCSI targets on that volume? That will give you slow iSCSI.

    You will have fast iSCSI if you start without a volume, creating the iSCSI targets directly on the disks. The speed difference is about 2x in my tests.

    • Kyle Ruddy

      Perhaps I’m not understanding, but you’re saying to turn on the iSCSI target service and then create the volume for the iSCSI target?

      I’m not sure I had the ability to do it in that order, but I’ll give it a chance the next time I’m doing some testing on it.

      • Jako Bergson

        The default setup after first install gives you one big volume across the disks in Storage Manager, and if you create an iSCSI LUN you can only use the first option: iSCSI LUN (Regular Files). This is dynamic: you have a Linux file system, and the iSCSI LUNs are files on that file system.

        But if you delete your volume, you can create an iSCSI LUN (Block-Level) – Multiple LUNs on RAID – directly on the disks, and it’s faster.

        • bw

          @Jako, I just got a Synology 1512+ and am putting together an ESXi host to run VMs from the NAS and to store my movies/music on it too.

          Is it *better* to create separate iSCSI LUNs per VM?

          There is nothing on this box now, so deleting the volume would be easy. Is there much admin work involved with creating and maintaining iSCSI LUNs (block level)? Thanks

        • bw

          @Kyle: The theoretical limit for the NFS V2 protocol is 8K. For the V3 protocol, the limit is specific to the server. On the Linux server, the maximum block size is defined by the value of the kernel constant NFSSVC_MAXBLKSIZE, found in the Linux kernel source file ./include/linux/nfsd/const.h. The current maximum block size for the kernel, as of 2.4.17, is 8K (8192 bytes), but the patch set implementing NFS over TCP/IP transport in the 2.4 series, as of this writing, uses a value of 32K (defined in the patch as 32*1024) for the maximum block size.

          All 2.4 clients currently support up to 32K block transfer sizes, allowing the standard 32K block transfers across NFS mounts from other servers, such as Solaris, without client modification.

          Question: what are the rsize AND wsize set to on your system?

          [more data here]
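For anyone wanting to check this on a Linux NFS client, the negotiated rsize/wsize show up in the mount options reported in /proc/mounts (or by `nfsstat -m`). A minimal sketch of pulling them out, assuming a typical NFSv3 options string (the sample line here is illustrative, not from Kyle’s system):

```python
# Parse rsize/wsize out of an NFS mount-options string such as the one a
# Linux client shows in /proc/mounts. The sample string is illustrative.
import re

def nfs_transfer_sizes(options: str) -> dict:
    """Return parsed rsize/wsize values (in bytes) from NFS mount options."""
    return {key: int(val) for key, val in re.findall(r"(rsize|wsize)=(\d+)", options)}

sample = "rw,vers=3,rsize=32768,wsize=32768,hard,proto=tcp"
print(nfs_transfer_sizes(sample))  # {'rsize': 32768, 'wsize': 32768}
```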

  3. Nick

    Hi Kyle

    Interesting results. Is the Synology DS411 VMware certified?

    I’ve been finding it difficult to choose a NAS and was looking at the Synology DS213.

    Should I just opt for a NAS that supports NFS ?

    Any advice would be appreciated.

    • Jako

      I used a Synology DS411 for VMware cluster backup, and it was a disaster: at first it was fast, but when a second backup started simultaneously it almost stopped. So I found out that the Synology is not really VMware compatible.
      Now I have a physical backup server (Win2008) and the Synology is connected via iSCSI to that Windows server, and the backup server is connected to the VMware cluster storage using FC.
      Now everything works fine and fast. The first full backup runs at 2.5-3 GB/min, ~50 MB/s, ~500 Mbit/s. I think this is a good result.

      So don’t use this NAS as a VMware datastore 🙁
      Also, delete the initial volume and create an iSCSI LUN (Block-Level): Multiple LUNs on RAID.

      Then you can also create a smaller volume for file sharing.
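The unit conversions in that backup figure can be sanity-checked in a couple of lines. This sketch assumes binary units (1 GB = 1024 MB), which backup tools commonly report; the ~500 Mbit/s figure presumably includes rounding and protocol overhead.

```python
# Sanity-check the backup throughput conversions: GB/min -> MB/s -> Mbit/s.
# Assumes binary units (1 GB = 1024 MB), as backup tools commonly report.

def gb_per_min_to_mb_per_s(gb_per_min: float) -> float:
    """Convert a GB/min rate into MB/s."""
    return gb_per_min * 1024 / 60.0

low, high = gb_per_min_to_mb_per_s(2.5), gb_per_min_to_mb_per_s(3.0)
print(f"{low:.1f}-{high:.1f} MB/s")            # ~42.7-51.2 MB/s, i.e. roughly 50 MB/s
print(f"{low * 8:.0f}-{high * 8:.0f} Mbit/s")  # ~341-410 Mbit/s of payload on the wire
```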

  4. Thomas Mun

    I also had ‘slow’ performance on a default install of iSCSI. It topped out at about 43 MBps (megabytes per second) while all the others were in the 110 range.

    Did you try the ‘blow away’ (deleting the volume) and use block level? Did it improve performance?


  5. Hi Kyle, thanks for your post. Are there any differences between the Synology DS411 and the Synology DS411slim, or is it the same NAS?

    • Kyle Ruddy

      Niklas: The biggest difference I can tell is that the one in the link you’ve provided only supports the 2.5″ drives. Everything else seems pretty comparable.

  6. […] or its configuration influence performance. For example I saw interesting reports that NFS share on Synology outperform iSCSI target (at least in terms of IOps), and in other source there were graphs showing negligible performance […]
