Synology DSM 5.1 and Volume Migrations

For the past week or so, I have been experimenting with two Synology units, a DS1812+ and a DS1815+. I had about 5.5TB of shares and LUNs to move around, and along the way I made some observations I wanted to share with you.

At first, I thought that SHR with 1-disk redundancy would fit my needs, so I used 4 disks to build an SHR volume. The resulting performance for large files was acceptable, but when multiple users were accessing the system, latency clearly increased.

From there, despite the disadvantage of not being able to expand later, I went with a RAID10 volume of 4 disks.

In doing so, I have observed the following results:

Test conditions: all disks SATA, no link aggregation. "Thumbnail refresh" measures refreshing the thumbnails of 3,000 photos (5-10MB each) in a single folder on a Windows 8.1 client.

| Configuration | Read, files 10MB or less | Read, files 100MB and more | Write, files 10MB or less | Write, files 100MB and more | Thumbnail refresh | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| SHR (1-disk redundancy) | 20-30MB/sec | 95-120MB/sec | 15-20MB/sec | 85-110MB/sec | ~10-15 thumbnails/sec | |
| SHR (1-disk redundancy) + 1x SSD read cache | 20-30MB/sec | 95-120MB/sec | 15-20MB/sec | 85-110MB/sec | ~10-15 thumbnails/sec | I got no benefit from the read cache no matter what I tried; I may revisit this later. Perhaps a 2-disk SSD cache is the way to go. |
| RAID 0 of 4x1TB disks | 40-70MB/sec | 100-120MB/sec | 30-60MB/sec | 95-115MB/sec | ~30-45 thumbnails/sec | |
| RAID 10 of 2x4TB + 2x3TB disks | 80-100MB/sec | 110-125MB/sec | 70-80MB/sec | 105-125MB/sec | ~35-40 thumbnails/sec | Disks in this set were a bit faster/newer. |
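If you want to reproduce a rough version of these numbers yourself, the sketch below is one way to do it. This is my own minimal illustration, not Synology tooling: it times copying a batch of files and reports MB/sec. The temp directories stand in for a mounted NAS share, and real runs should use data sets large enough to defeat client-side caching.

```python
import os
import shutil
import tempfile
import time

def copy_throughput_mb_s(src_files, dest_dir):
    """Copy a list of files to dest_dir and return overall MB/sec."""
    total_bytes = sum(os.path.getsize(f) for f in src_files)
    start = time.monotonic()
    for f in src_files:
        shutil.copy(f, dest_dir)
    elapsed = time.monotonic() - start
    return (total_bytes / (1024 * 1024)) / elapsed

# Example: create a handful of 10MB test files, then time the copy.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()  # in practice, a mounted NAS share
for i in range(5):
    with open(os.path.join(src, f"file{i}.bin"), "wb") as fh:
        fh.write(os.urandom(10 * 1024 * 1024))

files = [os.path.join(src, name) for name in os.listdir(src)]
print(f"{copy_throughput_mb_s(files, dst):.1f} MB/sec")
```

Run it once against the "10MB or less" group and once against a set of larger files to get figures comparable to the two read/write columns above.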

 

Concurrent Activities

I can say that RAID10 is significantly ahead of SHR in its ability to handle concurrent requests. For example, two in-NAS file transfers are currently running: I am migrating a 600GB iSCSI LUN from a RAID0 volume to the RAID10 volume (each on a separate set of disks), and simultaneously moving large video content from RAID0 to RAID10. Let's take a look at the statistics:

From there, let me show you some resource monitor views:

Above, disks 1-2-3-4 form the RAID10 volume and disks 5-6-7-8 the RAID0 volume; the transfer runs from RAID0 to RAID10 as indicated earlier. The transfer rate is fairly stable, although I have seen spikes of up to 400MB/sec on writes. Note that the write figure (247.7MB/sec) is cumulative across all disks regardless of RAID level: RAID10 mirrors every write, so each logical write is counted twice. That is why writes appear to be double the reads in the screen above.
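The double-counting is easy to sanity-check with a bit of arithmetic. This small sketch (using the 247.7MB/sec figure from the screenshot) converts the cumulative physical write rate back to the logical rate the client actually sees:

```python
MIRROR_COPIES = 2  # RAID10 writes every block to two disks

def logical_write_rate(physical_mb_s, copies=MIRROR_COPIES):
    """Cumulative physical write MB/sec across all disks -> logical MB/sec."""
    return physical_mb_s / copies

# 247.7 MB/sec summed across the mirrored pairs is really ~123.85 MB/sec
# of client-visible writes, which lines up nicely with the read side.
print(logical_write_rate(247.7))
```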

Still, for handling smaller files and a whole LUN move concurrently, I'm pretty satisfied with the results. DSM seems able to keep the disks quite well utilized.

Here’s another screenshot showing a bit higher levels.

This time, pay attention to the IOPS. Given that these are merely 7200RPM SATA drives, I'm happy with the IOPS levels I'm seeing here, as well as the effective throughput.
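As a rough sanity check on those IOPS figures: a 7200RPM SATA drive is commonly estimated at somewhere around 75-100 random IOPS per spindle (a general rule of thumb on my part, not a measured figure for these disks), but large sequential transfers like these post much higher effective numbers because each I/O moves a big contiguous chunk. The relationship is simple:

```python
def estimated_throughput_mb_s(iops, io_size_kb):
    """Effective throughput = IOPS x I/O size."""
    return iops * io_size_kb / 1024

# Four spindles doing large sequential I/O: even a modest 500 IOPS
# at 256KB per request already accounts for ~125 MB/sec.
print(estimated_throughput_mb_s(500, 256))
```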

Service Continuity During Moves

I have moved this 5-6TB of content, in-NAS and out-of-NAS, concurrently without a single hiccup. For example, when moving a volume, despite the warning that "all services will stop", service to other shared folders continues without any issue, and iSCSI LUNs operate just fine as well. Only the resource being moved, be it a volume or an iSCSI LUN, is taken offline *during* the move. It's good to see that everything else keeps working.

Decision Rationale for the Count of Disks per Volume

Although I could have used all 8 slots to form a RAID10 volume, for production use at home I chose not to. Honestly, my performance needs are met at this level, and I like to keep the remaining 4 slots open/available for the following reasons:

  • Fewer spinning disks means less power consumption.
  • If I need to migrate to another volume, I can use those 4 slots and perform an in-NAS volume move with little impact on users and apps, and less reconfiguration of cloud backups, etc.
  • Should I ever want to try SSD caching, I will need 2 slots for it; this option keeps that possibility open.
  • I want to keep a hot spare, which will use a slot as well.

I will keep posting my experiences as I continue to implement other capabilities of this Synology DS1815+ NAS.

I will say, though, that Windows Server Storage Spaces does provide RAID10-like performance on an expandable volume configuration (volumes can grow, though not shrink). That is an opportunity for Synology to improve.

Thanks for reading.

One response to “Synology DSM 5.1 and Volume Migrations”

  1. Thanx for sharing this.

    After upgrading DS1815+ to DSM6 I wanted to move everything from ext4 to btrfs because of snapshot support.
    I have enough free space on disk group, so I created a new btrfs volume and wanted to move folders.
    I stopped at “all services will stop”.
    Synology could be more clear what services will actually stop: all or just the one being moved 🙂

    Your post encouraged me to continue.

    Regards, Miro

    BTW, All moves went through without problems.
