We are a small creative production company. Our blog covers everything from the art to the tech of producing content. Our current project is the feature film Marriage.
10 Oct 2013 | Paul | tech, post production, review

View our latest production, Marriage, then please like us on Facebook to keep in touch!

Indie Post Network, 10GigE and Synology

A big facilities house will spend a lot of time and money on its infrastructure and network, since their performance has a knock-on effect on practically everything it does. Typically, networks will be SAN-based and the costs can run stratospheric. Here's a quick rundown of some typical options.

Typical Network Choices

FibreChannel requires an FC-capable server, an FC switch, and an FC host bus adapter in each machine running it. There are numerous varieties, with speeds ranging from 2 to 16 Gb/s (gigabits per second). For comparison, a 1GbE Ethernet network is, well, 1 Gb/s; of course, 1GbE rarely reaches its full potential, and typical real-world speeds might be 80% of that. A FibreChannel SAN is relatively expensive: FC cards can be in the £500+ region, an 8-port switch around £2,000, and SAN servers more than that. You can source older-generation hardware on eBay, but some of that appears more trouble than it's worth, and the speeds aren't that great. Each host also needs FC driver software, which is an added expense. FC is quite complicated and requires a bit more support, but it does offer the best performance.

An iSCSI SAN is another alternative. It's basically SCSI running over Ethernet links; it's cheaper to deploy and can occupy a nice halfway house.

For our uses, however, we have a number of workstations and render boxes currently running on 1GigE, and we don't need FC-level performance all the time or on all machines. A render box, for example, is constrained more by number crunching than by IO speed. For us, the most important features of a poor man's SAN are redundancy, low overhead and speed. We looked at a number of options: biting the bullet on FibreChannel (just not justifiable), a DIY FibreChannel solution (not enough time, plus reliability worries), iSCSI (too much hassle with host adapters and software drivers), and older technologies off eBay such as InfiniBand (fast, but we'd be stuck if it didn't work). So we looked at NAS storage (prosumer Ethernet-based networking).

We'd been happily using a ReadyNAS from NetGear, which was a solid performer, but we were won over by the 8-bay Synology DS1812+ a while back. We loaded this up with 8 x 3TB drives with two-drive redundancy (around 18TB of usable space). We used this on set to back up the camera originals. The box is very quiet and has proved reliable and fast: on a tiny on-set network we could write and read at near-full 1GigE speeds (120MB/s). Access times slow down a little the fuller it becomes (an anecdotal observation).
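Those on-set figures line up with simple arithmetic; here's a two-line sanity check (the helper names are my own, not from any tool we used):

```python
def link_mb_per_s(gigabits: float) -> float:
    """Convert a link speed in Gb/s to MB/s (8 bits per byte)."""
    return gigabits * 1000 / 8

def usable_tb(drives: int, drive_tb: float, redundancy: int) -> float:
    """Usable capacity of an array with N-drive redundancy (RAID 6 / SHR-2)."""
    return (drives - redundancy) * drive_tb

print(link_mb_per_s(1))      # 125.0: the ceiling behind the ~120 MB/s we saw
print(usable_tb(8, 3, 2))    # 18: the ~18TB of usable space quoted above
```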

Then back at base we have a Synology DS3612xs, which is a 12 drive NAS server.

Link Aggregation

Both these Synology products offer link aggregation, which means that two or more 1GbE links can be combined into a larger single link. The DS1812+ has 2 ports; the DS3612xs has 4. Using a suitable switch that understands the link aggregation protocol, these were combined into 2Gb/s and 4Gb/s pipes. On a practical level, however, link aggregation doesn't make that much difference. It helps when multiple machines access the same server, because individual machine loads can be spread a little across the different physical connections. We did put a 2-port Ethernet adapter into a host machine and set up link aggregation across it, but for various OS-related reasons you don't see twice the performance. At best I saw about 120% of the single-link speed, and slightly better responsiveness when reading and writing at the same time. It's not a way to get twice the speed at all.
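The reason a single client doesn't see double speed is that aggregation protocols like LACP hash each flow onto one physical link; only different flows (different clients) get spread across links. A toy model (my own sketch, not Synology's actual hash policy):

```python
# Toy model of hash-based link selection in a 2-port LACP bond.
# A single transfer is one flow, so it rides exactly one 1GbE link.

import hashlib

LINKS = 2  # e.g. the two 1GbE ports on the DS1812+

def link_for_flow(src: str, dst: str) -> int:
    """Pick a physical link for a flow, as a hash-based policy might."""
    digest = hashlib.md5(f"{src}->{dst}".encode()).digest()
    return digest[0] % LINKS

# One workstation copying to the NAS: every packet of that flow hashes
# identically, so the whole copy is capped at one link's speed.
print(link_for_flow("workstation1", "nas"))

# Several clients get spread (statistically) across the links, which is
# where aggregation actually helps.
used = {link_for_flow(f"ws{i}", "nas") for i in range(8)}
print(used)
```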

Synology servers

10GbE

One of the reasons we went down the DS3612xs route was that it offered the ability to add a 3rd-party Ethernet card to get 10GigE. This is the next generation of Ethernet and, whilst quite expensive initially, prices are beginning to fall. The newer cards (Intel's X540 series, for example) are lower cost and can work with normal copper cables; in fact, pretty much the same infrastructure as 1GbE. It's important to note that 10GbE can also run on fibre; the difference at this level is that fibre's latency is marginally better, but copper costs quite a bit less. You should also be using Cat6 (or Cat6a) cabling, which offers a bit more protection and shielding. 10GigE switches are still expensive, so what we did in the short term was add a 2-port 10GigE card to the server and run cables directly to cards in a couple of workstations to test and see the results.

One caveat, though: within Synology DSM (the management software), it's important to enable jumbo frames not only on the network interfaces (and on the host network cards themselves) but also via the checkbox that enables jumbo frames for Windows SMB sharing. Without that checkbox, the read performance of the NAS is compromised. This was a tricky one to track down: if you see slow reads and fast writes, it could be this setting making the difference. It's also worth turning off write caching for tests, because caching can skew write results higher than a sustained transfer would achieve.
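The jumbo-frame point is worth a quick back-of-the-envelope illustration (my own arithmetic, assuming plain IPv4/TCP with no options): per-frame overhead is fixed, so a 9000-byte MTU wastes proportionally less of the wire and, more importantly at 10GbE, means roughly six times fewer frames and interrupts for the host to process.

```python
# Why jumbo frames help: per-frame overhead (Ethernet header + FCS +
# preamble/inter-frame gap, plus IP and TCP headers) is paid once per
# frame, so a 9000-byte MTU wastes less wire and needs ~6x fewer frames.

WIRE_OVERHEAD = 14 + 4 + 20    # Ethernet header, FCS, preamble + gap
PROTO_OVERHEAD = 20 + 20       # IPv4 + TCP headers, no options

def payload_efficiency(mtu: int) -> float:
    """Fraction of on-the-wire bytes that are actual payload."""
    return (mtu - PROTO_OVERHEAD) / (mtu + WIRE_OVERHEAD)

print(f"MTU 1500: {payload_efficiency(1500):.1%}")   # 94.9%
print(f"MTU 9000: {payload_efficiency(9000):.1%}")   # 99.1%
```

The efficiency gain looks small on paper; the bigger win in practice is the reduced per-frame CPU and interrupt load at 10Gb/s rates.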

Our tests are skewed towards what we mainly do: reading and writing large movie files off the server, or reading and writing thousands of individual frames. Both operations place quite a load on the server, and neither is solely about read/write speeds; latency and seek times play a large part too.

Some Results

The 1GbE NAS is our 8-bay Synology DS1812+, fully loaded in SHR-2 (RAID 6, so two-disk redundancy). The 10GbE test is our DS3612xs; the volume in the test case is a 6-drive SHR-2 (actually 4 data drives plus 2 redundancy), so only having 6 drives is a limitation in itself.

The first test uses the free Blackmagic Disk Speed Test app. The app appears to read and write from RAM, which means the source disk is not a bottleneck, so it's a good test of theoretical performance. The test was run a few times and an average taken.

Blackmagic Speed Test    2 x RAID 0   1GbE NAS   Local SSD   10GbE
Read Speed (MB/s)            75           53        178        300
Write Speed (MB/s)           75           79         96        330
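For anyone without the Blackmagic app, a rough stand-in is easy to script. This sketch (paths and sizes are illustrative, not what we ran) times a sequential write and read against whatever path you give it, such as a mounted NAS share:

```python
# Minimal sequential-throughput test, in the spirit of the Blackmagic app.
# Point it at a mounted share to test the NAS.

import os
import time

def throughput_mb_s(path: str, size_mb: int = 256):
    """Return (write_rate, read_rate) in MB/s for a sequential test file."""
    chunk = os.urandom(1024 * 1024)              # 1 MB of incompressible data
    target = os.path.join(path, "speedtest.bin")

    start = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                     # avoid write-cache skew
    write_rate = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(target, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_rate = size_mb / (time.perf_counter() - start)

    os.remove(target)
    return write_rate, read_rate

# Example: write_mb, read_mb = throughput_mb_s("/Volumes/nas_share")
```

One caution: the read pass may be served from the OS page cache if the file fits in RAM, so use a test file larger than memory (or remount the share between passes) for honest read numbers.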

The next test opens an After Effects comp file with hundreds of referenced movies. This is quite an overload and very slow to open normally, because After Effects scans all the movies when the project is opened. That's why it's so time-consuming, and why the 10GbE results offer a nice real-world increase in speed.

Open AE Project            1GbE NAS   10GbE NAS
Open project (mins:secs)     12:38       3:41

The next series of tests are file system copies. These can involve multiple bottlenecks, so I take a single folder of files (50.7GB of QuickTime movies) and copy it from one source to another. In each case we note what is providing the bottleneck.

Source -> Destination   Time     Rate       Bottleneck
SSD to normal drive     24:09    37.6MB/s   write speed starts high then slows (write caching)
SSD to Array            10:45    84.5MB/s   write speed of array (just two RAID 0 drives)
SSD to 1GbE             11:30    79MB/s     the 1GbE link itself
SSD to 10GbE             4:22   208MB/s     SSD read speed
10GbE to 10GbE           2:46   328MB/s
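The Rate column is just size divided by elapsed time. A tiny helper (my own, for readers checking numbers against their own copies) gets within a few percent of the table; the residual difference is presumably down to rounding of the times and how the folder size was measured:

```python
def copy_rate_mb_s(size_gb: float, mins: int, secs: int) -> float:
    """MB/s for a copy of size_gb (binary GB) taking mins:secs."""
    return size_gb * 1024 / (mins * 60 + secs)

print(round(copy_rate_mb_s(50.7, 2, 46), 1))   # 312.8, vs 328 in the table
print(round(copy_rate_mb_s(50.7, 4, 22), 1))   # 198.2, vs 208 in the table
```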

Resolve rendering. Here we take a folder of 4,500 EXR files, play them back, then render them out to uncompressed EXR files (11.8MB each), reading from and writing to the same volume. We set the timeline to 48fps, so it will play back as fast as it can. The render applies a simple gain; we don't want to test the graphics card here, just IO performance. Real-world behaviour comes into this: reading and writing lots of files from and to a single drive is not optimal, but even so, performance relative to SSD is not bad at all, especially compared to 1GbE or a normal hard drive.

Resolve Render    Playback           Render (mins:secs)
SSD to SSD        22 fps (200MB/s)   11:58
10GbE to 10GbE    20.5 fps           16:05
Drive to Drive     4 fps             33:13
1GbE to 1GbE      10.5 fps           28:04
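Playback is essentially an IO-rate problem: the volume has to deliver frame size times frame rate. My own quick arithmetic against the 11.8MB EXR frames above:

```python
def playback_mb_s(frame_mb: float, fps: float) -> float:
    """Sustained read rate needed to play frames of frame_mb at fps."""
    return frame_mb * fps

print(round(playback_mb_s(11.8, 24.0), 1))    # 283.2 MB/s for realtime 24 fps
print(round(playback_mb_s(11.8, 20.5), 1))    # 241.9 MB/s at the 10GbE volume's 20.5 fps
```

So the 10GbE volume was sustaining roughly 240MB/s of small-file reads, just short of realtime for frames this size.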

Conclusion

What is interesting is that, firstly, yes, 10GbE makes a big real-world difference, and we're happy with the results. We're not maxing out the network connection at all; our bottlenecks are actually the drives at either end. Going forward, a 10GbE network makes much more sense. Eventually the adapters should appear on motherboards and the switches should become more prosumer and cheaper; it seems a natural progression for wired networking. The only real alternatives are link systems such as Thunderbolt, but that's not really being positioned in this market.

It does make me wonder why 10GbE hasn't actually taken off faster.

If you like this entry, please share!


© Inventome Limited. All Rights Reserved.
