Need some opinions on a build that will replace a SAN.

Daniel Claesson

Dabbler
Joined
May 31, 2016
Messages
35
Hi,

I want to replace an older SAN solution for a team of video editors (Adobe Premiere etc.).

Today there are 5 video editing workstations (3 Mac Pros and 2 PC workstations) that connect to a SAN with iSCSI over 10GbE and to a fileserver (SMB) over 1GbE.
The SAN hosts an individual "scratch disk" for each workstation and a larger volume for shared material (read-only for 4 of the 5 workstations).

This solution is not ideal and does not perform well enough. All this data (about 50TB) is backed up by a Sony ODA system that works very well.
The network is a newly installed full Cisco Meraki stack with both 1GbE and 10GbE switches.

I plan to scrap the whole SAN system and replace it with a FreeNAS fileserver instead, adding local NVMe SSD "scratch disks" to each workstation.

The build I'm looking for feedback on is the following:
- Server: Supermicro SuperServer 6028R-E1CR12L (https://www.supermicro.com/products/system/2u/6028/ssg-6028r-e1cr12l.cfm)
- CPU: 1x Intel Xeon E5-2620v4
- RAM: 4x 16GB DDR4 ECC REG
- Boot device: 2x Supermicro SuperDOM 32GB SATADOM
- SLOG: 1x Intel Optane SSD 900P 280GB
- L2ARC: 1x Intel Optane SSD 900P 480GB
- Storage: 12x Seagate Enterprise Capacity 3.5" HDD 10TB SAS 12Gb/s 256MB cache 512e

Setup thoughts:
- Regarding SLOG & L2ARC: I picked the Optane 900P over the Optane P4800X because of price. In the case of a "meltdown", the working data is on the "scratch disks" of each workstation, and the other data needed for each project can be accessed from the ODA system. Not as simple as from the fileserver, but in an emergency the data can be fetched from the backup archive while waiting for a full data recovery to the fileserver. The ODA system is also connected with 10GbE and gives reasonably good speeds.

A stupid scenario or a somewhat reasonable calculated risk?

I also don't really know if the SLOG or L2ARC are needed. I plan to first test the performance without them; I can return both units if not needed.

- Storage: I will need 70TB of storage to start with, which will cover the team's storage needs for at least 12-18 months. Later on I will have to add a JBOD chassis. I will test a couple of pool layouts: 3 vdevs of 4x10TB RAIDZ1 and 2 vdevs of 6x10TB RAIDZ2 (sketched below).
My main concern regarding storage is performance: will a 12-disk system with a "sane" zpool config generate enough speed, ~400-500MB/s to 2 workstations simultaneously, or should I start with a system with more (and smaller) disks?
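
For concreteness, here is roughly how I would create the two candidate layouts from the FreeNAS shell (pool and disk names are just placeholders; in practice I would build the pool through the GUI):

# Layout A: 3 vdevs of 4x10TB RAIDZ1 (~90TB raw data capacity, 1-disk redundancy per vdev)
zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7 raidz1 da8 da9 da10 da11

# Layout B: 2 vdevs of 6x10TB RAIDZ2 (~80TB raw data capacity, 2-disk redundancy per vdev)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11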

Best Regards
Daniel
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
fileserver
- SLOG: 1x Intel Optane SSD 900P 280GB
If you're using SMB there is not much point in having a SLOG; SMB defaults to async writes. It looks like SMB does support "sync" writes, but I am unsure of how they are handled in FreeNAS. If you're running iSCSI, be sure to set sync=always on your dataset. I guess you could do this with SMB too, but again, I'm unsure of how this is handled.

If you don't care about data loss in the event of a crash or SLOG failure, there is no point in using one, and you can set your dataset to sync=disabled. You may lose a bit more data, but if you have to recover anyway it won't make much difference.
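
Either setting is a one-liner from the shell; "tank/editors" below is just a placeholder dataset name:

# Force every write through the ZIL (pairs with a fast SLOG; the safe choice for iSCSI)
zfs set sync=always tank/editors
# Or accept losing the last few seconds of writes on a crash in exchange for speed
zfs set sync=disabled tank/editors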
will a 12 disk system with a "sane" zpool config generate enough speed, ~400-500MB/s to 2 workstations simultaneously or should i start with a system with more (and smaller) disks?
I have an 8-disk system with 32GB RAM that can push 1.3GB/s over dual-path 8Gb/s Fibre Channel. The L2ARC does not do anything for me. I suspect it won't do much for you either, unless you're all working with the same files over and over. But it sounds like you're going to copy the files to the local scratch disk and work with them from there, so caching likely won't help much. Personally, I would spend the money on more RAM.
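
If you want to sanity-check that on a running box, the ARC hit/miss counters are exposed via sysctl on FreeBSD/FreeNAS; if the hit rate is already high with RAM alone, an L2ARC has little left to add:

# Cumulative ARC hits vs. misses since boot
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses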
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
On a side note, iSCSI is a bit slower than Fibre Channel, and you will likely want to look into SMB 3 and all that multichannel goodness. I know nothing about it but have heard good things.
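
I haven't tried it myself, but as I understand it the Samba-side switch looks roughly like this (a sketch for the service's auxiliary parameters; multichannel support in Samba was experimental for a long time, so check your version before relying on it):

server multi channel support = yes
aio read size = 1
aio write size = 1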
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
People place too much performance confidence in RAIDZ1/2. Yes, it's nice to have some redundancy, but you run into the RAID IOPS wall, where you have to wait for some number of devices to complete their tasks before you can move on to the next. If you want to push the IOPS way up, you build a stripe of mirrors. You get redundancy, and the ability to do two things at once on each stripe.
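
A sketch of what that looks like with the 12 drives in this build (disk names are placeholders). Each mirror is its own vdev, so the pool gets six vdevs' worth of IOPS, at the cost of 50% usable capacity:

# Stripe of six 2-way mirrors (RAID 10 style): ~6x the IOPS of a single vdev, 60TB usable from 12x10TB
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9 mirror da10 da11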
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Yeah, I don't think he's going to care much about IOPS. I would imagine in his case it's all about throughput and capacity.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
As the muscle car people say: "There's no replacement for displacement."
kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
As the muscle car people say: "There's no replacement for displacement."
I'll give you a ride in my 300+hp 2.0 turbo; I'll bet you change your mind... My point was that he will not need RAID 10 to get those speeds. Granted, he also doesn't want RAIDZ3 12 drives wide either.
 

Daniel Claesson

Dabbler
Joined
May 31, 2016
Messages
35
Hi, and thanks for the input.

The SAN was installed before my time, so that "turd" is not my doing :D

I guess I have to pick 2 out of the magical 3. So to clarify my end goal: I want SMB speed (SMB3 multichannel is something I planned to explore) and storage; super-duper safety is not that high on the list, more a nice-to-have than a must-have. The backup system works very well, is well tested, and staff know how to get files back without bugging the "IT guy" :rolleyes:
The team can continue to work even if the server is down for 2-3 days. They will of course complain a lot, but the projects they are working on will not stop.

SLOG: I have done some reading and found some who recommend it and some who don't. A bit confusing, but if the majority of comments in this thread state that it will not be beneficial, I will not go ahead and order one.

Storage: I can of course test a bunch of scenarios before I put the server into production (and I will of course do that), but I was more asking whether a 12-bay chassis is big enough as a starting point for the storage pool and target speeds I need.
It is not that much more expensive to pick a 16-bay or even a 24-bay server.

So the question is... is 12 drives a good starting point to begin exploring different zpool setups, or should I go with a bigger server and more drives, say 16 drives instead?
I have the budget for it, but I don't want to "overspend" on the storage server. There are other places I can put $$ into.

Best Regards
Daniel
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
staff knows how to get files back without bugging the "IT guy"
Wow they know what files are? I'm impressed, you have them well trained! haha

12 is a good starting point but does not leave a lot of room. 2x 6-drive RAIDZ2 vdevs with 10TB disks is "80TB", in reality more like mid-70TB. The nice thing is that you can add cheap disk shelves as you need them. I was looking at playing with some old NetApp boxes.
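Rough math behind that: each 6-wide RAIDZ2 vdev keeps 4 data disks, so 2 x 4 x 10TB = 80TB of raw data capacity. Drives are sold in decimal TB, so that's only about 72TiB as ZFS reports it, and slop space plus metadata trim it a little further, which is how "80TB" on paper ends up in the mid-70s in practice.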
As for the speed, I can't speak to 10GbE and SMB, BUT over Fibre Channel I hit 1,300MB/s+ on large sequential reads with two paths. Again, I'm using RAID 10, but you can build it with the 2x6 Z2 and test. Add more shelves as needed. :D

EDIT: I think @Stux had a great thread about testing SLOG devices, and pointed out that you can test the maximum theoretical benefit by making a RAM disk and adding it as a log device (please, not in production!) to see if it's worth spending anything on real hardware.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Wow they know what files are? I'm impressed, you have them well trained! haha

12 is a good starting point but does not leave a lot of room. 2x 6-drive RAIDZ2 vdevs with 10TB disks is "80TB", in reality more like mid-70TB. The nice thing is that you can add cheap disk shelves as you need them. I was looking at playing with some old NetApp boxes.
As for the speed, I can't speak to 10GbE and SMB, BUT over Fibre Channel I hit 1,300MB/s+ on large sequential reads with two paths. Again, I'm using RAID 10, but you can build it with the 2x6 Z2 and test. Add more shelves as needed. :D

EDIT: I think @Stux had a great thread about testing SLOG devices, and pointed out that you can test the maximum theoretical benefit by making a RAM disk and adding it as a log device (please, not in production!) to see if it's worth spending anything on real hardware.

Was going to suggest it ;)

https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/
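
The short version, as a sketch for a FreeBSD/FreeNAS shell (pool name and size are placeholders, and again: test pool only, never production):

# Create an 8GB RAM-backed disk and attach it to the pool as a log device
mdconfig -a -t swap -s 8g -u 1
zpool add tank log /dev/md1
# ...run your sync-write benchmark, then detach the log and destroy the RAM disk
zpool remove tank md1
mdconfig -d -u 1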


Personally, I think a 24-bay 4U enclosure is a bit of a sweet spot.

RAIDZ2 6-wide is probably the best compromise between capacity, redundancy, and IOPS for large-file storage.

Start with 12 disks, then you can grow to 18 or 24 for more speed/storage.
 

Daniel Claesson

Dabbler
Joined
May 31, 2016
Messages
35
Hi again,

Thanks for linking that thread, interesting read.

I will start with a bigger enclosure; the local Supermicro rep here gave me a good deal on a 24-bay system with the same specs as the 12-bay one, only about $200 more.

I will start out testing a 2-vdev 6x10TB RAIDZ2 setup first. If that doesn't give me good enough results, I will go for a striped-mirror (RAID 10) setup with more disks.
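
For the testing itself I will probably start with something crude like a large sequential dd run from the server's shell before trying real Premiere workloads (dataset path is a placeholder; note that compression must be off on the test dataset, or ZFS will compress the zeros away and inflate the numbers):

# 32GB sequential write, then read it back
dd if=/dev/zero of=/mnt/tank/test/ddfile bs=1M count=32768
dd if=/mnt/tank/test/ddfile of=/dev/null bs=1M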

I'm skipping the SLOG completely, as it seems it will not benefit my use case at all.

I will keep this thread updated with how it goes as soon as I have the parts etc.
Delivery time is estimated at about 3-4 weeks.

Regards
Daniel Claesson
 

SMnasMAN

Contributor
Joined
Dec 2, 2018
Messages
177
Hey, how did your performance turn out? I'm pretty interested as I'm building a similar setup. Thanks!
 

Daniel Claesson

Dabbler
Joined
May 31, 2016
Messages
35
Hey, how did your performance turn out? I'm pretty interested as I'm building a similar setup. Thanks!

Hi, sorry for the late reply.

In the end we decided to go with a TrueNAS system; together with the customer we decided that support from iXsystems was the way forward.
So we helped out with the deployment and setup of a TrueNAS M40 + expansion shelf, a total of 8U.

Regards
Daniel Claesson
 