Dual Xeon 10Gb - All purpose office NAS

Zemlen

Cadet
Joined
Apr 27, 2020
Messages
7
Hi there.
Building a storage server for a new build office.

The main purposes are:
- Users' files centralised storage
- Video surveillance storage
- Asterisk calls records storage
- Storage for DB backups and VM snapshots
- May be some cloud storage

The hardware:
Decided to base the system on a second hand server hardware.
There is a service in Russia that sells used server hardware and tests it before shipping to the client. Slightly more expensive than buying from AliExpress or eBay, but much more affordable than new. Here is a link, if you are from Russia or somewhere nearby: https://abgreyd.servis2010.ru/
So, the components are:
 

CPU: 2 x Intel Xeon E5-2650v2 (2x8 cores, 2x16 threads, 2.6 GHz): ABGreyd CPU Link
MB: Supermicro X9DRi-LN4F+: ABGreyd MB Link
RAM: 4 x Samsung 32GB DDR3 4Rx4 PC3-12800 REG ECC, 128GB total: ABGreyd RAM Link

OS SSD: 2 x Kingston 120GB SSDNow SA400, striped
SLOG: Samsung 970 EVO Plus M.2 NVMe - 250GB: Amazon.com Link
L2ARC: Samsung 970 EVO Plus M.2 NVMe - 500GB: Amazon.com Link
PCIe NVMe adapter: 2 x PCIe NVMe Adapter with SSD Fan Cooler: Amazon.com Link
HDDs: 6 x Western Digital 6TB Ultrastar DC HC310 RAID Z2: Amazon.com Link

SAS Controller: LSI Broadcom SAS 9300-8i: Amazon.com Link
10Gb NIC: INTEL X520-DA2 (DELL XYT17): Amazon.com Link

Rack case: 19" 4U Exegate Pro 4U4139L: Nix.ru Link

Plus some extra cables, PSU, fans, etc.

The overall build is around $2500

Parts are on the way now. I'm not a complete noob at PC builds, but I'm new to FreeNAS builds.
If you have any criticism, suggestions or other thoughts - you are welcome.
 
Joined
Oct 18, 2018
Messages
969
Samsung 970 EVO Plus M.2 NVMe - 250GB
This is not a good SLOG device; it has no power loss protection. The purpose of the SLOG is to make synchronous writes faster. A synchronous write only returns once the data is in non-volatile storage, such as your pool (if you have no SLOG device) or the SLOG device itself. The issue is that if you use a non-PLP device as a SLOG, a sudden loss of power could easily result in data being lost, because a non-PLP device is subject to some delay between when it tells the system it has the data and when the data has actually been written to persistent storage. I would suggest you either not use a SLOG (and, if performance is too slow, turn off sync writes) or buy a SLOG that has PLP. A fast model that fits M.2 is the Intel Optane SSD DC P4801X. There are good PCIe rather than M.2 options as well.
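To make the two suggestions concrete, here is a sketch of both; the pool name "tank", the dataset name, and the device name "nvd0" are placeholders, not anything from this build:

```shell
# Option 1: attach a PLP device (e.g. the Optane) as SLOG.
# "tank" and "nvd0" are assumed names -- check your actual pool/device names.
zpool add tank log nvd0
zpool status tank              # the device should now appear under a "logs" section

# Option 2: no SLOG; if sync writes are too slow, disable them per dataset.
# This trades up to ~5 seconds of acknowledged writes on power loss for speed.
zfs set sync=disabled tank/somedataset
```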

L2ARC: Samsung 970 EVO Plus M.2 NVMe
This will certainly work as an L2ARC. If you don't want to fiddle with PCIe-to-M.2 adapters you could pick up straight PCIe storage cards, or, if you are willing to sacrifice speed, you could opt for a SATA or SAS SSD.

PCIe NVMe adapter: 2 x PCIe NVMe Adapter with SSD Fan Cooler:
I know nothing about this card. I do know that the Supermicro AOC-SLG3-2M2 (PCIe 3.0 x8 -> 2x M.2) card works. Used in an x8 slot with port bifurcation, it lets you add 2 NVMe drives. Used in an x4 slot, or in an x8 slot without port bifurcation, it lets you add 1 NVMe drive. I believe most dual-socket Supermicro boards support port bifurcation; I don't know about yours specifically. I would check the documentation, and if you cannot find it there, check with Supermicro; sometimes their documentation is missing info about port bifurcation.

6 x Western Digital 6TB Ultrastar DC HC310 RAID Z2
These drives will not come close to saturating your 12Gbps-per-lane SAS card. Unless you're cascading large numbers of HDDs or using tons of SSDs, you likely won't saturate that link. A 6Gbps-per-lane HBA may offer some cost savings.
 

Zemlen

Cadet
Joined
Apr 27, 2020
Messages
7
Thank you for your answer.
I was hoping a server-grade online rack-mount UPS will help me avoid sudden power loss in most cases.
Using a system without a SLOG is not an option. It will be too slow. But I will consider something with PLP: Intel Optane DC P4801X or Samsung 983 DCT. Can't find anything else with PLP at a reasonable price.

As for native PCIe SSDs - there aren't enough of them on the market, and those that exist are way too expensive. That's why I decided to use an adapter card.

As for the Supermicro AOC-SLG3-2M2 - a good thing; maybe I'll think about using it for a newer build, if I find it in retail at a reasonable price.
From its specs I see it works with newer boards starting with "X10". Mine is a bit older.

As for the adapters I'm using - they basically just route the contacts from the M.2 format to PCIe, something like a riser card. There is nothing in them that could fail. Plus a heatsink with a fan.

As for not saturating 12Gbps - yes, sure. I was just able to buy it for a good price, not much different from a 6Gbps model. So, why not?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Using a system without a SLOG is not an option. It will be too slow.

Maybe? I am assuming you are concerned about the speed of sync write for the VM snapshots. Everything else appears to be storage that wouldn’t use sync.

Is that so? Which of your use cases requires sync write?

With a single raidz2, write speeds should be in line with the speed of a single disk. 100MB/s to 180MB/s, maybe. Is that the speed you expect?
 

Zemlen

Cadet
Joined
Apr 27, 2020
Messages
7
Not so much about the VM snapshots, more about SMB read/write by 30+ office employees, plus the design dept. with their huge files, and the accountants' software with its file-based DB. So what I'm concerned about is not a single file write, but simultaneous read/write from a number of users and network applications.

Here are the speeds I'm expecting without SLOG:
6x 4TB, 3 striped mirrors, 11.3 TB, w=389MB/s , rw=60MB/s , r=655MB/s
6x 4TB, raidz2 (raid6), 15.0 TB, w=429MB/s , rw=71MB/s , r=488MB/s
Taken from here: https://calomel.org/zfs_raid_speed_capacity.html
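For reference, the raw capacity math for the actual 6 x 6TB HC310s in this build works out as follows (a back-of-envelope sketch only; ZFS metadata, padding overhead, and the TB-vs-TiB difference are ignored, so real usable space will be lower):

```shell
# Rough usable capacity for 6 x 6TB drives in the two layouts discussed.
DRIVES=6
SIZE_TB=6
RAIDZ2=$(( (DRIVES - 2) * SIZE_TB ))   # raidz2 loses two drives' worth to parity
MIRRORS=$(( DRIVES / 2 * SIZE_TB ))    # three striped 2-way mirrors lose half
echo "raidz2: ${RAIDZ2} TB, striped mirrors: ${MIRRORS} TB"
# prints: raidz2: 24 TB, striped mirrors: 18 TB
```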

Are you saying that this activity doesn't use the SLOG?
 

Tony-1971

Contributor
Joined
Oct 1, 2016
Messages
147
I don't think SMB uses sync writes by default.
 

Zemlen

Cadet
Joined
Apr 27, 2020
Messages
7
I don't think SMB uses sync writes by default.

Are you sure? Do you mean "not used by default, but can be enabled", or "SMB shares cannot use sync writes at all"?
Is there somewhere I can read about it?
 

Tony-1971

Contributor
Joined
Oct 1, 2016
Messages
147
In my FreeNAS I have:
Code:
% zfs get sync tank-big/Movie
NAME            PROPERTY  VALUE     SOURCE
tank-big/Movie  sync      standard  default

and if I copy a 6GB file to this share from a Windows machine, there is no activity on the log device (checked with the zpool iostat command).
I think the best performance you can obtain is with sync disabled, so there is no reason to enable sync writes.
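For anyone wanting to repeat the check: this is how the copy test above can be observed (using Tony's pool name from the quoted output; substitute your own):

```shell
# Watch per-vdev activity once per second while copying the file;
# writes going through a SLOG show up under the "logs" line of the output.
zpool iostat -v tank-big 1
```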
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Windows machines don't use sync writes over SMB by default; macOS does.

If write speed without sync write and without SLOG is N, then write speed with sync write and without SLOG will be much slower than N, and write speed with sync write and with SLOG will, in a best case, approach N.

Sync write is never faster than not syncing.
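The sync behavior being discussed is controlled per dataset; a sketch of the three possible settings (the dataset names here are made-up examples, not anything from this thread):

```shell
# sync=standard is the default: honor sync only when the client requests it.
zfs set sync=standard tank/smbshare
# sync=always forces every write through the ZIL/SLOG: safest, slowest without a SLOG.
zfs set sync=always tank/vmstore
# sync=disabled never syncs: fastest, risks the last few seconds of writes on power loss.
zfs set sync=disabled tank/scratch
```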

I really recommend reading into how ZFS works, what a SLOG is, and the characteristics of file and block storage on ZFS. ZFS behavior is not intuitive, and an understanding of traditional RAID storage is likely to send you down the wrong path.

I’ll need to look at that speed link you posted; intuitively, I don’t expect write speeds that high from a raidz2.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
After a little bit more reading: The cases where speed matters are often also the cases where IOPS matter, such as database and VM access. You get the kind of bandwidth you outlined; you also get a single disk’s IOPS per vdev. Which storage structure is best for you depends heavily on the access patterns of your applications. If this is all “reading and writing large files”, then yes, you are not that concerned about IOPS.

This is an old and still relevant read: https://constantin.glez.de/2010/06/04/a-closer-look-zfs-vdevs-and-performance/
 

Zemlen

Cadet
Joined
Apr 27, 2020
Messages
7
I really recommend reading into how ZFS works, what a SLOG is

Yes, sure. I read some articles about the SLOG. It looks like you are right. As far as I can understand, for SMB it should not give any benefit, though for NFS or iSCSI it can. So for those purposes it's not completely useless. Am I right?
 

Zemlen

Cadet
Joined
Apr 27, 2020
Messages
7
This is an old and still relevant read:

Thanks. It's rather big. Will need some time to read and understand, but I'm definitely going to do this.

Also found some articles which seem useful, and will leave them here for anyone looking for the same info:

What is the ZFS ZIL SLOG and what makes a good one

Exploring the Best ZFS ZIL SLOG SSD with Intel Optane and NAND

Aaron Toponce. ZFS Administration, Part III- The ZFS Intent Log

To SLOG or not to SLOG: How to best configure your ZFS Intent Log
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
VMware on NFS will do sync writes, and I think iSCSI can as well; I am shaky on that. Any time you have sync writes, a capacitor-protected separate log device with blazingly fast IOPS will help.
I am unclear on the need to mirror SLOG devices; I am not sure how data corruption in a SLOG impacts the pool.

I know for L2ARC, ZFS will detect corruption and there is no need to mirror it.

Speaking of L2ARC: that is for reads. It takes space from ARC, and it will always be slower than ARC, which means you will always want to max out RAM before you even think about an L2ARC. Whether an L2ARC is beneficial depends on the size of the read dataset: if it's too large to fit into ARC but would fit into ARC and L2ARC combined, then L2ARC can help. This is easily determined with something like zfs-stats -a, and an L2ARC can always be added for testing and removed or replaced if it doesn't do any good. This is, happily, a much easier discussion than SLOG.
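The add-for-testing / remove-if-useless cycle looks like this ("tank" and "nvd1" are placeholder names for the pool and the NVMe device):

```shell
# Cache (L2ARC) devices, unlike data vdevs, can be added and removed freely.
zpool add tank cache nvd1     # add the NVMe drive as L2ARC for testing
zpool iostat -v tank          # watch whether the cache device actually sees traffic
zpool remove tank nvd1        # take it out again if it does no good
```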

And then there are fusion pools. Those are really cool, but you can't just add a fusion vdev and remove it again if you don't like it. A fusion pool uses a mirror of SSDs for metadata and, optionally, small files, for some value of "small" that depends on your use case and SSD storage space: could be 16KB or as high as 64KB. Since this is a data-holding vdev, it is just as permanent as any other vdev in your pool.
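A sketch of what setting that up looks like; the pool name, device names, and the 32K threshold are all placeholder assumptions, and note again that this vdev cannot simply be removed later:

```shell
# Add a mirrored special (metadata) vdev; it must be mirrored, since losing it loses the pool.
zpool add tank special mirror ada4 ada5
# Optionally also route small file blocks (here, up to 32KB) to the SSDs, per dataset.
zfs set special_small_blocks=32K tank/office
```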

If you have the opportunity to run a simulation of real-world traffic against your NAS before you put it into production, this would help you a lot.
Alternatively, maybe someone here who has practical experience with tuning it for performance can speak up. All I have is theory, no real world experience: My FreeNAS is in my home and doesn’t need to do more than 1Gbit to a couple PCs.
 