I might try FreeNAS

Netdewt

Explorer
Joined
Jan 19, 2021
Messages
98
I am looking into options for building a smallish but fast small business NAS. I am also considering a TrueNAS Mini X+, but I'd like to check out the DIY options. I might be in over my head on this. If anyone has input or recommendations, I would truly appreciate it.

Situation:
- SMB share use only, all macOS clients
- 5TB minimum of usable space in a 'RAID 10' config
- 300-400 MB/s (at least) disk speed for each user; i.e., 1GbE is not enough, 2.5GbE might be
- backed up to a mobile hard drive every night; is all-SSD overkill or too risky?

Possible setup:
4x4TB 3.5" HDD
1x1TB SSD read cache
1x256GB SSD OS disk
16GB ECC RAM
10GbE RJ45 NIC
Intel processors are what I am used to

Supermicro Barebones 5029C-T
This might fit, plus, I would need to find a compatible NIC. Suggestions? I see that Intel is highly recommended.

ASRock Rack E3C246D4I-2T
This looks really good. I don't know much about OCuLink. Is it reliable? This has a built-in 10GbE NIC based on the Intel X550. Does that mean it is compatible with FreeNAS?

iStarUSA S-35-B5SA
This is just a nice looking mini ITX enclosure alternative to the Supermicro.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
fast small business NAS.
If it is for a business, and you are buying good components, you probably won't be saving much money over the cost of going with a ready-made TrueNAS system from iXsystems. Their prices are not bad, and when you buy from them, you get pro support, which is a valuable thing.
300-400 MB/s (at least) disk speed for each user.
More vdevs give more IOPS, so with only four disks, you could only have two vdevs, which is going to limit your speed potential. This is very work-load dependent. What is the purpose of the storage? Most likely, you should seek more disks.
backed up to a mobile hard drive every night; is all-SSD overkill or too risky?
Not sure I understand. Backup to SSD or use SSD in the NAS for main storage?
1x1TB SSD read cache
Most likely, this is not what you think it is; I am guessing you mean L2ARC. Why do you want this?
16GB ECC RAM
Better to maximize RAM (which is used for cache) before adding an L2ARC.
plus, I would need to find a compatible NIC. Suggestions?
Depends on your network infrastructure. What kind of switch are you using?
If you want to get 10Gb speed to the desktop, you may need some expensive gear to make that happen in a Mac network.
This has a built-in 10GbE NIC based on the Intel X550. Does that mean it is compatible with FreeNAS?
I use 10Gb Intel NICs in my systems at work. FreeNAS appears to work well with them.

From what you say, you are willing to spend for performance. That is all the more reason to talk through your performance goals, because you would be cutting performance short by going with undersized hardware.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. for clarity, you might want to review some terminology:
 

Netdewt

Explorer
Joined
Jan 19, 2021
Messages
98
I am looking at TrueNAS as well and getting a quote from them. It’s hard to understand. Working on it. Thanks for your help. I will try to get my terms right. I'd go back and correct but the forum won't let me edit.

We use this as image storage for retouching big layered Photoshop files and capture sessions. We open and save directly to the drive. Currently I use a Thunderbolt 2 Promise Pegasus, but would like to switch to a separate server.

Currently I get 400 MByte/sec on the RAID 5 Pegasus on Thunderbolt 2. My understanding is that anyone connecting to it over a 1Gbe connection will only see 125 MByte/sec (theoretically). I would like to at least get the entirety of the drive speed out of whatever connection I use. Fully utilizing a 10Gbe connection would be 1250 MByte/sec (theoretically) from what I understand.
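For reference, the rough line-rate arithmetic I am going by (ignoring protocol overhead):

Code:
1 GbE   =  1,000 Mbit/s / 8 =  ~125 MByte/s
2.5 GbE =  2,500 Mbit/s / 8 =  ~312 MByte/s
10 GbE  = 10,000 Mbit/s / 8 = ~1,250 MByte/s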

This is the NIC I would get, or there are some Macs that have 10GbE built in.

Not sure I understand. Backup to SSD or use SSD in the NAS for main storage?
If I made the main storage out of SSDs, then I could easily maximize speed, but maybe the risk is not worth it if SSDs are less reliable than HDDs over time. I back up to a 5TB mobile drive every night. It is the largest I can find that is bus-powered, and that is my primary limiting factor.

Most likely, this is not what you think it is; I am guessing you mean L2ARC. Why do you want this?
My understanding is that it would speed up reads for the entire zpool. This is what TrueNAS told me when I asked about read and write caches.
 

Netdewt

Explorer
Joined
Jan 19, 2021
Messages
98
Last message posted before I was ready.

This is what TrueNAS told me:
"Read cache would help but no need for a write cache on SMB shares because all data is written asynchronous."

More vdevs give more IOPS, so with only four disks, you could only have two vdevs, which is going to limit your speed potential. This is very work-load dependent. What is the purpose of the storage? Most likely, you should seek more disks.

I could consider 4 mirrored vdevs (8x2TB HDD). I have no idea how to calculate the speed potential of that.

Depends on your network infrastructure. What kind of switch are you using?
My 1GbE network is all Ubiquiti. I do not currently have a 10GbE switch. I would look for something inexpensive and small like the MikroTik CRS305.

Alternatively, maybe I could use a dual 10GbE NIC on the NAS and crossover cables to connect 2 workstations at the best speeds, then use a 1GbE NIC to connect the NAS to the rest of the network. I haven't done enough research on this to know if it is a possibility.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'd go back and correct but the forum won't let me edit.
Don't worry. I am not trying to be difficult. It just makes communication easier when everyone uses the same words for things.
Currently I get 400 MByte/sec on the RAID 5 Pegasus on Thunderbolt 2. My understanding is that anyone connecting to it over a 1Gbe connection will only see 125 MByte/sec (theoretically).
Correct. That is the bandwidth limit for 1Gb ethernet.
Fully utilizing a 10Gbe connection would be 1250 MByte/sec (theoretically) from what I understand.
The theoretical is almost never possible, but you would have disk limitations before you got close to that speed.
"Read cache would help but no need for a write cache on SMB shares because all data is written asynchronous."
True, but the L2ARC will decrease the amount of RAM available to the ARC cache. RAM (in FreeNAS and TrueNAS) is used as a cache, so it is first important to use the maximum RAM because it is a faster cache than any SSD. Here is an article that talks about it in more detail:

Don't get too bogged down in that, though, because the speed of the pool is probably more important when editing photos. I used to do some photography work, and what I found was that the main disk access happened when reading the photo in and writing it back out after editing; then you move on to the next photo. So having a cache that contains the last photo is not going to help you with the next one. A decent amount of RAM and fast disks are what I would suggest.
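If you want to sanity-check how much the RAM cache is actually doing on an existing FreeNAS box, the ARC counters are exposed through sysctl; a rough check would look something like this (output details vary a bit between versions):

Code:
# Current ARC size in bytes, plus hit/miss counters (FreeBSD/FreeNAS shell)
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses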
I could consider 4 mirrored vdevs (8x2TB HDD). I have no idea how to calculate the speed potential of that.
Very generally speaking, the speed of each vdev is equivalent to the speed of one of the disks that make up the vdev. Keep in mind, this is a generalization, not an exact measure, but the "speed" and capacity of the vdevs are additive. So, with 4 mirror vdevs, you would have the cumulative speed of four disks. For talking purposes, say each disk could read at around 130 MB/s; you would get (theoretically) about 520 MB/s of read access from the pool. Similarly, when writing, if each disk can do around 150 MB/s, you could see around 600 MB/s out to the pool. Reading and writing image files should mostly be sequential I/O, which is faster; random I/O is much slower.
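Purely for illustration, an 8-disk pool laid out as four mirror vdevs would look something like this from the command line (device names are placeholders; in FreeNAS you would build the pool from the GUI), with the same back-of-the-envelope math in the comments:

Code:
# Hypothetical 8-disk pool as 4 mirror vdevs (da0..da7 are example device names)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
zpool status tank

# Rough estimate, assuming ~130 MB/s reads and ~150 MB/s writes per disk:
#   reads  : 4 vdevs x ~130 MB/s = ~520 MB/s
#   writes : 4 vdevs x ~150 MB/s = ~600 MB/s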
 

Netdewt

Explorer
Joined
Jan 19, 2021
Messages
98
Considering an SSD array, I read that mirrors do not make sense because they fail predictably. Would RAIDZ1 be a better choice in that case?

Here's 2 options from the TrueNAS configurator I'm looking at. Which would be faster? Which carries more risk?
 

Attachments

  • hdd.png (205.4 KB)
  • ssd.png (157.7 KB)

Netdewt

Explorer
Joined
Jan 19, 2021
Messages
98
Mirrors fail predictably? Who said this and where did they say it, I think they need a little talkin' to.

Forums... there's always a wide variety of opinions on certain things.

The reason not to use RAID 1 isn't that SSDs don't fail. The reason not to use RAID 1 is that SSDs consistently fail the same way, at the same number of duty cycles. A functional RAID 1 guarantees that you're putting the exact same number of duty cycles on both drives! Congrats, you've borked performance for zero benefit. Our Ops guys actually demonstrated this on an NGINX video cache service. Basically, they were completely rewriting the disk every few hours. They lost multiple servers simultaneously at 2 months.

That doesn't mean there are not array solutions for SSDs. There are vendors that make flash based arrays. They are not using RAID 1. See below for an example. If you need to back up your data, back up your data! If you only have 2 drives for business critical data, you're already doing it so wrong that no level of RAID will ever be able to save you.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Considering an SSD array
Speed wise, there is just no comparison between Solid State and Mechanical disks... If you have the budget for SSD, well it is a lot of money...
I would also suggest going with larger disks because the optimum configuration would be keeping the pool under half full. That is a speed thing. As the pool fills, it slows down. Above 80%, it slows considerably.
I read that mirrors do not make sense because they fail predictably.
Hard drives fail in a fairly random way, regardless of whether it is a RAIDZ1 or mirror vdev. If you only have one disk of redundancy, I would suggest having spare drives on hand so that if you do have a drive failure, you can replace the failed drive right away.
Personally, I use RAIDZ2, so I have two drives of redundancy.

More drives is better... More vdevs is better too.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Forums... there's always a wide variety of opinions on certain things.

The number of uninformed ignorant ones is amazing.


And I have to repeat, the number of uninformed ignorant ones is amazing. And a sales rep not pushing to sell more stuff? He should be fired.

I guess idiots will be idiots. Blowing through endurance rapidly and then whining about it is stupid. Blowing through endurance is what made their system fail, not the use of RAID1. The same thing would happen to those SSD's in RAID5 or if you just used a single SSD.

The physics of SSD's mean that they do have a limited amount of lifespan/endurance. If you actually do plan to blow through that endurance, I absolutely agree you could be setting yourself up for a failure, which is why you would be best off using a heterogeneous pool if you do that. One of my businesses bought a bunch of Intel 535's back in 2015(?) on Black Friday when they were like $159 for a 480GB. With 73 TBW of endurance, and our engineering projections suggesting we'd write that much within a year or two, we absolutely expected them to fail. However, flash prices had been falling rapidly, and it looked like it'd be cheaper to burn through them and replace them in a year or two with a lower-cost, higher-endurance model. But they don't fail at exactly the same time, and you put a spare in so that when one fails, it is immediately replaced, ops gets notified, and a replacement is installed as a new spare in a day or two.

The hell of it was that we kept shipping the 535's back to Intel for RMA, and Intel kept doing warranty replacements with brand new Intel 545s units.

But the other obvious thing to do is to pair them in a heterogeneous pool -- that is, use two different types of SSD with different controllers and flash memory vendors. This is much easier today. You can put an 860 Evo up with a WD Blue or Red, depending on your endurance requirements.

See, what you do is something like this (example is from a hardware LSI RAID system in a hypervisor)

Code:
----------------------------------------------------------------------------------
EID:Slt DID State DG      Size Intf Med SED PI SeSz Model                      Sp
----------------------------------------------------------------------------------
 :0       0 Onln   0 465.25 GB SATA SSD N   N  512B Samsung SSD 860 EVO 500GB  U
 :1       1 Onln   0 465.25 GB SATA SSD N   N  512B Samsung SSD 860 EVO 500GB  U
 :2       2 UGood  - 465.25 GB SATA SSD N   N  512B Samsung SSD 860 EVO 500GB  U
 :3       3 UGood  -  223.0 GB SATA SSD N   N  512B OCZ-VERTEX3 MI             U
 :4       4 Onln   1  223.0 GB SATA SSD N   N  512B SAMSUNG MZ7TD240HAFV-000DA U
 :5       5 Onln   1  223.0 GB SATA SSD N   N  512B OCZ-VERTEX3 MI             U
 :6       6 Onln   2  223.0 GB SATA SSD N   N  512B SAMSUNG MZ7TD240HAFV-000DA U
 :7       7 Onln   2  223.0 GB SATA SSD N   N  512B D2CSTK251M20-0240          U
----------------------------------------------------------------------------------


You'll notice that there's a warm spare available for the 500GB RAID1 and one spare covering the two 240GB RAID1's. Notice the deliberate pairing of drives of different types. By the way, those 240GB drives are from like maybe 2012, while the 500's are brand new because I just got rid of a bunch of old 500GB HDD's. I've got several dozen hypervisors with configurations like this. As long as you aren't burning the endurance, they just keep running; plus, if one does fail, it just gets replaced and things keep working. If you have valuable data and you don't want to deal with downtime, you absolutely DO put your SSD's in RAID, because you really don't know when they will fail, and when they fail, they do tend to just go *pop* and the entire contents vanish.

So I guess I wonder why they didn't replace the failed SSD in the RAID1 before the other one burned up, or why they didn't bother to spare a drive in the array. It really sounds more like incompetence than any failing with RAID1.
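The ZFS version of the same pairing idea would be along these lines; this is only a sketch with made-up device names (ada0 one brand of SSD, ada1 a different one, ada2 the warm spare), not something out of that storcli output:

Code:
# Mirror built from deliberately dissimilar SSDs, plus a warm spare
zpool create flash mirror ada0 ada1 spare ada2
zpool status flash

# When a member dies, swap it out for the spare (or a fresh drive)
zpool replace flash ada1 ada2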
 

Netdewt

Explorer
Joined
Jan 19, 2021
Messages
98
Speed wise, there is just no comparison between Solid State and Mechanical disks... If you have the budget for SSD, well it is a lot of money...
Budget isn't unimportant, but speed is also important. Time is money. Also, at this small size, the cost difference isn't really that much considering the speed I want.

I would also suggest going with larger disks because the optimum configuration would be keeping the pool under half full. That is a speed thing. As the pool fills, it slows down. Above 80%, it slows considerably.
I will keep 50-80% in mind. I have a business partner who likes to use up every last bit of space, and I am always clearing it out.
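A dataset quota sounds like the way to keep that in check; if I understand the zfs tooling correctly, something like this (dataset name is just an example, and it can also be set in the FreeNAS GUI):

Code:
# Cap the share's dataset so the pool never gets pushed past the comfort zone
zfs set quota=5T tank/images
zfs get quota,used,available tank/images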

It really sounds more like incompetence than any failing with RAID1.
I am very glad to know that I am overthinking that.

See, what you do is something like this (example is from a hardware LSI RAID system in a hypervisor)
Do SSDs in a vdev not need to match? I know they should ideally match in size, but the brand and model don't need to match? My understanding is that with HDDs they should.

I like the idea of having a warm spare, for sure.

If TrueNAS would let me buy an X+ or XL without drives, I could do 6x 2TB SATA III SSDs for about $1200 (drives only). That should fully utilize a 10GbE connection, I think.

If I were to continue looking into FreeNAS and building a server on my own, I am afraid of not having the tech support for an important business machine.

I don't see many hot-swap 2.5" drive solutions, but IcyDock has a bunch that install into 5.25" bays.

They have an M.2 version as well that uses OCuLink (the ASRock Rack E3C246D4I-2T also uses this). 12 drives in one 5.25" bay sounds really great:

With such a small drive size, a very small case is also possible:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Do SSDs in a vdev not need to match? I know they should ideally match in size, but the brand and model don't need to match? My understanding is that with HDDs they should.

Your understanding is wrong.

Many years ago, we had this thing called "spindle sync" which was an electronic signal that caused hard drives to spin in rotational sync, to make it possible for a RAID controller to issue multiple commands in parallel and receive responses from all drives expeditiously. That died early on in the days of SCSI as caches got larger and disks would cache entire tracks. But this absolutely required exact model matching and usually exact firmware too.

Server manufacturers and disk array manufacturers had several ulterior motives in perpetuating that model: if you only have a single part for "1GB disk", that's easy to stock, to maintain spares for, etc. You get better vendor discounts from your drive manufacturer if you buy more drives. You are less likely to run into a firmware glitch or a manufacturing run defect -- but if you do, you are in serious trouble, because all your eggs are in that basket.

You will hear several generations of kool-aid-drinking storage engineers swear up and down that matched drives are important, yet they will be unable to explain a plausible reason why if challenged on it. It's just a religious-like belief.

For ZFS, it's best if the disks match in size, and for good measure they should be close in performance metrics. If not, it's OH MY GOD HUMAN SACRIFICE, DOGS AND CATS LIVING TOGETHER ... oh wait, no, the other thing, it's not really a big deal.

Lots of ZFS system builders will be careful and cautious to space out drive purchases to make sure they don't get all their drives from a single batch of drives. This one single thing is a really good idea, since sometimes drive manufacturers have ... challenges.

Unfortunately, the number of drive manufacturers out there has imploded in recent years, so if you want to build a heterogeneous HDD array of large drives, you only have two manufacturers to choose from. That is a practical limit, not a reflection on the concept of heterogeneous storage pools.

There are sometimes other practical considerations. For example, in the LSI RAID array design shown above, I paired two 860 EVO's against each other. The Samsung SSD's are very highly respected and not particularly prone to random unexpected failures. I had the option to make that one 860 EVO 500GB against a WD Blue 500GB SSD in RAID 1. I did not. The reason was simple; the WD Blue 500GB has endurance of 200TBW and a price of $50, while the 860 EVO has endurance of 300TBW and a price of $57. These drives aren't being used in a role where they are likely to be burned out, but that's an entire extra 100TBW for $7.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
@jgreco What I have observed is that in the last couple of years vendors seem to have aligned on drive sizes, at least for rotating disks. Whatever I buy - e.g. 4TB - has the same sector count, which eases replacement quite a bit.
In the old days all disks were slightly different, and if your replacement disk was smaller - tough shit. That led to OEM firmware from Dell, HP and the like forcing a particular size. Things are better in that regard now.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've observed that for more than the "last couple of years".

There's obviously been a winnowing of drive manufacturers in recent years, and also of drive models. It could be that drive manufacturers got the message that we were all tired of playing "The Price Is Right" with drive sizes, and they maybe realized that their own product lines were shrinking too.

It's still a thing with SSD's but at least it is better. I've got 480GB, 500GB, and 512GB SSD's (and 960GB/1TB) and it's annoying to have to be careful about that.

It was interesting, however, that current model 500GB SSD's are exactly the same size as 500GB HDD's from ten years ago.

It's generally an improvement.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It could be that drive manufacturers got the message that we were all tired of playing "The Price Is Right" with drive sizes
I recall seeing, at least a few years ago, a Wikipedia article explaining how capacities were now standardized by agreement. I probably even posted it here, but I'm not finding it now--for whatever my memory of "wikipedia said" is worth.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I recall seeing, at least a few years ago, a Wikipedia article explaining how capacities were now standardized by agreement. I probably even posted it here, but I'm not finding it now--for whatever my memory of "wikipedia said" is worth.

MONOPOLISTIC SIZE-SETTING CONSPIRACY!!!!
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
We're kind of wandering off the topic. I'll do my part by bringing up:
- 512 byte vs. 4k sector mismatches
- Three manufacturers... I thought Toshiba still makes spinning rust drives, maybe I'm wrong.


One thing I haven't seen addressed is the hot vs. warm storage requirement. Editing photos will require bandwidth & IOPS. If there's a stable storage archive, you may be able to save some money by running two storage pools: a slow RAIDZ2 pool built from conventional disks, and a smaller all-SSD pool for your active editing tasks. This would allow you to use small, inexpensive SSD's configured to maximize IOPS, and set up replication tasks to archive to slower bulk storage at night.
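As a rough sketch of that nightly archive step (pool and dataset names here are hypothetical; in FreeNAS this would normally be a periodic snapshot task plus a replication task in the GUI):

Code:
# First night: snapshot the fast SSD pool and copy it to the slow RAIDZ2 pool
zfs snapshot fast/editing@nightly-1
zfs send fast/editing@nightly-1 | zfs receive cold/editing

# Later nights: send only what changed since the previous snapshot
zfs snapshot fast/editing@nightly-2
zfs send -i fast/editing@nightly-1 fast/editing@nightly-2 | zfs receive -F cold/editing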
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
Yes, SSDs die. Yes, SSDs might die more often than HDDs if you abuse them; HDDs will just give you bad performance when abused.

Personal track record of dead SSDs:
- 4x OCZ -> just in workstations/laptops; those are really shit
- 2x Toshiba consumer/prosumer built into business laptops died around the TBW, +/- 100TB
- 3x STEC ZeusIOPS died way past the rated lifespan of 11PB; all went to 20PB and above. Died as predicted.
- 5x Crucial consumer stuff. All died in laptops/workstations just before or after the freakishly low TBW
- 7x Intel datacenter SATA SSDs. Most of them died way beyond the TBW
- 3x Samsung prosumer SSDs all died way before the TBW, some at as low as 10% of the TBW
- 2x ADATA consumer died exactly at the TBW

It depends on what you do with them.
Take those Samsung prosumer drives (the Pro series): if you keep them loaded to 90% full all the time, there is not enough free space for wear leveling and they will die like tomorrow...

For your workload I would go with RAIDZ2 + hot spare. I would use some SLC-cached prosumer SSDs in the 1-4TB range. Let's say 16x 1TB in RAIDZ2 (consisting of 3 vdevs of 5 drives each, plus 1 hot spare).
Set the dataset to a quota of max 80% of the available space and be happy. That would grant you about 6TB of usable space.
Add 3x 4TB HDDs in RAIDZ1 and snapshot your productive dataset to it.
Add 32-512GB of memory and enjoy the day.
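A sketch of that layout in zpool terms, purely for illustration (all device names are placeholders):

Code:
# 16x 1TB SSD: three 5-wide RAIDZ2 vdevs plus one hot spare
zpool create ssdpool \
  raidz2 da0 da1 da2 da3 da4 \
  raidz2 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 \
  spare da15

# Working dataset capped at roughly 80% of what the pool can hold
zfs create -o quota=6T ssdpool/work

# Separate backup pool: 3x 4TB HDD in RAIDZ1 for the nightly snapshots
zpool create backup raidz1 ada0 ada1 ada2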
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Here's 2 options from the TrueNAS configurator I'm looking at. Which would be faster? Which carries more risk?

"Risk" depends on your pool setting, not on the hardware. 4-bay is enough to run one pool of two striped mirrors, as you want. 4*4 TB SSD (about $1600 in drives ?) gives you 8 TB raw; set a quota to cap it at 5 TB and don't tell your partner what's really inside. :wink:
8-bay gives you four mirrors for higher performance (especially relevant if you don't go for SSD), or enough for the hot+cold option of @rvassar in one enclosure, with one SSD pool for performance and 4 HDD in RAIDZ2 for security.
Otherwise, get more than 16 GB RAM and forget about L2ARC.
 