Plans to build a 12 drive SSD array

Status
Not open for further replies.

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
I'd be interested to see the outcome of that test, as I have seen many discussions at a theoretical level cover the claim that a RAID VDEV (RAIDZ1/2 or other types) is only as fast as one of its member devices, hence everybody saying that a pool of many mirrored VDEVs is the best way to get performance while keeping some redundancy.

According to the theory, RAIDZ1/2 should kill over 90% of your potential performance with 12 devices in a single VDEV.

Notably, RAID0, or in FreeNAS terms "stripe", should not be subject to that rule, as it is the same as each device in the pool being its own VDEV.
VDEV, I'm not sure what that is, but raidz1/2 (for HDDs at least) does scale well beyond one-drive performance as you add more. My 8-drive raidz2 peaks around 650MB/s writes and 750MB/s reads

Now, an OS that does what you just explained is UnRAID. That's the drawback of being able to throw together just any drives (though you can use an SSD cache, which negates the drawback; also, a drive failure only costs you the data on that one disk, and you can still have parity to negate even that... I'm getting off topic).

Yes, RAID0 would give me ALL the performance, but even with a backup array, rebuilding the SSD array after a drive failure would suck, and I couldn't utilize that much throughput even with 20Gbit. Even 40Gbit couldn't carry what it'd be capable of, so I'd rather build in redundancy.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
@sretalla I think just for the sake of the numbers I'll test four configurations. Stripe, & raidz1,2,3.

Stripe, I think, will be so far off the chart I won't have a way to measure its throughput. 1/2/3 should be in the realm of measurable. I'll probably create a baseline using my otherwise planned config (encryption, lz4, default block size). I will definitely run a backup array, but I don't know if snapshots are a real backup solution (I know that a REAL backup solution would be a separate machine entirely; that day will come, eventually). I'm familiar with how they're used as a "time machine" of sorts to recover files you didn't mean to delete or edit. I don't know if they can be set up to just back up all the data on one array to another, say nightly. I'll have to read up on what backup software options FreeNAS has, but snapshots might be it.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
@sretalla I think just for the sake of the numbers I'll test four configurations. Stripe, & raidz1,2,3.

Stripe, I think, will be so far off the chart I won't have a way to measure its throughput. 1/2/3 should be in the realm of measurable. I'll probably create a baseline using my otherwise planned config (encryption, lz4, default block size). I will definitely run a backup array, but I don't know if snapshots are a real backup solution (I know that a REAL backup solution would be a separate machine entirely; that day will come, eventually). I'm familiar with how they're used as a "time machine" of sorts to recover files you didn't mean to delete or edit. I don't know if they can be set up to just back up all the data on one array to another, say nightly. I'll have to read up on what backup software options FreeNAS has, but snapshots might be it.
Look into replication jobs. You can replicate to a different pool on the same host or to a different host. Either would count as a backup.
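If you prefer the CLI, the same thing done by hand is roughly this (the pool and dataset names below are made up, not anything from this thread; the commands are built as strings and echoed as a dry run):

```shell
# Hypothetical pool/dataset names -- substitute your own.
SRC="ssdpool/data"
DST="hddpool/backup"
SNAP="manual-$(date +%Y%m%d)"

# Snapshot the source, then send the full stream to the backup pool.
# Echoed as a dry run; run the printed commands for real on the NAS.
SNAP_CMD="zfs snapshot ${SRC}@${SNAP}"
SEND_CMD="zfs send ${SRC}@${SNAP} | zfs recv -F ${DST}"
echo "$SNAP_CMD"
echo "$SEND_CMD"
```

The GUI replication job does essentially this for you on a schedule.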


 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
Look into replication jobs. You can replicate to a different pool on the same host or to a different host. Either would count as a backup.
Replication jobs: is this something that creates a copy as data is added to the array, or would it copy pre-existing data too?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Replication jobs: is this something that creates a copy as data is added to the array, or would it copy pre-existing data too?
A replication job uses snapshots to take everything, then keeps it updated with changes as new snapshots are taken (and snapshots can be scheduled in addition to this).
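Under the hood, the update step is an incremental send; a rough sketch with made-up names (echoed as a dry run):

```shell
# Hypothetical names; assumes the first snapshot was already replicated in full.
SRC="ssdpool/data"
DST="hddpool/backup"
OLD="auto-20190101"
NEW="auto-20190102"

# Incremental send: only the blocks that changed between OLD and NEW
# cross the wire. Echoed as a dry run; execute it for real on the NAS.
INCR_CMD="zfs send -i ${SRC}@${OLD} ${SRC}@${NEW} | zfs recv ${DST}"
echo "$INCR_CMD"
```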


 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
A replication job uses snapshots to take everything, then keeps it updated with changes as new snapshots are taken (and snapshots can be scheduled in addition to this).
I'll read up on this, as I'll probably use it to make the mechanical array the backup if my research doesn't turn up any alternatives (I've never used replication or snapshots).

Since you wanted to know, I'll post back when I have the array built with all the performance results. It is going to be some time, though. I do have the money, but I have to ensure I have sufficient funds left over for everything else (life responsibilities).
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'll post back when I have the array built with all the performance results.
But, wait, you said:
I should mention that the whole server is built and has been running since January of 2016.
Which is it? Did you already build it, or are you going to build it? I am confused.
 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
Which is it? Did you already build it, or are you going to build it? I am confused.
The server itself is built and has been running an 8-drive HDD raidz2 up until now. The array of 12 SSDs that is to go into the built server is not yet installed.
 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
@sretalla Performance results are in. They're not amazing. The RAID0/5/7 results were thrown out due to poor testing methodology. The final configuration, RAID6, was tested using Bonnie++ from inside a jail. Results were:

Writes: 730MB/s
Rewrites: 545MB/s
Reads: 1.64GB/s

Amazing reads, but I had my hopes up to see similar writes and didn't. I don't know if this is all you can get even with SSDs or if I'm missing a setting somewhere.
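For anyone wanting to reproduce this, the Bonnie++ run was along these lines (the path and size here are illustrative, not my exact command; echoed as a dry run):

```shell
# Placeholder test path and size; -s should be at least 2x system RAM
# so reads can't be served from the ARC instead of the disks.
TESTDIR="/mnt/ssdpool/benchmark"
FILESIZE="64g"

# -n 0 skips the small-file creation tests; -u sets the user to run as.
# Echoed as a dry run; run the printed command inside the jail for real.
BONNIE_CMD="bonnie++ -d ${TESTDIR} -s ${FILESIZE} -n 0 -u root"
echo "$BONNIE_CMD"
```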
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You probably still need a SLOG device to offload the ZIL write so that the data is not being written twice.
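If you try one, attaching it is a single command; the pool and device names below are only examples (echoed as a dry run), and in production you'd want to mirror the SLOG:

```shell
POOL="ssdpool"    # hypothetical pool name
SLOG_DEV="nvd0"   # hypothetical NVMe device node

# Attaches a separate log (SLOG) vdev to hold the ZIL for sync writes.
# Echoed as a dry run; run the printed command for real on the NAS.
SLOG_CMD="zpool add ${POOL} log ${SLOG_DEV}"
echo "$SLOG_CMD"
```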

 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
VDEV, I'm not sure what that is, but raidz1/2 (for HDDs at least) does scale well beyond one-drive performance as you add more. My 8-drive raidz2 peaks around 650MB/s writes and 750MB/s reads

Sequential performance scales with drives in the group (vdev)

Random performance does not, and is often measured in IOPS (I/O operations per second)
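A rough back-of-envelope illustration of that difference, using assumed per-drive numbers rather than anything measured in this thread:

```shell
# Assumed per-drive figures for illustration only (not measured here).
DRIVES=12
PARITY=2             # RAIDZ2
PER_DRIVE_SEQ=450    # MB/s streaming per SATA SSD (assumption)
PER_DRIVE_IOPS=80000 # random IOPS per SATA SSD (assumption)

# Sequential throughput roughly scales with the data drives in the vdev...
SEQ_EST=$(( (DRIVES - PARITY) * PER_DRIVE_SEQ ))
# ...while random IOPS for the whole RAIDZ vdev stay near a single drive.
IOPS_EST=$PER_DRIVE_IOPS

echo "sequential ~${SEQ_EST} MB/s, random ~${IOPS_EST} IOPS"
```

Which is why mirrors (many vdevs) win for random workloads like VMs.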
 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
You probably still need a SLOG device to offload the ZIL write so that the data is not being written twice.

From what I was told, a SLOG device is only good for sync writes. Since this is a file server and I'm only using async, that's useless to me.

What someone has recommended is turning off sync. The downside of that isn't an issue for me, but I don't know if I'll see a benefit either, since I'd only found the sync setting discussed in the context of a SLOG. Since I have none, I don't know if it'll do anything.
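For reference, the commands to flip and check it per dataset look like this (the dataset name is made up; echoed as a dry run):

```shell
DATASET="ssdpool/data"   # hypothetical dataset name

# sync=standard honors sync requests, sync=disabled treats them as async,
# sync=always forces everything through the ZIL. Echoed as a dry run.
SET_CMD="zfs set sync=disabled ${DATASET}"
GET_CMD="zfs get sync ${DATASET}"
echo "$SET_CMD"
echo "$GET_CMD"
```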
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

Writes: 730MB/s
Rewrites: 545MB/s
Reads: 1.64GB/s
It could just be that this is as fast as the SSDs are able to write, because writes are always slower than reads. I was thinking that the intent log would be writing data to the pool and then it would be written again when the system flushes all the queued writes from RAM to disk, which would have the effect of doubling the amount of writes to the pool. If it isn't a sync write, I guess the system wouldn't use the intent log, so there would be no double work for the pool.
 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
What are you using the pool for?
Primarily misc storage. I am considering running VMs off of it in the future.

It could just be that this is as fast as the SSDs are able to write, because writes are always slower than reads. I was thinking that the intent log would be writing data to the pool and then it would be written again when the system flushes all the queued writes from RAM to disk, which would have the effect of doubling the amount of writes to the pool. If it isn't a sync write, I guess the system wouldn't use the intent log, so there would be no double work for the pool.
Working with someone from another forum, we have come to the conclusion that this is simply as fast as it can go, at least when using ZFS. It's possible it could go faster using Btrfs or MD, but I want to keep the benefits that come with ZFS, so I have no plans of switching OSes.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Surprises me. I see faster speeds with a six-disk HDD array.

I'd suggest trying with sync disabled on a dataset. As of 11.1, sync with SMB is respected.
 

Windows7ge

Contributor
Joined
Sep 26, 2017
Messages
124
Surprises me. I see faster speeds with a six-disk HDD array.

I'd suggest trying with sync disabled on a dataset. As of 11.1, sync with SMB is respected.
This other person and I did test sync=disabled, and it made absolutely no difference. However, in the past 30 minutes we ran fio with a block size of 1M, sequential writes, and got an output of 1.6GB/s with sync=standard. I tried an identical test with sequential reads and got 5.6GB/s, though that might just be RAM. I don't know.
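For the record, the fio run was something along these lines (the parameters shown are a plausible reconstruction, not the exact command; echoed as a dry run):

```shell
TESTDIR="/mnt/ssdpool/benchmark"   # placeholder path on the pool

# Sequential 1M writes over a 10GiB working set, single job.
# Echoed as a dry run; run the printed command on the pool for real.
FIO_CMD="fio --name=seqwrite --directory=${TESTDIR} --rw=write --bs=1M --size=10g --numjobs=1 --ioengine=psync"
echo "$FIO_CMD"
```

Swapping --rw=write for --rw=read gives the sequential read test; for reads, the working set needs to exceed RAM or the ARC will inflate the numbers.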
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419