RAM & HD Speed


Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
Hey Guys,

Apologies if this has been asked before but I did a search and couldn't find a directly applicable answer.

So I'm planning to build a FreeNAS server. I've decided that I'm going to give up on Mini-ITX and go with an ATX-variant mobo due to increased options and features. Now I'm down to trying to figure out how much RAM to go with and whether or not I can pull off what I want to do with HDD read/write speeds.

So I'm going to start with 8x 4TB drives (RAIDZ2) and 32GB of RAM. I'm guessing this will be more than enough storage to meet my needs for the next 5-10 years. At that point, I imagine all I will do is upgrade the drives from 4TB to 8TB. With that in mind, should I be planning on building a machine that will support 64GB of RAM for future expansion?

At this point my projected uses are file transfer/storage/backup; a media server, although I won't be multiplexing, just accessing video/music files over the network with computers; and a security camera surveillance system with 4 1080p cameras.

Right now I'm just running gigabit Ethernet. I'm guessing that with 8x 4TB Reds (5400 RPM) this configuration won't have any problem saturating gigabit Ethernet, is that right?

How can I calculate bandwidth usage for this setup? I'm anticipating that my max system usage would be (worst case scenario) 4x 1080p video cameras all recording at once while 2 or 3 people are watching 1080p videos over the network. Would that saturate gigabit Ethernet? If not, while all of that is going on and let's say a backup script runs and FreeNAS is scrubbing drives, will that saturate the max I/O of my drives?

I will probably get a mobo with a 10 gigabit Ethernet card just for future-proofing (or just for the link between the server and the switch). How can I calculate the bottlenecks in the system?

Thanks :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I would consider getting a system that will support 64GB of RAM, especially if your intention is to keep the MB and CPU for at least 5 years.
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
I would consider getting a system that will support 64GB of RAM, especially if your intention is to keep the MB and CPU for at least 5 years.
Thanks Cyberjock, I imagine that is a good way to proceed.

Is there any way I can do a max data/bottleneck calculation on the system?

Cheers,
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Not really. There are lots of things to consider, such as how quickly data can be written to and read from the hard drives, caching in the ARC and L2ARC, and things like sync writes (if applicable).

There's no "good calculation". It's more about what experience has shown and what you need the system to do. In that regard, I recommend 64GB of RAM as a good starting point for the information you've provided in this thread.
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
Hrrmmm, ok. That's a little disconcerting. I read through some stuff about video security cameras on the Cisco site and they basically said it'll take about 10 Mbps per 1080p security camera using H.264 video compression (though I can't remember if that's for 30 or 60 fps). I'm guessing that the playback of 1080p movies would be about the same. So is there any reason why I couldn't run, say, 40 Mbps downstream onto the NAS while there is about the same amount of data going upstream onto the network? Would a guy be able to run a backup to the NAS while that was going on without causing jitter in the movie playback or the security camera footage?
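
If it helps, here's the back-of-the-envelope math I'm working from (just a rough sketch; the ~10 Mbps per stream figure is the Cisco estimate above, and I'm assuming playback streams are comparable):

# Rough worst-case bandwidth estimate (assumes ~10 Mbps per 1080p H.264 stream)
camera_streams = 4        # cameras recording to the NAS
playback_streams = 3      # people watching videos from the NAS
mbps_per_stream = 10      # rough figure for 1080p H.264

write_mbps = camera_streams * mbps_per_stream    # 40 Mbps onto the NAS
read_mbps = playback_streams * mbps_per_stream   # 30 Mbps off the NAS

gigabit_lan_mbps = 1000   # ignoring protocol overhead
print(write_mbps + read_mbps, "Mbps total vs a", gigabit_lan_mbps, "Mbps link")
# -> 70 Mbps total, nowhere near saturating gigabit Ethernet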

I guess the biggest thing I'm wondering now is whether 5400rpm drives will do the trick or if I would need to go up to 7200rpm drives.

Thanks for taking the time to reply, Cyberjock, I appreciate it.
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
*double tap*
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I wouldn't make adding a 10GbE card now, just to future-proof the system, a crucial decision point. As long as you have a free PCIe slot, you can easily add one later (and it will likely be cheaper).

As for bottlenecks, generally speaking, an 8-drive RAIDZ2 will have the sequential throughput of the sum of the data drives and the IOPS of the slowest drive in the vdev. You could try it, and if performance wasn't acceptable, then switch the pool configuration to either two 4-disk RAIDZ1 (or Z2) vdevs, or a stripe of four mirrors. Do you have backups?
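
If it helps, here's a very rough way to compare those layouts (just a sketch applying that rule of thumb; the 150 MB/s and 145 IOPS per drive are placeholder numbers for illustration, and mirrors can read from both sides, so reads may land higher):

# Rough layout comparison: sequential ~= sum of data drives, IOPS ~= one drive per vdev
per_drive_MBps = 150   # placeholder sustained sequential rate per drive
per_drive_iops = 145   # placeholder random IOPS per drive

layouts = {
    "1x 8-disk RAIDZ2": {"vdevs": 1, "data_per_vdev": 6},
    "2x 4-disk RAIDZ1": {"vdevs": 2, "data_per_vdev": 3},
    "2x 4-disk RAIDZ2": {"vdevs": 2, "data_per_vdev": 2},
    "4x 2-way mirrors": {"vdevs": 4, "data_per_vdev": 1},  # write side; reads can use both disks
}

for name, cfg in layouts.items():
    seq_MBps = cfg["vdevs"] * cfg["data_per_vdev"] * per_drive_MBps
    iops = cfg["vdevs"] * per_drive_iops
    print(f"{name:18s} ~{seq_MBps} MB/s sequential, ~{iops} IOPS")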

And the system should support 64GB. You could go with 32GB now and add another 32GB when you upgrade your drives.
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
I wouldn't make adding a 10GbE card now, just to future-proof the system, a crucial decision point. As long as you have a free PCIe slot, you can easily add one later (and it will likely be cheaper).

As for bottlenecks, generally speaking, an 8-drive RAIDZ2 will have the sequential throughput of the sum of the data drives and the IOPS of the slowest drive in the vdev. You could try it, and if performance wasn't acceptable, then switch the pool configuration to either two 4-disk RAIDZ1 (or Z2) vdevs, or a stripe of four mirrors. Do you have backups?

And the system should support 64GB. You could go with 32GB now and add another 32GB when you upgrade your drives.
Yeah, the Ethernet card isn't holding me up, but I am trying to figure out where the bottlenecks would be, as I'm wondering if the system would even be capable of 10 Gbps.

I'm looking at WD Red 4TB drives. What do you mean by sequential throughput? The drives are rated at 6 Gbps but the datasheet doesn't specify an IOPS figure... I'm planning to hook them up via SAS. So do you think the drives will be the bottleneck as opposed to the mobo/RAM/CPU?

Yeah, I'll go with a 64GB board and CPU and upgrade the RAM as necessary.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
What do you mean by sequential throughput?
Writing a large stream of data all at once, vs. lots of little files staggered.

The drives are rated at 6 Gbps but the datasheet doesn't specify an IOPS figure
The interface is 6 Gbps, but the drive speed is ~150 MB/s and ~145 IOPS (http://www.storagereview.com/wd_red_4tb_hdd_review_wd40efrx)
So yes, the drives, and more importantly the vdev configuration, will be a huge limiting factor on performance. Things like RAM and enterprise SSDs for read caching (L2ARC) and write logging (SLOG) can help, but are limited.
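
A quick sketch of why the 6 Gbps number is misleading (rough figures; the sustained rate and IOPS are from the StorageReview benchmark above):

# Interface ceiling vs. what the drive actually sustains
sata_link_gbps = 6.0
link_ceiling_MBps = sata_link_gbps * 1000 / 10   # ~600 MB/s after 8b/10b encoding
sustained_MBps = 150                             # WD Red 4TB, per the benchmark
random_iops = 145                                # roughly, for random IO

print(f"Link ceiling ~{link_ceiling_MBps:.0f} MB/s, drive sustains ~{sustained_MBps} MB/s, ~{random_iops} IOPS")
# The platters, not the 6 Gbps interface, are the limit.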
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
OK, so how do you use those drive speeds to calculate the max read and write rates for an 8x 4TB RAIDZ2 array? Is there a calculator out there somewhere? I'm guessing there are different rates for write only, read only, read & write, and then all of those for multiple files simultaneously...
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I'm not aware of a calculator. I think it's more of a research project with a sprinkle of experimentation.
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
I'm not aware of a calculator. I think it's more of a research project with a sprinkle of experimentation.
lol that's not what a guy wants to hear when he's lookin' at spending $3k on hardware. OK, I guess we're shooting from the hip.
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
OK, that was somewhat helpful. So it sounds like I should be able to pull the max data throughput of a single drive, is that right? So should a guy be able to count on the NAS system being able to rock 150 MB/s for a WD Red RAIDZ2 array? That's only 1.2 Gbps, so it doesn't sound like there would be any benefit to 10 Gbps LAN unless I went with 7200 RPM drives. So possibly the HGST 4TB Deskstar NAS drives could improve performance, but I can't find numbers for the internal data transfer rate.

This guy here is reporting data rates of w=993 MB/s, r=1882 MB/s, IOPS=884 for his RAIDZ2 array. How is that even possible? Sounds like this would be awesome and would definitely be able to take advantage of 10 Gbps Ethernet.

https://blog.pivotal.io/labs/labs/high-performing-mid-range-nas-server
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Well, "Yes & No"... ;)

I understand how it can seem frustrating, but there are other factors that could have a significant impact. Is this going to be a CIFS share? If so, what is your CPU speed? I ask because CIFS is limited to a single thread per instance, so a pretty fast CPU is definitely going to help. If you went with mirrors, then you would get more IOPS. Are you going to be running a SLOG (or mirrored SLOGs)?

Of course, performance will degrade if the pool gets above 80% full, which is by design as well...

Others more knowledgeable than I have already mentioned pretty much the same things.

Not trying to confuse you, but it is like buying a car. There is no straightforward answer: you can design the car with a particular sized engine, transmission, tires, etc., but there is only an "educated guess" until the tires actually meet the road...
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
OK, I guess we're shooting from the hip.
The hard part is the blending.

The read and write side is straightforward: 150 MB/s per data drive, and they add up (and as Mirfster mentioned, you also have to be aware of the sharing protocol limitations). So if you have an 8-disk RAIDZ2 (6 data disks), the best-case throughput is 6 x 150 MB/s. And since there is 1 vdev, you will have ~145 IOPS (the IOPS of a single drive). The hard part to estimate is when you start mixing in read and write together, or read & write & a bunch of random IO.
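
To put rough numbers on it (a back-of-the-napkin sketch using the per-drive figures above; real-world results will come in lower once you mix reads, writes, and random IO):

# Back-of-the-napkin estimate for 8x WD Red 4TB in a single RAIDZ2 vdev
drives = 8
parity = 2                       # RAIDZ2
data_drives = drives - parity    # 6
per_drive_MBps = 150             # sustained sequential, per the benchmark
per_drive_iops = 145             # random IOPS of a single drive

seq_MBps = data_drives * per_drive_MBps   # ~900 MB/s best-case sequential
pool_iops = per_drive_iops                # one vdev ~= IOPS of a single drive

gbe_MBps = 125       # gigabit Ethernet ceiling, ignoring overhead
ten_gbe_MBps = 1250  # 10GbE ceiling

print(f"Pool: ~{seq_MBps} MB/s sequential, ~{pool_iops} IOPS")
print(f"GbE tops out ~{gbe_MBps} MB/s; 10GbE ~{ten_gbe_MBps} MB/s")
# Best case the pool can outrun gigabit, but mixed/random IO pulls it well below these numbers.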
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
Well, "Yes & No"... ;)

I understand how it can seem frustrating, but there are other factors that could have a significant impact. Is this going to be a CIFS share? If so, what is your CPU speed? I ask because CIFS is limited to a single thread per instance, so a pretty fast CPU is definitely going to help. If you went with mirrors, then you would get more IOPS. Are you going to be running a SLOG (or mirrored SLOGs)?

Of course, performance will degrade if the pool gets above 80% full, which is by design as well...

Others more knowledgeable than I have already mentioned pretty much the same things.

Not trying to confuse you, but it is like buying a car. There is no straightforward answer: you can design the car with a particular sized engine, transmission, tires, etc., but there is only an "educated guess" until the tires actually meet the road...
lol I actually have no idea, haven't got to that part yet. I will probably only ever connect to it with Windows 7 computers (until a version of Windows is released that is better than 7), so it will have to be some sort of Windows-compatible protocol for the computers. As for the security cameras, I have no idea what kind of protocol they are going to want to connect with (possibly TCP?). I haven't picked a processor yet as I'm trying to decide what I need, but I have no problem buying an expensive processor if that's what I need.

I'm expecting to have multiple LAN connections running on the box, but I'm not sure how I would set that up. Either 2+ ports connected to the switch and bridging them (or however you link them), or one port going to my router/switch for the computers and then a separate port connected to a 4-port PoE switch for the video cameras.

I was very tempted to go with mirrors but I really don't like the idea of the potential for a second drive failure destroying my pool, so I'll probably go with RAIDZ2 just for the redundancy. I get the feeling that the chances of 3 drives failing are pretty slim. It sounds like a SLOG is similar to a ZIL, or something along the same lines, and I'm probably not going to run a ZIL or L2ARC, as apparently I won't need anything like that unless I was running more than 64GB of RAM, which I won't be. So I'm guessing I won't be running a SLOG.

The hard part is the blending.

The read and write side is straightforward: 150 MB/s per data drive, and they add up (and as Mirfster mentioned, you also have to be aware of the sharing protocol limitations). So if you have an 8-disk RAIDZ2 (6 data disks), the best-case throughput is 6 x 150 MB/s. And since there is 1 vdev, you will have ~145 IOPS (the IOPS of a single drive). The hard part to estimate is when you start mixing in read and write together, or read & write & a bunch of random IO.
OK, so data throughput won't be the problem; it'll just be IOPS that's the limit. So is that more of an issue with something that's reading/writing a lot of small files, or will it also hamper large single files being read and written? As I mentioned, it'll mostly be used as a media and security server, so movies being watched while security footage is being written.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I was very tempted to go with mirrors but I really don't like the idea of the potential for a second drive failure destroying my pool
Many people share this concern and use 3-way mirrors to mitigate the risk.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I'm with you on RAIDZ2, it is my preferred choice. Even against sensible advice, I am testing out a machine with 2x 6-drive RAIDZ2 vdevs to house some VMs. :eek:

But that is just me testing and wanting it to work out.
 

Wallybanger

Contributor
Joined
Apr 17, 2016
Messages
150
Many people share this concern and use 3-way mirrors to mitigate the risk.
Obviously those people are made of either hard drives or money.
I'm with you on RAIDZ2, it is my preferred choice. Even against sensible advice, I am testing out a machine with 2x 6-drive RAIDZ2 vdevs to house some VMs. :eek:

But that is just me testing and wanting it to work out.
Why isn't RAIDZ2 sensible? Data rates may not be as high and resilvering may take some time but just think of the space savings!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Obviously those people are made of either hard drives or money.

Why isn't RAIDZ2 sensible? Data rates may not be as high and resilvering may take some time but just think of the space savings!

IOPS, but I will ignore that for now. Just testing anyway.
 