RFC: My 48 TB system

Status
Not open for further replies.
Joined
Jan 27, 2014
Messages
26
Hello,

Will you please comment on this proposed FreeNAS system? I used jgreco's hardware suggestion post for guidance. I am building two of these, which will replicate to each other. I will not use deduplication.

Please keep in mind: I have put together many computers, but they have all had integrated SATA (or IDE) controllers. I have never used a "backplane", nor an "expander", nor any kind of "controller" other than the ones built into my motherboard. Please let me know if my system is missing any of these components.

Case & Mobo $1640
1 x SSG-6037R-E1R16L (comes with X9DRD-7LN4F-JBOD mobo)
http://www.supermicro.com/products/system/3u/6037/ssg-6037r-e1r16l.cfm

Drives $2080
16 x WD30EFRX
http://www.newegg.com/Product/Product.aspx?Item=N82E16822236344

CPU $230
1 x Intel Xeon E5-2603, 1.8 GHz
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117271

Mem $480
1 x CT2K16G3ELSLQ8160B (ECC 32GB in two 16GB sticks)
(from crucial)

Thank you,

Chris
 

Durandal

Explorer
Joined
Nov 18, 2013
Messages
54
Since the motherboard supports a lot of RAM, I would go for at least 64 GB of memory for that amount of data.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Have you downloaded and read the motherboard's user manual? You need a complete understanding of how the board will operate and connect to your hard drives in order to make a smart selection for a ZFS file system, although if it was recommended then it may be just fine.

EDIT: Also, you are selecting a lot of Red drives. WD specs them for 1 to 5 drives in a system, not 16. If you do go this route, please post how much vibration you get from the drives. I run 6 drives with minimal vibration (it's very difficult to tell they are running).
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
More info on the intended purpose of the system and your projected drive layout couldn't hurt.
Some ideas, however:
- Do you really need a dual-CPU system?
- CIFS is single-threaded and likes high per-core speeds.
- You might get a very noisy PSU with that chassis; better check on that if you are building a home server.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
indy said:
- Do you really need a dual-CPU system?
He is only buying a single CPU, according to the first message. Not sure if this is for home use; it seems a bit overkill for a home system, but if someone has the money, they can spend it. I just watched a show on TV where someone was buying an island. Wish I had that kind of money :).
 
Joined
Jan 27, 2014
Messages
26
Thank you all for your comments. I'll attempt to address them here:

1. Use case. This system will store raw radar data for engineering work. It is a lot of data. It will be served via CIFS. It will not be read very frequently, but it is important that it be available when we need it (perhaps 16 hours/week of reads and 2 hours/week of writes in total).

2. CPU. I do not need a dual-CPU system, and I intend to leave the second socket empty. I only considered this mobo due to jgreco's recommendation.

3. Memory. I can up the memory from 32 GB to 64 GB, but is the only reason to do so to increase the amount of ARC available? I will not run deduplication.

4. Drives. Yes, I am planning on filling it up with WD Red drives. I did not know that WD says "install no more than 5 WD Red drives into a system". I also did not know that there is a "vibration" concern. I will look into this. Is there a drive you recommend for a 16+ drive system? I was hoping to take advantage of the "I" in RAID (Redundant Array of Inexpensive Disks).

BTW, after reading the ZFS tuning guide, I plan to bump to 18 drives so I can run two RAIDZ3 zvols. I increased the case size to accommodate them and added two SSDs for ZIL/L2ARC:

Code:
Case   1900 (1 x SSG-6047R-E1R24L)
Drives 2250 (18 x WD30EFRX, RAIDZ3 @ 54 TB, 36 TB usable)
CPU     230 (1 x Intel Xeon E5-2603, 1.8 GHz)
Mem     480 (1 x CT2K16G3ELSLQ8160B, ECC 32GB total)
Boot     20 (2 x SDCZ33-008G-B35, SanDisk Cruzer Fit 8GB)
Cache   200 (2 x SSDSC2CT080A4K5, Intel 80GB SSD)
Total: $5080
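For reference, the usable-space arithmetic behind those numbers, assuming two 9-drive RAIDZ3 vdevs (6 data + 3 parity each):

Code:
raw    = 18 drives x 3 TB           = 54 TB
parity =  2 vdevs x 3 drives x 3 TB = 18 TB
usable = 12 data drives x 3 TB      = 36 TB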


Thank you again for your help.

Chris
 

craigyb

Dabbler
Joined
Jun 9, 2013
Messages
19
I can't see why vibration would be an issue; I have 24 x 4 TB drives in my chassis and no vibration problems.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Normally enterprise drives can handle this many drives in a chassis, but for some reason WD caps the Reds at 5. Maybe they haven't been tested with more drives, or maybe it's a marketing thing (what I believe): if you need more than 5 drives, WD would rather you buy enterprise-class drives, which cost a lot more than the Reds, meaning more money in WD's pocket. I know others have more than 5 drives; I was only stating what WD claims in its specs. Also, I think "Inexpensive" is a relative term that had more meaning when the acronym first came out.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Wow.. first post I've read in a week and already I see someone adding a ZIL and L2ARC when it's not going to add value... ugh. Guess I really should just leave here for good.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
cyberjock said:
Wow.. first post I've read in a week and already I see someone adding a ZIL and L2ARC when it's not going to add value... ugh. Guess I really should just leave here for good.
That is because this person has a ton of money to spend and would rather not double the RAM, which would bring the most benefit for this project.

I have noticed the slowdown in postings from you; kind of eerie.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yeah. Tired of nobody actually reading stuff... guides, manuals, etc.

There are other reasons too. Not overly happy with iX right now over a few disagreements. If you want to know more, PM me. :P
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Chris said: (post quoted in full above)

Impressive system!

Note: you can save some cash on the mobo. That thing can handle A LOT of RAM - 16 DIMM slots. Impressive! Given your use case I suspect you're fine with a single CPU (as you state). Something with one CPU and, say, 4 DIMM slots is probably OK for you. You can still get to 64GB of RAM with such a setup and 16GB DIMMs (assuming the mobo can handle 16GB DIMMs).

The above comments regarding SLOG (SSD for your ZIL) and an L2ARC are correct. Serving data via CIFS suggests (likely) you won't need them, certainly not the SSD SLOG (which only helps sync writes, which you won't be seeing with CIFS). Without knowing what you're doing with the data, it's hard to say whether an L2ARC will help (e.g., what is the size of the working set?). As mentioned above, I would recommend spending the money you had allocated for SSDs on bumping RAM from 32GB to 64GB. This will increase the ARC and provide a performance boost.
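If you want to check the sync-write point once the box is up, a quick sketch (the pool name "tank" is hypothetical):

Code:
# "standard" means only requests explicitly flagged sync hit the ZIL;
# CIFS traffic is normally async.
zfs get sync tank
# Watch per-vdev activity while clients read and write.
zpool iostat -v tank 5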

You confused me with this statement: "BTW, after reading the ZFS tuning guide, I plan to bump to 18 drives so I can run two RAIDZ3 zvols". In your statement 1 you said you were going to serve data via CIFS (SMB) shares. However, in the quoted statement you are talking about zvols. I assume you meant vdevs. (zvols are another thing entirely. If you are not sure, make sure you understand the difference.) If you did mean vdevs, then the 2x 9-drive RAIDZ3 is not ideal. (But it would clearly work.) Ideally you'd want 2^N+3 drives per vdev for RAIDZ3. So 4, 5, 7, 11, or 19 drives per vdev. (Though only the 7 and 11 really make sense to me.) Here is what the per-vdev storage would look like:

data drives    total drives    effective storage
     1               4               3 TB
     2               5               6 TB
     4               7              12 TB
     8              11              24 TB
    16              19              48 TB


So the effective storage for *2 vdevs* with the #drives/vdev would be:

#drives/vdev    total drives    effective total storage
     7               14           8 x 3 = 24 TB
    11               22          16 x 3 = 48 TB

Make sense?
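For example, creating the 2x 11-drive layout would look something like this (the pool name "tank" and the da0..da21 device names are made up):

Code:
# Sketch: one pool built from two 11-wide RAIDZ3 vdevs.
zpool create tank \
  raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
  raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21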

Also, I note this from the supermicro description... make sure you check this depending on how you plan to attach the drives... "(Both CPUs need to be installed for full access to PCI-E slots and onboard controllers. See manual block diagram for details.)"
 
Joined
Jan 27, 2014
Messages
26
Toadman,

Thank you for your help, especially your attention to detail.

Impressive system!

As I said earlier, it's for my group at work... it's not my money, and I am lucky to be able to set this up and learn more about NAS. I set up FreeNAS on an old workstation and ZFS on my Linux laptop. My laptop is replicating with the workstation and everything is running smoothly.

Note: you can save some cash on the mobo.

I understand and I agree. I selected that mobo and chassis because they are sold together by Supermicro. If I want to select the components myself, I will first need to understand terms such as backplane/controller/expander. My experience is only with on-mobo SATA ports, and I have no idea what other components I will need to purchase in order for my SATA drives to be visible to FreeNAS.

The above comments regarding SLOG (SSD for your ZIL) and an L2ARC are correct. Serving data via CIFS suggests (likely) you won't need them, certainly not the SSD SLOG (which only helps sync writes, which you won't be seeing with CIFS).

I understand, and I can see that a system that only performs async writes will not benefit from the ZIL. (I've never heard it called the SLOG, but since you said it twice, it must be true!)

You confused me with this statement, "BTW, after reading the ZFS tuning guide, I plan to bump to 18 drives so I can run two RAIDZ3 zvols"

Yes, indeed, I meant to say vdev. Adding to the confusion: on Linux, you use the command "zpool create", which creates 1) a zpool, 2) a ZFS dataset/filesystem, and 3) a vdev [apparently]. I didn't even know about the term vdev until recently. I went down the zvol path when playing with iSCSI (which I never got working, but it was fun to try).


then the 2x 9-drive RAIDZ3 is not ideal. (But it would clearly work.) Ideally you'd want 2^N+3 drives per vdev for RAIDZ3. So 4, 5, 7, 11, or 19 drives per vdev.

I understand, thank you. For better or worse, Oracle's ZFS Storage Pool Creation Practices is where I got the idea that RAIDZ3 setups should use "9 drives":

http://docs.oracle.com/cd/E23823_01/html/819-5461/zfspools-4.html#gentextid-11774 said:
A RAIDZ-3 configuration maximizes disk space and offers excellent availability because it can withstand 3 disk failures. Create a triple-parity RAID-Z (raidz3) configuration at 9 disks (6+3).

I'm sure I've seen "Do not create vdevs with more than 9 drives"... so I'm not sure how that jibes with "RAIDZ3 runs well with 7/11/19 drives".

Also, I note this from the supermicro description... make sure you check this depending on how you plan to attach the drives... "(Both CPUs need to be installed for full access to PCI-E slots and onboard controllers. See manual block diagram for details.)"

Thank you, you may have just saved the whole project! Unfortunately, I do not understand what an "onboard controller" is, and I do not know the repercussions of not having access to one. I will review the mobo documentation.


Thanks again for your help,

Chris
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Chris said: (post quoted in full above)

Glad you are checking out FreeNAS. It's a darn good system from my experience.

Re: motherboards, there are many folks here who are far more up to speed than I am on what would be best. I'm sure I could check Supermicro's current product line and recommend something, but since jgreco was your original source, perhaps ask him for a UP (single-socket) recommendation that will handle 64GB.

Re: terminology... The ZIL (ZFS Intent Log) always exists; it's normally just part of the pool itself. (Well, unless you turn it off - but DON'T.) The SLOG is a Separate intent LOG: you can specify a dedicated device to house the ZIL. The advantage is that if the separate device is faster than the pool itself, you get a performance bump. Example: people use an SSD as the SLOG to house the ZIL. Its latency and/or throughput is better than what their pool of spinning disks can produce, so they benefit. See here... http://forums.freenas.org/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
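Mechanically, attaching a SLOG to an existing pool is one command; a sketch, with a hypothetical pool name and SSD partition label:

Code:
# Move the ZIL onto a dedicated SSD partition labeled gpt/slog0.
zpool add tank log gpt/slog0
# Mirrored variant, if you want the SLOG itself redundant:
#   zpool add tank log mirror gpt/slog0 gpt/slog1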

On the RAIDZ3, I think you must be referring to this line: "A RAIDZ-3 configuration maximizes disk space and offers excellent availability because it can withstand 3 disk failures. Create a triple-parity RAID-Z (raidz3) configuration at 9 disks (6+3)." Yes, you can do that, but it's not ideal. If you do some reading you'll find that the best performance is gained when the vdev has 2^N+X disks, where X is the RAIDZ(X) level. Hence my table for RAIDZ3, with 3 extra drives to hold parity. I would still recommend a 14-drive (2 vdevs of 7), 21-drive (3 vdevs of 7), or 22-drive (2 vdevs of 11) system, depending on required total storage. I'm not sure about vdevs of more than 9 drives; I have never run one that large. I do know there are people running 11-drive RAIDZ3 vdevs with no problems, though. If you want to stick to vdevs of 9 drives or fewer, then maybe 3x 7 drives is best, but that gets you 3 x 4 = 12 data drives = 36 TB. (Personally, I would do 2x 11 drives. But I would ask around to get some other recommendations.)

More on terminology/commands: "zpool create" specifically creates a POOL. Yes, the pool must, by definition, have one or more vdevs. Check this from Oracle: "To create a storage pool, use the zpool create command. This command takes a pool name and any number of virtual devices as arguments. The pool name must satisfy the naming requirements in ZFS Component Naming Requirements." (http://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html)
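A minimal sketch of the distinction, with made-up names:

Code:
# The pool is built from vdevs...
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6
# ...and datasets (filesystems) are created inside the pool.
zfs create tank/radar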

"A ZFS volume (zvol) is a dataset that represents a block device." (http://docs.oracle.com/cd/E23824_01/html/821-1448/gaypf.html) So yes, often a zvol is served up via iscsi as a device extent. (Note: I have personally witnessed some odd behavior with zvols in FreeNAS. I now use only file extents for iscsi.)

Re: the bit about the onboard controller, I suspect that mobo has onboard disk controllers, i.e. more than one. I think the point of the warning is that some of them may only be accessible with two CPUs in the system, so you'll have to be careful about how you plan to hook up the drives. For example, while I don't know for sure, since it's an integrated chassis and mobo offering 24 drive bays, I would think the 24 bays are usable via the onboard controllers - but that may only be true if both CPUs are populated. That doesn't mean you can't run the system with one CPU; it just may mean you need to add another drive controller (say, a 16-port LSI HBA) in a PCI-e slot. From what I read, I bet only half the PCI-e slots are usable with only one CPU populated, though.
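Once it's built, a couple of FreeBSD commands will show what the OS actually sees with one CPU installed (sketch only):

Code:
# List PCI devices and pick out the disk controllers.
pciconf -lv | grep -B3 -Ei 'sas|sata|storage'
# List every disk the system has attached.
camcontrol devlist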
 
Joined
Jan 27, 2014
Messages
26
Just a follow-up: I bought the system and it is running fine with 11 drives.

Code:
Case   (1 x SSG-6047R-E1R24L)
Drives (11 x WD30EFRX, RAIDZ3)
CPU    (1 x Intel Xeon E5-2603, 1.8 GHz)
Mem    (1 x CT2K16G3ELSLQ8160B, ECC 32GB total)
Boot   (2 x SDCZ33-008G-B35, SanDisk Cruzer Fit 8GB)


I will be bumping it to 22 drives in the coming months. In http://forums.freenas.org/index.php?threads/backplane-controller-expander-do-i-need-one.18168/ someone suggests I will not be able to add another 11 drives unless I first install something called a "SAS expander" or "controller".
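From what I've read, once the new drives are visible, extending the pool with the second RAIDZ3 vdev should be a single command (pool and device names made up):

Code:
# Sketch: add a second 11-wide RAIDZ3 vdev to the existing pool.
zpool add tank raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21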

Chris
 