Please look over my parts list...

thedude

Cadet
Joined
Jul 7, 2011
Messages
8
Hi everyone. I have been searching around and want to make sure I have a sound build.

I plan to build a RAIDZ1 5+1 using 2TB drives. It will be used for home media storage, streaming, and data backup.

ASUS F1A75-V Pro + AMD A6-3650 Llano 2.6GHz = $234.98
16GB DDR3 1333 4GBx4 = $107.96
6 x SAMSUNG F4 HD204UI = $446.94
reusing case and PSU

Total = $789.88
9.31TB
$84.84/TB


Considerations:

-I want to use the new AMD Llano platform because the motherboard options are phenomenal. The one I have chosen has 7 SATA ports (6 of them RAID-capable). I plan to use the 7th SATA port to attach an SSD cache in the future (a sketch of what that looks like follows these notes). The motherboard also has USB 3.0, UEFI, and can take 32GB of RAM.

-I am starting with 16GB of memory because my volume will be fairly large, and the onboard graphics will use some of the main memory.

-I want to make sure I'm set for at least one major drive swap (to much larger drives) on the current hardware platform. That is why I'm taking on the expense of a newer CPU and motherboard. The difference between the older AMD platform and the new one is ~$100, and I figure replacing the CPU and motherboard (and possibly the memory) later would cost more than that.
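For the SSD cache mentioned above: in ZFS terms that is an L2ARC device, and attaching one later is a one-liner from the shell. A minimal sketch, assuming a pool named tank and an SSD that shows up as ada6 (both names are placeholders):

    # Attach the SSD as an L2ARC read cache (pool/device names assumed)
    zpool add tank cache ada6
    # A "cache" section should now appear in the pool layout
    zpool status tank

Losing a cache device never endangers the pool, since L2ARC only holds copies of data that already lives on the raidz disks.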


Questions:

-To get the best support for 3TB and larger drives I assume that a UEFI motherboard with the current generation of controller is best. Is that sound logic?

-I am not sure I understand the 4K drive issue.

-Does using drives with 64MB cache size help vs 32MB cache?


Thanks in advance!
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
It sounds like a sweet setup, and you're smart to start out with more RAM; I wish I had. One other piece of advice I'd offer: go with RAIDZ2 from the start, because once you load it up, it's not easy to switch from Z1 to Z2. I *THINK*, but I'm not sure, that if you start with Z2 you can still add another disk to the pool later; I could be wrong.

I can't really answer your questions; I know there are a couple of threads about 4K sectors, so take a look for those. I have one drive with 64MB of cache and the rest are 32MB (Samsungs), and there are no problems, but I can't say anything about the advantages of 64 vs. 32. I think the Samsung drives you want are solid; I have 4 and have had no problems, but there are always a few bad apples. That's why I think Z2 would be better for you: if you have that much stuff and you get a bad batch of drives and 2 of them fail, you're screwed, whereas with Z2 you're still safe.
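For reference, a 6-disk RAIDZ2 is a single command under the hood. A minimal sketch, assuming a pool called tank and disks that show up as ada0 through ada5 (all placeholder names; the FreeNAS GUI runs the equivalent for you):

    # 6 disks, 2 of them parity: any 2 can fail without losing the pool
    zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5
    zpool status tank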

IMPORTANT: Please see post #11 from @matthewowen01 below for clarification on adding disks to pools.

UPDATE: The documentation for FreeNAS here:

http://doc.freenas.org/index.php/Volumes


Adding to an Existing Volume

ZFS volumes support the addition of disks to an existing zpool. To achieve this in the GUI, go to Storage -> Volumes -> Create Volume. In the "Volume Name" section, input the same name as an existing volume, select the disk(s) you wish to add, and click ZFS. FreeNAS will interpret this configuration as a request to expand the existing ZFS volume.
 
matthewowen01

Joined
May 27, 2011
Messages
566
I plan to build a RAIDZ1 5+1 using 2TB drives
I would go with a RAIDZ2: 2 drives of redundancy vs. 1. That way, when one eventually dies, you don't sweat bullets knowing you have no safety net while you RMA the dead drive.

ASUS F1A75-V Pro + AMD A6-3650 Llano 2.6GHz = $234.98
I did a quick Google and can find no information either way on whether this board will be compatible. I can't say whether it will work or not; you can be the first to let us know.

-I want to use the new AMD Llano platform because the motherboard options are phenomenal.
I think going with the latest and greatest is overkill. I've got an old E7400 and it's more than enough.

-I am starting with 16GB of memory because my volume will be fairly large, and the onboard graphics will use some of the main memory.
The more memory the merrier!

-I want to make sure I'm set for at least one major drive swap (to much larger drives) on the current hardware platform. That is why I'm taking on the expense of a newer CPU and motherboard. The difference between the older AMD platform and the new one is ~$100, and I figure replacing the CPU and motherboard (and possibly the memory) later would cost more than that.
Don't forget about operating costs: the A6-3650 is a 100-watt CPU, and you can find a 65- or 45-watt CPU for cheaper. You'll also be good up to 64-bit addressing, and that's a metric shit ton of data (68 billion TB or something like that).


-To get the best support for 3TB and larger drives I assume that a UEFI motherboard with the current generation of controller is best. Is that sound logic?

No. The older addressing schemes were 32-bit, and drives were laid out in 512-byte blocks, giving a 2 TB total. We no longer use 32-bit addressing schemes and we no longer use 512-byte blocks. If you're not using Windows, and not using a system from the era of the 137 GB limit (28-bit, FYI), you'll be fine.
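To make those numbers concrete (bash arithmetic; any shell with arithmetic expansion works):

    echo $((2**28 * 512))   # 28-bit LBA:        137438953472 bytes, the old ~137 GB wall
    echo $((2**32 * 512))   # 32-bit scheme:    2199023255552 bytes, the ~2.2 TB wall
    echo $((2**48 * 512))   # 48-bit LBA: 144115188075855872 bytes, roughly 144 PB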

-I am not sure I understand the 4K drive issue.
Drives used to map out their data in 512-byte chunks; since drives have grown immensely over the years, the industry moved to 4096-byte chunks. This is preferable for 2 reasons: it's easier to address the data, since you can cram 8 times the data into the same address space, and you can access data faster, since you get 8 times the data for the same number of requests.
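The snag for ZFS is alignment rather than capacity: most of these drives (the Samsung F4s in the parts list included, as far as I know) still report 512-byte sectors for compatibility, so ZFS lays the pool out in 512-byte blocks (ashift=9) and every 4K write becomes a read-modify-write. A sketch of the FreeBSD-era workaround, with placeholder device names; the gnop passthrough advertises 4096-byte sectors, which nudges ZFS into ashift=12 for the whole vdev:

    # Make one member advertise 4K sectors; ZFS sizes the vdev to the largest
    gnop create -S 4096 ada0
    zpool create tank raidz2 ada0.nop ada1 ada2 ada3 ada4 ada5
    # Drop the passthrough; the pool re-imports on the raw device
    zpool export tank
    gnop destroy ada0.nop
    zpool import tank
    zdb -C tank | grep ashift    # should print ashift: 12

The 4K option audix mentions below (8.0.1-beta3 and later) does the equivalent for you at volume creation.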

-Does using drives with 64MB cache size help vs 32MB cache?
Yes, but since you'll have 16 GB of memory, you'll be just fine either way.

One other piece of advice I'd offer: go with RAIDZ2 from the start, because once you load it up, it's not easy to switch from Z1 to Z2. I *THINK*, but I'm not sure
It's impossible to switch from Z1 to Z2.

if you start with Z2 you can still add another disk to the pool later; I could be wrong.
False, you cannot expand any vdev; you can add another vdev to a pool, but you cannot add to an existing vdev.
 

audix

Dabbler
Joined
Jun 11, 2011
Messages
36
Since the quality of all these large cheap disks is not what disks used to be, be prepared to replace a disk fairly soon (I had an F4 fail within a week). Will you need to use the NAS during that swap period? If so, I also strongly recommend RAIDZ2.

Also, for optimal performance with 4, 6, or 10 drives, use RAIDZ2: those counts leave 2, 4, or 8 data disks, a power of two, so record stripes divide evenly across 4K-sector drives.

In 8.0.1-beta3 and later, you have the option to enable 4K support when you create the volume.
 

thedude

Cadet
Joined
Jul 7, 2011
Messages
8
Okay, thanks everyone. I guess I'll reconsider and use RAIDZ2. I didn't want to give up that much capacity, but perhaps it is wiser.

I guess I was a bit confused by articles and various posts in other forums regarding motherboard support for 3TB drives and larger.

These stated that, in addition to OS limits, there were limitations in BIOSes and controllers that might impact the use of drives larger than 2.2 TB. While such systems can see larger drives, they would appear as two partitions (2.2 TB + 0.8 TB for a 3 TB disk). Most stated that a UEFI board is required to support the full 3 TB and beyond, but I'd be happy to learn differently if I've got it mixed up.
 

thedude

Cadet
Joined
Jul 7, 2011
Messages
8
Been reading a bit on raidz2 capacity and noticed there's a "wide" option, permitting wider stripes that yield more usable storage space. Is this implemented in FreeNAS? I haven't found mention of it yet...
 
matthewowen01

Joined
May 27, 2011
Messages
566
These stated that, in addition to OS limits, there were limitations in BIOSes and controllers that might impact the use of drives larger than 2.2 TB. ... Most stated that a UEFI board is required to support the full 3 TB and beyond, but I'd be happy to learn differently if I've got it mixed up.

Short answer: Windows sucks, and you're not using Windows.

Long answer: you're misunderstanding the article. Windows sucks: it only allows 32-bit addressing when formatting MBR disks, and 32 bits x 512 bytes is about 2 TB. The Disk Unlocker program simply makes your disk appear as 2 independent block devices to Windows, so both can be addressed separately when partitioned as MBR. Windows finally got its act together and uses GPT to partition drives, which allows 64-bit addressing, good for much, much more. ZFS, for its part, is 128-bit; the Z is for Zettabyte. Back when we hit the 28-bit, 137 GB barrier, ATA moved to a 48-bit addressing scheme, so all motherboards and controllers since the early 2000s should support 48-bit addressing.
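On the partitioning side, the GPT vs. MBR difference is easy to see from a FreeBSD shell. A sketch, assuming a fresh disk at ada1 (a placeholder; FreeNAS partitions disks for you when you create a volume):

    gpart create -s gpt ada1         # 64-bit GPT instead of 32-bit MBR
    gpart add -t freebsd-zfs ada1    # one partition spanning the disk, no 2.2 TB ceiling
    gpart show ada1                  # confirm the full capacity is addressable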
 

thedude

Cadet
Joined
Jul 7, 2011
Messages
8
Short answer: Windows sucks, and you're not using Windows.

Long answer: you're misunderstanding the article. ...


I already had a grasp of what you were saying. I know the OS and the filesystem won't be an issue.

I just didn't know if there were any other snags to worry about, since the articles mention the controller and BIOS possibly having issues recognizing large drives. I just don't want to be in a situation where the hardware doesn't see a large drive correctly.
 
matthewowen01

Joined
May 27, 2011
Messages
566
I just didn't know if there were any other snags to worry about, since the articles mention the controller and BIOS possibly having issues recognizing large drives. I just don't want to be in a situation where the hardware doesn't see a large drive correctly.

As long as your hardware was built after the early 2000s, you're good.
 
matthewowen01

Joined
May 27, 2011
Messages
566
I got a question PM'd to me about my comments, and I wanted to post the clarification here in case anyone else was wondering.

...you cannot expand any vdev; you can add another vdev to a pool, but you cannot add to an existing vdev.

Well, there is a bit of confusion over terms. There are pools; that's the highest level of abstraction, the filesystem that you can mount, read, and write to. Pools consist of 1 or more vdevs. A vdev can consist of 1 or more disks, configured as a single disk, a collection of 2 or more mirrored disks, or a collection of disks in a raidz(2,3).

We were talking about a pool that consists of one vdev, a raidz1. We can never modify this vdev (one caveat, mentioned later), but we can add additional vdevs to the pool, which will then function similarly to a JBOD: writes will be spread across both vdevs according to I/O load. So you can add a single disk to the pool; however, this is a terrible idea. Doing so through the CLI will give you a stern warning and won't let you proceed unless you add the -f option to force it. The problem is that adding a single disk, or more precisely a non-redundant vdev, makes your pool no longer redundant: if you lose that single disk you added, all of your data will be lost. I just submitted a bug report saying this, actually. When I have time I'll try to edit the wiki, as what's currently in there is poorly written.
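In CLI terms (pool and device names are placeholders), the trap and the safe alternative look like this:

    # DON'T: a lone disk becomes a non-redundant vdev. zpool warns about the
    # mismatched replication level and only proceeds if you force it:
    zpool add -f tank ada6

    # DO: grow the pool by adding a whole new redundant vdev instead:
    zpool add tank raidz2 ada6 ada7 ada8 ada9 ada10 ada11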

Long story short, the docs need to be changed. It's very unclear what is being accomplished by 'Adding to an Existing Volume', and it leads people who have not worked with ZFS natively to misunderstand what's going on.


The one caveat: you CAN expand a vdev by replacing its disks, one by one, with larger disks. Once that is done, you set a flag on the pool (autoexpand or something like that), export and import the pool, and you'll have all the added space. You will always have the same number of disks in the same configuration; you can just migrate to larger drives.

For example, if you have a 4-disk raidz1 of 500 GB drives, you can swap one drive out for a 1.5 TB drive, let it resilver, swap another, wait, and repeat until they are all done. Once you're finished, you'll have 4.5 TB instead of 1.5 TB.
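The same procedure as a CLI sketch, with placeholder names (pool tank, a 4-disk raidz1 on ada0 through ada3):

    zpool set autoexpand=on tank   # if your ZFS version has the property
    zpool replace tank ada0        # after physically swapping in the bigger disk
    zpool status tank              # wait for the resilver to finish...
    # ...repeat replace-and-resilver for ada1, ada2 and ada3, then:
    zpool export tank
    zpool import tank              # the added space shows up after re-import
    zpool list tank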
 