Hi all,
I'm very happy with my 1st FreeNAS box: 8*3T drives in RAID-Z2, about 16T usable, all in 1 zpool with 1 vdev.
The photos & videos generated by my DSLR (Nikon D800) are so huge that I'd like to build a 2nd box with about 100T capacity to last the next 2 years. I have read some posts, the best-practice guides, and the FAQ on the forum, but I'm still not clear on a few questions. Your feedback and comments are very welcome.
Here's some basic info:
a) A home server, mainly for multimedia, two or three concurrent connections at most.
b) As small+quiet+green+cheap as possible.
c) Reliability is the 1st priority, capacity is 2nd, availability 3rd, throughput 4th.
d) Data volume estimate: ~100T total: 1) photos (30~40T), 2) videos (40~50T), 3) everything else, e.g. music/documents (15~20T). That's about 35~40 4T HDDs.
e) I can wait until the FreeNAS 9.1 beta release.
f) FreeNAS is serving CIFS shares now; the clients are Windows and Android devices only.
Question 1. ZFS setup:
1.1: To reduce HDD running time and maximize power savings (sometimes I only need to access 1 category and won't touch the others for 2 weeks), I'd like to spin down the unused HDDs. Should I split the categories into different zpools or vdevs, or are 3 folders for the categories fine?
1.2: zpool setup: From some threads I got the impression that only 1 zpool can be accessed at a time, and that to access another zpool I'd have to export the current one and import the new one. Is that right? If so, 3 zpools for 3 categories wouldn't work, because sometimes I still need to access all 3 categories at the same time, which leaves only 1 option: 1 zpool for all 3 categories?
1.3: I'm happy with RAID-Z2 on my 1st box, so the 2nd box will stay RAID-Z2 as well. Any problem with that?
1.4: Per the best-practice guides, RAID-Z2 works best with 2^n+2 HDDs, e.g. 6/10/18/34. 10*4T HDDs in RAID-Z2 provide about 28T, 18*4T about 50T, and 34*4T about 100T. So: are 34*4T HDDs enough for 100T, or should it be one set of 18 HDDs for photos, another set of 18 for videos, and a set of 6 for the rest, i.e. 42*4T HDDs in total? Which is better?
1.5: vdev setup: What about vdevs? Some threads say more than 11 HDDs in 1 vdev is too large. Is that true? If so, with 10 4T HDDs per vdev, should I set up 4 vdevs? Or 1 vdev per category? Or is a single vdev for the whole 100T fine?
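To sanity-check the capacity numbers in 1.4/1.5, here's a rough sketch (my assumption: a RAID-Z2 vdev gives roughly n-2 data disks of usable space; it ignores ZFS metadata/padding overhead and the usual "keep pools under ~80% full" advice, and it converts vendor TB to the TiB that ZFS reports, which is why my figures come out a bit higher than the conservative ones quoted in 1.4):

```python
# Rough RAID-Z2 usable-capacity estimate.
# Assumption: usable space ~= (n - 2) data disks per vdev; real ZFS
# overhead (padding, metadata) will shave more off these numbers.
TB = 1000**4   # how drive vendors count bytes
TiB = 1024**4  # how ZFS reports space

def raidz2_usable_tib(disks: int, disk_tb: int = 4) -> float:
    """Approximate usable space of one RAID-Z2 vdev, in TiB."""
    return (disks - 2) * disk_tb * TB / TiB  # two disks' worth go to parity

for n in (6, 10, 18, 34):
    print(f"{n} x 4T RAID-Z2: ~{raidz2_usable_tib(n):.1f} TiB usable")

# A pool striped across several vdevs adds their usable space together:
print(f"4 vdevs of 10 x 4T: ~{4 * raidz2_usable_tib(10):.1f} TiB usable")
```

By this estimate a single 34-wide vdev and four 10-wide vdevs in one pool land at about the same usable space; the trade-off between them is redundancy and resilver behaviour, not capacity.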
1.6: Should I consider ESXi?
Question 2. Hardware setup:
2.1: CPU: maybe an Intel i3, maybe a Haswell i5? One CPU should be enough.
2.2: RAM: is 32G enough? With plenty of 8G modules on the market and cheap mainboards offering 4 RAM slots, 32G is easy to reach. If 64G is needed, only the super-expensive server mainboards provide 8 RAM slots. :( Also, 8G DDR3 ECC RAM is still too expensive, and only the pricey mainboards support ECC. So if possible, I'd prefer 32G of non-ECC RAM. Fine?
2.3: Mainboard: some ASRock mainboards have 8 SATA ports and 4 RAM slots for only $70, and I love them! But if I need 8 slots of ECC RAM, there aren't many choices... and all of them are expensive...
2.4: SATA card: 4 cheap 8-port SATA cards plus the motherboard's SATA ports? Or one Dell H810 6Gb/s SAS card (with 1G cache), which can connect up to 190*4T HDDs? Are the "auto cache on SSD" and "1G cache" features useful?
2.5: Case + power: there will be 34~42 HDDs in total. As discussed in 1.1, if it's possible to shut down part of the HDDs, should I use 1 case for all of them or 2 cases?
Option 1) I found that a second-hand Rackable SE-3016 3U 16-disk server case is small and not too expensive. Any better suggestion? The reason I mentioned a single Dell H810 in 2.4 is that it can connect up to 12 of these Rackable 3U 16-disk cases. There are 2 SAS SFF-8088 ports on the front panel of the case, and internally it has SATA cables for 16 disks.
Option 2) Buy 4~5 normal cases with room for 10 HDDs each. But how do I connect 5 cases (power cables and SATA cables) to one motherboard, and how do I solve the cable-length problem? Given the requirement (as small+quiet+green+cheap as possible), which option is better? Any case recommendations?
2.6: Should I go for two gigabit NICs to double the throughput? I'm using a gigabit router/switch and gigabit NICs right now. Do I need two gigabit NICs on the new box to double the throughput? If yes, should I also upgrade the gigabit router/switch and the WiFi gear? Or should I use some other port or connection type? Environment: CIFS + Windows/Android.
2.7: To stay as small+quiet+green+cheap as possible, maybe this time I should build 3 small FreeNAS boxes (1 for photos, 1 for videos, 1 for the rest) instead of 1 giant box, so that I can use mainstream components and don't need to buy the expensive server-grade ones. Right?
Question 3. Anything missing?
I know I've mixed up several concepts and some of these questions are dumb enough. :) Your comments are very welcome, on any of the questions. :)