BUILD C2750 - ASRock or Supermicro?

Status
Not open for further replies.

Artemis1121

Cadet
Joined
Apr 18, 2015
Messages
4
Just so you can see what my motivation is:

Current NAS:
- Marvell Sheevaplug (1.2GHZ Kirkwood ARM SoC, 512MB RAM, USB2)
- 3 USB Hubs
- 19 USB harddisks (1-3TB each, 37.5TB total)
- replaced all the single power supplies with an ATX-Power-Supply
- Debian testing
- 15-30MB/s
- no raid
- no encryption
- no redundancy

Trying to achieve:
- 100MB/s
- encryption
- raidz1/raidz2

After searching and comparing possible options (and waiting for the D-1540, only to see that a board costs 1000€+), I believe a C2750 would suit my needs best.

But I have some questions:

1) ASRock C2750D4I or Supermicro A1SA7-2750F
I read that the ASRock board has problems with its Marvell SE9230 SATA controller, and that it is probably a driver issue. Since I will need 9 or 10 drives, it's not an option to use only the other ports... on the other hand, it's a difference of 150€. In the German part of this forum the ASRock board is recommended without mentioning this problem. Has it been solved?

2) ASRock C2750D4I or Supermicro A1SA7-2750F
Since my system will idle most of the time, does anybody know the idle power consumption of these builds?
I read something like 19W for a plain C2750 and 26W for the ASRock (caused by the Marvell controllers), but found no info about the Supermicro with 16x SATA.

3) 8GB or 16GB
1-2 simultaneous users, most of the time only reading, mostly large files... Is 16GB really necessary? ZFS uses a lot of RAM, but my current system works with 512MB... that just feels wrong ^^

4) Seagate Archive or WD Green
Any known disadvantages with the Seagate Archive hard disks? Partially overwriting tracks doesn't seem like a good idea... better not to join the public beta test, or does anybody have positive experience with such a RAID?

5) raidz1/raidz2
Important data is backed up on additional hard disks, DVDs, or both.
I have lost disks and data in the past. It happens...
I've read a lot about it, but is raidz2 really so important?
Is it simple to upgrade a raidz1 to raidz2 if I decide to do so later?

6) spindown
In my current system I use hd-idle to spin down the disks after 10 minutes without use. Will the disks of a FreeNAS RAID automatically spin down? I've even read that it is strongly advised not to spin down the drives; is this correct?

7) overhead
My current drives are formatted with ext3 and the largefile4 option; on a 1TB drive this gives me 932GB of usable space, compared to 917GB without largefile4.
What is the filesystem overhead of ZFS?

Any help solving these questions would be great!
Thank You!

PS: You don't need to say anything about my current NAS, just laugh a little bit, or scream, or both! ^^
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
1&2) The ASRock C2750D4I and C2550D4I have 6x Intel SATA ports and 6x Marvell. The Marvell ports currently range from slow to unstable, to the point of pool unavailability. Our recommendations for >6 SATA ports are the Supermicro X10SL7-F and the ASRock E3C224D4I-14S. The A1SA7-2750F doesn't fit standard systems, so you'd need to customize your chassis and PSU: you'll need an 8-pin 12V and a 4-pin 5Vsb plug on your PSU, and the mounting holes are nonstandard, but the 16-port controller is quite good. We've had lots of users with problems on the Marvells -> no buy for the C2x50D4I. A quick calculation: Supermicro states a C2750 system will use 30W before AC/DC conversion; add 16W for the SAS controller and a few watts for the DIMMs and USB stick(s) -> 50W give or take under full load, excluding HDDs.
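The quick power calculation above can be sketched out explicitly. The 30W board figure and 16W SAS figure come from the post; the DIMM and USB stick numbers are rough assumptions, and the 80% PSU efficiency is illustrative:

```python
# Rough power budget for an A1SA7-2750F build. Board and SAS figures
# are quoted in the thread; DIMM/USB numbers are assumptions.
parts = {
    "C2750 board + CPU (Supermicro figure)": 30,
    "16-port SAS controller": 16,
    "4x DDR3 DIMMs (~2W each, assumed)": 8,
    "USB boot stick(s) (assumed)": 2,
}
total = sum(parts.values())
print(f"~{total}W before AC/DC losses, excluding HDDs")

# At an assumed ~80% PSU efficiency, wall draw is roughly total / 0.8.
print(f"~{total / 0.8:.0f}W at the wall (assuming 80% efficiency)")
```

Spinning drives add roughly 5-8W each on top of this, so a fully populated 16-bay box lands well above these numbers under load.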

3) At that pool size we'd recommend at least 16GB and having another 16GB on order.

4) The Seagate Archive 8TB drives are optimized for archival storage -> write your huge media library or backups to them once, and from then on mostly read access. For a general NAS I'd stick with WD Red 6TB, but you will most certainly achieve 100MB/s if you stick 8x 8TB HDDs together. They're also around the same price as the 6TB Reds, due to that new technology.

5) If you use 6TB drives, you'd need 10 of them to create a raidz2 with the same usable capacity as you have now. With 8TB drives that's "only" 8 HDDs in raidz2 to store everything you have on your current sad excuse of a NAS. At least you can easily expand with another 8x 8TB set if you choose the 16-port mobo. raidz1 is out of fashion for such big arrays and HDDs; the point where it is still acceptable is 4x 3TB HDDs in raidz1. Above that, a failed rebuild is almost certain.
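The capacity comparison above follows from simple parity accounting: raidz1 gives up one disk to parity, raidz2 two, raidz3 three. A minimal sketch (raw upper bounds; real pools lose a bit more to metadata):

```python
# Raw usable capacity of a single raidz vdev, ignoring metadata overhead.
def raidz_usable_tb(disks: int, tb_per_disk: float, parity: int) -> float:
    """parity = 1, 2 or 3 for raidz1/raidz2/raidz3."""
    return (disks - parity) * tb_per_disk

print(raidz_usable_tb(10, 6, 2))  # 10x 6TB raidz2 -> 48.0 TB
print(raidz_usable_tb(8, 8, 2))   # 8x 8TB raidz2  -> 48.0 TB
print(raidz_usable_tb(4, 3, 1))   # 4x 3TB raidz1  -> 9.0 TB
```

This is why the 10x 6TB and 8x 8TB layouts come out equal: both sacrifice two disks' worth of space to raidz2 parity.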

6) Not really possible/feasible, since it puts a lot more strain on the HDDs.

7) In general we'd recommend not filling the ZFS pool over 90%, to keep it all nice and speedy. Above that percentage ZFS switches to space optimisation -> slower. At 95% you'd best have your new vdev ready to expand the pool. Apart from that, ZFS has an overhead of 5-10% or so. Depending on the files stored, compression can help a bit and doesn't put any strain even on that C2750.
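Putting the two rules of thumb above together (stay under 90% full, budget ~5-10% for ZFS overhead) gives a quick way to estimate how much of a pool you can actually plan on using. The 8% overhead figure here is an assumed midpoint, not a spec:

```python
# Plannable capacity after ZFS overhead and the 90% fill rule of thumb.
def plannable_tb(raw_tb: float, overhead: float = 0.08,
                 fill_limit: float = 0.90) -> float:
    # overhead: assumed ZFS metadata overhead (thread says 5-10%)
    # fill_limit: stay below this fraction to avoid slow space-optimisation mode
    return raw_tb * (1 - overhead) * fill_limit

# For 48TB of raidz2 usable space:
print(f"plan to fill at most ~{plannable_tb(48):.1f} TB")
```

So "48TB usable" is closer to ~40TB of comfortably usable space, which matters when sizing against a 37.5TB existing collection.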
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
FYI: The German Hardware sticky is still WIP and is only a quick reference to filter out some of the most egregiously bad builds (Seems to be working).

The Marvell controllers on the ASRock boards are still flaky. These boards do use quite a bit more power than bare C2x50 systems.

If you really want 10 drives, there's not much point to mini-ITX. For smallish systems, the ASRock E3C224D4I-14S is a good choice. It fits in a Lian-Li PC-Q26, but not in most mini-ITX cases. It takes a Haswell processor and as such is more flexible than the Atom boards. Idle power consumption should be similar to the Atom boards, plus roughly 10W for the SAS controller.

8GB of RAM will work, but expect miserable performance and even real issues if you start using stuff like Plex. 16GB is a much better starting point. At the very least, start with a single 8GB DIMM and not two 4GB DIMMs.

Encryption is designed to fail by locking your data away. You must absolutely be diligent in backing up all keys and passphrases and following the drive replacement instructions to the letter or you will lose data. Do not use encryption if this is more than you're willing to handle.

4) I'd say neither. The shingled drives are still a bit experimental all around. WD Greens aren't very appropriate, but can be hacked into usable NAS disks (wdidle - search the forum). WD Reds are a better choice and work out of the box. In any case, test the drives very thoroughly (several options described elsewhere on the forum).

5) RAIDZ2, definitely. RAID5 and RAIDZ1 can be rather prone to data loss. Moving to RAIDZ2 later is far from trivial (Please read Cyberjock's guide for pretty much what you need to know about ZFS).

6) Frequent spin-down/spin-up cycles are likely to wear out your drives faster, for negligible power savings. It can be accomplished, but it's neither recommended nor supported.

7) This guide should be useful: https://forums.freenas.org/index.php?threads/zfs-raid-size-and-reliability-calculator.28191/
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
4) Seagate Archive or WD Green
Any know disadvantages with the seagate archive harddisks? partially overwriting tracks doesnt seem a good idea.. better not joining the public beta test or does anybody have positive experience with such a raid?
...

I own a Seagate 8TB Archive SMR drive and use it with my FreeNAS. BUT, it's used solely as a single-disk pool for backing up a 4-disk, 4TB-per-disk RAID-Z2 pool. Write performance is spotty: I got 30-60MB/s using rsync, though reads were much faster (up to 150MB/s). For my use, I am happy.

Since these SMR drives use a new type of track writing, I really wanted to use ZFS so I could scrub the backup pool before starting another backup. That way I have a chance to detect firmware problems (or other drive-related problems).

My recommendation today is not to use these SMR drives in RAID-1 or higher; performance would be irregular. Plus, they don't seem to support time-limited error recovery (TLER) options.

Here are two discussions about SMR drives on FreeNAS:

https://forums.freenas.org/index.php?threads/seagate-archive-8tb-discussion-moved.28416/
https://forums.freenas.org/index.php?threads/seagate-8tb-archive-drive-in-freenas.27740/
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
The performance would still be stable above 1Gbps with that many disks.

Error recovery is IMHO not of importance with ZFS. That kind of error recovery would make the performance _really_ irregular. ZFS would just note a checksum error and carry on with a slightly degraded array. That doesn't bother us if we do an 11-disk raidz3... A hardware RAID adapter removes the whole disk from the array and leaves the whole 8TB to rebuild, whereas ZFS rebuilds just that widdle teensy checksum error.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The performance would still be stable above 1Gbps with that many disks.

Error recovery is IMHO not of importance with ZFS. That kind of error recovery would make the performance _really_ irregular. ZFS would just note a checksum error and carry on with a slightly degraded array. That doesn't bother us if we do an 11-disk raidz3... A hardware RAID adapter removes the whole disk from the array and leaves the whole 8TB to rebuild, whereas ZFS rebuilds just that widdle teensy checksum error.

Note the checksum error and immediately correct it, if redundancy is available.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The performance would still be stable above 1Gbps with that many disks.
...

For reads, yes. For writes, well: I saw 30MB/s writes, and with 8 data disks that should give a 240MB/s write speed. Except there are delays in the writes, so it's possible (note the word possible) that with a RAID-Z3 of 11x 8TB SMRs you could get less due to those delays.

That said, it would be an interesting experiment. However, not one I would pay for...

It would be better if the SMR disks had TRIM/DISCARD support, so ZFS could write to known-empty tracks.
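The back-and-forth above boils down to one estimate: does per-disk streaming write speed times the number of data (non-parity) disks still clear Gigabit Ethernet? A sketch using the thread's rough figures, deliberately ignoring SMR rewrite stalls (which, as noted, can pull the real number down):

```python
# Does an SMR array's streaming write rate beat gigabit ethernet?
GBE_MB_S = 125  # 1 Gb/s is ~125 MB/s of line rate

def array_write_mb_s(data_disks: int, per_disk_mb_s: float) -> float:
    # Striped writes scale with data disks only; parity disks don't add
    # bandwidth. SMR rewrite delays are not modeled here.
    return data_disks * per_disk_mb_s

# 11-disk raidz3 -> 8 data disks, at the 30MB/s worst case seen above:
rate = array_write_mb_s(8, 30)
print(rate, rate > GBE_MB_S)  # 240.0 True
```

Even the pessimistic 30MB/s per-disk figure leaves comfortable headroom over a single gigabit link, which is why the disagreement is only about sustained consistency, not average throughput.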
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
With 8 data disks spinning and 30MB/s per disk, that's 240MB/s minimum, roughly double Gigabit Ethernet. And that's going to the cache area first. No, these disks are not made for high IOPS; they are closer to tapes than to HDDs. They're archival mass storage. Besides, it won't be a high-performance build either, since the RAM/storage ratio is quite low. If you want performance, get HGST He8 at $600 a pop.

@OP: If you desire 64GB RAM, the combination of a Xeon E5 with 4x 16GB registered DIMMs is around 30% cheaper than a C2750-based system with 4x 16GB unbuffered DIMMs.
 

Artemis1121

Cadet
Joined
Apr 18, 2015
Messages
4
I have to rethink the whole build...
Currently I have a very, very small system: 5W idle, plus USB hubs and drives. The C2750 with its 20W TDP, probably less than 30W idle without HDDs, seemed a good choice.
But now: "get an E3", "or an E5", "additional RAID controller", "better 32GB RAM", "or directly 64GB", "better get WD Red", "if you want performance get HGST He8"... the costs get higher and higher, and the power consumption too. I tried to find such builds: ~60W idle without HDDs, and the CPU has a TDP of 95W. Even if idle is way less, I have to take care of cooling this. And the price for 48TB of usable space increased from 2700€ to 4300-7000€...
I just wanted encryption, redundancy and saturating GbE when transferring large files, while staying as energy-efficient as possible.
12x the power consumption, 64-128x the RAM, and high-end disks wasn't exactly what I had in mind...
Yes, I know, not all of these options need to be considered, but you all gave me many things to think about. This system is getting too big for me...
And I didn't know that encryption is such a problem with FreeNAS. I have my systems and some portable disks encrypted right now; I thought hardware-accelerated encryption of a RAID would be an easy thing with FreeNAS.
If I have to take care of this, in addition to the huge amount of RAM needed, maybe ZFS/FreeNAS isn't the way to go. I'll have to take a look at LUKS or other options again...
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Cooling? Slap the boxed cooler on there and be done.

The He8 wasn't exactly for you; that was more a rolleyes as to what one needs to pay if one wants enough performance to saturate 10GbE links, instead of opting for cheaper disks (Seagate Archive) which are capable of saturating 1GbE well enough. Then I thought you might have seen the "supports 64GB RAM!!1" claim on the C2750 boards, which isn't feasible. But for archival storage, 32GB is enough for ~100TB worth of data; 16GB RAM shouldn't be used with >30TB.

Encryption isn't a problem. Just back up your keys in many places and choose a CPU capable of AES-NI to lower the power consumption and CPU usage. Most encryption problems are about users losing their keys.

There's no additional RAID controller needed. ZFS doesn't want RAID controllers; ZFS wants relatively cheap HBAs like the one on the A1SA7-2750F. If you feel fine with modding a PSU and case, that's the right mobo for you. Some users can't even fit a standard ATX board in a standard ATX chassis without shorting something out... PSU-wise, I'd pick one with at least 40A on the 12V rail, due to spin-up current. You could mod a 3U rackmount chassis like the Inter-Tech 3U-3316L or 3U-3416 (the latter comes with 3x 120mm fan bays; the 3316L fan tray would need some Dremeling as well) with 16 HDD bays to your needs. Standard ATX PSUs are a little too tall for 3U mounting, so the rear end of the chassis would need a complete redo, since the board doesn't use a standard layout. That should make a quiet, low-power chassis capable of housing 128TB raw (~80TiB usable). To connect the backplanes to the mobo you'll need these cables: https://geizhals.de/inter-tech-mini-sas-x4-sff-8087-auf-4x-sata-kabel-88885237-a1006361.html
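The 40A/12V recommendation follows from simultaneous spin-up: a 3.5" HDD can briefly pull around 2A on the 12V rail while spinning up (an assumed typical figure; check your drive's datasheet), and without staggered spin-up all disks start at once:

```python
# Why a >=40A 12V rail: worst-case simultaneous spin-up of 16 drives.
SPINUP_AMPS_12V = 2.0  # assumed peak 12V draw per 3.5" drive at spin-up
RAIL_AMPS = 40.0       # suggested minimum 12V rail rating

drives = 16
peak_amps = drives * SPINUP_AMPS_12V
print(f"~{peak_amps:.0f}A on the 12V rail at simultaneous spin-up")
print("40A rail sufficient:", peak_amps <= RAIL_AMPS)
```

With staggered spin-up (supported by many HBAs and backplanes) the peak drops sharply, but sizing for the simultaneous case leaves headroom for the board and fans.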
 

Artemis1121

Cadet
Joined
Apr 18, 2015
Messages
4
It took some time to read more and to increase the budget, but here my hopefully almost final NAS:

1) Supermicro A1SAM-2750F retail (MBD-A1SAM-2750F-O) OR Supermicro MBD-A1SRM-2758F-O
Should I get a C2750 or a C2758? The 2758 is 11% cheaper.

2) Samsung 8GB DDR3-1600 CL9 (M391B1G73QH0-YK0) OR Hynix 8GB DDR3 / PC3-12.800 / CL11 (HMT41GE7AFR8A-PB)
Both are listed as tested memory for both mainboards. The Hynix costs 25% more, so I would take the Samsung.

10x Seagate Archive HDD SATA III 8TB (ST8000AS0002)
Samsung 850 Evo 120GB
Lian Li PC-D8000A silver
IBM M1015
2x LSI CBL-SFF8087OCF-10M multilane breakout cables (SFF-8087 to 4x SATA) from Avago
600 Watt be quiet! System Power 7 Bulk Non-Modular 80+ Silver

Overall ~3500€ for 64TB usable.

What do you think?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
C2758 will be slower.

Otherwise, looks good. Consider a Seasonic G-550 instead of the be quiet.
 

Artemis1121

Cadet
Joined
Apr 18, 2015
Messages
4
Thanks for the feedback!
The Seasonic G-550 costs 40% more, has fewer SATA connectors (6 vs 9), and the efficiency IMHO seems to be good for both power supplies.
What would be the benefit of the Seasonic?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Thanks for the feedback!
The Seasonic G-550 costs 40% more, has fewer SATA connectors (6 vs 9), and the efficiency IMHO seems to be good for both power supplies.
What would be the benefit of the Seasonic?

Better quality, mostly.

As for the SATA power connectors, that's just a matter of adding adapters/Y cables.
 