Supermicro A1SAM-2550F compatibility questions


steven6282

Dabbler
Joined
Jul 22, 2014
Messages
39
Hey guys,
I'm looking at changing the configuration I was planning for my new NAS setup, due to some concerns with the way I was going to do it. One thing I want is for the NAS to be low-powered and produce little heat, and the new C2550 and C2750 Atom chips look very appealing for that. I'm looking in particular at this board: http://www.supermicro.com/products/motherboard/Atom/X10/A1SAM-2550F.cfm

It has a lot of nice features for a server board and can be had fairly cheap (I saw one site with it for $250). I could also get it as a barebones system on Newegg for $550, but I'm not sure I'd go that route because it's only a 4-bay chassis and I want at least 8 bays.

I'm concerned, though, about the compatibility of everything on this board, especially the NICs; I'm not familiar with the NIC they're using on this model at all. It's listed on that page as:
  • C2000 SoC I354 Quad GbE controllers (MACs)

So the first question is whether anyone has experience with this board or these NICs and knows if FreeNAS will work on it. And will FreeNAS be able to do link aggregation across 4 NICs?
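If it helps frame the question: my understanding is that FreeNAS sits on FreeBSD's lagg(4) driver, so I'd expect the aggregation to look something like the sketch below under the hood. The igb0-igb3 interface names are my guess for what the I354 MACs show up as, and the switch would need 802.3ad/LACP support.

Code:
# bring the member ports up, then build an LACP lagg on top of them
ifconfig igb0 up
ifconfig igb1 up
ifconfig igb2 up
ifconfig igb3 up
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1 \
    laggport igb2 laggport igb3 192.168.1.10/24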

Next question concerns memory. I'll obviously get ECC memory, but I'm wondering whether 8 GB will be sufficient or if I should go with 16 GB. My array will most likely consist of 8 x 4 TB drives (I already have the drives; I'm just not sure whether I'll put all 8 toward the NAS). If 8 GB is sufficient, is there any real advantage for FreeNAS in using two sticks instead of one? I was thinking that if I went with 8 GB for now, I'd do it as a single stick, so that if I decided to upgrade later I could still potentially go to 32 GB without having to replace smaller sticks.

Next is performance, given that I'm planning to run just FreeNAS and a Plex media server on here. From all the reports I've read, these little Atom chips actually perform really well. With the quad-core 2.4 GHz on this model, do you guys think performance will be adequate with potentially up to 8 x 4 TB encrypted disks?

The last question I have isn't related directly to that board, but is about the best disk configuration. I've been doing a lot of reading today about potential problems, and it seems like even RAID6 (RAIDZ2) is somewhat risky with drives of this size. But I can't really think of any other RAID configuration that would be less risky. RAID5 has the worry of a single drive failing during a rebuild; RAID6, of two drives failing (a lower chance than a single drive, but still somewhat significant according to what I've read, in the neighborhood of a 6 to 10% chance). A RAID10-type configuration could be equally risky, and it really depends on where a second drive fails during the rebuild: if it fails on the same side of the mirror as the first, no big deal, but if it fails on the other side then you have a problem lol :) So what RAID configuration is considered safest these days for higher-capacity disks?

Thanks for any information you guys can give me :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Those CPUs are nice for FreeNAS. The FreeNAS Mini is based on the ASRock version of the C2750. :D

I'd recommend you go with 32GB of RAM. The rule of thumb of 1 GB of RAM per TB of disk space means you should be seriously considering it with a pool that big. If you're trying to save a little money you could try going with 2x8GB sticks and, if performance is slow, buy two more. I wouldn't do less than 16GB for a pool of that size, though; you'd be taking some real risks of things going badly, and you probably don't want that.
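Run the rule of thumb against your own numbers:

Code:
8 drives x 4 TB = 32 TB of raw disk
32 TB x 1 GB RAM per TB  ->  ~32 GB of RAM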

My own review of the Mini found that the CPU is definitely not a throughput bottleneck: I installed a 10Gb LAN card in my Mini and saturated a pool consisting of several striped SSDs.

As for your disk configuration: if you set up a good maintenance schedule of disk monitoring, testing, and scrubs, you should be okay with RAIDZ2. If you're worried that RAIDZ2 might not be enough redundancy, get another disk and go with RAIDZ3.
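If you were doing it by hand at the CLI, an 8-disk RAIDZ2 plus that maintenance routine would look roughly like the sketch below. Pool and device names are placeholders, and the FreeNAS GUI does the pool creation for you (plus partitioning and swap):

Code:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# the maintenance side -- schedule these, don't run them once and forget:
zpool scrub tank              # e.g. every couple of weeks
smartctl -t long /dev/da0     # long SMART self-test, repeat per disk
zpool status -v tank          # check for errors after each scrub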

PS - Please don't say RAID5, RAID6, and RAID10. Those are names for hardware RAID. ZFS uses the terms RAIDZ1, RAIDZ2, and mirrors. If you plan to use hardware RAID with ZFS you will find yourself flogged to death here, and those "in the know" will quickly find a use for your username... in the ignore list. ;)
 

steven6282

Dabbler
Joined
Jul 22, 2014
Messages
39
Lol, yeah, sorry about the terms. I come from an all-hardware-RAID background, so they're what I'm used to :) When it boils down to it, they mean more or less the same thing in a generalized sense haha. One-disk parity, two-disk parity, striped, mirrored, etc. A fig newton is still a cookie! ;)

But no, I was planning to avoid hardware RAID on this one, even though I still haven't been able to find a single thing pointing out the actual dangers of it, only things stating that you lose some of the advantages of ZFS. I'll acquiesce to what seems to be the popular opinion and avoid it, though. Besides, thinking about it, for a dedicated NAS it makes sense for the CPU to do the heavy lifting on the server's primary role. What is a hardware RAID really, anyway? Just a SoC running software RAID, dedicated to handling the disk management. Essentially the entire box is the "hardware RAID" in this configuration, and things like the web server and plugins are the ones intruding hehe :)

I'm assuming RAIDZ3 is simply three-disk parity?

My biggest concern, though, is NIC support. This feature request seems to suggest support for them was added in the 9.2.1 build, if I'm reading it correctly? https://bugs.freenas.org/issues/3589

EDIT:
Oh, and just thinking about disk configurations again: would FreeNAS be able to do two RAIDZ1 sets mirrored against each other? Obviously that would give up a lot of usable space, but it would allow for a single disk failure on the opposite mirror during a rebuild. I don't think anything I'm putting on this NAS will be important enough to warrant giving up that much space (I'm most likely going to stick with RAIDZ2); I'm just theorizing about what would be safest lol. So I guess if ZFS supported a configuration like this and I were going for utmost data redundancy over space, a mirrored RAIDZ3 (assuming that is three-disk parity) would be a highly redundant and safe system haha.

EDIT2:
Two more questions I need answered if I build with this board. Will ZFS be able to pool drives connected to different controllers (say, 4 on the onboard controller and 4 on a PCIe controller card)? And, I'm going to search for this myself but will go ahead and include it in my post as well: are there any recommended PCIe controllers (plain non-RAID HBAs)?

EDIT3:
Ok... one more question: the power supply. The chassis I'm looking at doesn't come with one, so if I go with it I'll have to order one separately. The drives I have claim to use only 4.5 watts during reads and writes. Even x 8 that's only 36 watts peak, and considering the rest of the system is built around a low-power Atom board, it almost seems like I could get away with as little as a 200 W PSU. I'd probably bump it up to at least 300 W, maybe 400 W, to be safe and keep the load low for longevity. Does this seem accurate?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You might want to read my noobie guide. There's no such thing as RAIDZ1 pools mirrored; it's either RAIDZ1 or mirrors. If you want to do what I think you're trying to do, you'd be better off just going with flat-out mirrors for all of the disks. The advantage is that your pool will be wicked fast with all the IOPS you'll be able to do. RAIDZ2 or RAIDZ3 is probably the safest statistically, though.
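To be concrete, "flat-out mirrors" means a stripe of two-way mirror vdevs, ZFS's answer to RAID10; pool and device names below are placeholders:

Code:
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7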

As long as the controller is supported by FreeNAS/FreeBSD, the hard drives will "just work". You can switch them around between controllers whenever you want, boot the machine back up, and ZFS will make it work.
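ZFS writes its labels to the disks themselves, so the pool doesn't care which port a disk hangs off. If you ever shuffle cables with the box still up, an export/import does the same thing a reboot would; pool name is a placeholder:

Code:
zpool export tank    # before recabling
zpool import tank    # ZFS finds the member disks wherever they landed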

There are plenty of stickies that list recommended controllers. I'd recommend you give those a read.

With 8 disks I'd do a 400 W unit. The starting current will be significantly higher than the running current, and there's no reason to fry your PSU (and potentially your new server) because you went too small. ;)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
steven6282 said:
... it almost seems like I could get away with as little as a 200 W PSU. I'd probably bump it up to at least 300 W, maybe 400 W, to be safe and keep the load low for longevity. Does this seem accurate?

What matters isn't HDD read/write power draw, it's spin-up power. Count on 30 W per drive.
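Rough math, with the non-drive figure being my ballpark for the Avoton board, RAM, and fans:

Code:
8 drives x 30 W spin-up        = 240 W
board + CPU + RAM + fans       ~  50 W (rough guess)
                                ------
peak at power-on               ~ 290 W  ->  a 400 W unit leaves headroom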
 

steven6282

Dabbler
Joined
Jul 22, 2014
Messages
39
cyberjock said:
There's no such thing as RAIDZ1 pools mirrored; it's either RAIDZ1 or mirrors. ... you'd be better off just going with flat-out mirrors for all of the disks. ... With 8 disks I'd do a 400 W unit. ...

Eh, I was just theorizing about utmost data redundancy. A mirrored RAIDZ1, if it were possible, would tolerate a failure on the opposite mirror during recovery, whereas simple mirrors will not: with plain mirrors, if you get a failure on the opposite side of the mirror during recovery, you might be digging into your backups :) I know it's possible on higher-end hardware RAID controllers (it's dubbed RAID51), but I guess not in ZFS.

And as for the PSU, thanks for the input; that helps me decide. I'm going to go with a 450 W Gold-efficiency unit. I'm skimping slightly here and getting a standard ATX unit instead of a 2U server unit, because the 2U chassis I'm looking at supports both, and I've used this model PSU in a couple of other builds and know it to be reliable (Corsair CS450M).

I'll go with 2 x 8 GB of ECC RAM for now and move to 32 GB if my performance seems low. Although I'm not sure how to determine whether it's the CPU or the memory limiting performance, hopefully I'll be able to figure that out, if it becomes an issue, before simply throwing more money at it haha.
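I suppose FreeBSD's stock tools would at least show which resource is pegged while a big transfer runs; this is just my plan, not something I've verified on FreeNAS yet:

Code:
top -P    # per-CPU load; one core pinned during a copy = CPU-bound
gstat     # per-disk busy %; disks near 100% = the spindles are the limit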

And I'm going to stick with RAIDZ2 for now, so that I can take one more failure during a rebuild without losing the array. I don't want to risk that one failure landing on the other side of a mirror, and the speed gains from striped mirrors aren't as important to me as being able to recover from a failed disk without losing the array.

I do think I'm going to scale back to 6 drives in this build instead of 8, though. I'll take the other two drives and keep them in my ESXi server, hardware-mirrored there, as a secondary backup location and some local storage for ESXi. That should still leave me sufficient space (I really only need about 5 TB right now; I only bought 8 drives because that's how many my server could hold hehe), and if I need more later I can always order another drive or two and expand the array.

Thanks again for the advice!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You can't add disks to a vdev that's already been created... so adding a single disk to your pool would be a bad idea. My noobie guide, which I recommended earlier in this thread, explains this, because it's a common mistake and for many people it has been fatal to their data.
 

steven6282

Dabbler
Joined
Jul 22, 2014
Messages
39
cyberjock said:
You can't add disks to a vdev that's already been created... so adding a single disk to your pool would be a bad idea. ...

Well, that is unfortunate lol. I just assumed it would be possible, since you can do it with a lot of hardware RAID controllers. I wonder whether that's something ZFS might rectify in the future, or whether it's simply impossible with the way the technology works. Maybe I'll dig into it more later if I get the extra time.

Not really a big deal; I'll decide whether I want to go with 6 or 8 disks when the other hardware comes in. If I have to buy 6 x 6 TB drives to increase the capacity later when I need it, so be it. With ~14.8 TB of usable space from 6 drives in a RAIDZ2 array, I probably won't be needing more space before the new NAS is due for retirement anyway hehe. It's taken me nearly 4 years to get to the point where I felt my old 4 x 2 TB RAID5 array was getting too small.
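For anyone wondering where that figure comes from, roughly:

Code:
6-disk RAIDZ2  ->  4 data disks
4 x 4 TB = 16 TB raw, or about 14.5 TiB as the OS counts it, before ZFS overhead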
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm gonna say this again and hope it sinks in if it hasn't already...

/steps on my soap box
You should read my noobie guide. There are plenty of mistakes you might not know you're making until it's too late. ;)
/steps off my soap box
 

steven6282

Dabbler
Joined
Jul 22, 2014
Messages
39
cyberjock said:
I'm gonna say this again and hope it sinks in if it hasn't already...
/steps on my soap box
You should read my noobie guide. ...
/steps off my soap box

Lol, I was looking at your guide before you got on your soap box. But honestly, after seeing your posts and exchanging those PMs with you, I question whether you ever really get off your soapbox!!! I mean no offense by that; it's just the impression it gives when searching the forums and seeing post after post of yours with an antagonistic feel to them.

There were a couple of things in it I didn't know, but nothing I wouldn't have been able to figure out when it came time to tackle them. Take drive expansion, for example: I just tried it in my current test installation, and it's clear that if you try to add a disk, you can't add it to the original vdev. It clearly shows the new drive being added as its own array, not as part of the original one. So the only thing that might have happened to me is wasting some money ordering a drive before realizing it, but meh, I would've found a use for it had that happened lol.

The L2ARC bit was interesting, but unneeded for my uses.

I didn't know about the AES-NI bit either, and it was nice to confirm that the C2550 chip I'm looking at does support it, since I intend to use encryption.
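For anyone else wanting to check their own chip: FreeBSD prints the CPU feature flags at boot, so something like this should confirm it (AESNI is the flag name I'd expect in the Features2 line):

Code:
grep AESNI /var/run/dmesg.boot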

I would make a suggestion to you, however: you might want to change the background on your guide. The white text on the lighter gray to the left can be difficult to read. I had a colorblind friend of mine look at it, and he said he simply was not able to discern some of the words without highlighting them.

Thanks again for the help and suggestions!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The noob guide isn't there to point you to things you couldn't figure out; it's to simplify all the stuff you want/need to know into one place, because otherwise you might not find it all before it's too late. It's meant to be easy access to the most common mistakes.

What's more likely is that you'd have added the disk and then realized you'd boned yourself. ;) Once a vdev is added to a pool, it cannot be removed, even if you added it 5 seconds ago. So you'd have a single point of failure for your pool. Then someday, days/months/years later, that drive dies and you're wondering why your pool with 2 disks of redundancy failed from a single disk. Happens all the time; that's one reason I specifically cover that situation in my guide. It sucks telling people, "Sorry, but you can't get to your files. Oh, and by the way, there are zero recovery tools for ZFS. And if your data is important and you don't have a backup, you're looking at about $20k to recover just 450 GB of data." It used to be nearly a daily occurrence around here. :/ One guy accidentally added his SSD as a stripe, realized his mistake immediately, and shut his server down. Then, being the genius that he is, he zeroed out the SSD. Never saw his data again. No backup, and he didn't have $20k in cash to blow on his data.
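One habit that would have saved that guy: zpool add has a dry-run flag, so you can see what the pool would look like before committing. Names below are placeholders:

Code:
zpool add -n tank da8   # -n prints the would-be layout; changes nothing
zpool status tank       # confirm the real layout before trusting the pool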

The background in my guide has already been changed; I've done some updating. I just have to add a little more info and push it out the door for 9.2.1.6. Thanks for the feedback, though. :)
 