BUILD Check my build - is it any good?


DinoRS

Cadet
Joined
Sep 3, 2016
Messages
3
Dear all,

I've been toying around with a 32 GB RAM / i3 box, but apparently that didn't do the trick well enough, so here is what I have now:

FreeNAS 9.10 Stable

X9SRH-7TF with 6-core / 12-thread CPU
64 GB ECC RAM

Volume 1:
5x 4 TB HGST 7k RPM Drives - RaidZ1, iSCSI, Plex, CIFS
1x Intel SSD DC S3710 200 GB as Log drive
2x Crucial 256 GB SSDs as Read Cache

Volume 2:
2x 3 TB WD NAS Red Drives - Mirror, iSCSI only

What I do: iSCSI, CIFS and Plex (currently; planning to move CIFS / Plex to another box)

Mostly, of course, I'm interested in whether the RAM is enough for this build or not. I could probably add another 64 GB if really necessary.

On the 32 GB box I had random iSCSI connection drops, so VMs running on that hardware just froze. Currently there is still only one VM on the whole pool, since that freezing is what held me off from deploying more to it. CIFS transfers also dropped to 0 bytes after about 26 GB of data and never recovered, unless I pulled the network cable from one of the i350-T4 ports fast enough (as in, while I was still waiting for the transfer to complete); then it sometimes continued with the data transfer.

Before I cry bug, I'd like to hear that this hardware configuration is OK. I'm not looking for the best iSCSI performance, just no more VM freezing; otherwise I'll have to rely on something else for my storage needs.

As I understand it, 16 GB RAM is the minimum for iSCSI on FreeNAS. With 26 TB worth of disks, that is fine if 1 GB/TB is taken as the target, and at 2 GB/TB it would fall just short by a tiny stretch. I have lz4 compression enabled, no dedup.
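My quick back-of-the-envelope math for those rules of thumb, in Python (the 16 GB base and the per-TB figures are just the usual forum guidelines, and whether the per-TB part stacks on top of the base is my own reading, nothing official):

```
# Rough rule-of-thumb check, nothing exact.
raw_tb = 5 * 4 + 2 * 3          # 5x 4 TB HGST + 2x 3 TB WD Red = 26 TB raw
base_gb = 16                    # often-quoted minimum for iSCSI on FreeNAS

for gb_per_tb in (1, 2):
    target_gb = base_gb + raw_tb * gb_per_tb
    print(f"{gb_per_tb} GB/TB target: ~{target_gb} GB RAM "
          f"(old box: 32 GB, this box: 64 GB)")
# -> 1 GB/TB target: ~42 GB, 2 GB/TB target: ~68 GB
```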
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
5x 4 TB HGST 7k RPM Drives - RaidZ1, iSCSI, Plex, CIFS
RaidZ1 is not really recommended on its own these days. RaidZ1 and iSCSI make for poor bedmates and even poorer performance.
1x Intel SSD DC S3710 200 GB as Log drive
Good choice for a SLOG, but I recommend you focus on getting more RAM, especially for iSCSI.

If you really want to do iSCSI, here are some recommendations:
  1. Do mirror vdevs instead; more IOPS are needed for iSCSI (rough sketch at the end of this post)
  2. Don't plan on using over 50% of the available space (performance drastically tanks beyond that)
  3. Max out the RAM, or at least get it to the 128 GB you were contemplating
  4. Make the iSCSI drives their own isolated pool/volume
    • This is my personal recommendation
    • I would not want to have those drives "sharing" time with CIFS or anything else, IMHO
  5. I don't think you need the L2ARC (cache) right now, until you get higher in total RAM
Is there a particular reason you are wanting to do iSCSI? I run it because I use the box as a DataStore for ESXi and Hyper-V VMs...
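To illustrate points 1 and 4, here is a rough sketch (in Python, driving the same zpool/zfs commands the GUI would run for you) of an isolated striped-mirror pool with a dedicated zvol for iSCSI. The pool, zvol and device names are made up; on FreeNAS you would normally do all of this through the GUI:

```
import subprocess

# Hypothetical names -- adjust to your own disks and naming scheme.
pool = "iscsi-pool"
zvol = "iscsi-pool/vm-zvol"
mirrors = [("da0", "da1"), ("da2", "da3")]   # two mirror vdevs, striped together

# Build the pool out of mirror vdevs instead of RaidZ1.
cmd = ["zpool", "create", pool]
for a, b in mirrors:
    cmd += ["mirror", a, b]
subprocess.check_call(cmd)

# Sparse zvol sized well under 50% of the pool, reserved for iSCSI only.
subprocess.check_call(["zfs", "create", "-s", "-V", "2T", zvol])
```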
 

DinoRS

Cadet
Joined
Sep 3, 2016
Messages
3
Well yes, I want to run iSCSI for exactly the same reasons. However, performance really isn't that big an issue; this is a test, and if I really put FreeNAS / ZFS into production I have another system that takes many more spindles (currently a mix of 28 2.5" HDDs and SSDs). What I'm really worried about is whether the issues that appeared on the 32 GB box would reappear if I switch to the production system (which has 128 GB RAM) and things go sideways for whatever reason. I won't be able to go past 128 GB of RAM; bigger "sticks" were just unaffordable back then ;-)

As I'm moving CIFS / Plex away already (5 TB of data just takes a moment): how much RAM should I have for iSCSI with the amount of storage listed above?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
For comparison, this is my setup:
  • 96 GB RAM (plan on getting it up to 128GB)
  • 5 x HGST SAS 4TB Mirrors (10 Disks)
  • 2 Hot Spares
  • 1 Cold Spare
  • An SSD SLOG (Intel DC S3710 200 GB)
If you are only testing, try it out with 64 GB, but make sure to use mirror vdevs. Also, I have mine set to "sync=always", which is recommended for iSCSI (see the snippet below). Then bump the RAM up to 128 GB and see how you like it.
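If you ever want to set that outside the GUI, the underlying ZFS property looks like this (the pool/zvol name here is just an example, not anything from your box):

```
import subprocess

zvol = "iscsi-pool/vm-zvol"   # hypothetical name -- use your own pool/zvol

# Force synchronous writes so the SLOG actually gets used, then verify.
subprocess.check_call(["zfs", "set", "sync=always", zvol])
print(subprocess.check_output(
    ["zfs", "get", "-H", "-o", "value", "sync", zvol]).decode().strip())
```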

Excerpt from another thread I commented on:
Honestly, my setup results in ~17.5 TB of usable space, but I plan on never going above 8 TB combined for my iSCSI volume (it has two zvols). Currently I have each zvol set to only 2.5 TB, but I can increase it to 4 TB later if needed. *** From my understanding, while one can increase a zvol, you don't want to try and decrease it...

So in conclusion: while I have 40 TB of raw space (10 x 4 TB), which yields me ~17.5 TB of usable space (mirror vdevs), I only plan to ever use 8 TB of that... *** Not counting the 2 hot spares or 1 cold spare
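The rough math behind those numbers, in Python (the gap between 20 TB and ~17.5 TB is mostly decimal TB vs. TiB plus a bit of ZFS overhead; the 50% figure is the usual guideline, not a hard limit):

```
# Rough space math for 10 x 4 TB disks laid out as 5 mirror vdevs.
disks, size_tb = 10, 4
raw_tb = disks * size_tb                  # 40 TB raw
mirrored_tb = raw_tb / 2                  # 20 TB after mirroring
usable_tib = mirrored_tb * 1e12 / 2**40   # ~18.2 TiB; ZFS overhead pulls it nearer 17.5
budget_tib = usable_tib * 0.5             # stay under ~50% occupancy for iSCSI

print(round(usable_tib, 1), round(budget_tib, 1))   # ~18.2 ~9.1
```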

There are some other posts regarding Plex taking up a lot of CPU. You should eliminate that variable from your testing as well, then add it back in later to see if it is a factor.

For some really good insight, read @jgreco's "Why iSCSI often requires more resources for the same result"

Excerpt:
What people want -> old repurposed 486 with 32MB RAM and a dozen cheap SATA disks in RAIDZ2

What people need -> E5-1637v3 with 128GB RAM and a dozen decent SATA disks, mirrored
 

DinoRS

Cadet
Joined
Sep 3, 2016
Messages
3
Is there any way to exactly calculate how much RAM FreeNAS needs, based on disks, TB, or some other metric? Or is there a way to tell from the reports when the system is under memory pressure? It's not an issue to add more spindles to my production system if performance is insufficient IO-wise, but if the general answer is "128 GB RAM will be enough for anything (without dedup)", I wonder where that comes from.

All I don't want right now is drives completely locking up until one NIC port is disconnected (reconnecting it also worked to fix things), and if it happens again I don't want to be shot down by the devs when I file a bug report...

Thank you! I've been reading a lot, and until I actually decided to give it a try and ran into issues right away, I thought this might be less ugly to maintain than SCSI-3 persistent volumes on Debian. In the worst case I'll do that (again), but I'd rather get things working at whatever performance, just with no freezing of everything! :)

Off topic, but it's about HDD performance:
I noticed, when I went from 8 to 16 disks in RAID 10 (on an enabled 2308 controller) with my 2.5" 7.2k RPM drives, that the performance drop shifted from around the 50% mark towards more like the 75% mark.
Of course this is a completely different OS, layout and everything, but I think the HDD form factor matters, as the heads in smaller drives have a shorter travel distance in total, the same reason latency is usually lower on 2.5" drives :smile:
 