Comments on: A Complete Guide to FreeNAS Hardware Design, Part III: Pools, Performance, and Cache https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/ Mon, 07 Apr 2025 15:18:56 +0000 hourly 1 https://wordpress.org/?v=6.7.2 By: YIQIAN https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5184 Mon, 07 Oct 2019 19:37:37 +0000 http://web.freenas.org/whats-new/?p=859#comment-5184 In reply to Jonathon Reinhart.

Hello, the main storage server connects to multiple JBOD disk array cabinets. Do you have to use an HBA card, or can they be connected through a 100G network card?

]]>
By: Jonathon Reinhart https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5183 Mon, 17 Dec 2018 19:51:48 +0000 http://web.freenas.org/whats-new/?p=859#comment-5183 In reply to fat7e.

You can’t span a ZFS pool across multiple systems. If you want to build a large ZFS system, add additional HBAs (host bus adapters) and SAS expanders to your system.

]]>
By: Joon Lee https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5182 Tue, 27 Nov 2018 21:07:11 +0000 http://web.freenas.org/whats-new/?p=859#comment-5182 In reply to David A Kresley.

We recommend posting this on the FreeNAS forums. Input from other community members may help you.

]]>
By: David A Kresley https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5181 Tue, 27 Nov 2018 00:12:14 +0000 http://web.freenas.org/whats-new/?p=859#comment-5181 Regarding vdevs, would it be bad practice to use different-sized mirrored vdevs?
Example: vdev1 2x4TB mirror, vdev2 2x2TB mirror, vdev3 2x1TB mirror, and vdev4 2x1TB mirror?
This is my exact configuration for home use: it was initially all 1TB disks, and as they failed I added the 4TB and 2TB drives.

]]>
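On the capacity side of mixed-size mirrors, the usable space of each mirror vdev is limited by its smallest member disk, and the pool's usable space is roughly the sum across vdevs. A minimal sketch of that arithmetic, using the commenter's hypothetical layout (sizes in TB, ignoring metadata and formatting overhead):

```python
# Rough usable-capacity estimate for a pool of mirror vdevs.
# A mirror's usable size is limited by its smallest member disk;
# the pool's usable size is approximately the sum over all vdevs.
# Overhead (metadata, slop space) is ignored in this sketch.

def mirror_capacity_tb(disks_tb):
    """Usable capacity of one mirror vdev: its smallest disk."""
    return min(disks_tb)

def pool_capacity_tb(vdevs):
    """Approximate usable capacity of a pool of mirror vdevs."""
    return sum(mirror_capacity_tb(v) for v in vdevs)

# 2x4TB, 2x2TB, 2x1TB, 2x1TB -- the layout from the comment above.
vdevs = [[4, 4], [2, 2], [1, 1], [1, 1]]
print(pool_capacity_tb(vdevs))  # 8
```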
By: Joon Lee https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5180 Fri, 27 Oct 2017 23:47:58 +0000 http://web.freenas.org/whats-new/?p=859#comment-5180 In reply to picsart app.

Thank you for your support!

]]>
By: picsart app https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5179 Wed, 12 Apr 2017 17:19:51 +0000 http://web.freenas.org/whats-new/?p=859#comment-5179 Hey,
Thanks so much for this post. I appreciate how you explained ZFS pool configuration. I’m sure that this will help a lot of people!
Best,
Dennis

]]>
By: fat7e https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5178 Tue, 07 Mar 2017 22:00:37 +0000 http://web.freenas.org/whats-new/?p=859#comment-5178 How do you use more than one motherboard with the same ZFS pool? I mean, how are they connected? Ethernet?

]]>
By: Michael Dexter https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5177 Thu, 12 Nov 2015 23:44:39 +0000 http://web.freenas.org/whats-new/?p=859#comment-5177 In reply to Mike.

With regard to failures, RAIDZ1, 2, and 3, or RAID-10 equivalents built from striped mirrors, are straightforward in how many drives can fail before the pool fails: a RAIDZ1 pool can survive one drive failing, and so on. With regard to performance, there are many older FAQ posts and other mentions of “optimal” numbers of drives for any given RAIDZ configuration; fortunately, this is largely an obsolete concern. Matt Ahrens has provided a very thoughtful article on the matter: http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/

]]>
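The failure-tolerance rule above is simple enough to state as a small lookup, one guaranteed-tolerated failure count per vdev layout. A minimal sketch (the `mirrorN` naming for N-way mirrors is an assumption for illustration, not ZFS syntax); note a pool survives only if every vdev stays within its tolerance:

```python
# Guaranteed drive-failure tolerance per ZFS vdev layout.
# RAIDZ-N tolerates N failures per vdev; an N-way mirror
# tolerates N-1. Losing any one vdev loses the whole pool.

PARITY = {"raidz1": 1, "raidz2": 2, "raidz3": 3}

def vdev_failure_tolerance(layout):
    """Guaranteed tolerated drive failures for a single vdev."""
    if layout in PARITY:
        return PARITY[layout]
    if layout.startswith("mirror"):
        # "mirror2" = 2-way mirror (hypothetical naming): N-1 can fail
        return int(layout[len("mirror"):]) - 1
    raise ValueError(f"unknown layout: {layout}")

print(vdev_failure_tolerance("raidz1"))   # 1
print(vdev_failure_tolerance("raidz3"))   # 3
print(vdev_failure_tolerance("mirror2"))  # 1
```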
By: Mike https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5176 Wed, 25 Mar 2015 15:53:12 +0000 http://web.freenas.org/whats-new/?p=859#comment-5176 What about drive count in those pools? One FAQ entry says to limit to 12 drives per pool, but I’ve seen 16 or more in use. What is the downside to using more drives (besides the increased risk of pool failure due to drive failure; it’s easier to have 3 out of 16 drives fail than 3 out of 12, I understand that part)?

]]>
By: Ghyslain Ledoux https://www.truenas.com/blog/a-complete-guide-to-freenas-hardware-design-part-iii-pools-performance-and-cache/#comment-5175 Wed, 18 Feb 2015 11:37:31 +0000 http://web.freenas.org/whats-new/?p=859#comment-5175 Great posts, Joshua.
Could you please elaborate on what you mean by “IOPS performance of a single drive” in your “ZFS pool configuration” section? I simply do not understand why ZFS would provide only the IOPS of a single disk when you use RAIDZ2 across 10 disks in your scenario. What about different disk/pool configurations or layouts?

]]>
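The question above comes down to the common ZFS rule of thumb: a RAIDZ vdev delivers roughly the random IOPS of a single member drive, because every drive in the vdev must seek for each random I/O. Pool random IOPS therefore scales with the number of vdevs, not the number of disks. A back-of-envelope sketch of that model (the 150 IOPS figure for a 7200 RPM disk is an illustrative assumption):

```python
# Rule-of-thumb model: each RAIDZ vdev contributes roughly the
# random IOPS of ONE member drive, so pool random IOPS scales
# with the vdev count, not the disk count. 150 IOPS per drive
# is an illustrative figure for a 7200 RPM HDD.

def pool_random_iops(num_vdevs, drive_iops=150):
    """Approximate random-I/O throughput for a RAIDZ pool."""
    return num_vdevs * drive_iops

# One 10-disk RAIDZ2 vdev: still ~one drive's worth of random IOPS.
print(pool_random_iops(num_vdevs=1))  # 150
# The same 10 disks as five 2-way mirrors: ~5x the random IOPS.
print(pool_random_iops(num_vdevs=5))  # 750
```

This is why the post speaks of “IOPS performance of a single drive” for a single RAIDZ2 vdev: for more random-I/O throughput, you split the same disks into more vdevs.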