
LSI 9211-8i + 12TB SAS 512e drives

kspare

Senior Member
Joined
Feb 19, 2015
Messages
440
Speaking from experience, if you are going to run VMs from this storage, do NOT use SATA drives. The SATA bus itself is half duplex, which will cause problems with performance. Also, make sure you have a solid SLOG device such as a high-IOPS SSD or NVDIMM. You can disable sync, but then you run the risk of file system corruption on your VMs should you lose power suddenly.
Runs just fine with SATA drives if you build out the SLOG and L2ARC, and even better now with metadata vdevs.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,541
Speaking from experience, if you are going to run VMs from this storage, do NOT use SATA drives. The SATA bus itself is half duplex, which will cause problems with performance.
Speaking from lots of experience, VMs run just fine from SATA drives, and SATA has basically no impact. Modern hard drives are incapable of saturating 6Gbps SATA (or even 3Gbps SATA), so "half duplex" is irrelevant.
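A quick back-of-envelope check of that claim. The ~250 MB/s sustained-transfer figure for a current 7200 RPM drive is an illustrative assumption (outer tracks; real numbers vary by model), not something stated in the thread:

```python
# Can a modern HDD saturate a SATA link? (assumed figure below)
hdd_mb_s = 250  # assumed sustained transfer rate for a 7200 RPM drive

# SATA uses 8b/10b encoding, so 6 Gbps of line rate carries
# roughly 600 MB/s of payload (3 Gbps -> ~300 MB/s).
sata3_mb_s = 6_000_000_000 // 10 // 1_000_000   # 600
sata2_mb_s = 3_000_000_000 // 10 // 1_000_000   # 300

print(hdd_mb_s < sata2_mb_s)  # True: even 3Gbps SATA has headroom
print(hdd_mb_s < sata3_mb_s)  # True
```

Even halving the link speed leaves the drive itself as the bottleneck, which is the point being made.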

The problem with HDDs is simply that you don't get a lot of IOPS out of them, and you need to run mirrors, which most first-timers seem to miss, so they end up with some RAIDZ2 zombie array with incredibly poor performance. Even when running mirrors, a mature pool will not have huge IOPS capacity, so you will get nowhere near peak sequential HDD speeds for any sustained period of time.
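To make the mirrors-vs-RAIDZ point concrete, here is a rough sketch. It assumes ~100 random IOPS per 7200 RPM drive and the usual rule of thumb that each vdev delivers roughly one drive's worth of write IOPS; both numbers are illustrative approximations, not measurements:

```python
HDD_IOPS = 100   # assumed: typical random IOPS for a 7200 RPM drive
DRIVES = 12

# Pool of 6 two-way mirror vdevs: writes scale with vdev count,
# and reads can be served from either side of each mirror.
mirror_vdevs = DRIVES // 2
mirror_write_iops = mirror_vdevs * HDD_IOPS   # ~600
mirror_read_iops = DRIVES * HDD_IOPS          # ~1200

# Single 12-wide RAIDZ2 vdev: roughly one drive's worth of IOPS,
# because every drive participates in every stripe.
raidz2_write_iops = 1 * HDD_IOPS              # ~100

print(mirror_write_iops // raidz2_write_iops)  # mirrors: ~6x the write IOPS
```

The exact numbers don't matter; the scaling does. IOPS grow with vdev count, and a single wide RAIDZ2 vdev is one vdev no matter how many drives it contains.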
 

kspare

Senior Member
Joined
Feb 19, 2015
Messages
440
Speaking from lots of experience, VMs run just fine from SATA drives, and SATA has basically no impact. Modern hard drives are incapable of saturating 6Gbps SATA (or even 3Gbps SATA), so "half duplex" is irrelevant.

The problem with HDDs is simply that you don't get a lot of IOPS out of them, and you need to run mirrors, which most first-timers seem to miss, so they end up with some RAIDZ2 zombie array with incredibly poor performance. Even when running mirrors, a mature pool will not have huge IOPS capacity, so you will get nowhere near peak sequential HDD speeds for any sustained period of time.
Exactly. We maintain N+1 for our storage servers and actually migrate all the VMs from one server to another monthly to manage fragmentation.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,541
Exactly. We maintain N+1 for our storage servers and actually migrate all the VMs from one server to another monthly to manage fragmentation.
Classic ZFS "fix". :smile:
 

ChrisRJ

Senior Member
Joined
Oct 23, 2020
Messages
307
Just out of curiosity: what is the data volume that gets "fixed" that way? I guess there will be many people out there crying out loud if N+1 means an additional $100k or so. But relative to other prices (esp. some software), that is not really so much money for a larger enterprise and its overall IT budget.
 

kspare

Senior Member
Joined
Feb 19, 2015
Messages
440
Just out of curiosity: what is the data volume that gets "fixed" that way? I guess there will be many people out there crying out loud if N+1 means an additional $100k or so. But relative to other prices (esp. some software), that is not really so much money for a larger enterprise and its overall IT budget.
The only way to defrag a ZFS volume is to move the data off and then back on again. So with the extra server, we migrate from the current store to the new store... and just like that, defragged. Now the old, fragmented storage server is the spare, and so on.
 

ChrisRJ

Senior Member
Joined
Oct 23, 2020
Messages
307
Thanks, the approach was clear. I was interested in the number of GB or TB in question.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,541
Just out of curiosity: what is the data volume that gets "fixed" that way? I guess there will be many people out there crying out loud if N+1 means an additional $100k or so. But relative to other prices (esp. some software), that is not really so much money for a larger enterprise and its overall IT budget.
Best practice for ZFS is to maintain occupancy rates somewhere between 10% and 50% for VM block storage, depending on the amount of churn.

With used X9 2U 12 bay servers going for $500, 512GB DDR3 for $800, 1TB NVMe for $150/ea, and 14TB drives shucked for $200/ea, you can assemble a basic system (2 NVMe L2ARC + 12 14TB disks) for about $5K if you shop frugally. If your storage policy requires redundancy to be maintained in the face of a disk failure, this means three-way mirrors, so four 14TB vdevs are possible, 56TB in the pool, of which 5-25TB-"ish" are usable for block storage. SLOG device and network card not included.
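Checking that arithmetic, using the prices and the 10%-50% occupancy rule from the post (the ~$5K figure presumably leaves slack for shipping, cables, and spares, since the listed parts alone come to $4,000):

```python
# Parts list from the post: used X9 2U chassis, 512GB DDR3,
# 2x 1TB NVMe for L2ARC, 12x shucked 14TB drives.
total_cost = 500 + 800 + 2 * 150 + 12 * 200   # $4,000 before SLOG/NIC

# 12 drives in three-way mirrors -> 4 vdevs of 14TB each.
pool_tb = (12 // 3) * 14                      # 56 TB in the pool

# 10%-50% occupancy for VM block storage:
usable_low = 0.10 * pool_tb                   # 5.6 TB
usable_high = 0.50 * pool_tb                  # 28 TB

print(total_cost, pool_tb, usable_low, usable_high)
```

That 5.6-28 TB range lines up with the 5-25TB-"ish" figure above, allowing for the usual margin kept free on a busy pool.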

You can, of course, buy new gear at significantly higher prices and get a little bit more performance out of it, but even then, the prices are very reasonable compared to your typical EqualLogic or NetApp.
 