doesnotcompute
Dabbler
Joined: Jul 28, 2014
Messages: 18
Hi,
Lurker turned poster here.
We've used OpenFiler in the past for our open-source backup targets. I am building a FreeNAS box for ZFS usage this coming weekend. We've used some commercial offerings that use ZFS under the hood for our 2nd-tier SAN/NAS, so I'm familiar with ZIL/SLOG, ARC vs. L2ARC, and vdevs vs. ZFS volumes. Our use case for this FreeNAS system at first is primarily writing to it over NFS for backing up primary systems via a backup app that can write to CIFS or NFS, but performs better over NFS.
*So disks > vdevs > ZFS volumes (pools) - got that. Then we have ZFS datasets, then shares and exports. Does anyone have a handy template/format (txt, csv, etc.) they use to document their hierarchy of components and pieces that they're willing to share?*
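For concreteness, here's the kind of plain-text outline I'm imagining - pool, dataset, and device names below are all made up, just to show the shape:

```text
Pool: tank                          (from zpool status)
  vdev-0: raidz2   da0 da1 da2 da3 da4 da5
  vdev-1: raidz2   da6 da7 da8 da9 da10 da11
  log:             nvd0             (SLOG device)

Datasets:                           (from zfs list)
  tank/backups        recordsize=128K  compression=lz4
  tank/backups/nfs    shared via NFS -> backup app export

Shares/exports:
  NFS: /mnt/tank/backups/nfs  -> backup servers
```

Something simple like this that maps disks -> vdevs -> pool -> datasets -> shares in one place is really all I'm after.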
I'd be glad to share our test system specs and performance, and how it all looks using whatever documentation method you share with me :)
PS: I've got an Intel 3700 PCIe card on order and will A/B test it against our spare Zeus disk if you're interested in seeing how they compare. I'm also going to test one of our Intel SSDs as the SLOG, so a three-way battle royale. To start it'll be 12 x 1TB drives on a flashed M105 in a PE510 chassis, then we'll add two 15-drive external SAS shelves. We're going with 4+2-drive RAIDZ2 vdevs and will just keep adding vdevs to widen the stripe. Once we figure out what works best in which role, we'll build one box as our backup target system and one as a dev/test SAN for internal projects. We've got multiple 10G blades to generate load and have seen our blades and fabric do 30k+ IOPS and 1.9 GB/s with 64K block size SQLIO testing (only 2 threads used) from inside a Windows VM (Server 2012 with SQL Server 2012), so we'll pound on these filers as best we can during testing.
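Quick sanity check on the layout math - a rough sketch that ignores ZFS metadata overhead, padding, and the usual keep-some-free-space guideline:

```python
# Usable capacity for a pool built from 4+2 (6-disk) RAIDZ2 vdevs.
# Each vdev loses 2 disks to parity, leaving 4 data disks.
DRIVE_TB = 1.0
DISKS_PER_VDEV = 6      # 4 data + 2 parity
DATA_DISKS_PER_VDEV = 4

def usable_tb(total_disks: int) -> float:
    """Raw data capacity in TB, counting only complete vdevs."""
    vdevs = total_disks // DISKS_PER_VDEV
    return vdevs * DATA_DISKS_PER_VDEV * DRIVE_TB

print(usable_tb(12))           # initial 12-drive pool: 2 vdevs -> 8.0
print(usable_tb(12 + 2 * 15))  # plus two 15-drive shelves: 7 vdevs -> 28.0
```

Nice property of the 4+2 layout for us: 12 internal bays plus two 15-drive shelves is 42 disks, which divides evenly into seven 6-disk vdevs with none left over.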
Thanks, looking forward to learning and sharing.
-Hak