atmarx · Cadet · Joined Jan 19, 2020 · Messages: 2
Hi folks --
My team has used FreeNAS a few times whenever a researcher wants us to set up some shared storage between compute nodes and doesn't want to spend... well, anything. Usually set it and forget it. So, when the 10-bay Synology box backing a small 4-node Hyper-V cluster started freezing up every week or so, I figured I'd try FreeNAS.
I took one of our old Dell PE R720 servers with two E5-2630 procs, loaded it up with 128GB of RAM and five Micron 5200 8TB SSDs. I swapped out the H710 for an H310 HBA so the drives could be addressed individually (I originally tried making single-drive RAID0 arrays with the H710, but realized there was no way to replace a failed drive without a reboot). I've got the SSDs set up as two mirrored vdevs with a hot spare.
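For reference, the pool layout is roughly the CLI equivalent of this (I built it through the GUI; the pool name and da0-da4 below are placeholders, not my actual device names):

Code:
# Two mirrored vdevs plus a hot spare ("tank" and da0-da4 are illustrative names)
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    spare da4

# Check the resulting topology
zpool status tank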
I put two Intel X550-T2 cards in there and bonded one port from each card together using LACP (so ix0+ix2=lagg0 on VLAN 40, ix1+ix3=lagg1 on VLAN 41).
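(I did the bonding through the FreeNAS GUI, but the rough FreeBSD equivalent is below; the IP addresses are placeholders and assume the VLANs are handled on the switch ports.)

Code:
# Approximate /etc/rc.conf equivalent of the two LACP bundles (addresses are examples)
ifconfig_ix0="up mtu 9000"
ifconfig_ix1="up mtu 9000"
ifconfig_ix2="up mtu 9000"
ifconfig_ix3="up mtu 9000"
cloned_interfaces="lagg0 lagg1"
ifconfig_lagg0="laggproto lacp laggport ix0 laggport ix2 10.0.40.10/24 mtu 9000"
ifconfig_lagg1="laggproto lacp laggport ix1 laggport ix3 10.0.41.10/24 mtu 9000"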
I put more of the same cards in each of the four blades, installed Intel's latest drivers, set the card profile to Storage Server (low latency, turns off VMQ), and disabled flow control on the switch (a Netgear XS728T). Jumbo frames across the board.
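(A quick way to double-check jumbo frames end to end from the FreeNAS side is a don't-fragment ping just under the 9000-byte MTU; the target address below is a placeholder.)

Code:
# 8972 = 9000 MTU - 20 bytes IP header - 8 bytes ICMP header
ping -D -s 8972 10.0.40.21

# Confirm the lagg actually picked up the MTU
ifconfig lagg0 | grep mtu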
I got everything set up, connected iSCSI using MPIO, mounted the extent, and boom - I'm in business. Then I ran some speed tests and... the reads are good (200-300 MB/s). The writes suck. Sequential was okay (~100 MB/s), but random was terrible (single digits). Copying over a large file would go fast (~100 MB/s) for the first few hundred megs and then tank, bouncing between 5 MB/s and 0.
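If it helps anyone suggest a diagnosis, these are the sorts of things I can run on the FreeNAS side while a copy is tanking, to show per-vdev and per-disk activity (pool name is a placeholder):

Code:
# Per-vdev throughput, refreshed every second
zpool iostat -v tank 1

# Per-disk latency and queue depth (watch %busy and ms/w)
gstat -p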
I know those SSDs are not speed demons, but I didn't think it would be that terrible. The VMs I'm hosting are mostly for research computation, so writes are bursty (scratch files, results, logging) -- the pool doesn't need to sustain multiple TB at a time, but more than 1GB would be nice.
So I read a bunch more and have two Intel Optane 900P 280GB PCIe cards on the way. My question is: what's the most advantageous way to deploy them to get better write speeds? Mirrored SLOG? Single SLOG and L2ARC? I have another R720 I can sacrifice and steal the RAM from to get up to 192GB.
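In case it helps frame the question, the two options as I understand them would look roughly like this (nvd0/nvd1 stand in for the Optane cards, "tank" for the pool):

Code:
# Option A: both Optanes as a mirrored SLOG
zpool add tank log mirror nvd0 nvd1

# Option B: one as SLOG, one as L2ARC
zpool add tank log nvd0
zpool add tank cache nvd1

# Sync writes only hit the SLOG when the zvol/dataset requests them
zfs get sync tank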
I have more of those 8TB drives on the way as well (once the SSD backlog clears up) -- I don't need the space, but I'm guessing adding more vdevs will also make a difference across the board.
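(My understanding is the new drives would just go in as another mirror vdev appended to the pool, something like the line below, with writes then striping across three vdevs; device names are placeholders.)

Code:
zpool add tank mirror da5 da6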
When all is said and done, not counting the reused server, I'll have put in <$10k to get a (hopefully) decently useful SAN.
If there's anything glaring I've missed, please let me know -- thanks for reading and any advice you can give.