Streaming_Nerd (Cadet) · Joined Aug 24, 2021 · Messages: 3
Hello All,
I have been a FreeNAS, now TrueNAS, user for years, running older hardware as it cycles down the food chain of life. This year's "Core Systems" funds went to storage, so I decided to build a larger 50-disk system of 10K SAS drives and mirror them. The goal was decent storage with strong read/write and low latency. So off to the web I went, got 34 more used 1.2TB drives, and placed them into my Dell R720xd and MD1220 setup. Tossed in 128GB of RAM and picked up two NVMe cards in their own PCIe slots, figuring I could play with SLOG or L2ARC stuff with them.
My biggest VM guest abuser of drives is my core streaming server. It sees a solid 400-450Mbps of HLS chunks written to it 24/7/365, and it streams out 500Mbps-2Gbps to end users all day long as well. With this latest build, SolarWinds shows the SAN at about 15-20ms of latency, as reported via vCenter. I did find that with HT turned off the latency was more stable no matter the load; with HT on it would see a larger spread, say 15-25ms, depending on the tasks it had to handle. So off it stays, at least for now. (CPUs are a pair of E5-2667s.)
My ESXi hosts are dual 10GbE with round robin to each SAN I have, jumbo frames set to 9000, and the round-robin IOPS limit set to 1.
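For anyone wanting to replicate the IOPS=1 setting, it is done per device on each ESXi host, roughly like this (the naa.XXXX ID below is a placeholder for the actual LUN):

```shell
# Find the device ID (naa.*) of the iSCSI LUN
esxcli storage nmp device list

# Put the device on Round Robin, then change the path-switch trigger
# from the default 1000 IOPS down to 1
esxcli storage nmp device set --device naa.XXXX --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXX --type iops --iops 1
```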
Over the years I have tried many things: RAIDZ2, one HUGE pool of mirror vdevs, NFS, iSCSI.
I am not in need of PBs of storage; it's fast, reliable storage I'm after, something that can at times handle hard hits as I migrate full VMs from one storage system to another for updates or to deal with issues.
My questions are:
1. Is there a point, say 20-30 disks, etc., where it's better to create smaller pools vs. one HUGE one?
a. Like, would two pools of 24 disks in mirrors be better than one pool of 50?
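As I understand it, ZFS stripes across every vdev in a pool, so one pool of 25 mirror pairs should aggregate the IOPS of all the pairs, while splitting into two pools mainly splits the free space and flexibility. A sketch of the one-big-pool layout (pool and device names are placeholders):

```shell
# One pool striped across two-way mirror vdevs; ZFS spreads writes
# over every mirror pair, so random IOPS scale with the vdev count
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5
# ...and so on, one "mirror daX daY" clause per remaining pair
```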
2. My ARC sees 90-100% hits, with misses showing up every so often. Would taking 128GB to 256GB be a good investment?
a. I have another 64GB on my desk; the thought was to toss it in and see how it responds. (Today's test, possibly.)
b. I read the rule of thumb is about 1GB of RAM per TB of storage. (I have 54.5TB of raw storage, so at 1GB per TB that is about 55GB; there is currently 111.5GB in the ZFS cache.)
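Before adding the extra 64GB, the ARC numbers are easy to check from a shell on TrueNAS (both tools ship with it):

```shell
# One-shot summary of ARC size, target, and hit/miss ratios
arc_summary

# Live view, one line per second: ARC size, hit%, miss counts, etc.
arcstat 1
```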
3. iSCSI vs. NFS for ESXi hosts?
a. Anyone using one over the other outside of a home setup, say small biz/enterprise? (I have not played with NFS for years; it's been iSCSI all the way.)
4. Caching disks? SLOG/L2ARC, etc.
a. I know iSCSI doesn't use a SLOG by default, but you can force sync writes inside TrueNAS at the pool level. I didn't find it to help; it actually hurt a bit. My latency came up 10-15ms at the VM level as reported by SolarWinds. But it was a fun test. Next is to try an L2ARC and see what happens with it.
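For reference, forcing sync writes on the zvol and attaching the NVMe devices looks roughly like this (pool, zvol, and device names are placeholders):

```shell
# Force every write on the iSCSI-backing zvol to be synchronous, so the
# SLOG is actually in the write path
zfs set sync=always tank/vmware-zvol

# Attach one NVMe card as SLOG and the other as L2ARC
zpool add tank log nvd0
zpool add tank cache nvd1

# Roll back to the default async behavior if latency gets worse
zfs set sync=standard tank/vmware-zvol
```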
5. Monitoring! Who is using what to monitor latency / VM disks, etc.?
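On the ZFS side itself, zpool iostat can break latency out per vdev (the -l and -w flags need a reasonably recent OpenZFS):

```shell
# Per-vdev throughput plus average total/disk/queue wait times, every 5s
zpool iostat -v -l 5

# Full request-latency histograms for the pool
zpool iostat -w
```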
I am still learning as I go. Just looking to see if I am in the ballpark of what this thing can do, or if there is still room to get a bit more out of it.
Thanks,
Mike