jgreco
mirrored Z1
(idly wonders wtf this would be)
mirrored Z1
Sorry, striped z1. Still sorting out all the terminology.
"1TB Samsung 980 Evo Pro"
No, I mean the 980 Pro, but I imagine the principle is the same.
Do you mean the 980 EVO Plus? As soon as the SLC cache fills, drive writes will slow down dramatically as it falls back to native TLC speeds.
For video editing, I'm wondering how fast a drive you need. 4K video only uses about 25 Mb/sec on the wire, or roughly 3.2 MB/sec. Have you tried editing using that SMB share?
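To sanity-check that bitrate figure, here's a quick conversion sketch (the 25 Mb/s number is the assumed 4K stream rate from above, not a measurement):

```python
# Rough bandwidth math for a 4K video stream (assumed ~25 Mb/s, per the post above).
BITS_PER_BYTE = 8

stream_mbps = 25                          # megabits per second on the wire
stream_MBps = stream_mbps / BITS_PER_BYTE
print(f"{stream_mbps} Mb/s = {stream_MBps:.2f} MB/s")

# Even a 1 Gb/s network link (~125 MB/s theoretical) could carry many such streams.
link_MBps = 1000 / BITS_PER_BYTE
print(f"Streams per 1 Gb/s link (theoretical): {link_MBps / stream_MBps:.0f}")
```

So on paper even a single gigabit link is nowhere near the limit for one editing stream; real-world SMB overhead will eat into that, of course.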
The storage on my Windows box is mainly NVMe, with a couple of SATA SSDs. The drive I've been using to test is a 1TB Samsung 980 Evo Pro, but speeds are the same across all drives.
I don't currently have anywhere to move the data, unfortunately. Anyway, since I work with heavy video files, I need the extra storage efficiency, and I think a mirrored Z1 might sacrifice too much capacity.
If it really is just a side effect of the topology, then I'm willing to live with it, but it seems (at least anecdotally) that there's some other problem.
But with a stripe of two raidz1 vdevs, I can only lose one drive in each vdev vs. any two drives in the single vdev, wouldn't this make it less safe in terms of drive failure?
You will have the same storage efficiency with two striped 4-drive RAIDZ1 vdevs. Either way it's 8 drives: 6 data drives and 2 parity drives. I have not tried two 4-drive RAIDZ1 vdevs, only 3, but it should be faster as long as you don't run out of CPU. Your i7-4930K is a bit slower than the i7-7820X I'm using.
Also, are you using TrueNAS Core or Scale? If you are using Scale, take a look at the latest benchmarks for sequential access. The last time I looked, Core was faster.
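For what it's worth, the capacity math for the two 8-drive layouts works out the same either way; a quick sketch (the 4 TB per-drive size is just an assumed example, not from the thread):

```python
# Back-of-the-envelope capacity math for the two 8-drive layouts discussed
# (ignores ZFS metadata overhead and padding; drive_tb is an assumed size).
drive_tb = 4

# Two striped 4-drive RAIDZ1 vdevs: 1 parity drive per vdev.
raidz1x2_data = 2 * (4 - 1) * drive_tb

# One 8-drive RAIDZ2 vdev: 2 parity drives total.
raidz2_data = (8 - 2) * drive_tb

print(raidz1x2_data, raidz2_data)  # identical usable capacity: 24 and 24
```

The difference is purely in failure modes: RAIDZ2 survives any two drive failures, while the striped RAIDZ1 pair only survives two failures if they land in different vdevs.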
But with a stripe of two raidz1 vdevs, I can only lose one drive in each vdev vs. any two drives in the single vdev, wouldn't this make it less safe in terms of drive failure?
Another thought just occurred to me: since my array is made up of 4 of these old drives in addition to the 4 new IronWolfs, is it possible that they're the cause of the bottleneck? Could they have degraded over time somehow?
edit: Also, to answer your last question, I'm using TrueNAS Core.
Could it be a lack of memory? I thought 32GB would be sufficient, but I know they recommend 1 GB of RAM per TB of storage, so maybe new memory is in order.
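That rule of thumb is easy to check against the pool size (the 4 TB per-drive figure below is an assumption for illustration; plug in the actual drive size):

```python
# The common "1 GB RAM per 1 TB of storage" ZFS rule of thumb, applied to
# the 8-drive pool discussed here. The per-drive size is an assumed example.
drive_tb = 4
pool_tb = 8 * drive_tb
recommended_ram_gb = pool_tb * 1   # 1 GB of RAM per TB of raw storage

print(pool_tb, recommended_ram_gb)  # 32 TB raw -> ~32 GB RAM suggested
```

Under that assumption 32GB lands right on the guideline, though the rule is a rough sizing heuristic, not a hard requirement.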
I initially purchased a 10 port SATA card but then returned it as I didn't think I'd need it. But maybe I should give it a try.
6 Gb/s, one is 3 Gb/s. So in total there are 4x SATA2 connectors and 5x SATA3.
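A rough look at what those link speeds mean in practice (the 200 MB/s HDD figure is an assumed typical sequential rate for a NAS drive, not measured here):

```python
# SATA link budgets: SATA2 runs at 3 Gb/s and SATA3 at 6 Gb/s, both using
# 8b/10b encoding, so usable throughput is line rate * 8/10, then bits -> bytes.
def usable_MBps(line_rate_gbps):
    return line_rate_gbps * 1000 * 0.8 / 8

sata2 = usable_MBps(3)   # ~300 MB/s
sata3 = usable_MBps(6)   # ~600 MB/s
hdd_seq = 200            # assumed sequential speed of a typical NAS HDD, MB/s

print(sata2, sata3, hdd_seq < sata2)  # 300.0 600.0 True
```

In other words, a single spinning disk won't saturate even a SATA2 port, so mixing SATA2 and SATA3 connectors shouldn't matter much for HDDs; it only starts to bite with SSDs.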