nemesis1782 (Contributor) - Joined Mar 2, 2021 - Messages: 105
Hi All,
I would like to make the switch to ZFS and TrueNAS. I'm in the process of gaining knowledge, acquiring hardware, and asking questions. (Thread about that: https://www.truenas.com/community/t...-first-treunas-setup.91526/page-2#post-634858)
Now, from what I've seen, there are many conflicting stories, especially on performance, and benchmarking is a bit of a mixed bag because of the way ZFS works.
What I'm thinking of is writing a proper benchmarking tool for TrueNAS / FreeBSD to benchmark a ZFS pool. First off, because of the way ZFS works, a benchmark would be time consuming and would put your hardware through its paces for a long time! So I'm looking to build a "cheap" array to test with, since a 6x1TB disk vdev will probably scale similarly in performance and behavior to a 6x8TB enterprise disk vdev.
How I will be testing (this approach will probably not work when using dedup and compression):
Note: wherever I say sample, I mean sample and log.
- Have a second disk array of fast read storage with static files of differing sizes
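The sample-and-log part could be sketched roughly like this. This is a hedged Python sketch, not a finished design: the class and field names are my own inventions, and in the real tool each row would also carry the relevant zPool and vDEV statistics (e.g. parsed from `zpool iostat`).

```python
import csv
import time

class ThroughputSampler:
    """Logs throughput (MB/s) to a CSV file once per sampling interval."""

    def __init__(self, log_path, interval=10.0):
        self.interval = interval
        self._file = open(log_path, "w", newline="")
        self._writer = csv.writer(self._file)
        self._writer.writerow(["elapsed_s", "mb_per_s"])
        self._start = self._last = time.monotonic()
        self._bytes = 0

    def add_bytes(self, n):
        """Call after each write; emits a sample when the interval has passed."""
        self._bytes += n
        now = time.monotonic()
        elapsed = now - self._last
        if elapsed >= self.interval:
            mbps = self._bytes / max(elapsed, 1e-9) / 1e6
            self._writer.writerow([round(now - self._start, 1), round(mbps, 2)])
            self._bytes = 0
            self._last = now

    def close(self):
        self._file.close()
```

The write loop of each test would call `add_bytes()` after every chunk; with `interval=10` that gives the "every 10 seconds or so" samples described below.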
Archiving performance:
- Test A1: Archiving large chunks of data, single source
--> Start with a clean zPool containing 1 vDEV
--> Start writing a single stream of large chunks of data until the pool is full and sample the throughput (every 10 seconds or so); include important zPool and vDEV statistics with each sample (I still need to work out exactly what to log)
--> Test read at certain intervals
- Test A2: Archiving small chunks of data, single source
--> Start with a clean zPool containing 1 vDEV
--> Start writing a single stream of small chunks of data until the pool is full and sample the throughput (every 10 seconds or so); include important zPool and vDEV statistics with each sample (I still need to work out exactly what to log)
--> Test read at certain intervals
- Test A3: Archiving large chunks of data, multi source
--> Start with a clean zPool containing 1 vDEV
--> Start writing multiple (10x) streams of large chunks of data simultaneously until the pool is full and sample the throughput, total and per stream (every 10 seconds or so); include important zPool and vDEV statistics with each sample (I still need to work out exactly what to log)
--> Test read at certain intervals
- Test A4: Archiving small chunks of data, multi source
--> Start with a clean zPool containing 1 vDEV
--> Start writing multiple (10x) streams of small chunks of data until the pool is full and sample the throughput (every 10 seconds or so); include important zPool and vDEV statistics with each sample (I still need to work out exactly what to log)
--> Test read at certain intervals
- Test A5: Archiving varying chunks of data, multi source
--> Start with a clean zPool containing 1 vDEV
--> Start writing multiple (10x) streams of chunks of varying sizes until the pool is full and sample the throughput (every 10 seconds or so); include important zPool and vDEV statistics with each sample (I still need to work out exactly what to log)
--> Test read at certain intervals
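The multi-source tests (A3 to A5) could be sketched like this. This is a hedged sketch under my own assumptions: fixed chunk counts stand in for "until full" (the real tool would keep writing until it hits ENOSPC), the target directory stands in for a dataset on the pool, and all function and parameter names are illustrative.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def run_streams(target_dir, n_streams=10, chunk_size=1 << 20, chunks_per_stream=8):
    """Write n_streams files concurrently; return per-stream bytes, total, elapsed."""

    def writer(stream_id):
        path = os.path.join(target_dir, f"stream_{stream_id}.bin")
        chunk = os.urandom(chunk_size)  # incompressible, so compression can't skew results
        written = 0
        with open(path, "wb") as f:
            for _ in range(chunks_per_stream):
                f.write(chunk)
                written += chunk_size
            f.flush()
            os.fsync(f.fileno())  # make sure the data actually reaches the vdevs
        return stream_id, written

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=n_streams) as pool:
        per_stream = dict(pool.map(writer, range(n_streams)))
    elapsed = time.monotonic() - start
    return per_stream, sum(per_stream.values()), elapsed
```

Using `os.urandom` for the chunks matters here, since compressible or dedup-friendly data would inflate the apparent throughput; per-stream sampling during the run (rather than only totals at the end) would slot in around the inner write loop.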
Ever-changing data disk, for instance a download destination:
I will add this later, since I really need to get to work now.
Block devices:
I will add this later, since I really need to get to work now.
I am an experienced software engineer with C-oriented languages, mainly C++ (Linux) and C# (Windows and Linux). However, I do not yet have any experience developing for FreeBSD or TrueNAS.
Any input would be greatly appreciated!