BUILD 6 drives with ECC on a spousal budget

Status
Not open for further replies.

malcolmputer

Explorer
Joined
Oct 28, 2013
Messages
55
Alright, let me start this off with a huge "Your Mileage WILL Vary". I watched for stuff on sale, did a lot of research, and bought parts as I could sneak money away from the wife.

You will most likely be unable to duplicate the case purchase, but suitable cases can be found for similar prices (you will want to source your own decent PSU though).

Case: SUPERMICRO CSE-731i-300B ($23 (microcenter clearance bin))
PSU: PWS-303-PQ (came in the case above (reason I bought the case))
HDD: 10 x 320GB WD Blue WD3200AAJS ($18 each shipped, best offer on fleabay)
RAM: 2 x Crucial CT102472BD1339 8GB DDR3 SDRAM 1333MHz ( $70 each, fleabay; $109 each microcenter)
MB: ASUS M5A78L-M LX PLUS ($40, ebay; $55 newegg)
CPU: AMD Athlon II X2 240 ($30 ebay)
Misc: Random SATA cables (drawer), 2 Molex to SATA adapters (drawer), network cable (drawer)
USB: 4 x 4GB microcenter USB Drives ($5 each)
Drive Cage: Comes with fan ($23)

I paid a total of $456, including drives (6 in use, 4 cold spares) and the boot device (1 in use, 3 imaged from the final config).

The ECC WORKS :) I am so happy about this. You simply (from the BIOS) change the ECC mode to Enabled and set your RAM scrub rate.
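
If you want to double-check from the shell that ECC is actually being reported, something like the following works (a minimal sketch; it assumes dmidecode is available on your FreeNAS install, and it only shows what the BIOS advertises):

Code:
# should report something like "Error Correction Type: Multi-bit ECC"
dmidecode -t memory | grep -i "error correction"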

The network WORKS. It is a Realtek NIC, and it doesn't set any speed records (about 80MB/s), but it works, and I haven't had any problems in about two weeks of uptime. FreeNAS 9.1-RELEASE, FYI.

The onboard SATA has a bug. You have to set the first four drives to AHCI and the other two to IDE (or all to IDE, which is not recommended). I honestly didn't expect this, and I am on the latest motherboard firmware revision, so there isn't much to do about it. With all ports set to IDE, scrub speeds were awful (~30MB/s). With four set to AHCI and two to IDE, I get around 250MB/s on a scrub and 190MB/s on a 10GB dd (from /dev/zero to /mnt/pool/zero.file).
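
For reference, the scrub numbers above are just the rate ZFS reports while a scrub runs; roughly this (pool name is a placeholder):

Code:
zpool scrub POOL_NAME_GOES_HERE
zpool status POOL_NAME_GOES_HERE   # the "scan:" line shows the current scrub rate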

The drives stay pretty cool. One hour into a zpool scrub, the temps were 35, 35, 35, 37, 31, and 32°C. The 37°C reading was one of the middle drives in the group of four (the upper-middle one, I think). The ambient room temperature varies between 70°F and 80°F; unfortunately I don't know what it was during the above measurements. The case is sitting on a coffee table, about two feet off the ground.
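
If anyone wants to pull drive temps the same way, this is roughly what I mean (a sketch; the ada0..ada5 device names are assumptions and will be whatever your drives enumerate as):

Code:
# prints the raw value of SMART attribute 194 (Temperature_Celsius) for each drive
for d in ada0 ada1 ada2 ada3 ada4 ada5; do
  printf "%s: " $d
  smartctl -A /dev/$d | awk '/Temperature_Celsius/ {print $10}'
done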

If you wanted to duplicate this, my recommendations would be to drop from 16GB of RAM to 8GB if you are using small drives like mine (saves ~$70), and maybe not buy as many spares.

Feel free to ask any questions. Criticism will be tolerated. Suggestions welcome.
 

Johhhn

Explorer
Joined
Oct 29, 2013
Messages
79
Great build! I like the way you saved money! That's $278 without drives and with 8GB of RAM.

I just ordered the same motherboard! ha! Should be in tomorrow!

Is there anything to be concerned about with the AHCI and IDE setup that you mentioned?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you want NCQ/TCQ, you must have AHCI enabled. TRIM also requires AHCI, if I remember correctly, but that's not really applicable to servers.
 

malcolmputer

Explorer
Joined
Oct 28, 2013
Messages
55
Yep, you lose NCQ/TCQ. It is a real shame, and if you had the money an M1015 would be a good investment, but if you had the money, you wouldn't be building this thing anyway.

In practice it makes me sad, but the benchmarks are "ok", and since it has a Realtek NIC, you wouldn't really see the difference anyway.

If you wanted to correct this build, you could add a gigabit PCIe Intel NIC and an IBM M1015, but that brings you closer in price to the Supermicro board that everyone else loves. (FYI, the board does have 2 PCIe slots, so you could upgrade it if you don't like the performance.)

Also, make sure you can afford the extra ~$30 if the onboard network card does fail. While the Realtek does work, who knows for how long and how well. My testing has been two weeks and ~1TB of total transfer, which is hardly enough to call it "FreeNAS certified" or anything.
 

malcolmputer

Explorer
Joined
Oct 28, 2013
Messages
55
Also, FYI, I am transcoding well to a Roku running Plex at 720p. No buffering or other problems with the CPU.

This is the main reason I don't care about NIC performance. I just use it for long-term picture storage and as the storage for 3 Rokus.

Edit: I just realized this, but cyberjock commented on my thread and didn't complain about cheap parts or impending doom. Holy cow. :) That is as good as a recommendation to me.
 

Johhhn

Explorer
Joined
Oct 29, 2013
Messages
79
Also, FYI, I am transcoding well to a Roku running Plex at 720p. No buffering or other problems with the CPU.

This is the main reason I don't care about NIC performance. I just use it for long-term picture storage and as the storage for 3 Rokus.

Edit: I just realized this, but cyberjock commented on my thread and didn't complain about cheap parts or impending doom. Holy cow. :) That is as good as a recommendation to me.


that's because you're using ECC ;)

What's the exact command you ran to bench it? I'll try the same on my before and after setups.
 

malcolmputer

Explorer
Joined
Oct 28, 2013
Messages
55
The command I ran is:

Code:
 dd if=/dev/zero of=/mnt/POOL_NAME_GOES_HERE/zero.file bs=1M count=20000


With the output being:

Code:
20000+0 records in
20000+0 records out
20971520000 bytes transferred in 103.507188 secs (202609311 bytes/sec)


Which translates to:
202.6 MB/s

I ran the command as root. Be very careful with dd (nicknamed "data destroyer" for good reason). Also be sure to rm the file when you are done (so you don't lose 20GB of space in your array). If you want to try different sizes, change the count of 20000 (MB) to 1000 or whatever.
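
A minimal cleanup sketch (pool name is a placeholder, same as above):

Code:
# remove the test file and confirm the space is reclaimed
rm /mnt/POOL_NAME_GOES_HERE/zero.file
zfs list POOL_NAME_GOES_HERE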
 

Johhhn

Explorer
Joined
Oct 29, 2013
Messages
79
How reliable is that command, though? Obviously it showed a problem with the IDE setting, but I know that when I ran other benchmarks that exceeded the cache, my numbers were much lower. Can't remember exactly, but somewhere in the 200+ range, I think.

I did this back in October and got a big arse number, but it's not very relevant since it was all cache.

621MB/s - LOL
Code:
dd if=/dev/zero of=/mnt/BigVol/test/test bs=4M count=10000
10000+0 records in
10000+0 records out
41943040000 bytes transferred in 65.369022 secs (641634809 bytes/sec)
 
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
Doing a dd from /dev/zero is, most of the time, going to give completely irrelevant benchmark numbers, especially if you have compression enabled, as it will not actually write the number of bytes you think it will...

For example, with that command I get: 20971520000 bytes transferred in 8.623793 secs (2431820858 bytes/sec). That's 2.4GB/s with RAIDZ2, where a disk can do at most 150MB/s...
 

malcolmputer

Explorer
Joined
Oct 28, 2013
Messages
55
Johhhn: True. I chose 20GB because my cache is around 11GB, and I don't ever expect to write much more than 20GB in one large swath.

jyavenard: True, but I have compression disabled. Ideally you would use /dev/urandom if you had it enabled, but that will be somewhat skewed by your CPU's software RNG.

Also, just in case people wanted to see the /dev/urandom writes:

Code:
dd if=/dev/urandom of=/mnt/POOL_NAME_GOES_HERE/urandom.file bs=1M count=100000


Yields:

Code:
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 1460.280186 secs (71806494 bytes/sec)


This should be the absolute worst-case scenario because it puts a very high load on the CPU while writing to ZFS. The data is theoretically incompressible (since it is "random"), so it shouldn't matter whether you have dedup or compression turned on. In fact, it will perform worse with compression enabled because compression is a very CPU-intensive task. Also, keep in mind the Athlon I chose is pretty underpowered for software random number generation.

Also, notice that the dd size was 100GB, so my ARC of 11GB (of RAM) shouldn't be able to contribute too much to the overall speed.

All in all, I am OK with the worst-case scenario of 71.8 MB/s, since the Realtek NIC limits me to around 90 MB/s anyway. Does anyone else care to try this out on their box to see what speeds they get?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Doing dd tests with /dev/zero has been used to determine maximum pool speeds since the 8.0-beta days. Not sure why jyavenard says it's not valid. It's a very good estimate of how fast your pool can read and/or write data on a strictly throughput basis. It's useful, but not all-encompassing, as you are usually going to be limited by your NIC speed. Generally, if your dd results are twice your NIC throughput, you won't usually be bottlenecked by your pool. Yes, compression does invalidate the test. But it becomes blatantly obvious from the dd results that something is horribly wrong: we've seen people with 2-disk pools get 4GB/sec, and clearly they aren't actually getting those speeds. /dev/random and /dev/urandom are very poor choices for dd testing, as they can't generate data fast enough to hit the pool's limit.

If you read around, there are tons of threads that use dd. Typically 1M blocks are the most commonly recommended, and you should choose a size that is sufficiently large to flush the cache. I always use at least 3x the system RAM.
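
In shell terms, that rule of thumb looks something like this (a sketch; the pool name is a placeholder and a 1M block size is assumed):

Code:
# size the test file at roughly 3x installed RAM so the ARC can't carry the result
RAM=$(sysctl -n hw.physmem)
dd if=/dev/zero of=/mnt/POOL_NAME_GOES_HERE/zero.file bs=1M count=$(( RAM / 1048576 * 3 ))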

dd tests are good for helping to diagnose possible disk issues, possible disk subsystem bottlenecks, possible "slow sharing" complaints (because the pool is the limiting factor in the system), and quite a few other things.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
Doing dd tests with /dev/zero has been used to determine maximum pool speeds since the 8.0-beta days. Not sure why jyavenard says it's not valid. It's a very good estimate of how fast your pool can read and/or write data on a strictly throughput basis.


The fact that it's been done for years doesn't mean it's any good.

If it were that good a test, you wouldn't get results like this:
http://forums.freenas.org/threads/6-drives-with-ecc-on-a-spousal-budget.16368/#post-83917

For reference, I get (with compression explicitly off):

83886080000 bytes transferred in 137.009751 secs (612263575 bytes/sec)

which doesn't make much sense when using Red drives (150MB/s max continuous write, per the spec).


If you read around, there are tons of threads that use dd. Typically 1M blocks are the most commonly recommended, and you should choose a size that is sufficiently large to flush the cache. I always use at least 3x the system RAM.

Of course there are... but using urandom gives a far more meaningful value; the problem is that you also need a fast CPU to get a relevant number, so that the throughput of the random generator doesn't get in the way.
You certainly don't want to use /dev/random: it's a blocking device and relies on system activity to gather entropy. /dev/urandom doesn't.

If you could measure dd speed only after it has filled the cache, that would be more useful; the problem is that dd doesn't provide that data.

And I can find you tons of threads in various forums explaining exactly the same thing: writing a series of zeros doesn't usually provide meaningful results when it comes to benchmarking.


dd tests are good for helping to diagnose possible disk issues, possible disk subsystem bottlenecks, possible "slow sharing" complaints (because the pool is the limiting factor in the system), and quite a few other things.
That's indeed very true: for diagnostic purposes it can be very useful. If you get low speeds with dd, it certainly points to a problem somewhere. I just question its value for benchmarking.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The fact that it's been done for years doesn't mean it's any good.

If it were that good a test, you wouldn't get results like this:

http://forums.freenas.org/threads/6-drives-with-ecc-on-a-spousal-budget.16368/#post-83917

Did you even read my whole post before you started talking trash? Did you see where I said:

Yes, compression does invalidate the test. But it becomes blatantly obvious from the dd results that something is horribly wrong: we've seen people with 2-disk pools get 4GB/sec, and clearly they aren't actually getting those speeds. /dev/random and /dev/urandom are very poor choices for dd testing, as they can't generate data fast enough to hit the pool's limit.

So, just like all benchmarking tools, you have to know when to use it and when not to. I know, you're still learning from your other benchmark thread, where I told you that your tests were basically useless? Stop talking if you aren't knowledgeable about whatever you're about to post.

Additionally, you can sometimes use the information that appears bogus to figure out if something is wrong. (Read the end of this post where I ask the OP a question...)

Of course there are... but using urandom gives a far more meaningful value; the problem is that you also need a fast CPU to get a relevant number, so that the throughput of the random generator doesn't get in the way.

You certainly don't want to use /dev/random: it's a blocking device and relies on system activity to gather entropy. /dev/urandom doesn't.

If you could measure dd speed only after it has filled the cache, that would be more useful; the problem is that dd doesn't provide that data.

And I can find you tons of threads in various forums explaining exactly the same thing: writing a series of zeros doesn't usually provide meaningful results when it comes to benchmarking.

That's indeed very true: for diagnostic purposes it can be very useful. If you get low speeds with dd, it certainly points to a problem somewhere. I just question its value for benchmarking.

Ok.. so I'll take you at your word. If I get low dd speeds then it points to a problem. We'll ignore the benchmarking value for a minute. Here's my output:

[root@freenas] ~# dd if=/dev/random of=/mnt/tank/testfile bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 133.241884 secs (78697176 bytes/sec)

Oh no! I can't even saturate a 1Gb LAN link. That really sucks! (Hint: I can saturate dual Gb LAN simultaneously and still run a scrub at the same time!)

So which of these tests is better, Mr. Hotshot? Here's my server output; I just ran these for you:

[root@freenas] ~# dd if=/dev/zero of=/mnt/tank/testfile bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 19.360341 secs (541610293 bytes/sec)
[root@freenas] ~# dd if=/dev/random of=/mnt/tank/testfile bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 133.241884 secs (78697176 bytes/sec)

So which is better? Hint: I really do get over 500MB/sec when reading or writing to my pool!

Now check this out:

[root@freenas] ~# dd if=/dev/random of=/dev/null bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 119.436844 secs (87793344 bytes/sec)
[root@freenas] ~# dd if=/dev/urandom of=/dev/null bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 118.179076 secs (88727720 bytes/sec)

So if I take you at your word that the benchmark value is nil but dd tests help for diagnostic purposes, then clearly something must be wrong with my server. But... nothing is actually wrong. What is wrong is that I took your advice and used /dev/random, which is a big no-no.

dd tests aren't meant to be the end-all of benchmarks. They are very, very useful if you are looking at what kind of throughput you can get from your pool right now. They are useless for I/O testing (that's where iozone is more useful, and again, only when properly used, which I might add you didn't do in your benchmark thread). dd has been used since day zero in FreeNAS because it does work and is recommended. You look like a fool when you try to argue with every person before you (and the manual, and the FAQ) to say you're right and everyone else is wrong.

Check out the FAQ; it actually tells you to run a dd test to determine if your disks are too slow. It even tells you to use /dev/zero. Oh, the horror! So there is an exception for dd if you use /dev/zero along with compression or dedup (again, mentioned this before). That doesn't mean you should use /dev/urandom. In fact, if you read the documentation on urandom, it tells you that it should never be used for a dd test, because it will be a serious bottleneck: it's single-threaded and was NEVER designed as a device to throw out GBs of numbers where performance is a consideration. My E3-1230v2 doesn't even hit 90MB/sec if I do dd if=/dev/random of=/dev/null bs=1m count=1000.

There's a document somewhere that says /dev/random and /dev/urandom are limited by the clock cycles of your CPU, and that a 20GHz+ CPU would be needed for them to be useful for most benchmarking purposes. Of course, the document also states that by the time such CPUs are available, the likelihood that random and urandom will be fast enough for benchmark comparisons with the hardware of that time period is almost zero. For that reason, /dev/zero or another source of input data is recommended for throughput testing.
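
Since I brought up iozone, here's a minimal sketch of the kind of run I mean (the file path and size are placeholders; pick a size larger than your RAM):

Code:
# sequential write/rewrite (-i 0) and read/reread (-i 1), 128k records, 48GB test file
iozone -i 0 -i 1 -r 128k -s 48g -f /mnt/tank/iozone.tmp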

If you go reading about disk throughput, the thorough guides tell you not to use /dev/random unless you can't use /dev/zero because of compression or some other feature that makes large runs of zeros problematic. Those guides also tell you to bench test /dev/random and /dev/urandom first to make sure you get sufficient throughput, as their slow output will be a limitation in most situations because of CPU frequency and because both devices are single-threaded.

As you can see above, my throughput from /dev/random and /dev/urandom to /dev/null is already so slow (about 88MB/sec) that a single WD Red 3TB disk would appear slower than it actually is. Now imagine what that other guy in the thread you linked would have gotten if he had used /dev/random on his MUCH slower Athlon X2 240. Are those really the kind of benchmarks you think are useful?

@malcolmputer: Can you post the output of zpool status and zfs get all | grep compress? With dd numbers that high (600MB/sec), you either have compression enabled or you are using striped vdevs. I think it's more likely you have striped vdevs, since most people get 2GB/sec+ when compression is enabled with /dev/zero tests (albeit their systems are usually a little more powerful). I'd expect a 10-disk stripe could reasonably give 620MB/sec with a dd test, perhaps even more.

@jyavenard- See what I did there?

I have learned one thing from reading your posts, though: you really aren't that knowledgeable. I can tell that if you stick around here you're going to be very unhappy, because every time you post you seem to get more stuff wrong than right. I explained a bunch of crap in your trashy benchmark thread just the other day, and I'd have thought you might go do more reading before you started talking about more stuff you'd get wrong.

And just for the record: not only did I use a dd test for benchmarking purposes, but I even used it for potential diagnostic purposes at the same time! Who'd have thunk it?!
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
So, just like all benchmarking tools, you have to know when to use it and when not to. I know, you're still learning from your other benchmark thread, where I told you that your tests were basically useless? Stop talking if you aren't knowledgeable about whatever you're about to post.

If only you could stay cool-headed when someone disagrees with you... it's not the end of the world, you know.

I find it rather humorous that you could post something like this about the validity of the benchmarks I've used, yet believe copying a series of zeros to be more appropriate. You can't have it both ways.


All I said is that copying from /dev/zero wasn't giving you a valid benchmark. If you want to get all revved up about it, that's up to you.


It would, however, if you could measure its speed only once the cache has been completely filled, which is something that was alluded to in the ZFS benchmarking thread:
http://forums.freenas.org/threads/benchmarking-zfs.7928/

Even if you use 3 times the size of your cache, that's still 33% of your test result that is made up of rubbish...
Make it 30 times, then we'll talk.

So let's agree to disagree.


Want to measure raw disk performance? Then do what jpaetzel

Ok.. so I'll take you at your word. If I get low dd speeds then it points to a problem. We'll ignore the benchmarking value for a minute. Here's my output:

[root@freenas] ~# dd if=/dev/random of=/mnt/tank/testfile bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 133.241884 secs (78697176 bytes/sec)

Oh no! I can't even saturate a 1Gb LAN link. That really sucks! (Hint: I can saturate dual Gb LAN simultaneously and still run a scrub at the same time!)

Which part of

You certainly don't want to use /dev/random: it's a blocking device and relies on system activity to gather entropy.

did you miss?

You've been patronising people here since you joined; I'm fine with that... Verbosity doesn't equal quality, unfortunately.
 

Johhhn

Explorer
Joined
Oct 29, 2013
Messages
79
Can't we all just get along? So much hostility, cyber!

I'm surprised no one has mentioned iometer. You can specify parameters so the cache is not a factor. It's what I normally use.


Sent from my iPhone using Tapatalk
 

malcolmputer

Explorer
Joined
Oct 28, 2013
Messages
55
@cyberjock: I just looked it up. raidz2-0 with 6 disks, compression disabled. That ~600MB/s wasn't me; my fastest was a 250MB/s scrub.

I can't copy and paste on my phone, but I will post the full outputs of those commands tomorrow.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No, I just patronize people who talk when they shouldn't. Especially if they are giving bad/wrong advice. If there's one thing I've learned in my 2 years here, it's that if someone doesn't correct horrid errors, 100 other people in the future will quote that thread when they are confused about why something isn't working, why they lost data, or why they can't get the speeds they wanted.

There's a time to talk and a time to ask questions/listen.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
Especially if they are giving bad/wrong advice. If there's one thing I've learned in my 2 years here,


Please feel free to point out anything I've said above that is incorrect or wrong...
And no, the fact that something has been used for a long time isn't a valid argument...
Nor is linking to a FAQ that *you* wrote any more valid.

You really have to wonder why people went to the trouble of creating tools like iometer, bonnie, etc. :)

it's that if someone doesn't correct horrid errors, 100 other people in the future will quote that thread when they are confused about why something isn't working, why they lost data, or why they can't get the speeds they wanted.

And here comes your typical strawman again... For sure, anything discussed here will no doubt, in a few months' time (years, maybe), lead to something not working or someone losing data... sigh...

Actually, no, forget about answering... Getting screamed at or spat on doesn't make for a good read...
You seem to have a very hard time dealing with anyone or anything that contradicts your beliefs.
Sorry to all the other readers for going off topic.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
Here is an example of what I meant about not being able to measure accurately:

Copying an 11GB file across two pools using time cp file1 file2 took 28.39s; however, you could see via zpool iostat 1 that it kept writing to the disks for another 9s after that.
If I just timed the cp, it would give me 396MB/s, yet it's really more like 301MB/s: 31% over-optimistic.

Running in one terminal the command:
zpool iostat pool 1

and in another the above dd command:

Code:
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----

pool 10.4T 11.3T 0 0 0 0
pool 10.4T 11.3T 0 0 0 0
pool 10.4T 11.3T 0 0 0 0
pool 10.4T 11.3T 0 0 0 0
pool 10.4T 11.3T 0 0 0 0
pool 10.4T 11.3T 0 0 0 0 <==== dd is started here
pool 10.4T 11.3T 0 0 4.00K 0
pool 10.4T 11.3T 1 0 7.99K 0
pool 10.4T 11.3T 0 0 4.00K 0
pool 10.4T 11.3T 40 0 300K 0
pool 10.4T 11.3T 287 0 1.13M 0
pool 10.4T 11.3T 566 0 2.47M 0
pool 10.4T 11.3T 723 0 3.09M 0
pool 10.4T 11.3T 545 71 2.39M 420K
pool 10.4T 11.3T 561 281 2.45M 1.87M
pool 10.4T 11.3T 705 0 3.02M 0
pool 10.4T 11.3T 701 0 3.00M 0
pool 10.4T 11.3T 722 0 3.08M 0
pool 10.4T 11.3T 182 37 736K 261K
pool 10.4T 11.3T 0 2.57K 0 276M
pool 10.4T 11.4T 0 2.25K 0 261M
pool 10.3T 11.5T 0 2.96K 0 352M
pool 10.3T 11.5T 0 3.53K 0 423M
pool 10.3T 11.5T 0 3.52K 0 417M
pool 10.3T 11.5T 0 3.70K 0 447M
pool 10.3T 11.4T 0 3.30K 4.00K 392M
pool 10.3T 11.4T 0 3.24K 0 385M
pool 10.3T 11.4T 0 3.05K 0 347M
pool 10.3T 11.4T 0 3.50K 0 419M
pool 10.3T 11.4T 0 3.41K 0 409M
pool 10.3T 11.4T 0 3.51K 0 422M
pool 10.3T 11.4T 0 3.95K 0 475M
pool 10.3T 11.4T 0 4.03K 0 485M
pool 10.3T 11.4T 0 3.30K 0 419M
pool 10.3T 11.4T 0 3.46K 0 413M
pool 10.3T 11.4T 0 3.49K 0 417M
pool 10.3T 11.4T 0 3.42K 0 407M
pool 10.3T 11.4T 0 3.36K 4.00K 400M
pool 10.3T 11.4T 0 3.65K 0 438M
pool 10.3T 11.4T 0 4.12K 0 488M
pool 10.3T 11.4T 0 3.59K 0 456M
pool 10.3T 11.4T 0 3.39K 0 403M
pool 10.3T 11.4T 0 1.75K 0 198M
pool 10.3T 11.4T 0 4.33K 0 552M
pool 10.3T 11.4T 0 4.07K 0 521M
pool 10.3T 11.4T 1 3.30K 7.99K 386M
pool 10.3T 11.4T 0 3.44K 0 404M
pool 10.3T 11.4T 0 2.15K 0 236M
pool 10.3T 11.4T 0 4.56K 0 580M
pool 10.3T 11.4T 0 4.62K 0 587M
pool 10.3T 11.4T 0 4.26K 0 505M
pool 10.3T 11.4T 0 2.66K 0 338M
pool 10.3T 11.4T 0 4.52K 0 566M
pool 10.3T 11.4T 0 4.51K 0 573M
pool 10.3T 11.4T 0 2.31K 0 266M
pool 10.3T 11.4T 0 4.52K 0 574M
pool 10.3T 11.4T 0 4.52K 0 574M
pool 10.3T 11.4T 0 2.59K 0 295M <==== dd stops while here
pool 10.3T 11.4T 0 4.01K 0 510M
pool 10.3T 11.4T 0 4.67K 0 593M
pool 10.3T 11.4T 0 3.28K 0 376M
pool 10.3T 11.4T 0 4.34K 0 551M
pool 10.3T 11.4T 0 3.23K 0 378M
pool 10.3T 11.4T 0 4.58K 0 582M
pool 10.3T 11.4T 0 2.92K 0 331M
pool 10.3T 11.4T 0 4.41K 0 553M
pool 10.3T 11.4T 0 782 0 75.8M
pool 10.3T 11.4T 0 0 0 0
pool 10.3T 11.4T 0 0 0 0
pool 10.3T 11.4T 0 464 0 33.9M
pool 10.3T 11.4T 0 0 0 0
pool 10.3T 11.4T 0 0 0 0
pool 10.3T 11.4T 0 0 0 36.0K
pool 10.3T 11.4T 0 0 0 0
pool 10.3T 11.4T 1 285 7.99K 1.90M
pool 10.3T 11.4T 0 0 0 0

dd gave 471MB/s for a 44.45s run.

But from iostat you can tell that the pool took another 12s to finish writing the data to disk (the machine was completely idle and wasn't doing anything but the dd command).

So the actual writing time was 56s, i.e. 371MB/s. That's not bad in itself.
So dd was 27% over-optimistic; exactly what I was describing earlier.
 

malcolmputer

Explorer
Joined
Oct 28, 2013
Messages
55
Alright,

Code:
zpool status


Yields:

Code:
  pool: Huge_Porn_Store
state: ONLINE
scan: scrub repaired 0 in 0h2m with 0 errors on Tue Nov 19 18:14:03 2013
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        Huge_Porn_Store                                 ONLINE      0    0    0
          raidz2-0                                      ONLINE      0    0    0
            gptid/382b11f7-4f46-11e3-821e-5404a6d9fd24  ONLINE      0    0    0
            gptid/38891779-4f46-11e3-821e-5404a6d9fd24  ONLINE      0    0    0
            gptid/38e4d855-4f46-11e3-821e-5404a6d9fd24  ONLINE      0    0    0
            gptid/39423696-4f46-11e3-821e-5404a6d9fd24  ONLINE      0    0    0
            gptid/399e2afd-4f46-11e3-821e-5404a6d9fd24  ONLINE      0    0    0
            gptid/39ff15a1-4f46-11e3-821e-5404a6d9fd24  ONLINE      0    0    0
 
errors: No known data errors


and

Code:
zfs get all | grep compress | awk '{print $2; print $3}'


Yields:

Code:
compressratio
1.00x
compression
off
refcompressratio
1.00x
compressratio
1.00x
compression
off
refcompressratio
1.00x
compressratio
1.00x
compression
off
refcompressratio
1.00x
compressratio
1.00x
refcompressratio
1.00x
compressratio
1.00x
compression
off
refcompressratio
1.00x
compressratio
1.00x
compression
off
refcompressratio
1.00x
compressratio
1.00x
compression
off
refcompressratio
1.00x
compressratio
1.00x
compression
off
refcompressratio
1.00x
compressratio
1.00x
compression
off
refcompressratio
1.00x
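
Side note: a shorter way to check the same thing (using my pool name) would be something like:

Code:
# lists the compression property for the pool and every dataset under it
zfs get -r compression Huge_Porn_Store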
 