Recommendation for 12 drives

Itobin

Dabbler
Joined
Jan 29, 2019
Messages
19
Hi,

I have an HP DL180 G6 with 12 x 4TB drives.

FreeNAS is installed on an internal SSD.

A separate Proxmox server is attached via iSCSI across 4 NICs with MPIO set up; that's all working.

My question is: what is the best storage type to use? I've read that putting 12 disks in a single ZFS vdev is bad, and with VMs running, IOPS and data integrity are obviously important.

Thanks in advance.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My question is: what is the best storage type to use?
If you are using the FreeNAS system to store VMs, you are going to want to configure those drives as a pool of mirror vdevs to obtain the most IOPS possible. With only 12 drives, you will be limited to six vdevs, which is not optimal, but it will probably give you reasonable results and about 22TB of total storage. With iSCSI, though, it is important that you don't fill the pool above about 50%, so you must self-limit to around 11TB of data. You may still need a SLOG if you are doing sync writes to the pool.
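For illustration, a pool of six mirror vdevs built from the command line would look something like this (a rough sketch only; the pool name "tank" and the da0-da11 device names are placeholders, and on FreeNAS you would normally let the GUI create the pool so the disks get proper gptid labels):
Code:
# Six 2-way mirror vdevs in one pool (device names are examples only)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11

# Keep the pool under ~50% full for iSCSI by capping the root dataset
zfs set quota=11T tank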
A separate Proxmox server is attached via iSCSI across 4 NICs with MPIO set up; that's all working.
What kind of NIC?
 

Itobin

Dabbler
Joined
Jan 29, 2019
Messages
19
Hi,

So essentially we will have 6 x 4TB vdevs available, tolerating one disk loss per vdev? Curious though, wouldn't more disks per vdev give better IOPS, i.e. 2 x 24TB or maybe 4 x 12TB vdevs in RAIDZ2?

The NICs are 1Gb each.

Thanks.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
wouldn't more disks per vdev give better IOPS, i.e. 2 x 24TB or maybe 4 x 12TB vdevs in RAIDZ2?
No.

Vdevs are the unit of performance (although real-world test results can be confusing, due to caching and testing methods).

One vdev should perform about as well as the slowest single disk in that vdev.

Six vdevs will give more IOPS than two or four; the theory is as simple as that. For VMs, you need IOPS.

If you're more worried about redundancy, you can have it at the expense of performance, or you can spend more money and have both.
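As a rough worked example (assuming ~100 random IOPS per 7200 RPM disk, which is a rule-of-thumb estimate, not a measurement):
Code:
# Random IOPS scale with vdev count, not disk count:
#   6 x 2-disk mirrors  -> 6 vdevs x ~100 IOPS = ~600 IOPS
#   2 x 6-disk RAIDZ2   -> 2 vdevs x ~100 IOPS = ~200 IOPS
#   1 x 12-disk RAIDZ2  -> 1 vdev  x ~100 IOPS = ~100 IOPS
# Mirrors can do better still on reads, since both disks in each
# mirror can serve reads independently.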
 

Itobin

Dabbler
Joined
Jan 29, 2019
Messages
19
OK, so in that case, given that these are 7200 RPM SATA NAS disks, would RAID6 be a better option?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If you want RAID6, then you're in the wrong place/OS. FreeNAS can do RAIDZ2 (the rough equivalent), but that will produce one vdev; see my advice above about vdev performance.
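For comparison, the closest ZFS equivalent to two 6-disk RAID6 arrays would be something like this (placeholder pool and device names again), giving more usable space but only two vdevs' worth of IOPS:
Code:
# Two 6-disk RAIDZ2 vdevs in one pool
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11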
 

Itobin

Dabbler
Joined
Jan 29, 2019
Messages
19
Sorry, I have to make a correction to my statement.

I have 12 drives, but I am planning to use 2 of them for backups in a RAID0 stripe, so if you remove those 2 drives, I have 10 disks available.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703

Itobin

Dabbler
Joined
Jan 29, 2019
Messages
19
But wouldn't using that result in each one being the speed of a single disk?
 

Itobin

Dabbler
Joined
Jan 29, 2019
Messages
19
Sorry, I think I'm understanding it now: mirrored vdevs with one volume on top.
 

Itobin

Dabbler
Joined
Jan 29, 2019
Messages
19
I have created a new volume; within that, I created a mirror of 2 disks, then kept using the 'Add Extra Device' option to create additional mirrors. It gave me 17TB usable.

I then created a new ZFS dataset with a 64K block size. Is this the correct approach?
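(For reference, the command-line equivalent of that would be something like the following; the zvol/dataset names and size here are examples only:)
Code:
# Zvol for an iSCSI device extent, with 64K blocks
zfs create -V 10T -o volblocksize=64K iscsi-storage/vm-disk

# Or, for a file extent, a dataset with a 64K record size
zfs create -o recordsize=64K iscsi-storage/vm-files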

In addition, I'm getting these results from a VM.

Write test

Code:
dd if=/dev/zero of=test bs=1G count=5
5+0 records in
5+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 21.6147 s, 248 MB/s

Read test

Code:
dd if=test of=/dev/null
10485760+0 records in
10485760+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 54.3854 s, 98.7 MB/s

Reads are not good.
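(One caveat on that read test: without a bs= argument, dd reads in 512-byte blocks, which is why it reports 10485760 records, and that alone can drag down the apparent throughput. A fairer read test would be something like:)
Code:
# Match the write test's large block size
dd if=test of=/dev/null bs=1M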

Thanks.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I have created a new volume; within that, I created a mirror of 2 disks, then kept using the 'Add Extra Device' option to create additional mirrors. It gave me 17TB usable.

I then created a new ZFS dataset with a 64K block size. Is this the correct approach?
That doesn't sound right. I would want to look at the console output of the command zpool status so I can be sure what your pool layout is.
It should look something like this:
Code:
  pool: Irene
 state: ONLINE
  scan: scrub repaired 0 in 0 days 03:22:04 with 0 errors on Wed Nov 28 03:22:05 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        Irene                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/8710385b-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/87e94156-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/88db19ad-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/89addd3b-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/8a865453-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8b66b1ef-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/8c69bc72-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8d48655d-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/8e2b6d1f-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/8efea929-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/8fd4d25c-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/90c2759a-becf-11e8-b1c8-0cc47a9cd5a4  ONLINE       0     0     0

errors: No known data errors
 

Itobin

Dabbler
Joined
Jan 29, 2019
Messages
19
Hi,

Code:
  pool: iscsi-storage
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        iscsi-storage                                   ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/34a15667-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
            gptid/354b4b70-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/360e0dc2-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
            gptid/36cb9f10-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/378636fd-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
            gptid/38462cef-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/390439f4-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
            gptid/39b20c2a-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/3a6f04ae-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0
            gptid/3b22a757-24cc-11e9-8026-3c4a92dfecee  ONLINE       0     0     0

errors: No known data errors

  pool: nfs
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:22 with 0 errors on Sun Jan  6 00:00:22 2019
config:

        NAME                                          STATE     READ WRITE CKSUM
        nfs                                           ONLINE       0     0     0
          gptid/3790f36b-f1bf-11e8-90e3-3c4a92dfecee  ONLINE       0     0     0
          gptid/3849c4eb-f1bf-11e8-90e3-3c4a92dfecee  ONLINE       0     0     0
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Nice. That looks perfect. It might still help, if you are doing sync writes, to have a SLOG (Separate LOG) device.
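If you do add one, the shape of the commands would be something like this (a sketch only; ada2 is a placeholder for a fast SSD with power-loss protection):
Code:
# Attach a separate log (SLOG) device to the pool (device name is an example)
zpool add iscsi-storage log ada2

# Optionally force sync writes on the iSCSI pool if integrity trumps speed
zfs set sync=always iscsi-storage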
 