Need to resize P3700 NVMe SLOG SSD?

Status
Not open for further replies.

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
I was researching how to over-provision my P3700 for use as a SLOG/L2ARC and came across a thread where Intel engineering confirmed these drives have an additional 25% of spare area for use during garbage collection etc. Does this mean over-provisioning isn't actually required or beneficial?

Here's the thread I stumbled upon: https://communities.intel.com/thread/96093?start=0&tstart=0

If it is still beneficial, any pointers on how to accomplish it, and whether it's necessary to adjust anything else in FreeNAS to optimise?

Thanks in advance.
 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
I once read that any SSD sold as 250GB rather than 256GB had built-in over-provisioning, same for 120GB vs 128GB, etc. I don't know if this is factual, just something I read a year or so ago.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
ZIL SLOG devices typically don't need to be any larger than 4-16GB -- or more, of course, depending on the system -- so there's a lot of capacity for overprovisioning a device the size you guys are discussing. I'd try the hdparm approach described here. I've used it to set up the Intel DC S3700 SSDs I use for a ZIL SLOG; I assume this technique would work with the P3700, too:

https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm
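For the record, that hdparm method works by shrinking the drive's reported capacity via the Host Protected Area under Linux. A minimal sketch, assuming a 512-byte-sector drive at /dev/sdX (placeholder) and a 16GiB target size:

Code:
# show current visible vs. native max sectors
hdparm -N /dev/sdX
# permanently ('p' prefix) limit the visible size to 16GiB worth of sectors
hdparm -Np33554432 --yes-i-know-what-i-am-doing /dev/sdX
# power-cycle the drive so the new limit takes effect, then partition as usual

The capacity beyond the new limit becomes extra spare area for the controller to use.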
 

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
From further reading it seems you can't over-provision NVMe drives yet... still reading to understand more, though.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Just partition the nvme drive and add the partition as the slog.

Code:
gpart create -s GPT daX
gpart add -t freebsd-zfs -a 4k -s 16G daX
# run 'glabel status' and find the gptid of daXp1
zpool add tank log /dev/gptid/[gptid_of_daXp1] 


Edit: the above assumes a pool name of "tank" and that daX is the NVMe drive (on FreeNAS an NVMe drive will typically show up as nvdX rather than daX, so substitute accordingly).
Edit 2: changed for gptid per CJ
 
Last edited:

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Out of curiosity, is this still needed? I recall reading somewhere that a dev mentioned TRIM would handle it. Perhaps this is more for drive longevity? Maybe I'm just confused and the coffee hasn't kicked in yet.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Out of curiosity, is this still needed? I recall reading somewhere that a dev mentioned TRIM would handle it. Perhaps this is more for drive longevity? Maybe I'm just confused and the coffee hasn't kicked in yet.
It can't hurt...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Out of curiosity, is this still needed? I recall reading somewhere that a dev mentioned TRIM would handle it. Perhaps this is more for drive longevity? Maybe I'm just confused and the coffee hasn't kicked in yet.

No, I submitted a feature request for this a while ago (https://bugs.freenas.org/issues/2365) and was challenged to prove it was necessary.

TRIM of course greatly improves the situation, but having a larger bucket of free pages CANNOT HURT. There's no guarantee that it will make a given platform faster, but it has the potential to do so, and as long as the SLOG is appropriately sized it cannot have a negative impact on performance. So there's no good reason not to do this, other than that the developers didn't want to spend the time to figure out how to size a SLOG appropriately (which was more of a concern several years ago).

A sudden, sustained burst of write traffic that depletes the SSD's pool of free pages faster than garbage collection can blank them is probably not a common situation. It's an optimization, to be sure, and maybe not an important one, but since it can't hurt, why not.
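As a rough sanity check on "appropriately sized" (a common rule of thumb, not anything specific to this thread): the SLOG only ever has to hold the few seconds of sync writes ZFS buffers between transaction group commits, so the ingest rate sets the ceiling.

Code:
# back-of-the-envelope SLOG sizing (assumes the default ~5 s txg interval, ~2 txgs in flight)
# 10GbE ingest:  ~1.25 GB/s
# 1.25 GB/s x (2 x 5 s) = ~12.5 GB  ->  a 16 GB partition leaves comfortable headroom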
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Just partition the nvme drive and add the partition as the slog.

Code:
gpart create -s GPT daX
gpart add -t freebsd-zfs -a 4k -s 16G daX
zpool add tank log daXp1


Edit: the above assumes pool name of "tank" and daX is the nvme drive.

Your code is wrong. You should not be adding daXp1. You should be adding based on the gptid.
 

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
Thanks CJ, and thanks Mlovelace for updating the instructions above.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Meh, it's just the slog. You can always export and re-import.

https://forums.freenas.org/index.php?threads/building-pools-from-the-cli.17540/

There's a problem though: that's one device you absolutely don't want to get "lost". If the slog can't be found, the zpool will NOT import on bootup. There are no provisions in the WebGUI to import the zpool without the slog.

If you were running TrueNAS HA, you'd be in serious doo-doo for doing that (it is possible to break High Availability). Of course, I'm sure you wouldn't have done what you did. ;)
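If someone does end up with a pool that refuses to import because its log device went missing, the CLI can usually recover it. A rough sketch, with "tank" and the device names as placeholders:

Code:
# import despite the missing log device (-m), then drop the dead log vdev
zpool import -m tank
zpool remove tank <old_log_device>
# re-add the replacement SLOG partition by gptid
zpool add tank log /dev/gptid/<gptid_of_new_partition>

On FreeNAS you'd still want the GUI to know about the pool afterwards, so treat this as a recovery measure rather than standard procedure.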
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
There's a problem though, that's one that you absolutely don't want to get "lost". If the slog can't be found, the zpool will NOT import on bootup. There is no provisions in the WebGUI to import the zpool without the slog.

If you were running TrueNAS HA, you'd be in serious doo-doo for doing that (it is possible to break High Availability). Of course, I'm sure you wouldn't have done what you did. ;)
Yeah no, I was attempting to be ironic. I just adjusted the HPA through camcontrol on my slog and added it through the GUI. That option isn't available for NVMe, apparently. I adjusted the post above to use the gptid. I'd hate to have caused someone issues down the road, since FreeNAS does things a bit differently than FreeBSD.
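For anyone who wants to do the same HPA trick on a SATA SSD from FreeBSD/FreeNAS itself, camcontrol can set it. A sketch only, with ada2 and the sector count as placeholders for your own device and target size:

Code:
# check the drive's current/native capacity and HPA support
camcontrol identify ada2 | grep -i -e sectors -e hpa
# persistently (-P) limit visible capacity to ~16 GiB of 512-byte sectors, confirm with -y
camcontrol hpa ada2 -s 33554432 -P -y
# power-cycle the drive before repartitioning the now-smaller device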
 

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
It doesn't look like NVMe devices have gptids exposed (yet). Any other ideas, guys?

Code:
% glabel status
                                      Name  Status  Components
gptid/c20085e9-8c0f-11e5-b72e-90e2ba382e3c     N/A  da0p2
gptid/477508a3-3a74-11e4-b9cb-90e2ba382e3c     N/A  da1p2
gptid/42391ba3-3a74-11e4-b9cb-90e2ba382e3c     N/A  da2p2
gptid/43109c05-3a74-11e4-b9cb-90e2ba382e3c     N/A  da3p2
gptid/43f12866-3a74-11e4-b9cb-90e2ba382e3c     N/A  da4p2
gptid/44ceabd7-3a74-11e4-b9cb-90e2ba382e3c     N/A  da5p2
gptid/4087080c-3a74-11e4-b9cb-90e2ba382e3c     N/A  da6p2
gptid/415efb3e-3a74-11e4-b9cb-90e2ba382e3c     N/A  da7p2
gptid/45aecb5f-3a74-11e4-b9cb-90e2ba382e3c     N/A  da8p2
gptid/48592025-3a74-11e4-b9cb-90e2ba382e3c     N/A  da9p2
gptid/4693dff1-3a74-11e4-b9cb-90e2ba382e3c     N/A  da10p2
gptid/9512d2bb-fe87-11e5-9385-0007430495a0     N/A  ada0p2
gptid/94e2e509-fe87-11e5-9385-0007430495a0     N/A  ada1p2
gptid/6309327c-2472-11e6-9198-0007430495a0     N/A  ada2p1


or camcontrol

Code:
% camcontrol devlist
<ATA WDC WD40EFRX-68W 0A80>        at scbus0 target 0 lun 0 (pass0,da0)
<ATA WDC WD40EFRX-68W 0A80>        at scbus0 target 1 lun 0 (pass1,da1)
<ATA WDC WD40EFRX-68W 0A80>        at scbus0 target 2 lun 0 (pass2,da2)
<ATA WDC WD40EFRX-68W 0A80>        at scbus0 target 3 lun 0 (pass3,da3)
<ATA WDC WD40EFRX-68W 0A80>        at scbus0 target 4 lun 0 (pass4,da4)
<ATA WDC WD40EFRX-68W 0A80>        at scbus0 target 5 lun 0 (pass5,da5)
<ATA WDC WD40EFRX-68W 0A80>        at scbus0 target 6 lun 0 (pass6,da6)
<ATA WDC WD40EFRX-68W 0A80>        at scbus1 target 0 lun 0 (pass7,da7)
<ATA WDC WD40EFRX-68W 0A80>        at scbus1 target 1 lun 0 (pass8,da8)
<ATA WDC WD40EFRX-68W 0A80>        at scbus1 target 2 lun 0 (pass9,da9)
<ATA WDC WD40EFRX-68W 0A80>        at scbus1 target 3 lun 0 (pass10,da10)
<Samsung SSD 850 PRO 256GB EXM02B6Q>  at scbus3 target 0 lun 0 (ada0,pass11)
<Samsung SSD 850 PRO 256GB EXM02B6Q>  at scbus4 target 0 lun 0 (ada1,pass12)
<SATA SSD S9FM02.1>                at scbus8 target 0 lun 0 (ada2,pass13)


for example

Code:
    NAME                                            STATE     READ WRITE CKSUM
    RAID                                            ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/4087080c-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/415efb3e-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/42391ba3-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/43109c05-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/43f12866-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/44ceabd7-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/45aecb5f-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/4693dff1-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/477508a3-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
        gptid/48592025-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
    logs
      nvd0p1                                        ONLINE       0     0     0

errors: No known data errors
 
Last edited:
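One possible lead on the missing gptid (just a suggestion based on stock FreeBSD tools, not something confirmed in this thread): the GPT UUID is still stored in the partition table even when glabel doesn't list a gptid entry, and gpart will print it.

Code:
# rawuuid is the GPT UUID that /dev/gptid/ device nodes are named after
gpart list nvd0 | grep rawuuid
# if a matching /dev/gptid/<rawuuid> node exists, the log could be removed and
# re-added by that path instead of the bare nvd0p1 device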