SSD Pool Settings


STREBLO

Patron
Joined
Oct 23, 2015
Messages
245
I've got some questions about what needs to be changed for an SSD pool. I've been looking around for information about sector size on SSDs and have been getting conflicting answers about whether it matters at all. While 512-byte and 4K sectors seem to be the norm on hard drives, I've seen mention of huge sector sizes on SSDs. When I checked the sector size of my SSD via fdisk -l, I got 512 bytes; is that normal for an SSD? I've also found information saying an ashift of 13 makes sense on an SSD; why would that be? I know this is a lot of questions, but I've been having trouble finding consistent information about SSD pools on ZFS. Does anyone have experience with how to properly check the sector size of an SSD, and whether ashift has to be aligned with it?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

"Huge" sector sizes are probably a mistake. Without knowing the underlying structure of the flash, anything larger than maybe 8K or 16K is just a shot in the dark. Most SSD manufacturers are going to present 512 or 4K to the user anyways because that's what current operating systems understand.

It seems likely that aligning to 8K is a good idea, and that's what a lot of ZFS people do on other systems. As it stands, FreeNAS is wired to use ashift=12, and it isn't clear that this will be revisited anytime soon. Other optimizations I've suggested for SSD goodness have been dismissed, so I'm not in a hurry to champion another performance optimization.

http://list.zfsonlinux.org/pipermail/zfs-discuss/2014-June/016290.html
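
For anyone who wants to see what their drives actually claim, this is roughly how you'd check on FreeNAS/FreeBSD (the device name is just an example, substitute your own):

Code:
# Logical sector size ("sectorsize") and physical-sector hint ("stripesize")
# as reported to the OS; /dev/ada0 is a placeholder device.
diskinfo -v /dev/ada0

# The drive's SMART/identify data usually lists logical vs. physical
# sector sizes as well.
smartctl -i /dev/ada0

Bear in mind that plenty of SSDs report 512 bytes everywhere regardless of what the flash underneath is doing, so the reported value doesn't tell you much about the "right" ashift.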
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
I suspect AFAs (all-flash arrays) on FreeNAS will become almost exponentially more popular in the future. Don't stop fighting for your right to party, jgreco.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So what you're really saying is that I should piss off Cyberjock by running with a manually created ashift=13 pool?

Well, that's a damn good point actually; why didn't I think of that earlier?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay. So that was strange and interesting and fun.

Apparently at some point it was decided not to enforce ashift=12 anymore. My SSD pool had an ashift of 9.

Code:
[root@storage3] /# zpool list storage3-ssd
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storage3-ssd   460G  14.6M   460G         -     7%     0%  1.00x  ONLINE  /mnt
[root@storage3] /# zpool destroy storage3-ssd
[root@storage3] /# zpool create  -o cachefile=/data/zfs/zpool.cache -o failmode=continue -o autoexpand=on -O compression=lz4 -O aclmode=passthrough -O aclinherit=passthrough -f -m /storage3-ssd -o altroot=/mnt storage3-ssd mirror /dev/gptid/1f063912-9da7-11e5-aa9b-002590fc8616 /dev/gptid/1f1f92d0-9da7-11e5-aa9b-002590fc8616  
[root@storage3] /# zdb -C storage3-ssd | grep ashif
                ashift: 9
[root@storage3] /# sysctl -w vfs.zfs.min_auto_ashift=13
vfs.zfs.min_auto_ashift: 9 -> 13
[root@storage3] /# zpool destroy storage3-ssd                                                                
[root@storage3] /# zpool create -o cachefile=/data/zfs/zpool.cache -o failmode=continue -o autoexpand=on -O compression=lz4 -O aclmode=passthrough -O aclinherit=passthrough -f -m /storage3-ssd -o altroot=/mnt storage3-ssd mirror /dev/gptid/1f063912-9da7-11e5-aa9b-002590fc8616 /dev/gptid/1f1f92d0-9da7-11e5-aa9b-002590fc8616
[root@storage3] /# zdb -C storage3-ssd | grep ashif
                ashift: 13
[root@storage3] /# sysctl -w vfs.zfs.min_auto_ashift=9
vfs.zfs.min_auto_ashift: 13 -> 9


Now admittedly it's a tiny SSD pool, because it was just an experiment before I went all out... eventually it's supposed to be a 1.5TB pool of three-way mirrors (10 devices total, one spare).
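
For the curious, the eventual layout would look something along these lines (with min_auto_ashift set to 13 first; the gptid labels here are obviously placeholders):

Code:
# Three three-way mirror vdevs plus a hot spare; labels are made up.
sysctl -w vfs.zfs.min_auto_ashift=13
zpool create storage3-ssd \
    mirror gptid/aaaa gptid/bbbb gptid/cccc \
    mirror gptid/dddd gptid/eeee gptid/ffff \
    mirror gptid/gggg gptid/hhhh gptid/iiii \
    spare gptid/jjjj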

I guess I now get to go set up some iSCSI on it.
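
(For anyone following along, the iSCSI side is basically just a zvol used as a device extent; the name, size, and volblocksize here are examples, not recommendations:)

Code:
# Create a zvol to back an iSCSI device extent; volblocksize can only
# be set at creation time.
zfs create -V 200G -o volblocksize=16K storage3-ssd/iscsi0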
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
Any reports on how it goes would be greatly appreciated. I'm supposed to be building a pretty large flash array sometime in the near future.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Odd, I thought I posted a follow-up. It seems to work fine, though I wasn't getting really good sequential performance out of it (~150-200MBytes/sec). For my purposes that's probably fine, especially if it increases somewhat when I add more vdevs. I'm not really after massive sequential speeds so much as reduced seek times and increased IOPS. I don't really have a lot of time right now to keep ripping it apart and trying different things, and there's probably some fixing I should do to make iSCSI "better" anyway... so this isn't a really accurate reflection of the whole situation.
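
If anyone wants to run a similarly quick sequential test, a minimal local sketch looks like this (paths and sizes are examples; note that lz4 will compress a stream of zeros away, so use a scratch dataset with compression off if you want honest numbers):

Code:
# Scratch dataset with compression off, since lz4 would shrink /dev/zero
# to almost nothing and inflate the apparent throughput.
zfs create -o compression=off storage3-ssd/bench
# Write 8 GiB sequentially and note the rate dd reports at the end.
dd if=/dev/zero of=/mnt/storage3-ssd/bench/testfile bs=1m count=8192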
 