change iSCSI drive type

Status
Not open for further replies.

dvg_lab

Dabbler
Joined
Jan 27, 2015
Messages
11
I installed FreeNAS 9.3 on a USB stick and am exporting a RAIDZ3 zvol over iSCSI, backed by 8 non-SSD hard drives. I can't figure out why the VMware iSCSI initiator shows the exported device as SSD instead of non-SSD. I can't find how to change it. What can this affect?
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
I once read something about this topic on the forums. It was a really well-detailed explanation given by a ZFS guru, so I suggest you search for it, because it's really valuable.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I once read something about this topic on the forums. It was a really well-detailed explanation given by a ZFS guru, so I suggest you search for it, because it's really valuable.

It was over in the 9.3 prerelease forum and it wasn't by a "ZFS guru." Jordan got a bit petulant and deleted the entire thread when multiple people weren't seeing things his way.

The short form is "cuz iX thinks ZFS acts more like an SSD than a HDD" - which is true in a few ways but mostly false unless you actually have it running on SSD. Nexenta doesn't report as an SSD, Solaris doesn't report as an SSD.

ZFS with the new iSCSI subsystem and zvols is indeed capable of doing things like unmapping of blocks (think: TRIM/UNMAP) but actually reporting as an SSD is a significant deviation from expected storage behaviour in many environments, particularly where automated provisioning systems key off of variables such as this.

Several VMware folks have said that this is unexpected and strange behaviour, but I am not expecting it to be changed.
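
If you want to see how ESXi has classified a given LUN, something like this works from the ESXi shell (the naa. device ID below is just a placeholder; substitute your own):

```sh
# List the device's properties; look for the "Is SSD:" line
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx
```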

Given a system that's large enough and properly architected, I'm willing to concede that ZFS can begin to feel a bit like a magic SSD. I'm currently building a new VM storage server, and even in a small configuration (64GB RAM, 256GB L2ARC) with four WD Red 2.5" 1TB drives in a striped pair of mirrors, the read speeds are pretty amazing, especially once the L2ARC's warmed up. But part of that is that despite being a 2TB pool, I'm keeping disk utilization down to less than 250GB, which means that writes are not struggling to find free blocks and reads are usually filled out of the L2ARC. Still, I'm not getting write speeds better than 80MB/sec even with sync=standard, so I am not buying the SSD tag. ;-)
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
Actually, the "ZFS guru" I was referring to was you...
I didn't remember that it was a discussion with Jordan (just someone from iX) or where it was, but I remember the whole argument and the detailed explanation you gave. Glad you repeated a bit of it to answer the OP.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Actually, the "ZFS guru" I was referring to was you...

As I said...

it wasn't by a "ZFS guru."

:tongue:

I'd actually be just fine with it if the SSD tag were a configurable option, since you might indeed have an all-SSD pool, or you might be trying to design a pool for similar characteristics. ZFS with hard drives and ARC (and possibly L2ARC) does very much resemble a hybrid solid state hard drive in terms of performance, and I expect that a TrueNAS box that's sold to a customer for VM storage is going to even more closely resemble an SSD, but the fact of the matter is that here in the forums, you have guys trying to do stuff on the cheap, which usually means RAIDZ2 and not-enough-RAM. A pool with a single vdev is always going to more closely resemble a single hard disk.

I'm currently (literally!) exploring performance characteristics of a new VM storage server and trying to quantify stuff. I have an environment here which is not particularly stressy in terms of writes, because our local design policy has always been to minimize superfluous writes, but I'd really like something that can kick into at least medium gear and let a VM feel like it's almost got a real hard drive for those cases where heavy writes are needed. For reads, I think I'm golden because I'll be doing three-way mirror vdevs, and I can easily bump up from 64GB of RAM to 128GB if I feel it'd help. The three-way mirror vdev thing is in order to fulfill the availability requirement (a single failure should not eliminate redundancy) but at the same time it gives ZFS more devices to be able to use independently for reads. Between that and the L2ARC I am expecting it to feel much closer to an SSD for read purposes. Keeping pool utilization low (< 25%) should also help with write speeds.
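
Just for illustration, the layout I'm describing is roughly this (disk names are placeholders, not my actual hardware):

```sh
# Two striped three-way mirror vdevs: any single failure leaves
# redundancy intact, and reads can be spread across three disks per vdev
zpool create tank mirror da0 da1 da2 mirror da3 da4 da5

# Add an L2ARC device for read caching (ada0 assumed to be an SSD)
zpool add tank cache ada0
```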
 

dvg_lab

Dabbler
Joined
Jan 27, 2015
Messages
11
Apparently I missed something. Even if iX hardcoded the SSD type somewhere in the iSCSI target, we can make a patch and fix it, since it's open source, isn't it? The fix (jgreco, thanks for the link) on the ESXi side doesn't look so good. I'll try to email mav@; he'd tell us where to dig.
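
For anyone curious, the ESXi-side workaround is something along these lines (a sketch based on VMware's PSA claim-rule mechanism; the SATP name and device ID are placeholders and must match whatever actually claims your LUN):

```sh
# Add a claim rule that marks the device as non-SSD, then reclaim it
esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA \
    --device=naa.xxxxxxxxxxxxxxxx --option="disable_ssd"
esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
```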
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
It can be specified like `option rpm 7200` in the LUN section of /etc/ctl.conf. But there is no GUI/middleware support to control that yet.
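
A hand-edited LUN section would look roughly like this (target name, portal group, and zvol path are placeholders; and since the middleware regenerates /etc/ctl.conf, the change won't survive a reconfiguration):

```
target iqn.2005-10.org.freenas.ctl:vmstore {
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/vmstore
        # report as a 7200 RPM rotational device instead of SSD
        option rpm 7200
    }
}
```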
 

dvg_lab

Dabbler
Joined
Jan 27, 2015
Messages
11
My findings from IRC, and mav@'s answers:

1. FreeNAS reports SSD so that TRIM/UNMAP works on the datastore, letting ZFS free up unused blocks. And because of the copy-on-write engine, random writes/reads aren't always truly random anyway.
2. It has to do with some new VMware certifications.
3. The flag isn't speed-related; check your RAID type/memory/network, etc.
4. Trust the FreeNAS default settings; they are reasonable.
5. So you shouldn't fix it on the ESXi side either.
6. Even if you change settings in ctl.conf, the GUI will rewrite them at the first chance.

And the best choice for VMs on ESXi: a 10GbE network, striped mirrors, more RAM, and running zilstat to check whether you need an SSD for the ZIL (a basic run is shown below).
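
zilstat ships with FreeNAS; a basic invocation looks like this (interval and count are just example values):

```sh
# sample ZIL traffic once per second, ten samples;
# consistently large byte counts suggest a dedicated SLOG would help
zilstat 1 10
```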

Thanks to mav@ and the IRC channel.
 