- Dec 12, 2011
Yeah, sorry, closed that since I didn't want to risk another thread getting deleted. And we got a good answer that's now at the end of the thread, so I'm inclined to keep it closed.
I guess that kinda makes sense. It could perform much better than a hardware RAID, given sufficient resources...
> Also Windows Cluster Shared Volumes for Hyper-V.

Interesting. What filesystem are they using for that, anyways?
> Interesting. What filesystem are they using for [CSV], anyways?

NTFS or ReFS, the latter available in 2012 R2 only. Locking is handled at the cluster level via SMB negotiation between the Hyper-V hosts.
> So, as I understand it, showing as SSD is a good thing, right?...

Well, apparently it might be if your initiator is Windows and you're running NTFS on it. I'd have thought that to be a strange edge case, but apparently it's a little more common than I would have expected.
> But showing it as SSD is also bad for other cases, as covered elsewhere in this thread. The root problem is that iSCSI doesn't really allow a bitfield of flags like "the underlying datastore is subject to fragmentation, so don't do things like defrag" or "the iSCSI disk supports UNMAP" or "the underlying storage is hybrid". Because if you could indicate device capabilities, then you wouldn't need to overload the meaning of a tag like "SSD".

Ding ding ding. Ideally one shouldn't make decisions based on what the underlying thing is, but rather on what it can (or can't) do (see: duck typing).
> The root problem is that iSCSI doesn't really allow a bitfield of flags like "the underlying datastore is subject to fragmentation so don't do things like defrag" or "the iSCSI disk supports UNMAP" or "the underlying storage is hybrid".

That's not true with respect to UNMAP: SCSI provides enough information about UNMAP capabilities separately from SSD status, and there is no official dependency in the specifications between UNMAP and SSD status. The rest is indeed true -- SSD is the only flag controlling request sorting, defragmentation, and probably other not-very-related things.
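The capability-over-identity point can be sketched against the Linux block layer, which already exposes these properties as separate sysfs attributes. The paths below are the standard `queue` attributes; the helper and policy names are mine, not anything from the thread:

```python
# Sketch: decide behavior from individual block-device capabilities
# rather than from a single "is it an SSD?" bit. The sysfs paths are
# the standard Linux queue attributes; function names are illustrative.
from pathlib import Path

def read_queue_attr(device: str, attr: str, default: str = "0") -> str:
    """Read a sysfs queue attribute for a block device, e.g. 'rotational'."""
    p = Path(f"/sys/block/{device}/queue/{attr}")
    return p.read_text().strip() if p.exists() else default

def capabilities(rotational: str, discard_granularity: str) -> dict:
    """Derive per-capability decisions instead of overloading one SSD flag."""
    return {
        # Only rotational media benefits from request sorting / defrag.
        "sort_requests": rotational == "1",
        "defrag_useful": rotational == "1",
        # Discard (UNMAP/TRIM) support is reported independently of
        # rotational status, just as the SCSI specs keep them separate.
        "supports_discard": discard_granularity != "0",
    }

if __name__ == "__main__":
    caps = capabilities(
        read_queue_attr("sda", "rotational"),
        read_queue_attr("sda", "discard_granularity"),
    )
    print(caps)
```

Note that a thin-provisioned rotational volume ends up with `sort_requests` true *and* `supports_discard` true, a combination a single SSD flag can't express.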
> Suddenly now we have hard drives that support shingled recording, so now we'd kinda need a new flag for 2015-era drives: "drags butt during writing."

I can't wait until I have to explain to people why those make for a terrible choice for your zpool.
> I can't wait until I have to explain to people why those make for a terrible choice for your zpool.

Their performance doesn't have to be so terrible; we're just hamstrung by the interfaces. All of a sudden, assumptions we made about hard drives 20 years ago are no longer valid. This happened with SSDs, and it took a couple of years for TRIM support to really propagate through all the OSes, file systems, and controllers. I don't know what the final solution will look like, but it seems like this is going to be the future of spinning rust media; there's a lot of money in that, so the workarounds will come.
"Well, if you never write to it, ever, it will do fine ..."
> I can't wait until I have to explain to people why those make for a terrible choice for your zpool.

Well, they might or they might not... Our largest pool here is archival in nature, which doesn't actually mean no rewrites, but typically something written will be left alone for years at a time, and lower write performance would be an acceptable trade-off for committing data to the pool. I would rather have somewhat fewer disks and lower power consumption, since the pool isn't in need of a large number of disks (FTP server, ISO storage, etc.).
> Based on that, it looks like they'll be perfectly fine for a situation like a desktop or archival storage that doesn't need sustained writes, and can band-aid their way over the inherent shingled-write penalty with a non-volatile write cache (either a reserved section of disk or NAND), but for a server environment that doesn't have idle time to allow for garbage collection, I can't see it being able to keep up.

It seems like it could potentially be a very useful tier of storage, but, yes, bad for many types of workloads. A write cache only helps with bursty write traffic, and then you basically have this big unknown hazard hanging over your head if there's too much write traffic. The ideal workload is one where commit speed isn't a serious concern. The Seagate Archive 8TBs are reported to write at about 1/3 to 1/4 the speed of a conventional HDD, so I'm not even convinced it's a big deal.
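The "write cache only helps bursty traffic" dynamic can be illustrated with a toy model. All numbers here are made-up assumptions (cache size, destage rate), not specs of any real SMR drive:

```python
# Toy model (all parameters are illustrative assumptions): an SMR drive
# absorbs incoming writes into a non-volatile cache at full speed and
# destages them to shingled zones at a much lower background rate.
# Sustained writes overflow the cache; bursty writes with idle gaps drain.
def smr_backlog(workload_mb_per_s, cache_mb=20_000, destage_mb_per_s=40):
    """Return per-second cache occupancy (MB) for a sequence of write rates."""
    occupancy, history = 0.0, []
    for rate in workload_mb_per_s:
        occupancy += rate                              # writes land in cache
        occupancy -= min(occupancy, destage_mb_per_s)  # background destage
        occupancy = min(occupancy, cache_mb)           # past this, writes stall
        history.append(occupancy)
    return history

# Sustained 150 MB/s for 10 minutes: the cache pins at its limit and
# further writes would stall at shingled-zone speed.
sustained = smr_backlog([150] * 600)

# Bursty: 150 MB/s for 60 s, then 9 minutes idle -- the backlog drains fully.
bursty = smr_backlog([150] * 60 + [0] * 540)
```

Under these assumptions `sustained` ends pinned at the 20,000 MB cap while `bursty` drains back to zero, which is the thread's point: the cache is a band-aid that only works when the workload gives the drive idle time.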
> I remember the 5MB & 10MB drives. Back in ~1990 I had a 70MB ESDI drive.

I fondly remember interfacing ESDI drives to Sun workstations (SCSI) via an Emulex interface translator. Two drives as a single SCSI target (c0t0d1s0!). Maybe the manufacturers should resurrect the old 5.25" full-height standard.