SLOG? Do I *really* need it?

Status
Not open for further replies.

looney

Dabbler
Joined
Jan 30, 2017
Messages
16
I have no personal experience with the 750 yet, but I have not found anything bad about them on the forums.

Here are some nice reads:
https://forums.freenas.org/index.php?threads/hw-raid-for-zil-question.30210/
https://forums.servethehome.com/index.php?threads/newegg-intel-750-400gb-aic-nvme-drive-299-99.7553/
https://forums.freenas.org/index.php?threads/slog-l2arc-ssd-units.32804/

They are not as good as the 3x00 series as far as endurance goes, but for a lab system that may not be an issue.
And of course the main thing is that you want a solution with built-in power-loss protection, which the 750 has.

I do not know, however, if it's still recommended to mirror NVMe SLOG devices (or if it's even possible).
I know that for normal SSDs it's highly recommended to mirror them because of how crucial the SLOG is.

It also might be a good idea to test the array first without the SLOG, since you should still be able to add it after the fact.
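For reference, adding (or later removing) a log device after the fact is a one-liner. This is only a rough sketch; "tank" and the nvd device names are placeholders for your own pool and NVMe devices:

  # add a single NVMe device as the SLOG
  zpool add tank log nvd0
  # or add a mirrored pair as the SLOG
  zpool add tank log mirror nvd0 nvd1
  # log devices can also be removed again without harming the pool
  zpool remove tank nvd0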


PS: I am still quite the FreeNAS noob, so be sure to wait for other people to respond before making up your mind.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The big trouble with the 750 is endurance. It's rated at 70GB written per day, or 0.175 DWPD (drive writes per day), for 5 years... a total of 127TB written over its life. The S3700, which is one of the commonly recommended drives, is rated at 10 DWPD for 5 years. So, my 200GB (which is underprovisioned to 16GB) is good for 3,650TB written over its life.
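The arithmetic, for anyone who wants to plug in their own drive's rated numbers (a quick back-of-the-envelope sketch in sh; the figures are the spec-sheet ratings above):

  echo $(( 70 * 365 * 5 ))          # Intel 750: 70 GB/day for 5 years = 127,750 GB, ~127 TB written
  echo $(( 200 * 10 * 365 * 5 ))    # S3700 200GB: 10 DWPD x 200 GB for 5 years = 3,650,000 GB, ~3,650 TB written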

The 750 will work, but keep an eye on it via SMART monitoring and expect that you'll wear it out fairly quickly. To give you an idea, my SLOG, which is serving a 12-drive pool running about 40 VMs, has endured about 38TB of writes in a bit under a year of production use.
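If it helps, on FreeNAS/FreeBSD you can watch the write totals on an NVMe device with either of the commands below; "Data Units Written" is the number to track over time. The device names are just examples, adjust them to your system:

  # SMART / Health Information log via nvmecontrol (ships with FreeBSD)
  nvmecontrol logpage -p 2 nvme0
  # or via smartmontools, if your build has NVMe support
  smartctl -a /dev/nvme0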
 

GangstaRIB

Cadet
Joined
Jan 29, 2017
Messages
6
Awesome, thanks for your response. I was having a hell of a time trying to figure out why the S3700s were so damn expensive when the 750s looked like they would churn out far better performance at 1/3 to 1/2 the price. 70GB/day is definitely not enough for an enterprise SAN, but I will probably only hit that on my 'build days'. It's more or less for lab use, but I will be running an ESXi host with maybe 20 or fewer VMs. If I averaged 35GB/day I would be very surprised, which would give me 10 years of life in theory... I'm sure I'd be replacing platters by then.

I wanted to go SSD, but I wanted about 4TB of usable space, which would have been a pretty penny. I decided on 8 x 2TB Seagate NAS drives, which I will stripe, mirror, and provision to only 50%, as recommended for CoW filesystems over iSCSI. I will be using iSCSI with sync writes, which is one reason why I picked FreeNAS: the interface, plus the advantages of ZFS's SLOG with sync writes.

I have buddies that have had to rebuild their labs because they run async with other products and then catch a power blip, a power button, an OS crash, etc. I suppose that would not be the end of the world since it's not prod, but I'm interested in protecting my data as much as possible. I've already dropped a few grand on the full setup, so a SLOG at 300 bucks seems like a small investment for that (and it also has power-loss protection). Sure, an offline backup is also best practice, but we all know how that goes with our labs.
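For anyone following along, that layout boils down to something like the sketch below. The pool name, da device names, and zvol name are all placeholders, not anything prescribed:

  # four mirrored pairs, striped together (8 x 2TB -> ~8TB raw)
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
  # a sparse (thin) zvol for the ESXi iSCSI extent, sized to keep the pool around 50% provisioned
  zfs create -s -V 4T tank/esxi
  # force every write through the ZIL/SLOG, regardless of what the initiator asks for
  zfs set sync=always tank/esxi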

Should be a fun ride. I still have FedEx dropping things off at the door.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
I'm curious. Power loss protection in an SSD is fine, but not a replacement for a UPS. If these other labs had data loss on power failure, how did it happen? Did they just not have UPS systems at all, or were only some computers on them, or what?
 

GangstaRIB

Cadet
Joined
Jan 29, 2017
Messages
6

Sh*t happens... lol. Cords get pulled by accident, OSes crash, etc. I will have a UPS, but neither is a replacement for the other.
 

GangstaRIB

Cadet
Joined
Jan 29, 2017
Messages
6

For what it's worth, a SLOG with an Intel 750 made a huge difference with sync writes. They still weren't at async levels, but very close. The setup is an LSI 9208 HBA with 8 x 2TB Seagate wolf NAS drives (5,900rpm, consumer level) in striped mirrors. I was getting in the neighborhood of 350MB/s sequential writes async; with sync it was closer to 50MB/s. I added the 750 as a SLOG with sync=always on the pool and I was seeing 300MB/s or so. The write tests were very close to the local SATA 6Gb/s SSD I had on my host. Random write tests obviously were not as good.

Reads I'm not sure of... with my 10Gb iSCSI HBA I was getting just over 1,000MB/s, but at that level it's quite possible my NIC overhead was a limiting factor (I hadn't done much tuning yet... no jumbo frames, etc.). My theoretical max read would be 180MB/s (single-drive rating) x 8, which puts me at 1,440MB/s.
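In case anyone wants to reproduce the comparison, the knob being flipped is the pool's sync property, and zpool iostat shows the log device soaking up the writes (the pool name is a placeholder):

  zfs set sync=disabled tank   # "async": client sync requests are ignored (fast, but unsafe on power loss)
  zfs set sync=always tank     # every write is committed through the ZIL/SLOG before being acknowledged
  zpool iostat -v tank 1       # per-vdev view; watch the log device's write column during a test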

With all of that being said, a SLOG should be well worth it for anyone who requires the safety of synchronous writes for their VMware environment and is using an array of spinning platters. Mind you, I have striped mirrors, NOT RAIDZ, and I was still only getting 50MB/s without it.

Also... I couldn't help myself, and I can't see myself 'wasting' a 400GB, $300 drive. SO I ordered another one... lol. I am NOT using this in a full prod environment; it will be my lab holding 20-some-odd VMs that I will be using throughout the week for testing. Since I've already spent a good amount of money on RAM, mobo, CPU, etc., I've decided to step back and redesign a bit. I'm going to run the two 750s striped only (800GB total) as one of my pools, and I will be using MPIO (my cards have 2 ports; I just needed a $10 twinax cable). Is this a good idea in prod? HELL TO THE NO, but it's for my lab. I will also thin provision and use compression and dedup on this store. I will have maybe 3 different types of OSes, but most of the guests will have lots that can be deduped.
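Again just a sketch of what that looks like (pool and dataset names are made up, and the usual caveat applies that dedup eats RAM; the commonly quoted rule of thumb is roughly 5GB per TB of unique data):

  # two NVMe drives striped, no redundancy (lab only!)
  zpool create fast nvd0 nvd1
  zfs create fast/vms
  zfs set compression=lz4 fast/vms
  zfs set dedup=on fast/vms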

My platters will be for 'backup'. I'm going to use RAIDZ2 to get back some extra space (write speed would be nice, but not realtime-critical). I'll end up recovering an extra 4TB, and I will only provision the pool to 80%, which should give me just shy of 10TB (no dedup, but compression). I will make daily/weekly clones as needed and use the store for ISOs, build software, etc. I think it's a good compromise.
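And the backup pool, for completeness (device names are again placeholders):

  # single RAIDZ2 vdev across all 8 drives (~12TB raw after parity), compression on, no dedup
  zpool create backup raidz2 da0 da1 da2 da3 da4 da5 da6 da7
  zfs set compression=lz4 backup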

TL;DR: Bought a SLOG... tested with good results... decided to buy my SLOG a friend and give them both new jobs.

I will turn my platters into 10TB of 'backup' space instead of 4TB worth of 'ESXi guest' space (8TB total in striped mirrors, provisioned to 50%). If I had to do it over again, I may have just gone for 3 good PCIe SSDs in RAIDZ (900MB/s write speed on one drive, probably 800MB/s in RAIDZ with 3) and mirrored 6TB drives for 'backups'; probably the best of both worlds, but hey... it's been a lot of fun these past few weeks.
 