BUILD replacing pool of WD Red with Samsung 860 Pro

Status
Not open for further replies.

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Little update:
Ordered 4x 1TB 860 EVOs (got a €50 discount on each). Currently waiting for Supermicro 2.5" trays (MCP-220-00118-0B).
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Little update:
Ordered 4x 1TB 860 EVOs (got a €50 discount on each). Currently waiting for Supermicro 2.5" trays (MCP-220-00118-0B).
Great!
Please share with the forum what kind of results you obtain. The more information we can get here, the better it is for everyone.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I'm getting an itch to do this myself, although I'll probably do PCIe drives... then I can repurpose all my spinning rust into my RAID-Z2 array.
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Great!
Please share with the forum what kind of results you obtain. The more information we can get here, the better it is for everyone.
Of course, I will do so.

While waiting for the drive cages (approx. 7-14 days), I'm planning the move onto the SSDs (all iSCSI ZVOLs):
deciding which VMDKs should go to SSD and which to HDD. It's not really a per-VM decision, it's per VMDK, e.g. a Windows Server system partition on SSD and the data partition on HDD (but a little more complex than that).

Will report back with what I did and what the results are.
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
All parts arrived. Will assemble things in the next 3-4 days.

Is there anything I should do before using the SSDs? Firmware updates?
(I'm thinking of the way WD Reds needed wdidle before use.)
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Did it :)

Long-term tests are coming, but here are some of my impressions:
  • 4x Samsung 860 EVO 1TB in 2x mirror (~1.8TB usable)
  • accessed only via iSCSI by ESXi 6.5 (5x 1Gbit)
  • slightly better/smoother performance with SLOG (Intel DC P3700)
  • only system-partition VMDKs (2x Windows Server 2008 R2, 2x 2012, 3x Windows 10)
  • VMs are much more responsive
  • temperature was first ~25°C (HDDs ~34°C)
  • under heavy use, temperature went up to ~43°C o_O (without adjusting fans) (HDDs ~38°C)

Simple sequential read/write benchmarks don't differ from the HDDs because of the 5x 1Gbit NICs. But the overall responsiveness is... wow. (As was mentioned in this thread before.)
While doing backups (Veeam Endpoint), the system partition used to be read at 30-70MB/s and now it's >300MB/s. A rough sketch of what I mean by a simple benchmark is below.
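(Something like this from inside a Linux guest - the device path and size are just examples, and a single 1Gbit path tops out around ~110MB/s on the wire anyway:)

  # rough sequential read test from inside a guest; /dev/sda is an example device
  dd if=/dev/sda of=/dev/null bs=1M count=4096 status=progress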

Yes, it's not a fair comparison, because a moved/cloned VMDK can increase read/write performance on any storage hardware.

Currently I'm testing replacing
  • ZVOL -> iSCSI -> ESXi datastore -> VMDK
with
  • ZVOL -> iSCSI -> RDM
Normally I do
  • 200GB ZVOL
  • 100GB VMDK
With RDM I would do (ZVOL side sketched below)
  • 200GB ZVOL
  • 100GB NTFS (Windows)
  • 100GB unpartitioned
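(Roughly, creating such a ZVOL looks like this - pool name, zvol name, and volblocksize are examples, not from this build:)

  # create a sparse 200G zvol to export as an iSCSI extent
  zfs create -s -V 200G -o volblocksize=16k tank/vm-disk1
  # the guest then partitions only 100GB of it and leaves the rest untouched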
Will report back when I have more days in production.

Are there any recommendations on how high the temperature can go and what temp should not be exceeded?
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
A little update on SSD pools...

Replacing spinning disks with Samsung 860 EVOs was a good idea - thanks for all the replies in this thread. It's worth the money if you use FreeNAS as ESXi storage (iSCSI). From now on, I will go for SSDs in the first place and HDDs only if the budget is too tight...

I also upgraded another system from a normal RAID10 (4x 2TB WD Re) to 4x 1TB Samsung 860 EVOs, with the 4x WD Re kept as "slow" storage. Now there is a pool of 2 mirrored vdevs (SSD) and an equivalent pool for the HDDs.
FreeNAS itself is a VM on the ESXi 6.5 host (a 9240-8i flashed to 9211-8i IT mode and passed through) and acts as the SAN for that ESXi. iSCSI with sync=always (without a dedicated SLOG) is fast enough to serve 8 VMs. A sketch of that layout is below.
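(Roughly, with placeholder pool/device names - not the actual ones from this box:)

  # two mirrored vdevs from the four SSDs
  zpool create ssdpool mirror da0 da1 mirror da2 da3
  # force synchronous writes on the zvol that is served to ESXi via iSCSI
  zfs set sync=always ssdpool/esxi-zvol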
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
All parts arrived. Will assemble things in the next 3-4 days.

Is there anything I should do before using the SSDs? Firmware updates?
(I'm thinking of the way WD Reds needed wdidle before use.)
Sorry, I somehow overlooked this post. Thanks for getting back to us on it.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I also upgraded another system from a normal RAID10 (4x 2TB WD Re) to 4x 1TB Samsung 860 EVOs, with the 4x WD Re kept as "slow" storage. Now there is a pool of 2 mirrored vdevs (SSD) and an equivalent pool for the HDDs.
FreeNAS itself is a VM on the ESXi 6.5 host (a 9240-8i flashed to 9211-8i IT mode and passed through) and acts as the SAN for that ESXi. iSCSI with sync=always (without a dedicated SLOG) is fast enough to serve 8 VMs.
Do you have any numbers for the speed / IOPS?
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Little update with all new 10G :cool::
SSD pool of 3 mirrored vdevs (= 6x Samsung 860 EVO), an Intel DC P3700 as SLOG, and a brand-new Intel X550-T2.
Everything is served via iSCSI to ESXi 6.5 U2. FreeNAS and ESXi are directly connected via 2x 10G Cat 6 (MPIO). Short test results:
[screenshots: 10G SSD benchmark results and FreeNAS-11.1-U5 reporting graphs]


I know the sync-or-not-to-sync discussion... just a little testing, without any long-term experience on sync=standard. Perhaps it's time to think about a mirrored SLOG.

EDIT:
MPIO is round robin (RR) with IOPS=1 and MTU 9000 all the way through (the RR setup is sketched below). I still need to test some other NIC options in FreeNAS, e.g. offloading iSCSI work to the X550.
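(On the ESXi side that is roughly the following - the device ID is a placeholder, substitute your own:)

  # set the path selection policy for the iSCSI device to round robin
  esxcli storage nmp device set --device naa.XXXX --psp VMW_PSP_RR
  # switch paths after every single I/O (IOPS=1)
  esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXX --type iops --iops 1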
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Tested iSCSI with sync=always and the SLOG removed:
[screenshot: benchmark results with sync=always and no SLOG]

I expected to get results similar to those with the SLOG, but I did not expect this result.
Seems it's time for a striped SLOG :D (sketched below)
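(For reference, striped vs. mirrored SLOG - pool and device names are placeholders:)

  # striped SLOG: two log vdevs, sync writes are spread across both
  zpool add tank log nvd0 nvd1
  # mirrored SLOG: redundancy instead of throughput
  zpool add tank log mirror nvd0 nvd1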
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Test the max potential first

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Test the max potential first
You are right, and for now the result with sync=always is good enough for my needs. But with this setup (2x 10G MPIO), I could get more than 1GB/s in writes. If I find some money to buy another NVMe SLOG, I will report back. Perhaps that will be never ;)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You are right, and for now the result with sync=always is good enough for my needs. But with this setup (2x 10G MPIO), I could get more than 1GB/s in writes. If I find some money to buy another NVMe SLOG, I will report back. Perhaps that will be never ;)

Try with a RAM disk to see the max potential.
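(On FreeBSD a throwaway RAM-disk SLOG for such a test could look like this - names and size are examples, and it is strictly for benchmarking, never for production data:)

  # create a 16G RAM disk (md99) and attach it as a log vdev
  mdconfig -a -t swap -s 16g -u 99
  zpool add tank log md99
  # run the benchmark, then remove it again
  zpool remove tank md99
  mdconfig -d -u 99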
 