Looking to replace existing FreeNAS

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
Hi all,

I've been reading and trying to get up to speed in hopes of making the right decisions for either upgrading or replacing my existing FreeNAS system.

Primary Use Case
  • iSCSI target for 3x VMware hosts via 10GbE networking
    • The FreeNAS server has a LAG set up for 20GbE networking
  • File server for 2-3 workstations
    • Video editing
    • File storage (video files, pics, and office documents)
  • Replicates important documents to cloud storage

Existing Hardware
  • i3-8100 CPU @ 3.60GHz (4 cores)
  • 12x Crucial MX500 500GB 3D NAND SATA 2.5 Inch Internal SSD - CT500MX500SSD1(Z)
  • Intel Optane SSD 900P Series (280GB, AIC PCIe x4, 3D XPoint) (Cache)
  • 1x 250GB Crucial SSD for log storage
  • Intel 82599ES dual-port 10GbE network card
  • 16GB memory


Wants
  • Push closer to 10GbE saturation
  • Introduce snapshots to slower, spinning storage in addition to VMware snapshots (see the sketch below)
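For the snapshot piece, I'm picturing something like the following (just a sketch with made-up pool names; the Periodic Snapshot Tasks and Replication Tasks in the FreeNAS GUI would be the usual way to automate this):
Code:
# snapshot the fast pool and replicate it to a spinning-disk pool
zfs snapshot -r fastpool@manual-2020-01
zfs send -R fastpool@manual-2020-01 | zfs recv -F slowpool/backup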


As I understand it, I will likely need to:
  • Upgrade the CPU
  • Get ECC memory, and more of it
  • Drop the Optane SSD since I already have SSDs



If I drop the Optane, should I still make use of a cache device?
What processor should I upgrade to?
What is the recommended disk layout for VMware when I'm trying to even out read and write speeds?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You may (and probably will) still see a benefit using the Optane as SLOG, and it will help your VMs. Don't use the Crucial for that.
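Keep in mind the SLOG only comes into play for synchronous writes; for iSCSI zvols you would typically force that path with sync=always (hypothetical zvol name below):
Code:
# hypothetical zvol name; forces all writes through the SLOG path
zfs set sync=always pool/vmware-zvol
zfs get sync pool/vmware-zvol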

You need a lot more memory (I wouldn't use less than 128GB in that use case) if you want good performance for your VMs and editors at the same time.

If you're serious about performance, you will need to provide good IOPS, which will mean striped mirrors (losing half your raw capacity to redundancy). Will you have enough storage with only 12 x 500GB?
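Rough math: 12 x 500GB as six 2-way mirrors is 6 x 500GB = ~3TB usable, and the usual guidance for iSCSI/block storage is to keep the pool under about 50% full to limit fragmentation, so call it ~1.5TB of comfortable working space.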

Also, beware the Crucial SSDs... there are some models with a controller that causes data loss on current FreeNAS versions unless you disable TRIM. Try running a scrub on a test pool and see if you get reports of corrupted files.
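A quick way to check (hypothetical test pool name; look for CKSUM errors or a "Permanent errors" list in the output):
Code:
zpool scrub testpool
zpool status -v testpool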
 

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
Thank you very much for the response. Please keep in mind the VMs are lab only; nothing here is production. So far I have been getting by OK with the workload on the above specs.


I just want faster and more even reads and writes.

I like the idea of flash arrays, which is why I went SSD. Yes, I did go cheap, as it was my first time going all-flash (upgraded from WD Reds).

Since my last post, I was thinking of some of the following:

  • 2x Samsung 970 EVO 250GB - NVMe PCIe M.2 2280 SSD
    • Cache and log
  • Move boot to a USB drive
  • Replace all 12 SSDs with either
    • Samsung SSD 860 EVO 1TB
    • Samsung 860 PRO SSD 512GB
  • Pick up a 32GB ECC memory kit

Thoughts?

Below is my current pool layout, which I think I would keep the same:
Code:
pool                                            ONLINE       0     0     0
  mirror-0                                      ONLINE       0     0     0
    gptid/c538e114-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
    gptid/c57b2947-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
  mirror-1                                      ONLINE       0     0     0
    gptid/c5bee1a4-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
    gptid/c601ee24-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
  mirror-2                                      ONLINE       0     0     0
    gptid/c6489258-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
    gptid/c68d3b1e-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
  mirror-3                                      ONLINE       0     0     0
    gptid/c6d48f47-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
    gptid/c71b0ee2-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
  mirror-4                                      ONLINE       0     0     0
    gptid/c76b112f-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
    gptid/c7b319d0-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
  mirror-5                                      ONLINE       0     0     0
    gptid/c80039c8-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
    gptid/c849a418-0adb-11ea-80d2-001b21a7c63c  ONLINE       0     0     0
logs
  gptid/c8d52b7e-0adb-11ea-80d2-001b21a7c63c    ONLINE       0     0     0
cache
  gptid/c872293b-0adb-11ea-80d2-001b21a7c63c    ONLINE       0     0     0
 

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
[Attached image: today.jpg]

Here is the current disk performance.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
If I were you, I would spread the SSDs across the VMware hosts and replace VMware ESXi with Nutanix Community Edition to build a hyperconverged cluster. You will get much more IOPS than with the current architecture.
Then I would replace the SSDs in the FreeNAS server with spinning disks to share large files with the workstations, and I would keep the Intel Optane as SLOG.
 

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
If I were you, I would spread the SSDs across the VMware hosts and replace VMware ESXi with Nutanix Community Edition to build a hyperconverged cluster. You will get much more IOPS than with the current architecture.
Then I would replace the SSDs in the FreeNAS server with spinning disks to share large files with the workstations, and I would keep the Intel Optane as SLOG.

Thanks for the response. Since this is a lab, my goal has always been to stay as close to what we run at work as possible. We do not use Nutanix, and I had not heard of the product until you mentioned it.

Would using NVMe as log and cache be of benefit when combined with SSDs?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Would using NVMe as log and cache be of benefit when combined with SSDs?
Having a SLOG on a disk with the same speed as your pool disks can still provide a theoretical benefit, but using one with much higher IOPS than your pool disks (like Optane or NVMe SSDs) is a much better option.

Using L2ARC with less than 64GB of memory is unlikely to bring a lot of tangible benefit.
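If you want a rough idea of whether a bigger ARC would even help first, the cumulative hit/miss counters are exposed as sysctls on FreeBSD/FreeNAS; the hit ratio is hits / (hits + misses):
Code:
# counters are cumulative since boot; a low ratio with a full ARC
# suggests more RAM (or an L2ARC) could help
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses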
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
Thanks for the response. Since this is a lab, my goal has always been to stay as close to what we run at work as possible. We do not use Nutanix, and I had not heard of the product until you mentioned it.
If you want to stay with VMware, there are some alternatives to Nutanix for a virtual SAN:
  • VMware vSAN: unlike Nutanix, there is no free version of VMware vSAN, so you can only use it in evaluation mode for 60 days.
  • StarWind vSAN: there is a free version, but I have not tried it yet.
 

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
If you want to stay with VMware, there are some alternatives to Nutanix for a virtual SAN:
  • VMware vSAN: unlike Nutanix, there is no free version of VMware vSAN, so you can only use it in evaluation mode for 60 days.
  • StarWind vSAN: there is a free version, but I have not tried it yet.

I'm stuck trying to figure out why another product is being recommended. I was able to improve my performance by over 500% by identifying a mistake in my deployment. I had a cache device and a log device: the log device was an SSD (same as the rest of the pool) and the cache was the Optane device. Once I swapped them, performance improved.
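In case it helps anyone else, the swap itself was just removing and re-adding the two devices (placeholder gptids below; log and cache vdevs can be removed from a live pool):
Code:
# placeholder device names; substitute your actual gptids
zpool remove pool gptid/<old-ssd-log>
zpool remove pool gptid/<old-optane-cache>
zpool add pool log gptid/<optane>
zpool add pool cache gptid/<sata-ssd>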
 