Will this work?


normanu

Dabbler
Joined
Jul 19, 2018
Messages
17
I want to build a NAS for servers that will run their VMs off it.
Would this build work, as in would it be able to max out the 4x 1 Gbit connections?
And is it fast enough to run VMs off it?

1x Fractal Design Node 804
1x Crucial 64GB DDR4-2400 CL17 ECC quad kit
1x SuperMicro A2SDI-4C
1x Intel Optane 900P 280GB (M.2 Adapter) (for L2ARC / SLOG)
9x HGST HTS721010A9 (still have these)
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
1x Intel Optane 900P 280GB (M.2 Adapter) (for L2ARC / SLOG)
Way too large.

I don't believe you will need an L2ARC; however, you will need a SLOG if you're planning to use FreeNAS as an ESXi datastore. Get the smallest SSD you can, because the SLOG will only use a couple of GB at a time.
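As a rough sketch of why a couple of GB is enough (assuming writes arrive over the planned 4x 1 Gbit links and that ZFS flushes transaction groups roughly every 5 seconds, its default):

```python
# Rough upper bound on SLOG usage: incoming sync write rate times the
# ZFS transaction group flush interval (about 5 seconds by default).
# The numbers below are assumptions for this build, not measurements.

network_gbit_per_s = 4 * 1                              # 4x 1 GbE, the fastest writes can arrive
write_rate_mb_per_s = network_gbit_per_s * 1000 / 8     # ~500 MB/s
txg_flush_interval_s = 5                                # default ZFS txg timeout

slog_in_use_gb = write_rate_mb_per_s * txg_flush_interval_s / 1000
print(f"SLOG holds at most ~{slog_in_use_gb:.1f} GB at a time")   # ~2.5 GB
```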
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Well, 280 GB is as small as they make the 900P, and probably also the P3700 or whatever the current top-end model is - if not larger, in the latter case.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
What do you plan to use as an HBA for the hard drives? This seems like overkill... I can pull 1.6 GB/s with my system (see my sig); that's 12.8 Gb/s in network parlance.
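For reference, the unit conversion and the comparison against the planned links is just arithmetic on the numbers above:

```python
# Convert the 1.6 GB/s storage throughput quoted above into network units
# and compare it to the OP's planned 4x 1 Gbit links.
storage_gb_per_s = 1.6
storage_gbit_per_s = storage_gb_per_s * 8              # 12.8 Gbit/s
planned_network_gbit_per_s = 4 * 1                     # 4x 1 GbE

print(f"{storage_gb_per_s} GB/s = {storage_gbit_per_s} Gbit/s")
print(f"The planned network tops out at {planned_network_gbit_per_s} Gbit/s, "
      f"about {planned_network_gbit_per_s / storage_gbit_per_s:.0%} of that")
```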
 
Last edited by a moderator:

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
The big question is: how many IOPS do you need? Also, are you planning to use NFS or iSCSI?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
The SSD is too big. You'd have to play games to get both the boot pool and the SLOG on the same device. Consider a pair of smaller SSDs; 60 GB drives are available from multiple manufacturers.

Beyond that... the disk pool has enough raw performance that you need to look at the individual I/O channels. Do you have enough bandwidth to the SATA ports, network cards, etc.? You don't mention what networking tech you're using, but consider that 1 GbE tops out at perhaps ~1/6th of your proposed disk pool's bandwidth.
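A back-of-the-envelope version of that ~1/6th figure (the per-drive number and pool layout are assumptions for illustration, not measurements):

```python
# Rough pool streaming bandwidth vs. a single 1 GbE link.
# ~120 MB/s per 2.5" 7200 rpm drive and an 8-drive RAIDZ2 layout
# (6 data drives) are assumptions only.
per_drive_mb_per_s = 120
data_drives = 6
pool_streaming_mb_per_s = per_drive_mb_per_s * data_drives   # ~720 MB/s

gbe_usable_mb_per_s = 1000 / 8 * 0.94                        # ~117 MB/s on the wire
ratio = pool_streaming_mb_per_s / gbe_usable_mb_per_s
print(f"Pool ~{pool_streaming_mb_per_s} MB/s vs 1 GbE ~{gbe_usable_mb_per_s:.0f} MB/s "
      f"(roughly 1/{ratio:.0f})")
```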
 

normanu

Dabbler
Joined
Jul 19, 2018
Messages
17
I was planning on using the onboard controller, but I just noticed the M.2 slot shares a SATA port.
So either I use 7 storage drives and an SSD, or 8 and a PCIe drive.
Or 8 drives and no SLOG, if you say it isn't necessary.

The drives don't have the best write performance, so I thought a SLOG would be a good idea, especially because of the VMs.
And I thought that with the bandwidth the 900P has, it could do double duty as SLOG & L2ARC.
Maybe 40 GB for SLOG, 100 GB for L2ARC, and leave the rest unpartitioned.

For network tech, I wanted to bond the 4x 1 Gbit adapters and have 2 host PCs connected with ZFS over iSCSI to store VMs at block level. (https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI)

What would you advise?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
How many IOPS do you need? What applications will be running on your Proxmox hosts?

normanu

Dabbler
Joined
Jul 19, 2018
Messages
17
How many IOPS do you need? What applications will be running on your Proxmox hosts?

I don't know the IOPS numbers.
I have a Windows VM running SQL Server for a SAP Business One database.
I have a Java application running on Oracle in a VM.
I have a domain controller.
And some services like a mail server, VPN server, web server, etc.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
If those services are already running, can you look at the I/O on them? It's kind of fundamental to designing storage.
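For example, something like this inside each Linux VM gives a quick IOPS sample (a sketch using the third-party psutil package; on the Windows VM, Performance Monitor's "Disk Transfers/sec" counter gives the same number):

```python
# Sample disk IOPS over a short window using psutil (third-party package,
# `pip install psutil`). Run it inside each existing VM during normal load.
import time
import psutil

INTERVAL_S = 10

before = psutil.disk_io_counters()
time.sleep(INTERVAL_S)
after = psutil.disk_io_counters()

read_iops = (after.read_count - before.read_count) / INTERVAL_S
write_iops = (after.write_count - before.write_count) / INTERVAL_S
print(f"read: {read_iops:.0f} IOPS, write: {write_iops:.0f} IOPS")
```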
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I have a Windows VM running SQL Server for a SAP Business One database.
I have a Java application running on Oracle in a VM.
I have a domain controller.
And some services like a mail server, VPN server, web server, etc.
A database could need anywhere from a few hundred IOPS to several thousand. This depends on the number of users and how the database works.
Java applications could be 0 to a million. Give us a hint.
The DC should be a few hundred at most, again depending on the number of users.
Mail server, as in email? How many users?
VPN should be negligible.
Web server... for what? A simple static page? Is it backed by a database? Does it serve video to millions of people? Is it text only?

I don't mean to be rude, but if you're "engineering" a storage solution, you need to have quantifiable targets and an understanding of what workloads you're servicing.
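As a toy example of what a quantifiable target looks like (every number below is a placeholder to be replaced with measured values):

```python
# Toy IOPS budget: per-service estimates (placeholders, not measurements)
# summed into a target the pool and SLOG have to be able to deliver.
estimated_iops = {
    "sql_server_sap_b1": 500,    # small-business DB: a few hundred to a few thousand
    "oracle_java_app":   500,
    "domain_controller": 100,
    "mail_server":       100,
    "vpn":               10,
    "web_server":        50,
}

peak_factor = 2.0  # headroom for bursts
target_iops = sum(estimated_iops.values()) * peak_factor
print(f"Design target: ~{target_iops:.0f} IOPS")
```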
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I was planning on using the onboard controller, but I just noticed the M.2 slot shares a SATA port.
So either I use 7 storage drives and an SSD, or 8 and a PCIe drive.
Or 8 drives and no SLOG, if you say it isn't necessary.

The drives don't have the best write performance, so I thought a SLOG would be a good idea, especially because of the VMs.
And I thought that with the bandwidth the 900P has, it could do double duty as SLOG & L2ARC.
Maybe 40 GB for SLOG, 100 GB for L2ARC, and leave the rest unpartitioned.

For network tech, I wanted to bond the 4x 1 Gbit adapters and have 2 host PCs connected with ZFS over iSCSI to store VMs at block level. (https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI)

What would you advise?

I would use the 8 SATA ports for your storage drives. For boot, either mirrored USB drives, or another small SSD on an adapter card. SLOG will be necessary since you are planning to host VMs, unless you are okay with the potential of data loss/rolling back to a previous snapshot.

Normally, it's been recommended not to use the same device for L2ARC and SLOG, because the two workloads are very different and metaphorically speaking tend to "step on each other's toes" and cause performance for both to be poor - but in the case of the Optane drives with their significant increase in bandwidth and low-queue-depth performance, it might be possible to actually get away with this. Your SLOG partition doesn't need to be anywhere near that big though - 4GBs should be fine, but the generally accepted size is 8GB. With 64GB of RAM, L2ARC could safely be 192GB or even 256GB - you can adjust this on the fly after pool creation. Whether or not you actually need the L2ARC is a different question though, but based on the proposed use case you might actually benefit from having some of that data be "hot."
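A sketch of how that split of the 280 GB Optane could look, using the sizes above (partition and pool names are placeholders, and on FreeNAS you would normally attach log/cache devices through the GUI rather than the CLI):

```python
# Split the 280 GB Optane into an 8 GB SLOG and an L2ARC sized against RAM,
# per the suggestions above. Names like "tank" and "nvd0p1" are placeholders.
OPTANE_GB = 280
RAM_GB = 64

slog_gb = 8                         # "generally accepted" SLOG size from this post
l2arc_gb = min(4 * RAM_GB, 256)     # keep L2ARC within a few multiples of RAM
spare_gb = OPTANE_GB - slog_gb - l2arc_gb   # left unpartitioned

print(f"SLOG {slog_gb} GB, L2ARC {l2arc_gb} GB, {spare_gb} GB unused")

# Equivalent CLI, with placeholder partition names:
print("zpool add tank log nvd0p1")
print("zpool add tank cache nvd0p2")
```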

There's probably some room for big wins based on record size for those SQL and Oracle DBs as well.

For networking you don't actually mix link aggregation and iSCSI - you set them up as multiple independent links in multiple subnets, and then use MPIO to enable the multipathing. To get the full utilization you'd need four NICs in each host though.
https://pve.proxmox.com/wiki/ISCSI_Multipath

As a note on the "ZFS over iSCSI" page at ProxMox, it says "Note: iscsi multipath doesn't work yet, so it's use only the portal ip for the iscsi connection." - so you may want to use a regular iSCSI volume and manually created QCOW2 images with a tuned recordsize. Default internal blocksize in ProxMox is 64K though which is not good for performance. The linked blog below actually goes into a bit of detail about this and benchmarking of ZVOL vs QCOW2 images.
http://jrs-s.net/2018/03/13/zvol-vs-qcow2-with-kvm/
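A minimal sketch of the "manually created QCOW2 with a tuned size" idea (the 8K figure, dataset name, and image path are illustrative only):

```python
# Create a QCOW2 image with a smaller internal cluster size than the 64K
# default, matched to the recordsize of the dataset holding it.
# Names and the 8K value are examples, not a recommendation from this thread.
import subprocess

dataset = "tank/vm-images"                          # placeholder dataset
image = "/mnt/tank/vm-images/oracle-disk0.qcow2"    # placeholder path

subprocess.run(["zfs", "set", "recordsize=8K", dataset], check=True)
subprocess.run(["qemu-img", "create", "-f", "qcow2",
                "-o", "cluster_size=8k", image, "100G"], check=True)
```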
 

normanu

Dabbler
Joined
Jul 19, 2018
Messages
17
A database could need anywhere from a few hundred IOPS to several thousand. This depends on the number of users and how the database works.
Java applications could be 0 to a million. Give us a hint.
The DC should be a few hundred at most, again depending on the number of users.
Mail server, as in email? How many users?
VPN should be negligible.
Web server... for what? A simple static page? Is it backed by a database? Does it serve video to millions of people? Is it text only?

I don't mean to be rude, but if you're "engineering" a storage solution, you need to have quantifiable targets and an understanding of what workloads you're servicing.

I understand.

All services are very lightweight, as at most 15 users are using them.
The e-mail server is very light; the heaviest part is ClamAV.
rspamd produces hardly any load.

The web server is a simple static page plus an ownCloud server, which only 3 users use actively.
The heaviest app would be the application running on Oracle, and Oracle itself.

I'll try and see if I can measure some IOPS.

I would use the 8 SATA ports for your storage drives. For boot, either mirrored USB drives, or another small SSD on an adapter card. SLOG will be necessary since you are planning to host VMs, unless you are okay with the potential of data loss/rolling back to a previous snapshot.

Normally, it's been recommended not to use the same device for L2ARC and SLOG, because the two workloads are very different and metaphorically speaking tend to "step on each other's toes" and cause performance for both to be poor - but in the case of the Optane drives with their significant increase in bandwidth and low-queue-depth performance, it might be possible to actually get away with this. Your SLOG partition doesn't need to be anywhere near that big though - 4GBs should be fine, but the generally accepted size is 8GB. With 64GB of RAM, L2ARC could safely be 192GB or even 256GB - you can adjust this on the fly after pool creation. Whether or not you actually need the L2ARC is a different question though, but based on the proposed use case you might actually benefit from having some of that data be "hot."

There's probably some room for big wins based on record size for those SQL and Oracle DBs as well.

For networking you don't actually mix link aggregation and iSCSI - you set them up as multiple independent links in multiple subnets, and then use MPIO to enable the multipathing. To get the full utilization you'd need four NICs in each host though.
https://pve.proxmox.com/wiki/ISCSI_Multipath

As a note on the "ZFS over iSCSI" page at ProxMox, it says "Note: iscsi multipath doesn't work yet, so it's use only the portal IP for the iscsi connection." - so you may want to use a regular iSCSI volume and manually created QCOW2 images with a tuned recordsize. Default internal blocksize in ProxMox is 64K though which is not good for performance. The linked blog below actually goes into a bit of detail about this and benchmarking of ZVOL vs QCOW2 images.
http://jrs-s.net/2018/03/13/zvol-vs-qcow2-with-kvm/

Thanks!!!
Very useful!
I have 2 NICs for every host, so I can do multipath on both hosts.
Would NFS maybe be a better solution? I thought iSCSI would have much higher performance.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Thanks!!!
Very useful!
I have 2 NICs for every host, so I can do multipath on both hosts.
Would NFS maybe be a better solution? I thought iSCSI would have much higher performance.
Each has its advantages.

NFS will be simpler to set up, but iSCSI will give you higher speeds to a single host because it will use multiple links. NFS will only travel along a single path, even if it's in link aggregation. If you have multiple hosts, your load-balancing algorithm might be able to balance each host to a separate link from the FreeNAS system - it might need some twiddling to try to find an algorithm that generates the right results though.

If you use QCOW2 files, having them on an NFS store will mean you can treat them as individual files, for better or worse - you might have less fragmentation, better options for space reclamation - but you'll be limited by the single per-host performance.

For iSCSI you will also want to set the value "sync=always" on the dataset/zvol presented to ProxMox; that will ensure that your writes are "safe" for the VMs. For NFS you should be able to adjust this just within ProxMox's write caching behavior (writethrough or directsync, I believe?) and let the SLOG protect the writes while giving you the performance you desire.

In either case, I'd suggest multiple datasets (or zvols) using a different recordsize (or volblocksize) based on the type of data (e.g. your Oracle DB tables/indexes should be matched to your DB's block size, typically 8K, but you'll also need to manually create the QCOW2 image with the same internal block size).
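A rough CLI sketch of that layout, with placeholder pool/dataset names (on FreeNAS the same properties are available in the GUI):

```python
# Sketch of the dataset/zvol layout suggested above, expressed as plain
# zfs commands. "tank", "tank/db" and "tank/vm-zvol" are placeholders.
import subprocess

def zfs(*args):
    subprocess.run(["zfs", *args], check=True)

# 8K-recordsize dataset for the Oracle / SQL Server data, matching the DB block size
zfs("create", "-o", "recordsize=8K", "tank/db")

# General VM zvol presented over iSCSI, with a matching volblocksize
zfs("create", "-s", "-V", "200G", "-o", "volblocksize=8K", "tank/vm-zvol")

# Force sync writes on anything presented to Proxmox over iSCSI
zfs("set", "sync=always", "tank/vm-zvol")
```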
 