Mix protocol advice

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
Hi,

I have an XCP-ng storage backend running on TrueNAS (1x RAIDZ2 pool with 2 vdevs of 6 HDDs each). I want to use iSCSI for my VMs and NFS for my backup storage.

Is it OK to mix both the iSCSI and NFS protocols, or should I use one or the other? I am running the storage on a 10G link as a dedicated VLAN. The 10G link also carries other traffic, separated onto other VLANs.

Thank you all in advance
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I want to use iSCSI for my VMs and NFS for my backup storage.
This looks like different protocols for different datasets on the same pool, and there's nothing wrong with that.
Serving block storage from a raidz2 pool, however, is not recommended.
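For reference, here is a minimal sketch of how the two protocols can coexist on one pool, assuming a hypothetical pool named "tank": a zvol backs the iSCSI extent, while a normal filesystem dataset backs the NFS export.

# Sparse zvol to present to XCP-ng as an iSCSI extent (-s = thin provisioned)
zfs create -s -V 2T tank/vm-block

# Regular filesystem dataset to export over NFS for backups
zfs create tank/backups

# In the TrueNAS UI, point the iSCSI extent at the zvol tank/vm-block
# and share /mnt/tank/backups over NFS.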
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
Serving block storage from a raidz2 pool, however, is not recommended.
What pool configuration would you recommend?
I have 12x 2TB SAS drives and they are currently set up as 2 vdevs of 6 disks in RAIDZ2.

The storage is used for our Asigra backups. I wanted to use an NFS share but, without a SLOG, I cannot get good performance unless I turn sync off.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
6 * 2-way mirrors would be best for block storage, as per the link that @Etorix provided (did you even read it?)
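For illustration, and assuming placeholder device names, that layout would be created along these lines (TrueNAS would normally build it through the Pool Manager UI rather than the shell):

# One pool striped across six 2-way mirror vdevs (da0..da11 are examples)
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 \
  mirror da6 da7 mirror da8 da9 mirror da10 da11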
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The storage is used for our Asigra backups. I wanted to use an NFS share but, without a SLOG, I cannot get good performance unless I turn sync off.

I think the question to ask here is: Why do you think you need sync on for writing backups?
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
6 * 2-way mirrors would be best for block storage, as per the link that @Etorix provided (did you even read it?)
Yes, I did read it, but my understanding was that mirrors are for performance-oriented solutions. I am using the storage to store backups.
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
I think the question to ask here is: Why do you think you need sync on for writing backups?
I don't know; this is why I am asking here. I am trying to understand how things work. My understanding is that when sync is off I can lose data. Am I wrong? Using 6 * 2-way mirrors is faster, but I will lose a lot of storage space. I set up 6 * 3-way mirrors for my SSD pool for the XCP-ng hypervisor storage.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, I did read it, but my understanding was that mirrors are for performance-oriented solutions. I am using the storage to store backups.

Your understanding is backwards. RAIDZ is really only good at storing large data files that are sequentially written or sequentially read, and tends to suffer when used for activities involving smaller data blocks, block rewrites, concurrent accesses, etc. If you have a workload that involves random access, etc., you may be better off with mirrors. ZFS doesn't force it, of course, you can choose. Mirrors are not just a performance thing. Mirrors were the vdev type that ZFS was originally designed with. RAIDZ was added because not everyone needs the performance of mirrors, and would rather take advantage of RAIDZ space efficiency instead.

You should not use RAIDZ for block storage such as iSCSI or NFS storage of active VM virtual disks. You *can* use it for storing backup copies of VM virtual disks, but if you do so, you should make sure that you are copying them onto the RAIDZ in a contiguous sequential format like "cp" would do. This would be, for example, ghettoVCB style full image backups on ESXi or something like that, *ideal* for RAIDZ.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My understanding is that when sync is off I can lose data. Am I wrong?

Sorta. ZFS never loses data unless something weird happens: the system panics, the power goes out, etc. So think about this: let's pretend you are a bank and your pool stores account ledgers for your customers. It's the middle of the night, and you're processing payroll deposits coming in off the ACH network. You get an ACH deposit for Fred Jones for $5,000. You open the ledger for read/write, do your stuff, then append a line to the ledger for the new $5,000 deposit. UNIX does a write() syscall and ZFS puts this into the next transaction group to be written to disk. The transaction group (think: "write cache") builds in main memory (RAM). Now, meanwhile, Boozer McDrunko has decided to drive under the influence on the way home from the bar, smacks into a power pole serving your building, and the power goes out. Your deposit, which already came in over the network and was "written" to the pool, vanishes when the power goes away.

Sync writes force that write to go to stable storage BEFORE the write() syscall returns, which is what the ZIL mechanism is all about. You are guaranteed to get your written data back intact with sync writes.

In a virtual machine environment, the question is what happens when your VM datastore goes away for a while. If your Windows VM was doing software updates, and the filer goes away, does it hang? Does it report a disk error? What happens if you were writing to the virtual disk and that data was lost due to an async write? When the filer comes back online, your VM will think that data has been written to the virtual disk, but what is read back off the disk is old data, not what was supposed to have been there. This can cause problems for VMs, and can cause crashes. So usually VM data is written synchronously out of an abundance of caution.
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
@jgreco thank you very much for your very good analogy and for taking the time to explain it to me. So I understand that any storage attached to a hypervisor will need to be mirrors, but what about the Asigra storage backend?
 
Joined
Jun 15, 2022
Messages
674
Use an Eaton or APC Uninterruptible Power Supply and you'll not only have clean, filtered power; hooking the UPS to the server and configuring network-wide power management will also keep your data safe and your clients happy. Plus, once you set it up, the software isn't hard to maintain. I normally suggest:

After 5 minutes on UPS, a pending-shutdown message is sent to all clients; they have 5 minutes to save their work. All backups and running jobs are sent either a suspend, halt, or KILL signal, depending on what they need.

At 10 minutes all workstations are sent the shutdown signal.

At 15 minutes the servers start shutting down.

Hopefully, if the power goes out for 14 minutes, comes back for 5 minutes, and then goes down for the day, the UPS has enough capacity for that 29 minutes of outage.
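For anyone wiring this up with NUT (which TrueNAS uses for its UPS service), the staged timing above maps onto upssched timers roughly like this. The timer names and the command script path are made-up placeholders, not a tested configuration:

# upssched.conf sketch (illustrative only)
CMDSCRIPT /usr/local/bin/upssched-cmd    # script that sends the warnings / shutdowns
PIPEFN /var/run/nut/upssched.pipe
LOCKFN /var/run/nut/upssched.lock
AT ONBATT * START-TIMER warn-clients 300        # 5 min on battery: warn clients, pause jobs
AT ONBATT * START-TIMER stop-workstations 600   # 10 min: shut down workstations
AT ONBATT * START-TIMER stop-servers 900        # 15 min: shut down servers
AT ONLINE * CANCEL-TIMER warn-clients
AT ONLINE * CANCEL-TIMER stop-workstations
AT ONLINE * CANCEL-TIMER stop-servers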
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
So I understand that any storage attached to a hypervisor will need to be mirrors, but what about the Asigra storage backend?
Storing backups can use raidz2, so ideally you may want to set up two pools.
Say, with your 12 drives: 3 * 2-way mirrors for VMs (not to be filled over 50% for good performance with block storage) and 6-wide raidz2 for bulk storage (here the limit is about 80% full).
Adjust the number and/or size of drives if capacities do not match requirements.
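Roughly, with placeholder pool and device names (and noting that TrueNAS would normally do this from the UI), that two-pool split would look like:

# Pool for VM block storage: 3 vdevs of 2-way mirrors (6 drives)
zpool create vmpool mirror da0 da1 mirror da2 da3 mirror da4 da5

# Pool for Asigra backups: a single 6-wide RAIDZ2 vdev (remaining 6 drives)
zpool create backuppool raidz2 da6 da7 da8 da9 da10 da11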
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
what about the Asigra storage backend?

I don't know. If they are writing in large sequential files, RAIDZ. If there is a lot of random access going on, mirrors may be a better choice. You may have to ask your Asigra tech support folks about what sort of access patterns their product generates on the NFS share. If it's iSCSI, there are cases where RAIDZ might be usable, but usually not; see the following articles:



Not all storage problems neatly fit these articles, but it's still good material to be aware of.

Say, with your 12 drives: 3 * 2-way mirrors for VMs (not to be filled over 50% for good performance with block storage) and 6-wide raidz2 for bulk storage (here the limit is about 80% full).

And I'll add that it's important to be aware that these are really the only two types of storage ZFS offers: sequentially-oriented RAIDZ bulk storage, good for large static files, and random-access-oriented mirror storage, good for all kinds of access but less space efficient.
 

Fred974

Contributor
Joined
Jul 2, 2016
Messages
190
Forgive my ignorance here, but what is the difference between the 'standard' and 'always' settings?
[Screenshot attachment: dataset Sync setting options]
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Forgive my ignorance here, but what is the difference between the 'standard' and 'always' settings?
Standard - Sync writes will be honored if requested by the client. Many iSCSI initiators do not request sync writes by default, but most NFS clients do.

Always - ZFS will treat every write as synchronous, even if not requested by the client. This is likely the setting you want for your ZVOLs presented to Xen.

Disabled - ZFS will treat every write as asynchronous, even if a sync write is requested by the client. This is likely the setting you want for your NFS backup target exports, in order to improve performance - but bear in mind that if a backup is in progress during a power outage, it may fail and need to be restarted.
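Put concretely, and using hypothetical dataset names, those choices are applied per dataset or zvol, either from the Sync field in the TrueNAS UI or from the shell:

# Zvol presented to Xen over iSCSI: honor every write synchronously
zfs set sync=always tank/vm-block

# NFS dataset used only as a backup target: favor throughput over crash safety
zfs set sync=disabled tank/backups

# Check what is currently in effect
zfs get sync tank/vm-block tank/backups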
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
sync=always forces each block of data written to be done synchronously. Your client will issue a write request, and before that request is acknowledged, the filer guarantees that it has been written to stable storage (either the in-pool ZIL or the SLOG -- it does not necessarily mean written to the pool itself). This means that ANY read attempt against that data block is guaranteed to return the data just written, even if the filer crashes or power is lost.

sync=standard ... you know, I'm not even entirely sure what "standard" means in this context. I believe that it writes sync if the client requests it or probably when a cache flush is requested. From my perspective, I have no use for this behaviour, because when I use iSCSI, I generally need sync=always. Sorry for the inexact answer.
 