Migrating from OmniOS to FreeNAS: Issues?


Matt Mabis

Dabbler
Joined
May 1, 2017
Messages
12
Hey all,

I have been toying around with migrating from OmniOS to FreeNAS and was wondering if anyone has done it recently and had any issues moving their container over from a Solaris base to FreeNAS.

My current complaints about OmniOS, which are why I am migrating:
  • Requires the napp-it GUI, and many of its features, including replication and ACL management, are paid-only.
  • NFS dropouts when one specific DNS server is offline (even though multiple DNS servers are configured, and even with an /etc/hosts file on all ESXi boxes and the OmniOS server).
  • NFS lag even with decent ZIL and SLOG devices (Hitachi SAS SSD for SLOG, Samsung SM1625 for ZIL), with 64GB of RAM.

I also have a question that hasn't really been answered recently but was a well-discussed topic in the past: mixing NFS with CIFS on the same container via different shares. The only time I mix shares is on my ISOs drive, where I mount NFS as read-only and use CIFS to put new ISOs on it. The rest of the shares never mix CIFS and NFS; it's either one or the other.

Do you see any issues with this configuration? I would appreciate any raw opinions as well, @cyberjock :), as I know you have spoken about mixed implementations before.

The main reason I want to come over to FreeNAS is that I use Mellanox ConnectX-3 VPI cards for 10Gb throughput (QSFP -> 10Gb SFP+ -> 10Gb switch). I did run a 40GbE switch a long time ago, but it got too loud, so I shut it down and packed it away for a bit... but I have had some funky config issues with OmniOS lately with my dual 10Gb Intel NIC, and now it has randomly lost an uplink... either way, not bueno...

Old hardware: X8DTE w/64GB RAM + 3x SAS-2008 controllers, Intel 10Gb SFP+ NIC, and a quad-core processor.
New hardware: Gigabyte GA-7PESH2 w/96 or 128GB RAM + 1x SAS-3008 controller + 1x SAS-2008 controller (onboard), Mellanox ConnectX-3 dual VPI (10Gb SFP+ Ethernet).

The new hardware will also use the SAS-3008 controller to connect to a JBOD chassis instead of onboard storage; I've got an SE3016 that I have modded with an Intel expander. The internal SAS-2008 controller will be connected to the ZIL/SLOG devices.
 

Durkatlon

Patron
Joined
Aug 19, 2011
Messages
414
Also note that OmniTI has said they're gonna stop working on OmniOS, if you need another reason to make the switch.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm a bit confused... What's the problem, exactly?
 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
I'm a bit confused... What's the problem, exactly?
They are asking if anyone has migrated from OmniOS to FreeNAS and if they have experienced any issues. They want to know if it's possible to migrate the existing "containers" to a dataset. They then want to know if it's okay to share a common dataset via both NFS and SMB.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I guess the meat of the post was lost in the last 7/8.

I assume the containers are Docker, right? You'd need to spin up a VM with a Docker manager thingy and handle things from there. Storage would be mounted via network shares (inefficient, but reliable).
They then want to know if it's okay to share a common dataset via both NFS and SMB.
Not simultaneously. It won't end well due to locking issues.
 

Matt Mabis

Dabbler
Joined
May 1, 2017
Messages
12
My apologies for the long story...

When I say containers I mean my pools/filesystems. In the picture attached you can see what I'm referring to. Most of these shares use CIFS/SMB to connect to the data; however, there are shares in there (for example Zeus/apps, Zeus/VM_Backups, Hades/VM_Images, Hades/VM_Migrate) which are shared over both CIFS/SMB and NFS...

My questions are:
  1. Is it going to be problematic migrating these pools/filesystems from a Solaris-based OS over to FreeNAS?
  2. Are there still long-standing issues with SMB/NFS sharing the same share? In my configs it's usually CIFS read-only and NFS R/W, or vice versa.
  3. Will the existing permissions get reset when importing, and will I have to redo them?
The ESXi hosts will have free rein on the NFS shares Zeus/VM_Backups, Hades/VM_Images, and Hades/VM_Migrate, and will only have read-only access to Zeus/apps. I do not store user data in the ESXi shares, and I only add data to Zeus/apps over CIFS, as that's where I store all of my ISOs.
 

Attachments

  • Screen Shot 2017-05-01 at 1.22.46 PM.png (84.9 KB)

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
When I say containers I mean my pools/filesystems.
If it's ZFS, you can import it fine. It's not ideal due to specifics with the partitioning, but it should work.
Are there still long-standing issues with SMB/NFS sharing the same share? In my configs it's usually CIFS read-only and NFS R/W, or vice versa.
If one is read-only, it should be fine.
Will the existing permissions get reset when importing, and will I have to redo them?
Not if you do it carefully. Preserving UIDs/GIDs is a requirement, naturally.
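
If you want to sanity-check things from the FreeNAS shell before committing, the idea is roughly the sketch below. The pool name "tank" is just a placeholder, and the GUI volume import is the supported route; this only illustrates the underlying zpool commands.

```python
#!/usr/bin/env python3
"""Hedged sketch: read-only trial import of a foreign ZFS pool on FreeNAS.
Pool name 'tank' is a placeholder; use the GUI 'Import Volume' wizard for
the real migration."""
import subprocess

POOL = "tank"  # hypothetical pool name

def run(*args):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

# 1. See which exported pools the system can find.
print(run("zpool", "import"))

# 2. Import read-only under an alternate root, so nothing on the pool is
#    modified if something looks wrong (mountpoints land under /mnt).
run("zpool", "import", "-o", "readonly=on", "-R", "/mnt", POOL)

# 3. Check the pool version/feature state, then release the pool again.
print(run("zpool", "get", "version", POOL))
run("zpool", "export", POOL)
```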
 

Matt Mabis

Dabbler
Joined
May 1, 2017
Messages
12
If it's ZFS, you can import it fine. It's not ideal due to specifics with the partitioning, but it should work.

Interesting response. I guess I would like to know more about this if possible; I would assume a filesystem is a filesystem, but I'm guessing FreeNAS does something differently?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The filesystem itself is fine, but FreeNAS adds a swap partition for emergencies, as well as a dummy boot partition that tells you that you're trying to boot from a data disk.
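
If you're curious, a quick way to see the difference is to compare the partition table of a FreeNAS-prepared disk with one of your OmniOS whole-disk members; roughly like the sketch below (the device name is a placeholder).

```python
#!/usr/bin/env python3
"""Hedged sketch: inspect how a disk is partitioned on FreeBSD/FreeNAS.
A disk prepared by FreeNAS typically shows a small freebsd-swap partition
plus a large freebsd-zfs partition, whereas a whole-disk OmniOS pool member
usually carries a single ZFS partition. Device name 'da0' is a placeholder."""
import subprocess

DISK = "da0"  # hypothetical device name; substitute your own

# gpart show prints the GPT layout of the disk.
layout = subprocess.run(["gpart", "show", DISK], capture_output=True, text=True)
print(layout.stdout or layout.stderr)
```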
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You could do a lot worse than setting up a new pool on a spare disk on your OmniOS setup and trying a test migration with just that disk first.

Depends on how important your data is to you.
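
Roughly along these lines; the disk, pool, and dataset names below are placeholders, not your actual layout.

```python
#!/usr/bin/env python3
"""Hedged sketch of a trial migration: build a throwaway single-disk pool on
the OmniOS box, copy one small dataset into it, then use that disk for a
test import on FreeNAS. 'c2t5d0', 'testpool' and 'tank/apps' are placeholders."""
import subprocess

SPARE_DISK = "c2t5d0"         # hypothetical spare disk (Solaris device naming)
TEST_POOL = "testpool"        # hypothetical scratch pool
SOURCE = "tank/apps@migrate"  # hypothetical snapshot of a small dataset

def run(cmd):
    subprocess.run(cmd, shell=True, check=True)

run(f"zpool create {TEST_POOL} {SPARE_DISK}")              # scratch pool on the spare disk
run(f"zfs snapshot {SOURCE}")                              # point-in-time copy to send
run(f"zfs send {SOURCE} | zfs receive {TEST_POOL}/apps")   # replicate into the scratch pool
run(f"zpool export {TEST_POOL}")                           # cleanly release it for the move
# Physically move (or re-cable) the disk, import it on FreeNAS, and check
# shares and permissions there before touching the real 20TB.
```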
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
What you want to do is totally doable, with a few caveats:

1. If your zpool is v28 or older, it will work. It would be better to make a new zpool on FreeNAS, but it *will* work. If this is a long-term server, I'd make a new zpool if/when possible.
2. NFS and CIFS can be mixed, but permissions may make it impractical. As with #1, CIFS requires Windows ACLs, which tends to imply a dataset created by FreeNAS, so you may have problems getting permissions working properly on CIFS if you use your old zpool, unless you happen to use all of the same settings and permissions that FreeNAS uses with Samba. NFS and CIFS can also coexist if one is read-only (which you said you plan to do) and if the NFS and CIFS permissions are compatible. For example, you can't have one owner on CIFS and another owner on NFS without problems. Again, if this is long-term, it's better to do it all in FreeNAS rather than import old shares and such.
3. You will likely have to redo all of the permissions from scratch. If you have 100+ users, this could be complicated. If it's a home installation and you're looking to get stuff working for you, your wife, and a kid or two, that shouldn't be too hard. Just do recursive permission setting, roughly as sketched below.
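
By recursive permission setting I mean something along these lines. Note this only resets plain Unix owner/mode bits; Windows ACLs on CIFS datasets are normally handled through the FreeNAS permission editor instead. The path and IDs are made-up examples.

```python
#!/usr/bin/env python3
"""Hedged sketch: recursively reset Unix owner/group/mode on a dataset for a
handful of home users. Path, UID and GID below are placeholders."""
import os

DATASET = "/mnt/tank/apps"   # hypothetical dataset mountpoint
UID, GID = 1001, 1001        # hypothetical user/group IDs
DIR_MODE, FILE_MODE = 0o775, 0o664

for root, dirs, files in os.walk(DATASET):
    os.chown(root, UID, GID)         # reset directory ownership
    os.chmod(root, DIR_MODE)         # directories need the execute bit
    for name in files:
        path = os.path.join(root, name)
        os.chown(path, UID, GID)     # reset file ownership
        os.chmod(path, FILE_MODE)    # plain read/write for files
```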
 

Matt Mabis

Dabbler
Joined
May 1, 2017
Messages
12
Appreciate the input. My zpool version is 5000, so I will definitely take your advice on recreating the pools; I'll have to scrounge some drives, as I will have to move around about 20TB of data...
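
For what it's worth, before I commit I figure I can check which feature flags are actually active on the pools, since v5000 just means "feature flags" and the real question is whether the FreeNAS OpenZFS build supports every active feature. Something roughly like this is my plan (using my Zeus pool; untested):

```python
#!/usr/bin/env python3
"""Hedged sketch: list enabled/active feature flags on a v5000 pool.
Pool name 'Zeus' is from my setup; adjust as needed."""
import subprocess

POOL = "Zeus"

# Every feature flag shows up as a 'feature@...' property on the pool.
props = subprocess.run(["zpool", "get", "all", POOL],
                       capture_output=True, text=True).stdout
active = [line.split()[1] for line in props.splitlines()
          if "feature@" in line and ("active" in line or "enabled" in line)]
print("Features enabled/active on", POOL)
print("\n".join(active))

# 'zpool upgrade -v' on the FreeNAS side lists the features that build knows about.
```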

My intention is to do it right. The only thing I tend to want to share is ISO/app files between the ESXi servers and the VM desktops/servers, and from an ESXi standpoint I don't need it writing to that area. This is for a homelab/small environment, so I get that I'll have to reset the ACLs; I just wanted to see how bad the transfer would be...

I've got the server board and memory (96GB) ready with my SAS-2008/SAS-3008 adapters, and this board happens to have onboard Intel 10Gb, which I might use instead of the Mellanox, but I haven't gone 100% on that one... I've ordered a pair of SATA DOMs and am running memtest right now, so hopefully soon I'll have more to show down the pipeline.

Again thanks for your help!
 