Setting up hot and cold tiers using SSD and HDD

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
Hello Everyone,

I'm building a system using a Supermicro X10QBi machine. My initial plan was to have 6 SSDs for my hot tier and 8 HDDs for my cold tier. I have 2 individual RAID controllers in the box and I'm ready to install FreeNAS. I wanted to ask the community, as I've googled around but not found any information on hot & cold tiering. I discussed it briefly with someone and it sounded like a good implementation: any data in use is loaded on the SSDs for speed, and anything not in use goes to the HDDs for archiving.

Can anyone shed some light on this? Does FreeNAS just do this automatically?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
Hi,

You're writing "raid controllers" - hopefully it's just a typo and you mean HBAs? Otherwise you're in for a lot of trouble and won't be using ZFS to its potential.
 

Tigersharke

BOfH in User's clothing
Administrator
Moderator
Joined
May 18, 2016
Messages
893
With regard to hot/cold tiering, perhaps this would be a good thing for a feature request? Use the same Jira as for bug reports to do so, but search first to see if it has been requested already. It would also be helpful if the devs were to comment on this.
 

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
Hi,

You're writing "raid controllers" - hopefully it's just a typo and you mean HBAs? Otherwise you're in for a lot of trouble and won't be using ZFS to its potential.

Yes, RAID controllers. This is the machine I plan on using; I'm trying to build an iSCSI machine for serving IIS directories:


Supermicro 4U 24 Bay X10QBi 4x E7-4820 V2 2Ghz 32-Cores 128GB 4x PSU
2X LSI 9361-8i RAID Card 12G installed w/ Cables
Backplane that can be split between the cards
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
I'm not aware of any OS that offers hot/cold data tiering natively. Even on Windows, this is the province of third-party add-ons, or of specialized storage equipment with data-retention policies.
 

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
Hi,

You're writing "raid controllers" - hopefully it's just a typo and you mean HBAs? Otherwise you're in for a lot of trouble and won't be using ZFS to its potential.
I'm still learning about ZFS, so all the drives should be plugged into an HBA rather than a RAID card?
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
With lots of RAM, L2ARC, and special allocation classes, you can get very close to comparable performance for frequently used small-I/O data streams, IMHO...
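A rough sketch of what that looks like at the command line, assuming an existing pool named `tank` (the pool and device names are placeholders, not from this thread):

```shell
# Attach an SSD as L2ARC (a second-level read cache) to the pool.
zpool add tank cache da8

# Attach a mirrored special allocation class vdev (OpenZFS 0.8+ /
# TrueNAS 12) so pool metadata lives on SSD instead of HDD.
zpool add tank special mirror da9 da10

# Optionally steer data blocks up to 32K on a dataset to the special
# vdev too, keeping small "hot" I/O on SSD and large streams on HDD.
zfs set special_small_blocks=32K tank/data
```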
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
I'm still learning about ZFS, so all the drives should be plugged into an HBA rather than a RAID card?

The LSI 9361-8i won't work for FreeNAS, as it can't be flashed to IT mode. Your drives will need to be plugged into an HBA, not a RAID card.


To achieve pseudo hot/cold tiering, you could use your SSD pool for active sharing, and replicate it to the disk pool on a regular cycle.
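A minimal sketch of that replication cycle using the built-in ZFS tools (pool, dataset, and snapshot names here are hypothetical; in FreeNAS you would normally configure this as a periodic snapshot task plus a replication task in the GUI):

```shell
# "fast" is the SSD pool serving clients; "slow" is the HDD pool.
# Take a recursive snapshot of the active datasets.
zfs snapshot -r fast/shares@2020-08-01

# First cycle: send the full replication stream to the HDD pool.
zfs send -R fast/shares@2020-08-01 | zfs receive -F slow/archive

# Later cycles: snapshot again and send only the changes between
# the previous and the new snapshot (incremental replication).
zfs snapshot -r fast/shares@2020-08-02
zfs send -R -I fast/shares@2020-08-01 fast/shares@2020-08-02 | \
    zfs receive -F slow/archive
```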
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
Yes, RAID controllers. This is the machine I plan on using; I'm trying to build an iSCSI machine for serving IIS directories.

Trying to understand what you want to do :smile:

Do you want to use FreeNAS/TrueNAS as a storage backend for a single IIS server, or for a cluster? The reason I'm asking is that iSCSI will only let you connect one frontend per zvol, whereas SMB (CIFS) lets you put a whole cluster in front of a dataset and easily add extra frontends as needed.
 

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
Trying to understand what you want to do :smile:

Do you want to use FreeNAS/TrueNAS as a storage backend for a single IIS server, or for a cluster? The reason I'm asking is that iSCSI will only let you connect one frontend per zvol, whereas SMB (CIFS) lets you put a whole cluster in front of a dataset and easily add extra frontends as needed.
Yes, I want to build a storage backend for a cluster. I have 4 IIS nodes that I'd like to make redundant, so my plan is to store the website files on the storage server in 4 separate iSCSI drives that I will add to my IIS nodes. I was thinking about hosting the VHDs for the machines themselves on the storage server, but for security reasons I think just storing the data itself is a better solution for me.
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
Yes, I want to build a storage backend for a cluster. I have 4 IIS nodes that I'd like to make redundant, so my plan is to store the website files on the storage server in 4 separate iSCSI drives that I will add to my IIS nodes. I was thinking about hosting the VHDs for the machines themselves on the storage server, but for security reasons I think just storing the data itself is a better solution for me.

Why do you want 4 individual iSCSI drives instead of a single SMB share? If you're going to serve the same data through the frontends, you should get much better performance - and less work - by using a common SMB share for the data. That way the ZFS caching can really shine and keep the "hot" files in ARC (and L2ARC if you add that later), and you don't have to duplicate (quadruple, actually) the data.
A possibility - if your timeframe allows for it - could be to test the new "special allocation" devices: using a mirror of SSDs for metadata to speed up reads. But this is a TrueNAS 12 addition, so not something a lot of people have experience with yet (including myself - but it sounds promising).
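For anyone curious, a sketch of what a special allocation device looks like at pool-creation time (disk and pool names are invented for illustration; on TrueNAS 12 you would normally do this through the GUI):

```shell
# Create a pool with a raidz2 data vdev on 8 HDDs plus a mirrored
# "special" vdev on 2 SSDs for metadata (OpenZFS 0.8+ / TrueNAS 12).
# All device names below are placeholders.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    special mirror da8 da9
```

Note that the special vdev holds pool-critical metadata, so it should always be mirrored: losing it means losing the pool.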
 

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
Why do you want 4 individual iSCSI drives instead of a single SMB share? If you're going to serve the same data through the frontends, you should get much better performance - and less work - by using a common SMB share for the data. That way the ZFS caching can really shine and keep the "hot" files in ARC (and L2ARC if you add that later), and you don't have to duplicate (quadruple, actually) the data.
A possibility - if your timeframe allows for it - could be to test the new "special allocation" devices: using a mirror of SSDs for metadata to speed up reads. But this is a TrueNAS 12 addition, so not something a lot of people have experience with yet (including myself - but it sounds promising).
I wanted to split things up into different logical drives for security, so that if one site gets exploited it wouldn't affect all the sites in the same directory; it spreads out the risk. It's not going to be the same data across 4 drives - different data, 1 drive for each IIS instance (4 total). I'm splitting the network workload up across 4 IIS installs.

I did want to implement the hot and cold tiering, which I think is what you mentioned at the end - so that data in use runs off SSDs and the unused data is stored cold on HDDs. I haven't found much information on this, so any references you could share would be highly appreciated.

I've looked into buying some HBA cards to replace my RAID cards so I can implement ZFS correctly.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
With regard to hot/cold tiering, perhaps this would be a good thing for a feature request?
I tried to propose something and drum up some support for it over a year ago (https://www.ixsystems.com/community/threads/tiered-storage.75346/)

It seems interest from the iX side is low, and that it would need to come from FreeBSD/OpenZFS... I never got a response from Allan Jude.

Conceptually, we now have a lot more options that might make it less interesting anyway with TrueNAS 12 and special VDEVs.
 