fastest controller supported

Status
Not open for further replies.

miip

Dabbler
Joined
Oct 7, 2017
Messages
15
Get the IBM M1215; they are available on (German) eBay for around 100€, and it should be about the same for the US. They are very easy to flash to IT mode.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079

do any of these have battery backup and cache on the controller?
Normally, that is only used with hardware RAID controllers, and you should never use hardware RAID controllers with FreeNAS. The ZFS file system is itself a software RAID implementation, and a hardware RAID controller would interfere with ZFS being able to properly address the drives.
 

David Sheetz

Dabbler
Joined
Jul 15, 2017
Messages
18
I agree to a point, but if power goes out, even ZFS can't save you. Caching data allows for better transfer and has nothing to do with the file system type; data transfer is rarely a steady, perfect stream. Just my 2 cents...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I agree to a point, but if power goes out, even ZFS can't save you. Caching data allows for better transfer and has nothing to do with the file system type; data transfer is rarely a steady, perfect stream. Just my 2 cents...
First, a server, any server, but a FreeNAS / ZFS server especially, should be on a good UPS so that power does not just go out.
Second, you need to learn more about how ZFS works. It is a copy-on-write file system, which is very different from any other file system. If power fails on a server running ZFS, any data in flight will be lost if the uberblock has not been updated, but the on-disk pool itself remains consistent.
There is plenty of reference material on the forum; look in the Resources section. Until you learn more, do not make guesses about how you think it should work.
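The copy-on-write behavior described above can be sketched in a few lines of Python. This is purely illustrative, not real ZFS internals: updated data goes to fresh blocks, and the "uberblock" pointer flips only as the final step, so a crash before the flip loses in-flight data but never corrupts the old, committed state.

```python
# Minimal copy-on-write sketch (illustrative only, not actual ZFS code).
blocks = {0: "old data"}
uberblock = 0                      # points at the current consistent root

def cow_write(new_data, crash_before_commit=False):
    global uberblock
    new_id = max(blocks) + 1
    blocks[new_id] = new_data      # write to a fresh block; old block untouched
    if crash_before_commit:
        return                     # power fails: pointer still shows old root
    uberblock = new_id             # atomic commit: flip the pointer last

cow_write("update 1", crash_before_commit=True)
print(blocks[uberblock])           # still "old data": lost in flight, not corrupt
cow_write("update 2")
print(blocks[uberblock])           # "update 2": commit completed
```

The key point is that the pointer flip is the commit; everything before it is invisible to the file system's consistent view.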
 

David Sheetz

Dabbler
Joined
Jul 15, 2017
Messages
18
We have a UPS, but professionals like me prefer mega redundancy. No, I don't understand ZFS well yet, except that it is a resource hog with plenty of benefits. I am trying to learn and appreciate your input.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
We have a UPS, but professionals like me prefer mega redundancy.
So, is that to imply that you are more professional than anyone else?
Do say.
How many years have you been administering storage systems, and of those years in the industry, how many have been working with ZFS?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
We have a UPS, but professionals like me prefer mega redundancy. No, I don't understand ZFS well yet, except that it is a resource hog with plenty of benefits. I am trying to learn and appreciate your input.
Not even close, buddy. You don't want cache on your card, and you don't need a battery backup on your card with ZFS. ZFS is also not a resource hog. It is designed to use RAM as a read cache to improve performance, but that does not mean it's a hog. For a professional, you have lots to learn, and pretending you know everything just shows your ignorance.

With only 8 HDDs, you also don't need a SAS3 card. Something that does 6Gb/s would still never get maxed out. Why do you think it has to be the fastest? Your HDDs are the bottleneck.
 

David Sheetz

Dabbler
Joined
Jul 15, 2017
Messages
18
OK, so I didn't mean it like it came out, but yes, 25 years of IT, mostly in Fortune 500 companies with over 10,000 users, and I have supported many storage systems; ZFS is newer. FreeNAS recommends 1GB of RAM per TB of data, and that, to me, is a resource hog. Maybe not the right term, but to me it is. I don't know everything; that's why I am here, but you all seem to put me down because I ask a question. Sorry I offended.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
OK, so I didn't mean it like it came out, but yes, 25 years of IT, mostly in Fortune 500 companies with over 10,000 users, and I have supported many storage systems; ZFS is newer. FreeNAS recommends 1GB of RAM per TB of data, and that, to me, is a resource hog. Maybe not the right term, but to me it is. I don't know everything; that's why I am here, but you all seem to put me down because I ask a question. Sorry I offended.
That 1GB per TB is not a good way to measure things. Basically, once you go past 16GB, and especially 32GB, it's all about expected performance. 25 years in IT, and have you ever used a non-Windows enterprise storage solution? Because they will all have 128GB or more of memory, because it's the best design. The more memory I can cram into a system, the better!
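The two views in this exchange can be written down as arithmetic: the oft-quoted "1 GB RAM per TB of storage" rule of thumb, combined with a minimum baseline. The 16 GB floor here is taken from the reply above; it is a rough sizing heuristic, not an official formula.

```python
# The "1 GB per TB" rule of thumb with a 16 GB floor (rough heuristic only).
def rule_of_thumb_gb(pool_tb):
    """Suggested RAM in GB for a pool of pool_tb terabytes."""
    return max(16, pool_tb)        # 1 GB/TB, but never below a 16 GB baseline

print(rule_of_thumb_gb(8))         # small pool: the 16 GB floor dominates
print(rule_of_thumb_gb(48))        # large pool: the linear rule takes over
```

Past the floor, as the post says, more RAM is simply more read cache, so the "right" amount is set by expected performance rather than by pool size alone.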
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
ZFS is newer
Development of ZFS started in 2001 with two engineers at Sun Microsystems. http://open-zfs.org/wiki/History
So it is newer than many other systems, but the developers have incorporated lessons learned to make ZFS better, smarter.

The reason that a caching controller is bad for ZFS is that ZFS is monitoring disk activity to determine the fitness of a disk to remain part of the pool. A caching controller can give responses to the operating system that make ZFS think a disk is doing something it is not supposed to do, and that can cause ZFS to determine that the disk is bad. ZFS needs direct and exclusive access to the disks to ensure that it knows what is happening with the data. Using a caching controller can actually create problems for the file system. If you don't have a cache on the controller, you don't need a battery on it either. These things are documented; it is not a new development.

The function of ZFS replaces hardware RAID and the cache that is usually included in a hardware RAID controller. The system memory and CPU do that work instead, so that ZFS can monitor disk health and offline disks that are not performing correctly, or report disks that develop issues like bad sectors so that the administrator can replace those disks before they fail. Since the system is taking on the additional responsibility of managing all the individual disks instead of offloading that to a sub-processor on a hardware RAID card, it could appear that ZFS is a resource hog. But you don't need a fancy RAID card either, just a simple SAS HBA, and maybe a SAS expander if you want to connect a massive number of disks.

The Sun/Oracle SAN where I work uses the proprietary version of ZFS and has a massive number of disks attached that are all managed by the operating system. It is just a very different way of doing storage than the way of hardware RAID.
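The disk-fitness monitoring described in this post can be sketched in Python. This is purely illustrative; real ZFS error handling is far more involved, and the threshold and device name here are made up. The point is why a controller that lies about completed writes is harmful: it feeds bad information into exactly this kind of health tracking.

```python
# Illustrative sketch of per-disk health tracking (not actual ZFS code).
FAULT_THRESHOLD = 3                # made-up number of errors before offlining

class Disk:
    def __init__(self, name):
        self.name = name
        self.errors = 0
        self.online = True

    def report_error(self):
        """Record an I/O anomaly; offline the disk past the threshold."""
        self.errors += 1
        if self.errors >= FAULT_THRESHOLD:
            self.online = False    # pool stops trusting this disk

d = Disk("da3")                    # hypothetical device name
for _ in range(3):
    d.report_error()
print(d.online)                    # disk is offlined; admin should replace it
```

A caching controller that acknowledges writes it has not committed, or reorders them, makes healthy disks look like they are misbehaving (or vice versa), which is why ZFS wants the raw disks.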
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
caching data allows for better transfer and has nothing to do with the file system type; data transfer is rarely a steady, perfect stream
ZFS uses the system memory (RAM) as what is called the ARC (adaptive replacement cache) instead of a hardware cache on the controller, and ZFS handles any flow control to the individual drives. A sufficiently slow drive may be set offline by the operating system. The amount of memory available to the ARC is usually all of the system RAM except for 1GB, but more memory can be reserved through settings.
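That default ARC ceiling is simple arithmetic, sketched below. The tunable named in the comment, `vfs.zfs.arc_max` on FreeBSD, is the usual knob for capping the ARC, but treat the specifics as an assumption and check your platform's documentation.

```python
# Default ARC ceiling as described above: all RAM minus a reserved amount,
# adjustable via a tunable (vfs.zfs.arc_max on FreeBSD-based systems).
def default_arc_max_gb(ram_gb, reserved_gb=1):
    """Rough default ARC size: system RAM minus the OS reserve."""
    return ram_gb - reserved_gb

print(default_arc_max_gb(32))      # a 32 GB box leaves ~31 GB for the ARC
print(default_arc_max_gb(32, reserved_gb=8))   # reserving more via settings
```

This is why a box with lots of RAM "looks busy" in memory graphs: the ARC deliberately grows into nearly all of it, and shrinks when applications need the memory.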
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
No, I don't understand ZFS well yet, except that it is a resource hog with plenty of benefits
There is a wealth of experience on the forum from people who have been using ZFS for many years. If you describe your usage scenario, what you are trying to accomplish, it is possible that someone may give you valuable advice based on real-world experience.
 

David Sheetz

Dabbler
Joined
Jul 15, 2017
Messages
18
OK, so let's back up. First, I am here because I respect your knowledge of ZFS and FreeNAS; storage has not been my main job for years. If I question you, it is to understand, not to say I know better, but I have older ways of thinking and don't want just answers, I want to understand. I do appreciate your assistance greatly.

Here is what I am trying to do:
1. Add storage that can be common to all the VM hosts I have (some are in vCenter and some are standalone free ESXi); the common ground is that they can all use an iSCSI mapping.
2. Make sure the space I have allocated in FreeNAS is as fast as possible (within budget), as well as offering as much redundancy as I can. (In my opinion, never too redundant.)
 

David Sheetz

Dabbler
Joined
Jul 15, 2017
Messages
18
Development of ZFS started in 2001 with two engineers at Sun Microsystems. http://open-zfs.org/wiki/History
So it is newer than many other systems, but the developers have incorporated lessons learned to make ZFS better, smarter.

The reason that a caching controller is bad for ZFS is that ZFS is monitoring disk activity to determine the fitness of a disk to remain part of the pool. A caching controller can give responses to the operating system that make ZFS think a disk is doing something it is not supposed to do, and that can cause ZFS to determine that the disk is bad. ZFS needs direct and exclusive access to the disks to ensure that it knows what is happening with the data. Using a caching controller can actually create problems for the file system. If you don't have a cache on the controller, you don't need a battery on it either. These things are documented; it is not a new development.

The function of ZFS replaces hardware RAID and the cache that is usually included in a hardware RAID controller. The system memory and CPU do that work instead, so that ZFS can monitor disk health and offline disks that are not performing correctly, or report disks that develop issues like bad sectors so that the administrator can replace those disks before they fail. Since the system is taking on the additional responsibility of managing all the individual disks instead of offloading that to a sub-processor on a hardware RAID card, it could appear that ZFS is a resource hog. But you don't need a fancy RAID card either, just a simple SAS HBA, and maybe a SAS expander if you want to connect a massive number of disks.

The Sun/Oracle SAN where I work uses the proprietary version of ZFS and has a massive number of disks attached that are all managed by the operating system. It is just a very different way of doing storage than the way of hardware RAID.

Thanks this explains it much better!
 

David Sheetz

Dabbler
Joined
Jul 15, 2017
Messages
18
That 1GB per TB is not a good way to measure things. Basically, once you go past 16GB, and especially 32GB, it's all about expected performance. 25 years in IT, and have you ever used a non-Windows enterprise storage solution? Because they will all have 128GB or more of memory, because it's the best design. The more memory I can cram into a system, the better!
Actually, I have used many non-Windows storage systems: HP MSA, Dell, EMC, etc. Back in the day, systems would not take 128GB of RAM, and we supported petabytes.
Although ZFS has been around since 2001, I did not see it gain mainstream acceptance until 2008 or so.

I do agree the more memory the better, when possible. I am not dissing ZFS or FreeNAS, just saying it is a completely different animal than what I am used to. Trying to learn the new way of thinking.
 