Planning to try FreeNAS - review of hardware+goals before I start?

Status
Not open for further replies.

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
I'm managing a fair-sized domestic/small-office file server for family. At the moment it's ordinary Windows + file sharing. I'm not convinced this is the best route, and I'm likely to move to FreeNAS. I'd like to check my needs (which aren't complicated) and get feedback on anything I've missed before I start, as well as check whether my goals are realistic and what steps I might need to consider to have a good chance of achieving them.

Existing hardware, data stored, and usage:

  1. Hardware - Current platform is a Haswell i7 quad core + 16GB fast RAM, with 4x4TB 7200rpm HDDs for data + a fast 64GB SSD for the system. The data disks are all 5-year 24/7 enterprise-range drives (not consumer), configured as 2 separate soft-mirrored volumes of 4TB+4TB each. (At the time of purchase, 6-8TB drives weren't sold or cost too much.) My other workstation uses an SSD for HDD caching and I'd like to add that to any future build; I can upgrade the SSD size if needed.
  2. Cache/RAID - I'm currently using Intel's RST for SSD caching on my main workstation, and the file server uses Windows dynamic disk mirroring for the mirrors. It's far from ideal but choosing a RAID card is a question all of its own and it's worked well enough to handle disk failure risk. But it's time to do better.
  3. Data - Usable space is 7.3TB and data physically stored is 5.8TB (7.7TB less 1.9TB Windows dedup saving). The data is made up of 3 distinct types - digitised movies + home movies (1-3GB files = 3.5TB, almost no dedup), disk images/backups (4-40GB files = 1.4TB but 97% deduped), and the rest is a mix of hundreds of thousands or millions of files - photos, drivers, installers, random stuff, work documents, kids homework, and repeated backups.
  4. Usage - mainly 3 kinds of file-server use: streaming (for movies), bulk file copying to/from server, and storing backups. As the Windows platform is a general one, it's also used as a media player and TV server/recorder too. Also when large open source ISOs are released, if the release is on bittorrent, they're written directly by the torrent client to shares on the server, to save later moving between devices. (Obviously some of these will migrate to FreeNAS and some won't).
  5. Users - while users are trusted to try and do right, some users are experienced and others aren't. So permissions are set up accordingly. I tend to spend a lot of time managing and organising files "bulk dumped" on the server, in my spare time.

Desired outcomes/concerns with FreeNAS:

  1. System adequacy - is my system (especially RAM) sufficient or should I get more, since ZFS and caching are memory intensive? Is SSD/RAM caching effective and "fast" on FreeBSD? Does ZFS handle RAID itself or do I need an extra card? Do I need a battery backed card to ensure that the SSD cache completes any writing in its queue in the event of power issues or is that handled by an SSD onboard capacitor (if present)?
  2. Data reliability - bit rot/corruption and guaranteed ability to revert to older versions or snapshots - inherent in ZFS and fundamental to FreeNAS, so these should be a "given". At worst, in the case of a ransomware attack involving escalated privilege or admin account compromise via a Windows client, I'd like to be confident it couldn't damage/delete recent snapshots without also having access to the FreeNAS admin panel itself (which won't usually be logged in), and ideally it would also be able to alert on a possible attack, based on the inevitable and very unusual file-system activity.
  3. Performance - GbLAN with a mix of Windows (mainly), OSX and *nix clients. I'm a bit concerned about how well Samba/SMB/AirPlay would interoperate, and whether a server on FreeBSD + clients on Windows, plus subtle differences between SMB/Samba, might mean that LAN file shares are less performant or "freeze" more due to locking issues. Reassurance/info/impressions on this would be very appreciated. I'd like extremely good and consistent performance even if it means extra RAM and SSD caching, as I like a consistently fast response and it's also very useful with large-scale data shuffling :) I'm not sure what to realistically expect :)
  4. Client access change - needs to be easy for an 'admin' user to help someone on their windows client, and to switch between FreeNAS accounts fairly easily when needed (eg to modify a file that the helped user can't write) without change of access rights being a pain in the backside :)
  5. File search - I do a lot of file searching from clients, for example "files in FOLDER_LIST matching BOOLEAN_FILENAME and BOOLEAN/REGEX_CONTENTS and DATES+SIZES". I don't use a pre-built search index, and I accept the slowdown this causes; the search program searches anew each time by re-reading the disk or cached files (I prefer it that way and want to keep it like that). When the files are well-known proprietary formats (docx, xlsx, ppt, pdf etc) the usual inbuilt filters also allow searching of the contents as well. But I'm not sure of the mechanics of searching a FreeNAS volume/directory in that way and how it would be affected by moving to FreeNAS - would I do it the same as at present (via CIFS/SMB), or via a specific "search UI" while logged into the NAS, or either? The program I use is "FileLocator Pro" (better known under its old name, "Agent Ransack") if anyone knows it. Would this work the same as it does now, and would FreeNAS be any/much slower compared to a Windows server for this? (It's a big concern)

Quick answers/info very appreciated, so I can start enjoying FreeNAS - thank you!!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,458
Your hardware is "adequate" in that it has sufficient CPU power and RAM quantity, but "not recommended", mainly due to a lack of ECC RAM (even if your RAM is ECC, which I doubt, i7s don't support the ECC function). ECC RAM is strongly recommended in any server application. Nothing in the use case you describe suggests that a read or write cache will be of any benefit to you. You do not want or need a hardware RAID controller; ZFS handles the RAID itself. You should plan for a UPS as well.

Performance should not be an issue with the usage you describe.

I would expect that your file search software would work identically--whether the server's running Windows, Linux, or FreeBSD, the CIFS share should behave in the same way.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Thanks

Your hardware is "adequate" in that it has sufficient CPU power and RAM quantity, but "not recommended", mainly due to a lack of ECC RAM (even if your RAM is ECC, which I doubt, i7s don't support the ECC function)
Crap - forgot that. Good catch, thank you! I'll look at the implications. I'll have to check out platform choices, I can do that myself now you've mentioned it.

Nothing in the use case you indicate suggests that a read or write cache will be of any benefit to you.
Really? That surprises me. Can you say more, or are there pages/studies/in-depth discussions anywhere on how to know when ARC or L2ARC or whatever caching is used, will or won't make much difference to responsiveness and I/O latency/speed?

You do not want or need a hardware RAID controller; ZFS handles the RAID itself. You should plan for a UPS as well.
Good, and yes.

Performance should not be an issue with the usage you describe. I would expect that your file search software would work identically--whether the server's running Windows, Linux, or FreeBSD, the CIFS share should behave in the same way.
This was a HUGE worry for me. It would be far from implausible for *nix interfacing with Windows to be less performant than *nix-Samba-*nix or Windows-SMB-Windows, since Microsoft has no motive to make Windows perform better with *nix servers than with Windows servers, and *nix drivers have had to be reverse engineered to fill any gaps or undisclosed APIs. Plus they have to interact with different methodologies, NIC drivers and stacks... My great concern has been that the upshot would be a drop in throughput, or only "quite close" API matching (and issues for some programs using shares?), as the price of the benefits of ZFS/FreeNAS. More info on this very appreciated!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,458
Really? That surprises me. Can you say more, or are there pages/studies/in-depth discussions anywhere on how to know when ARC or L2ARC or whatever caching is used, will or won't make much difference to responsiveness and I/O latency/speed?
I'm afraid I'm not intimately familiar with greater details of how L2ARC is used, other than that it generally isn't useful unless you have at least 64 GB of RAM. Perhaps one of the more knowledgeable folks here (@jgreco perhaps) can elucidate further.
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Really? That surprises me. Can you say more, or are there pages/studies/in-depth discussions anywhere on how to know when ARC or L2ARC or whatever caching is used, will or won't make much difference to responsiveness and I/O latency/speed?

ARC is totally awesome for responsiveness. Stuff being served from ARC is lightning fast.

I'm afraid I'm not intimately familiar with greater details of how L2ARC is used, other than that it generally isn't useful unless you have at least 64 GB of RAM. Perhaps one of the more knowledgeable folks here (@jgreco perhaps) can elucidate further.

At 16GB, L2ARC is not recommended. If you must, limit yourself to about 60GB of L2ARC. However, 16GB is small enough that ZFS isn't likely to get a good idea of what your file access patterns are, unless you aren't actually storing all that much, or are only frequently accessing a small subset.

ARC and L2ARC are somewhat misunderstood by those who think of them as "cache"; it is perhaps more useful to think of them as a pool accelerator for busy pools. If your pool isn't busy, the amount of benefit you'll see is substantially reduced. The more you have of it, the more ZFS can cache, but in a situation where ZFS has to fetch something from the pool versus fetching it from cache, the difference in latency is probably not that noticeable when you're a user sitting at the far end of a NAS protocol.

The place where you *might* win is if you're doing searches on a limited set of data with sufficient frequency that ZFS recognizes the value of the data and caches it in ARC/L2ARC. However, you would probably want to invest in a much larger ARC. Sizing of L2ARC is typically about 5x ARC, but the ratio can grow a bit as RAM increases, so if you have a 128GB RAM system, you can definitely go out to at least 512GB L2ARC, and quite possibly even 1TB. The trick is that you need to have sufficient ARC that ZFS can actually identify repetitive access patterns and learn what to punt out to L2ARC, so this has to be sized with some understanding of the dataset you are interested in searching. Or you can keep a separate pool of the things you want to be optimized for searching, which would help to some extent.
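As a rough sanity check, those sizing rules reduce to simple arithmetic. A hedged sketch follows; the 5x ratio is only the guideline quoted above, and the function name is mine, not any ZFS tool:

```python
# Rough L2ARC budget from the rule-of-thumb ratio discussed above:
# L2ARC is typically sized at about 5x ARC (roughly, RAM), and the
# ratio can stretch somewhat as RAM increases. Illustrative only.

def l2arc_budget_gb(ram_gb, ratio=5):
    """Upper-end L2ARC size (GB) for a given RAM size, as ratio x RAM."""
    return ram_gb * ratio

print(l2arc_budget_gb(16))   # 80 -> why ~60GB is already the cap at 16GB RAM
print(l2arc_budget_gb(128))  # 640 -> consistent with "at least 512GB" at 128GB RAM
```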

This never ends up working quite the way people envision, because it is hard for a piece of software with limited insight into what is going on to make truly excellent decisions. That's my warning. :smile:
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Regarding file access patterns, other than file streaming (eg video opened directly from a file share), there probably aren't regularly opened files as such. In other words, with few exceptions, if files are opened now, it won't be a great predictor of file accesses in future. So I think, unless more goes on than meets the eye, "learning patterns" are going to be very limited, and I wasn't looking for that. Most access will be "read -> 10 mins to forever -> maybe eventually write" or the reverse. Repeated accesses to the same file in a short time interval won't be so common. Predictable access to other files in the same folder might be (eg a copy or backup session).

So I'm thinking of caching far more because of benefits to handling read/write of very large files, or of sizeable folders of smaller files, and the ability to hold part-opened files quickly available to meet subsequent random or streamed access. That should give faster saving of large files or large folders of small files to the server, mayyyybe reading other files in a folder into cache when repeated access is detected during a copy session, or multiple people watching videos and using cache/RAM for the files being played so the disk isn't thrashing, even if file access itself is not very predictable.

That sort of thing. Basic but helpful.

My performance hopes are, in a way, more about underlying "disk to NIC" performance, meaning low latency for I/O requests, reasonably high I/O rates (reading presumably direct and fast, writing overheads mitigated by fast caches), and not much "freezing" or undue pausing for file locks and the other problems Windows sometimes gets.

A couple of questions kicked off by some of the links above - is CIFS or NFS better for FreeNAS to Windows clients (especially allowing for threading)? Some threads suggest forcing NFS might be better? Also how do people find FreeNAS->Windows client when measured up against similar standalone Windows Server -> Windows client?

[Question edited for clarity after answer below]
 
Last edited:

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
Regarding file access patterns, other than file streaming (eg video opened directly from a file share), there probably aren't regularly opened files as such. In other words, with few exceptions, if files are opened now, it won't be a great predictor of file accesses in future. So I think, unless more goes on than meets the eye, "learning patterns" are going to be very limited, and I wasn't looking for that. Most access will be "read -> 10 mins to forever -> maybe eventually write" or the reverse. Repeated accesses to the same file in a short time interval won't be so common. I'm thinking of caching more for its write benefits, and for its ability to hold part-opened files quickly available for further random or streamed access. So faster saving of large files or large folders of small files to the server, mayyyybe reading other files in a folder into cache when repeated access is detected during a copy session, or multiple people watching videos and using cache/RAM for the files being played so the disk isn't thrashing. That sort of thing. Basic but helpful.

My performance hopes are therefore more about underlying "disk to NIC" performance, meaning low latency for I/O requests, reasonably high I/O rates (reading presumably direct and fast, writing overheads mitigated by fast caches), and not much "freezing" or undue pausing for file locks and other problems Windows sometimes gets.

A couple of questions kicked off by some of the links above - is CIFS or NFS better for FreeNAS to Windows clients (especially allowing for threading)? Some threads suggest forcing NFS might be better? Also how do people find FreeNAS->Windows client when measured up against similar standalone Windows Server -> Windows client?

Use samba ("CIFS") for windows clients.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
BTW, while FreeNAS supports deduplication, we don't (generally) recommend it.

The rule of thumb is that you need 5GB of RAM for each 1TB of storage that will be deduped. This is just an estimate. There is no upper bound.
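That rule of thumb works out as follows. A sketch only; the 5GB/TB figure is just the estimate quoted above, and the helper name is mine:

```python
# Dedup RAM estimate from the forum rule of thumb:
# roughly 5 GB of RAM per 1 TB of storage that will be deduped.

def dedup_ram_gb(deduped_tb, gb_per_tb=5):
    """Estimated RAM (GB) needed to dedup the given number of TB."""
    return deduped_tb * gb_per_tb

# e.g. deduping only a 4 TB subset of disk images/backups:
print(dedup_ram_gb(4))  # 20 -> already more than a 16 GB system has in total
```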

We had a user show up this week. After shutting down his system, he doesn't have sufficient RAM to load his pool. The last I heard was that he was going to take his disks to work, to see if he can load them on a system with 64GB of RAM.


Data
- Usable space is 7.3TB and data physically stored is 5.8TB (7.7TB less 1.9TB Windows dedup saving). The data is made up of 3 distinct types - digitised movies + home movies (1-3GB files = 3.5TB, almost no dedup), disk images/backups (4-40GB files = 1.4TB but 97% deduped), and the rest is a
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
I see you want raw performance. But if you're going over a 1Gb NIC, that will be your bottleneck and limiting factor. All the "cache" in the world probably won't make a noticeable difference to the client at the other end.

You don't need ECC memory, but it's highly recommended to ensure data integrity. If you are going to repurpose the rig you mentioned, you should be OK. Try to use only the SATA ports on the Intel chipset and stay away from the other ports. If you have to reuse the disks the data is currently on, you will need some spare disks to offload that data until you can get it onto the NAS.

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
BTW, while FreeNAS supports deduplication, we don't (generally) recommend it.

The rule of thumb is that you need 5GB of RAM for each 1TB of storage that will be deduped. This is just an estimate. There is no upper bound.

Well, there is an upper bound, it's just hard to quantify, just like "how much RAM do I need" is hard to quantify for the average non-dedup user.

Put differently, I'm contemplating dedup on my 128GB filer that has ~7TB of VM datastores on it. It's probably the sort of situation where dedup would be fine, but you have to be a little careful. If you have 7TB of data stored as 16KB blocks (volblocksize), that implies maybe around 420 million blocks. That is an absolute upper bound. Depending on your ZFS version, the in-core DDT records are 320 bytes. That implies that there's no way the system could need more than 137GB of ARC to hold 420 million DDT records. Looking at more realistic numbers, you should be able to trim that significantly. :smile:
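That upper-bound arithmetic can be reproduced directly (assuming decimal TB and the 320-byte in-core DDT record size stated above):

```python
# Worst-case DDT memory: 7 TB of data stored as 16 KiB blocks,
# 320 bytes per in-core DDT record (version-dependent, per the post).
TB = 10**12
blocks = 7 * TB // (16 * 1024)   # ~427 million unique blocks, absolute worst case
ddt_gb = blocks * 320 / 10**9    # GB of ARC needed to hold every DDT record

print(blocks // 10**6)  # 427  (million blocks)
print(round(ddt_gb))    # 137  (GB upper bound, matching the figure above)
```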

This actually reminds me, I wanted to run a zdb -S ...

and it won't let me? Intriguing.

We had a user show up this week. After shutting down his system, he doesn't have sufficient RAM to load his pool. The last I heard was that he was going to take his disks to work, to see if he can load them on a system with 64GB of RAM.

How the hell did I miss out on that one? 8GB? Sheeeesh.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Looks like there might be a bug in 9.10 that breaks zdb. I have no idea how that passed QA, heh.

Anyways, I did actually run this on the VM filer. I should probably note that the filer doesn't actually have 7TB in use; more like 2.2TB at the moment.

Code:
zdb -S storage3
Simulated DDT histogram:

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    13.6M    217G    174G    185G    13.6M    217G    174G    185G
     2    3.05M   48.8G   41.6G   43.6G    7.07M    113G   97.1G    101G
     4    1.65M   26.4G   25.2G   25.5G    8.07M    129G    123G    124G
     8     368K   5.76G   5.70G   5.71G    3.64M   58.3G   57.7G   57.9G
    16     249K   3.89G   3.88G   3.88G    4.89M   78.2G   78.1G   78.1G
    32    13.0K    209M    206M    206M     508K   7.94G   7.80G   7.83G
    64    2.10K   33.7M   32.8M   33.0M     176K   2.76G   2.69G   2.70G
   128      409   6.39M   6.06M   6.13M    71.6K   1.12G   1.06G   1.08G
   256      401   6.27M   6.05M   6.10M     141K   2.20G   2.13G   2.15G
   512      309   4.83M   4.67M   4.70M     217K   3.40G   3.28G   3.31G
    1K       74   1.16M   1.10M   1.11M    96.1K   1.50G   1.44G   1.45G
    2K       10    160K    130K    136K    27.1K    434M    364M    378M
    4K       66   1.03M   1.03M   1.03M     279K   4.35G   4.35G   4.35G
    8K        2     32K     32K     32K    17.1K    273M    273M    273M
   16K        1     16K      1K      4K    16.4K    262M   16.4M   65.5M
   32K        1     16K     16K     16K    58.1K    930M    930M    930M
   64K        1     16K     16K     16K    65.4K   1.02G   1.02G   1.02G
    1M        1     16K     16K     16K    1.20M   19.2G   19.2G   19.2G
 Total    18.9M    303G    251G    264G    40.1M    641G    574G    591G

dedup = 2.24, compress = 1.12, copies = 1.03, dedup * compress / copies = 2.43



So that's really interesting, 40.1 million blocks reduced to 18.9 million. That means if I were to actually run it out to the full 7TB, it probably still means less than 100 million unique blocks, or 32GB DDT if 320 bytes is still the DDT in-core size, which I think it maybe isn't anymore. Interesting.
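As a quick cross-check, the summary ratios follow from the "Total" row of that output. This is recomputed here from the printed sizes; a sketch of the arithmetic, not zdb internals:

```python
# Recomputing the zdb -S summary line from the "Total" row above:
# referenced: LSIZE 641G, PSIZE 574G, DSIZE 591G; allocated: DSIZE 264G.
ref_lsize, ref_psize, ref_dsize = 641, 574, 591
alloc_dsize = 264

dedup = ref_dsize / alloc_dsize    # data referenced vs actually stored
compress = ref_lsize / ref_psize   # logical vs physical size
copies = ref_dsize / ref_psize     # ditto-block/copies overhead

print(round(dedup, 2), round(compress, 2), round(copies, 2))
# 2.24 1.12 1.03 -> matches the reported summary line
print(round(dedup * compress / copies, 2))  # 2.43
```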

I have to assume that 1 million refcount bucket has to be the zero-filled record. Heh.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,458

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
@maglin, @gpsguy - thanks, that helps.

I see you want raw performance. But if going over 1Gb NIC then that will be you bottle neck, and limiting factor. All the "cache" in the world probably won't make a noticeable difference to the client at the other end. You don't need ECC memory but it's highly recommend to ensure data integrity. If you are going to repurpose your rig you mentioned you should be ok. Try to only use SATA ports on the Intel chipset and stay away from the others ports. If you have to use the discs the data is currently on you will need some storage discs to offload that until you can get it on the NAS.
BTW, while FreeNAS supports deduplication, we don't (generally) recommend it. The rule of thumb is that you need 5GB of RAM for each 1TB of storage that will be deduped. This is just an estimate. There is no upper bound.
Yes, decent raw performance but with good consistency (I haven't got that very reliably right now).

Realistically, I'd like to be reasonably close to what's achievable for my HDDs/NICs/LAN, and not be held back significantly because my CPU/RAM/caching device choices are insufficient to offset the overheads of ZFS. I think that's a reasonable aim.

I guess that translates to not seeing intermittent hangups and freezes, consistency, throughput "good for the type of activity", and decent handling of system/NIC stress when busy. Quick guide to the current usage patterns:
  • Usage is moderately "bursty" and there can be congestion/contention/locking issues if a lot's going on. But for hours or days at a time there might be almost negligible usage. I won't need encryption for 95% of it, maybe not at all, and only LAN access. I'd like to use snapshots/dedup (far more frequent backups kept for far longer) but I'd only enable dedup for folders of disk images/VMs/documents, where very large savings are normal (about 2-4TB uncompressed size). The rest won't benefit. But when all's said and done, it's an enthusiast family's home LAN+server, not MegaCorp Inc :)
  • There isn't much predictability of individual file use. But often specific folders are r/w/rw as a whole, or a large file is gradually read/written, if that helps. Any intense activity is almost all ordinary large file/folder copying to/from the server, which can be 1~100GB, or client-side manipulation of files/folders on the server (hashing, content searching). Some programs write a lot in the background directly to the server when active (bittorrent grabbing new distros, client backups/imaging, Windows update caching, etc). At the same time, movies could be watched or a large ISO/tar.gz uncompressed from a server share. So there could be quite a lot of different threads all doing near-continual reading or writing - it works mostly quite fast on Windows, but with luck I'd like it better on FreeNAS.
Like I said before, I'm mostly torn between hope that decades of work on the *nix side means FreeNAS will be better than my current setup for these loads and not as prone to freezing/slowing, and fear that Windows clients, tuned to 'talk' most efficiently with a Microsoft protocol+server, might be less smooth/efficient under stress when paired with a *nix+Samba server.

I don't mind upgrading the h/w to make up for ZFS needing more resources for good throughput, but I'd like the end result to be good r/w performance even with dedup enabled (on 2-4TB of data). I already plan to increase RAM to the board maximum (32GB @ 2400), but beyond that I'd have to upgrade the CPU+MB+RAM. If I upgrade everything, I'd probably look at an E5-16xx on a Supermicro X10SRi-F with ECC, or similar, but can't afford it for a while. SATA/NIC/CPU are and will stay Intel throughout.

Anyhow, that's my server use. I hope it's enough to give an idea of likely system adequacy and to allow comments on any upgrade needs or blatantly underspecced components, especially CPU/MoBo/RAM/SSD. At the end of the day I just want my final performance mostly down to HDD/NIC throughput, at least no worse than it is now (with Windows on the server), and not bottlenecked by ZFS demands on the CPU/RAM/cache.
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,458
This actually reminds me, I wanted to run a zdb -S ...
I know this is veering OT for this thread, but is there a way to do something similar for a dataset? Since you can enable/disable deduplication on a per-dataset basis, it'd be good to see what (if any) benefit it would have on any given dataset. But when I try zdb -S, this is what I get:
Code:
[root@freenas2] ~# zdb -S tank/newsme
Dataset tank/newsme [ZPL], ID 465, cr_txg 35556, 2.11T, 1690 objects
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Not that I can think of. The problem is that deduplication is _enabled_ on a per-dataset basis, but the deduplication happens across the entire pool. This means that if you have datasets A, B, and C, and A and B have dedup enabled, a block that's referenced in B might also be referenced in A, but C will have its own copy.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,458
Not that I can think of.
Bother. I have a few datasets that store backups, and one in particular seems like it would be a good candidate for dedup--it stores four weeks' worth of backups, each set consisting of a full backup weekly followed by incrementals daily. If the new server has 128 GB of RAM, I figure I should be able to safely enable dedup and reclaim a TB or so until I can get some new drives into it.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
  • Usage is moderately "bursty" and there can be congestion/contention/locking issues if a lot's going on. But for hours or days at a time there might be almost no/negligible usage. I won't need encryption for 95% of it and maybe not at all, and only LAN access. I'd like to use snapshots/dedup (far more frequent backups for far longer) but I'd only enable dedup for folders of disk images/VMs/documents, where very large savings are normal (about 2-4TB uncompressed size). The rest won't benefit. But when all said, it's an enthusiast family's home LAN+server, not MegaCorp Inc :)
  • There isn't much predictability of individual file use. But often specific folders are r/w/rw as a whole, or a large file is gradually read/written, if that helps. Any intense activity is almost all ordinary large file/folder copying to/from the server, which can be 1~100GB, or client-side manipulation of files/folders on the server (hashing, content searching). Some programs write a lot in the background directly to the server when active (bittorrent grabbing new distros, client backups/imaging, windows updates cache, etc. At the same time movies could be watched or a large ISO/tar.gz uncompressed from a server share. So there could be quite a lot of different threads all doing near-continual reading or writing - it works mostly quite fast on Windows but if lucky I'd like it better on FreeNAS.
I'm wondering if dedup is even going to be much, if any, of a benefit, and more of a system resource hog, if you are talking about such a small amount of files that may need deduplication. I would probably just get that idea out of your head. I think the space savings vs. the setup and extra hardware needed, coupled with wanting to run a pretty active jail that is going to require some RAM of its own, is reason enough to just let the files take up a few extra TB of disk space.

There is no reason you can't run FreeNAS now. I'm not positive, but I think you were thinking of running a SLOG SSD for cache; with only 16GB of RAM it's not going to be very effective (this is from reading, not first-hand). I would just install FreeNAS on either an SSD or USB drive and give it a run. Use a basic standard install with no cache device and probably a striped mirror array, since you only have 4 disks. As for NFS vs. CIFS: with your limiting factor being the 1Gb NIC, I would go CIFS, as it's easy to set up and will run just fine. When I transferred about 6TB over my 1Gb NIC I averaged around 95MB/s over the transfer using CIFS. The only slowdowns were with thousands of small files.

I would look into getting some more HDDs. I'm currently running those cheap Seagate 8TB archive drives. They are slow, but still faster than my 1Gb NIC, so the cost/TB was the deciding factor for me. I run my jails on a 3-disk RAIDZ1 that I'm working on moving to a striped mirror. I plan to have a hot spare for all my pools. Performance has been excellent and has far exceeded my expectations. I too was looking into dedup and SSD SLOG/L2ARC like most of the big guys here are running. That requires a lot of setup and RAM to work properly, but I'm maxed out at my 32GB of RAM, as that was the course I chose with the MB I have. BTW, with no clients on my Plex server, my jail is using 8.5GB of RAM. That is something you have to consider as well. I also have some outside people dumping data into the NAS via FTP for their backups. Also, if you have people requesting shows/movies, you can set up something like HTPC Manager and they can use that to request them. I imagine most of the extra work you currently do on your file system can be automated, freeing up more time for you to do other things.

I look forward to seeing how this works out for you. I'm positive you will be happy once you get it up and running.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Thanks! That's quite helpful!
 