A new NAS setup / advice on feasibility of first TrueNAS setup

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105
@Arwen: I'll get back to you on most of the things you said in your last two posts. Your posts deserve a well thought through reply and I feel I have some reading up to do before I can give you a meaningful reply.

As for the bricked system, yeah I could do that and thank you for the advice! I will not be doing that, for many reasons; however, they're personal and I prefer not to discuss them in an open forum. If you'd like to know why, I don't mind sharing the reasons through PM though, just ask! You've been a great help!

On the topic of fragmentation:
Ah ok, then it makes sense. So just to verify I understood what you said:
- A virtual block device like an iSCSI LUN probably has a lot of continuously changing data and thus fragmentation logically becomes an issue.
--> This is then mitigated by a rule of thumb stating 50% usage, where it won't become such an issue.
--> Another way of mitigating this might be to stop the VM during a service window and, in my use case, move the iSCSI LUN away and back, if I understand you correctly. I mean, for me I could shut down the VMs in the night for a few hours and do this. My LUNs will not be too big anyway.
- Searching data and an empty slot to put data is slowed considerably by fragmentation in ZFS, and thus for "fragmentation heavy" workloads you need a lot of free space to mitigate this as much as possible (see the quick check below).
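If I'm reading the docs right, the pool-level number to keep an eye on for this is ZFS's own fragmentation metric. A quick check would look something like this (the pool name "tank" is just an example, I don't have a system yet):

Code:
# Shows free-space fragmentation (FRAGMENTATION) and how full the pool is (CAPACITY),
# the two values that matter most for block-storage workloads.
zpool list -o name,size,allocated,free,fragmentation,capacity tank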

It seems to me that ZFS itself is ill suited for iSCSI then. I mean, iSCSI is a virtual block device. If and when I start using iSCSI, I do not want this. Basically, when using iSCSI the consumer is responsible for any data in the block device. You want the block device to claim a piece of disk space on something that is preferably fast and has some disk level redundancy. If you need further redundancy you set up replication and a second redundant iSCSI endpoint.
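As I understand it so far (purely from reading, so take this as a guess), in ZFS terms that second endpoint would be fed by snapshot replication of the zvol rather than SAN-level mirroring. Pool, dataset and host names below are made up:

Code:
# Take a snapshot of the zvol backing the LUN, then send it incrementally
# to a second box that exposes its own copy over iSCSI.
zfs snapshot fast/vm-lun0@rep-today
zfs send -i fast/vm-lun0@rep-yesterday fast/vm-lun0@rep-today | ssh backup-nas zfs recv -F fast/vm-lun0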

Often for iSCSI we would use RAID 5 or 6 for speed and some parity. Then have the SAN replicate the data to another DC with the same setup over a 10Gb dark fibre. VMware would then switch (which almost never happened) to the backup iSCSI endpoint if the main one died. In my case for anything iSCSI

Is there really no way to have storage in TrueNAS if ZFS just doesn't fit the use case for the pool you're making?

Personally I'm against abstract rules to simplify things, because as can clearly be seen around the internet, too much regurgitation happens without people understanding what they are regurgitating. Thank you for taking the time to explain it!

On the BPR:
Yeah, that would be great. Basically it's just moving references around. Not sure how that would fix fragmentation though; it would probably fix many of the search issues. This of course is pure conjecture ;)

On the iSCSI part:
Well my question is more: if that is the limitation, why even use ZFS for that use case instead of traditional methods? Since I do not have any actual experience with ZFS yet, I'll go deeper into this once I have a test system and have performed some comparisons.

Facts:
- You have to rely on the hardware RAID card and it's drivers for telling you, and potentially recovering from, bad disk blocks or bad disks.
--> Agreed, that is what they're made for and I do see how this goes against the ZFS philosophy. Looking at it as an architect, I would say that is a consequence of the abstraction. You add an extra level of indirection which has pros and cons. I do see how heavy the cons are though, even if they are difficult to quantify.
- If the RAID card fails, you generally have to replace with a similar model. (Or server with similar model of RAID card.) You can't just put the disks into another server and import your ZFS pool.
--> Yeah, agreed, that is a huge disadvantage; for the DCs we'd have a replacement lying on the shelves in a preconfigured state
--> Not just a similar model, also the exact same firmware and configuration
--> Hardware RAID has some undeniable advantages though, which I will not be going into further since we've concluded it is a bad idea in this case and for my use case
- OS side SMART won't work
--> Yeah, obviously, since you're abstracting the hardware. SMART is handled by the RAID card. (Unless you use it in IT mode, in which case it should work fine; see the smartctl sketch after this list.)
- RAID cards can and do limit writing speeds. This is because ZFS bundles writes into transactions that will write across multiple disks. They can be larger than a hardware RAID card's battery backed up cache memory.
--> Hmm, yes, I hadn't considered that. I would formulate it differently, however I do agree. In the case of ZFS and how it handles data writes it'll definitely impact performance SEVERELY!
- RAID cards can and do limit reading speeds. ZFS can trigger reads across all your data disks. This is especially noticeable during a scrub, which can read for hours. Even days. (But a ZFS scrub can't fix anything because it does not control the redundancy... see Note 1)
--> Well, yeah, in the proposed theoretical I assumed data scrubbing on ZFS would be disabled, since that is the RAID card's responsibility.
- Last, ZFS will run fine on the hardware based RAID card, UNTIL IT DOESN'T!
--> Now we don't want that do we :P
- There have been KNOWN cases of ZFS finding bugs in hardware RAID card firmware, (or in user implementations), the HARD way, meaning data loss. For example, not setting the hardware RAID card to automatically change the write cache from write-back to write-through during battery failure. Then a crash occurs and you've lost data that was supposed to be "securely written".
--> Yes, however I would like to argue that a misconfigured ZFS system would probably be just as bad. Furthermore, the code base for ZFS (haven't dared to take a peek yet) is probably extremely complicated and thus has a lot of room for possible bugs. Not saying it's bad or not mature, this is just a fact of software.
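On the SMART point above, for completeness: with the controller in IT mode the disks show up as plain SAS/SATA devices, so standard smartmontools should work directly. The device name is just an example; behind a real RAID controller you'd need the vendor passthrough options instead:

Code:
# Full SMART report straight from a disk sitting behind an IT-mode HBA
smartctl -a /dev/sda
# Behind e.g. a MegaRAID-based card you would have to address the physical disk explicitly:
# smartctl -a -d megaraid,0 /dev/sda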

Other software vendor:
So yes, I assume you're talking about Synology in this case. This is a setup I am currently using. However, in the past I have had a volume crash due to this, leaving me with a read-only version of my data (12TB at the time). Synology was/is convinced that this is due to memory issues, even though numerous tests on the memory by me as well as Synology have come back clean. Just to be clear, I've been using Synology units for 10 years now and have had 2 "fatal" issues, both of which support helped me resolve. The other one being the stupid C2000 bug, which isn't really Synology's fault.

Note1:
Cool. I definitely see the benefits of ZFS and I think TrueNAS is the best option for free NAS software. However, everything has cons and I would like to be as well informed as I can be before handing the keys to the missus :P

The rest I'll come back to. Again, thanks, you've really helped me jump-start my trek into ZFS and TrueNAS!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Regarding fragmentation on iSCSI, especially with regards to VMware/virtualization, you're entirely right that the copy-on-write nature of ZFS seems to be at odds with this. It's definitely "one of" if not "the most" challenging workloads to run at good levels of performance. There are a couple of well-written resources from @jgreco that dig into the reasons behind this and the mitigating factors, but you've already come to the conclusion here that it's largely driven by fragmentation.



On the writes side, having lots of free space is the way to mitigate the poor performance; on the reads side, you throw lots of ARC and L2ARC at it. As of TrueNAS 12, you can also add some SSDs for metadata which help a bit on both sides; however, that requires a bit of intentional pool design and ensuring that you have physical space for sufficient drives to get both the capacity (I'd suggest 1% of your pool size for VMware) and redundancy (mirrors at a minimum and a 3-way mirror wouldn't be unreasonable) - and of course, if "money is no object" you simply make your data vdevs faster by using SSDs instead of spinning disks, which removes the seek times that fragmentation causes.
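For reference, adding that metadata tier after the fact is a one-liner on OpenZFS; the pool and device names below are placeholders, and the special vdev must be redundant because losing it means losing the pool:

Code:
# Mirrored special (metadata) vdev on a pair of SSDs
zpool add tank special mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B
# Optionally also send small blocks (not just metadata) of a dataset to the SSDs
zfs set special_small_blocks=16K tank/vmware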

For the storage goals stated in your original post though, it's quite a mixed bag, and suggests a two-pool design in my mind.

- Reliable backups for my important data set (currently have this set up to 2 external Synology units and to an Azure share; all of these are of course encrypted)
- Shares for Documents/Photos/Audio/Video/Binaries
- At least 30TB of usable storage capacity with at least a 1 disk tolerance level
- Plex server

This should work quite well as RAIDZ2, since you're not generally going to demand huge amounts of instantaneous performance for these workloads.

- iSCSI
- Docker/Kubernetes capability
- VMWare (or other virtualization) capability

These on the other hand would benefit from being on mirrors - mirrored SSD, even, if the cost-to-capacity ratio is doable. Drives like the 860 EVO or WD Blue (SATA) are just over USD$100 for 1TB, and unless you really plan to bang on them should have enough endurance to survive most "home-lab" workloads.
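Purely as a sketch of that two-pool split (all pool names and device paths are placeholders, and TrueNAS would normally do this through the UI rather than the shell):

Code:
# Bulk pool: single RAIDZ2 vdev of spinning disks for shares, backups and Plex
zpool create bulk raidz2 sda sdb sdc sdd sde sdf sdg sdh
# Fast pool: striped mirrors of SATA SSDs for iSCSI, VMs and containers
zpool create fast mirror sdi sdj mirror sdk sdl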
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@nemesis1782
It seems to me that ZFS itself is ill suited for iSCSI then. I mean, iSCSI is a virtual block device. If and when I start using iSCSI, I do not want this. Basically, when using iSCSI the consumer is responsible for any data in the block device. You want the block device to claim a piece of disk space on something that is preferably fast and has some disk level redundancy. If you need further redundancy you set up replication and a second redundant iSCSI endpoint.
Yes, in my opinion, ZFS is not suited to iSCSI / zVol work. It can work, especially with striped, mirrored NVMe vdevs. But zVols were probably added initially to ZFS for swap & dump volumes in Solaris 10. Then it was found they could be used for other purposes.
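For context, a zVol for iSCSI is just a dataset created with a fixed size; roughly like this, with example names and sizes:

Code:
# Sparse 500G zvol with a 16K block size, to be exported as an iSCSI extent
zfs create -s -V 500G -o volblocksize=16K fast/vm-lun0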

Is there really no way to have storage in TrueNAS if ZFS just doesn't fit the use case for the pool you're making?
The original FreeNAS had only UFS with FreeBSD mirroring. When ZFS came into the picture, UFS and any other RAID was phased out.

In some ways, TrueNAS, Core or Scale, won't fit many NAS uses. There are people who need 2 disks in a mirror and don't want to bother with a "real" server, (meaning 8GB of memory, separate boot device, limited hardware support). Or people that want a very specific hardware & software configuration. (For example, I built my own media server, because I did not find a "package" that did what I wanted it to do.)

--> Yes, however I would like to argue that a misconfigured ZFS system would probably be just as bad. Furthermore, the code base for ZFS (haven't dared to take a peek yet) is probably extremely complicated and thus has a lot of room for possible bugs. Not saying it's bad or not mature, this is just a fact of software.
Absolutely. We have had people mis-configure ZFS and lose data. Plus, there have been several data loss bugs in the last few years. They did not last long; as soon as they were discovered, they were squashed. (And in one case, special software was written to recover the lost data.)


There are some other "design flaws" in ZFS. For example, encryption was not initially planned or included. Both Oracle ZFS and OpenZFS had to improvise. Nothing dangerous to the data, but if encryption had been designed in, some things would have been easier and cleaner.

Other "design flaws" have been fixed. Oracle ZFS had a problem where deleting a large Dataset or zVol could take a long time, and potentially cause normal access problems. One of the first features added to OpenZFS was "async destroy", in which any deleted Datasets or zVols have their free space incrementally freed up, in the back ground. Even rebooting or an out right crash in the middle of a back ground free won't cause problems. It simply resumes where it left off. Oracle ZFS continued to have this "problem" for years after OpenZFS fixed it.
 

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105
@HoneyBadger: Thank you for the links, I'll add them to the pile :P I might need a zPool just for the references to things to be read once I get TrueNAS set up :)

On the writes side, having lots of free space is the way to mitigate the poor performance; on the reads side, you throw lots of ARC and L2ARC at it. As of TrueNAS 12, you can also add some SSDs for metadata which help a bit on both sides; however, that requires a bit of intentional pool design and ensuring that you have physical space for sufficient drives to get both the capacity (I'd suggest 1% of your pool size for VMware) and redundancy (mirrors at a minimum and a 3-way mirror wouldn't be unreasonable) - and of course, if "money is no object" you simply make your data vdevs faster by using SSDs instead of spinning disks, which removes the seek times that fragmentation causes.

For the storage goals stated in your original post though, it's quite a mixed bag, and suggests a two-pool design in my mind.
It's been/being a steep learning curve. So color me intrigued. I've already abandoned my initial plans and realized that a few large disks in one pool is not the way to go! Now looking to acquire a few (hopefully 23x 2TB SAS 7.2k disks, which I hope to hear back on) smaller disks. I'll add some SSDs for certain things (don't think I'll use L2ARC because I only have 64GB of memory). Then I'll create pools in different configurations according to the required use case. Much of the data is video which is written once and never changed, so little to no fragmentation.

This should work quite well as RAIDZ2, since you're not generally going to demand huge amounts of instantaneous performance for these workloads.

These on the other hand would benefit from being on mirrors - mirrored SSD, even, if the cost-to-capacity ratio is doable. Drives like the 860 EVO or WD Blue (SATA) are just over USD$100 for 1TB, and unless you really plan to bang on them should have enough endurance to survive most "home-lab" workloads.
This is close to what I'm thinking of. However for the high performance set I was thinking 2x2 or 3x2 (2TB SAS 7.2k disks) mirror vdevs in a zPool. A small L2ARC SSD, a mirrored 240GB SSD vdev for a Fusion pool, a mirrored 240GB SSD vdev for ZIL/SLOG and 2x 1TB Samsung 860 EVO out of my Synology as a cache. This is because I have 2 workstations/desktops I want to provide iSCSI to once I upgrade to 10Gbe.
 

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105
@Arwen: again, thanks for the thoughtful insights. Glad at least a few can agree it's not all roses and sunshine :) Like I read in a nice blog post: ZFS is powerful, but it ain't magic!

Of course I definitely see the benefits ZFS can bring or I wouldn't still be here :) Well apart from the interesting and challenging conversations that is.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
@nemesis1782 while no one will ever suggest you store production data on it, you can very easily whip up a virtual-TrueNAS in a type 2 hypervisor (VMware Workstation, VirtualBox, etc) and set up some 10G vdisks to play with the layouts, terminology, etc.

Responses below, some questions or statements highlighted in bold so that they aren't missed.

It's been/being a steep learning curve. So color me intrigued. I've already abandoned my initial plans and realized that a few large disks in one pool is not the way to go! Now looking to acquire a few (hopefully 23x 2TB SAS 7.2k disks, which I hope to hear back on) smaller disks. I'll add some SSDs for certain things (don't think I'll use L2ARC because I only have 64GB of memory). Then I'll create pools in different configurations according to the required use case. Much of the data is video which is written once and never changed, so little to no fragmentation.

The L2ARC here will probably not be as valuable; not because of lack of RAM, but because your read pattern won't really fit it. Unless you plan to watch the same videos over and over (unlikely, unless you have small children) and the read speed from disks is insufficient (unlikely in any case) then there's not really a point in burning your SSD program/erase cycles trying to cache videos on NAND.

Where an L2ARC could help here is as "metadata only" if you have a lot of files. Speeding up those reads could help things like directory listings and folder navigation. This is safer than using special vdevs, because there's no need to make the L2ARC fault-tolerant. If it fails, you simply go back to reading from vdev.
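That "metadata only" behaviour is a per-dataset property; something like this, with an example dataset name:

Code:
# Cache only metadata (not file contents) from this dataset in the L2ARC
zfs set secondarycache=metadata bulk/video
# Valid values for primarycache/secondarycache are: all | metadata | none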

This is close to what I'm thinking of. However for the high performance set I was thinking 2x2 or 3x2 (2TB SAS 7.2k disks) mirror vdevs in a zPool. A small L2ARC SSD, a mirrored 240GB SSD vdev for a Fusion pool, a mirrored 240GB SSD vdev for ZIL/SLOG and 2x 1TB Samsung 860 EVO out of my Synology as a cache. This is because I have 2 workstations/desktops I want to provide iSCSI to once I upgrade to 10Gbe.

Devices chosen for SLOG have specific requirements in order to perform well, and special does as well to a lesser extent. The link in my signature helps dig into it a bit but the short version is that SLOG should ideally be an NVMe device with power-loss-protection for in-flight data, such as an Intel DC series or NVRAM-based PCIe card. Special vdevs are less stringent but still need to be able to handle a fairly mixed read-write workload at small recordsizes. Did you have specific SSDs in mind?
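If you do end up with suitable devices, attaching them is straightforward; the device paths below are placeholders and should be power-loss-protected NVMe or better:

Code:
# Mirrored SLOG (log vdev) for synchronous writes
zpool add fast log mirror /dev/nvme0n1 /dev/nvme1n1
# Only sync writes use the SLOG; check what your datasets/zvols are set to
zfs get sync fast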

The very large L2ARC cache there of 2TB would almost certainly function better as a dedicated 2x1T mirror pool, as it would be challenging to have it warm up properly over time. Default fill rate will only allow the cache to grow at 8MB/s; while this can of course be increased there are tradeoffs to it, and unlike ARC, which works based on "most frequently used" and "most recently used", the L2ARC is a simple, dumb ring buffer that's based on "first-in, first-out" - so you don't necessarily get as much benefit as you want from it.
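That fill rate is the OpenZFS module parameter l2arc_write_max (bytes per feed interval, 8 MiB by default on Linux/SCALE); shown read-only here just as a pointer:

Code:
cat /sys/module/zfs/parameters/l2arc_write_max     # default 8388608 (8 MiB)
cat /sys/module/zfs/parameters/l2arc_write_boost   # extra fill headroom while the cache is still cold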

Lastly, while it's possible to provide sustained 10Gbe reads if you're able to hit ARC (or even L2ARC if it's fast enough) - it's very challenging to get it from cache misses and reading off of vdevs. Trying to get sustained 10Gbe writes often demands pools with either dozens of spinning-disk vdevs (the 24x2T disks for your other pool might suffice) or a good number of SSD-based ones (an 8x2-wide SSD pool of mine put through around 1.9GB/s when it was empty) as well as a very fast SLOG device if you want those to be synchronous ("safe") writes. Optane devices can handle it at larger records, but to hit the level of 1GB/s writes no matter the size or duration, you're probably going to be in the world of NVDIMMs and all the "special motherboard configuration" those require. There are a few users here with them, but system vendors often want you to purchase complete systems (less drives) in order to validate NVDIMM compatibility.

Hopefully this has helped a bit, and I haven't rambled on too much, but I'm sure as with many things ZFS it's illuminated a bunch more rabbit holes that can be explored. The amount of knowledge out there in the field is vast - so I'm glad to see you're not only willing but eager to plumb its depths.

Cheers!
 

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105
@HoneyBadger Yeah, I read about the virtualization. Having ESXi run on the hardware and virtualizing TrueNAS eventually is an option I'm playing around with. But before I do that there is a lot of testing and reading to go through. There are also quite a few time constraints, since my employer expects me to do some work during the day. So annoying ;) and I have a family which does need my attention at times. Currently I have a few hours a day to "waste" on this and those are split between reading documentation and replying to the posts. Which, since that needs to be meaningful and requires some thought and reflection, takes a big chunk of time.

I do feel that my knowledge on the subject is increasing rapidly and that is all due to the interaction you (@HoneyBadger), @Arwen, @sretalla, @jgreco and the ones I forgot provide. So thank you for that!

The L2ARC here will probably not be as valuable; not because of lack of RAM, but because your read pattern won't really fit it. Unless you plan to watch the same videos over and over (unlikely, unless you have small children) and the read speed from disks is insufficient (unlikely in any case) then there's not really a point in burning your SSD program/erase cycles trying to cache videos on NAND.
Yes, this was also my conclusion. For this part of the workflow.

Where an L2ARC could help here is as "metadata only" if you have a lot of files. Speeding up those reads could help things like directory listings and folder navigation. This is safer than using special vdevs, because there's no need to make the L2ARC fault-tolerant. If it fails, you simply go back to reading from vdev.
Yes, I understand the risk of splitting the metadata from the data. However, I do think the benefit for me would outweigh the added risks. Of course this HAS to be at least a mirror, since the metadata is the key to identifying what your data is and means; I might even go with a 3-way mirror, depending on the size of the Fusion pool required. I would not imagine it being too large though.

Also, I would put my documents and very important data on a separate zPool which will be as reliable, and thus simple, as possible.

Devices chosen for SLOG have specific requirements in order to perform well, and special does as well to a lesser extent. The link in my signature helps dig into it a bit but the short version is that SLOG should ideally be an NVMe device with power-loss-protection for in-flight data, such as an Intel DC series or NVRAM-based PCIe card. Special vdevs are less stringent but still need to be able to handle a fairly mixed read-write workload at small recordsizes. Did you have specific SSDs in mind?
Yeah, I saw "special" mentioned somewhere before. My plan is still very much based on ignorance, so the cheapest :P Realistically, I'll come back to this; I do not have sufficient knowledge on the subject yet. Thank you for pointing it out and providing the information to deal with it.

The very large L2ARC cache there of 2TB would almost certainly function better as a dedicated 2x1T mirror pool, as it would be challenging to have it warm up properly over time. Default fill rate will only allow the cache to grow at 8MB/s; while this can of course be increased there are tradeoffs to it, and unlike ARC, which works based on "most frequently used" and "most recently used", the L2ARC is a simple, dumb ring buffer that's based on "first-in, first-out" - so you don't necessarily get as much benefit as you want from it.
Hmm. Alright. Thank you I'll think about this, read up and get back to you.

Lastly, while it's possible to provide sustained 10Gbe reads if you're able to hit ARC (or even L2ARC if it's fast enough) - it's very challenging to get it from cache misses and reading off of vdevs. Trying to get sustained 10Gbe writes often demands pools with either dozens of spinning-disk vdevs (the 24x2T disks for your other pool might suffice) or a good number of SSD-based ones (an 8x2-wide SSD pool of mine put through around 1.9GB/s when it was empty) as well as a very fast SLOG device if you want those to be synchronous ("safe") writes. Optane devices can handle it at larger records, but to hit the level of 1GB/s writes no matter the size or duration, you're probably going to be in the world of NVDIMMs and all the "special motherboard configuration" those require. There are a few users here with them, but system vendors often want you to purchase complete systems (less drives) in order to validate NVDIMM compatibility.
Oh yeah, I'm not expecting to fill 10Gbe. However I do expect to easily go over 1Gbe in most cases (small files which are cold are of course not part of that). As for the detail, I think I understand most of it. I'll get back to you :P

Hopefully this has helped a bit, and I haven't rambled on too much, but I'm sure as with many things ZFS it's illuminated a bunch more rabbit holes that can be explored. The amount of knowledge out there in the field is vast - so I'm glad to see you're not only willing but eager to plumb its depths.
Please do ramble :) This is very helpful! And yes, when going into this I thought, well, it's just another platform, let's do this! I mean it's a system for simple users you can buy out of the box, shouldn't be that difficult right? Safe to say I'm realizing that I definitely **** the bed on that one...

I do wonder if a TrueNAS, even a preconfigured one (without disk arrays set up), is something that would be usable/safe in the hands of an average consumer. I mean, I see myself as being fairly experienced and a power user when it comes to IT related topics and I'm struggling to get a handle on it, which does not happen that often. (Of course I do love to overcomplicate things :P )

Cheers mate!
 

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105
Hi Guys, been busy with work and wife mandated projects ;)

I now have a nice R720XD sitting on my desk:
- 2x Xeon
- 64GB RAM
- 12x 3TB SAS disks (with 3 extra on a shelf in case one of em goes bad)
- 2x SATA SSD for boot drive (128GB)
- 2x SATA SSD which will go into a mirror to run some VMS's and persist some storage to kubernetes (1TB)
- 2x PCIE SSD for a later to be determined purpose (256GB)
- IDRAC7 Enterprise
- H710 Mini (since that's what it came with). I flashed it to IT mode using https://fohdeesha.com/docs/H710-D1/ which, according to the info, seems to have been successful (quick sanity checks below)
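For anyone following along, a couple of commands that should confirm the crossflash took, assuming the LSI sas2flash utility is available on the system:

Code:
lspci | grep -i -E 'lsi|sas'   # the card should now show up as an LSI/Broadcom SAS HBA rather than a MegaRAID device
sas2flash -listall             # firmware listing; an IT-mode image is what ZFS wants to see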

I'm planning to connect the boot disks to the SATA2 ports of the R720 designated for the CD drive and TBU. However, this isn't really working.

When I start the installation for TrueNAS SCALE it detects the two PCIe SSDs, however it will not detect the 2 SATA SSDs which are connected to the SATA2 ports. They use the 4-pin power connector.

What I do get is an unknown device in the system drive selection part of the installation. Anyone have an idea how to resolve this? I'd love to use these 2 SATA2 ports since that means I do not need to sacrifice any other slots.
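In case it helps to diagnose, standard checks from the installer's shell would show whether the kernel sees the SATA SSDs at all (if nothing appears, it points to a firmware/port issue rather than a driver one):

Code:
lsblk -o NAME,SIZE,MODEL,TRAN     # lists block devices with their transport (sata/sas/nvme)
dmesg | grep -iE 'ahci|ata[0-9]'  # shows whether the onboard SATA controller and links come up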

After that is done, I'll add one more mini-SAS to 4x SATA cable and another HBA so I have 12 more connections, which will accommodate the two 1TB SSDs and 5x 6TB SATA drives, using the two vacant backplane power connectors. I assume that means that the driver for the SAS9207-8i in IT mode is missing from the installer.

This way the server will support 26 disks which should be enough for now :P

The power supply is 1100W if anyone is concerned, so not going to max that anytime soon...

UPDATE1: I have tried connecting just 1 SSD, which gives me the same result. The SSDs have been tested and are confirmed working. Removing said SSD(s) does not remove the error in the installer for the unknown device. I assume the driver for the H710 mini (LSI

UPDATE2: Just as a test I added a disk into the front and that is detected by the installer. So the H710 mini is doing its job and is indeed in IT mode. Which is good news! Another thing I noticed is that it seems to be trying to mount the unknown device as CDROM.

UPDATE3: Some more research may indicate that the R720XD I have has the SATA ports disabled, which would explain my troubles in finding the settings in the BIOS.

Any help or insights would be appreciated! I'll update as updates are available.

UPDATE4: So yeah, no more help needed; the SATA ports have been disabled on the R720XD for whatever reason.

I'll have to boot from the SSDs connected to an HBA with a boot ROM, or maybe use the PCIe SSDs for that purpose, which feels like a waste...
 
Last edited:

ddaenen1

Patron
Joined
Nov 25, 2019
Messages
318
Thnx, this is indeed what I found and meant. Still not sure if I want to do this or just sell the card and buy a cheap LSI though.

For my initial test setup I will leave it as is. Once I go to production I will of course have to do something about this.

Here is my simple take on all of your questions/concerns, not being a subject matter expert, but I am running my third FreeNAS/TrueNAS server with SMB shares, backup, Plex and last but not least Nextcloud that functions for both professional and personal use, after having tried consumer solutions such as Netgear and Synology:

Unlike many other NAS software out there, TrueNAS is one stable piece of software. I have had one crash due to my own ignorance of using a thumbdrive as the OS boot pool on one of my first installs, despite the warnings. ZFS has its pros and cons but has saved me in the past from fatal data loss by notifying me well on time to replace a disk, which is quite easy once you get the hang of things.

I do strongly recommend a HBA instead of HW-RAID so that ZFS fully manages and monitors the pool(s). I have had a Perc H200 flashed in IT mode and now I am on an LSI9211-8i HBA. My system is completely homebuilt with mainly server bits and pieces from a local 2nd hand sales platform here in BE and runs great.

As I mentioned, I am not a subject matter expert but I have learned a lot by doing. Feel free to ask away. That's what this forum is for.

Cheers, Dominique
 
Last edited:

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105
Nice to hear from someone local(ish); I'm from NL myself. This is my first self-built NAS.

I do strongly recommend a HBA instead of HW-RAID so that ZFS fully manages and monitors the pool(s).
I have flashed my H710 to IT mode, which I have now confirmed working. Will also be adding a LSI9211-8i HBA once I receive the molexes I need to make the power cables to tap power from the free backplane power connectors.

Unlike many other NAS software out there, TrueNAS is one stable piece of software. I have had one crash due to my own ignorance of using a thumbdrive as the OS boot pool on one of my first installs, despite the warnings. ZFS has its pros and cons but has saved me in the past from fatal data loss by notifying me well on time to replace a disk, which is quite easy once you get the hang of things.
TBH I like Synology. Their support is stellar. I do still advise them to anyone needing a NAS but not wanting the knowhow. For me, I have a fairly beefy one, a DS2415+, and I've been feeling limited by it for a couple of years now. Also, if you leave them alone and don't fiddle with them they're stable for a long time. I have one running in a friend's business which is now 13 years old; all I do is occasionally run updates and check the logs.

As I mentioned, I am not a subject matter expert but I have learned a lot by doing. Feel free to ask away. That's what this forum is for.

Cheers, Dominique
One question ATM. Well, many questions, but one that is relevant :P

What do you use for your system/boot device now?

I read a few threads advising not to use a HBA to run system/boot devices. With my current server, an R720XD, I do not have local SATA ports to boot off; they're disabled for some weird Dell-certified reason... The only other option I have is a PCIe M.2 card with SSD, which is a bit of a waste for two boot drives.

Thnx for weighing in!
 

ddaenen1

Patron
Joined
Nov 25, 2019
Messages
318


I use 2x 240GB SSDs configured in a ZFS mirror plugged directly into the mobo. The HBA is connected to a SAS backplane and I run 4x 2TB SAS drives.

Did you look at this: https://www.youtube.com/watch?v=azSPpVf_zFc ?

Another thing: I also have an old Synology RS214 which I use for local document storage, backup and a low level log-server. The thing is, while it is solid for years without a glitch, it is not expandable, I find the GUI getting slower and slower after every update and last but not least, I always had the feeling that DSM makes things difficult just for NAS functionality. I had a Netgear RN212 before that which was way easier in configuration and usage. I just got rid of it because I installed a rack in the basement and want to stash everything inside rather than a desktop version.

I am very happy with TrueNAS. It is solid, fast and whilst some tinkering might be needed once it runs, it runs. I particularly like the combination of data safety and storage functionality.
 
Last edited:

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105

Did not see the video, but yeah, I saw the backplane. Where I can, I try to limit my budget and get the optimum out of what I have. Actually the backplane is connected in a different way than I thought; might give that another thought.

Also, as far as I know the backplane, or rather the SAS connector which is used for it, is connected to the H710 mini. So it would still not give me a disk not connected to an HBA.

Another thing: I also have an old Synology RS214 which I use for local document storage, backup and a low level log-server. The thing is, while it is solid for years without a glitch, it is not expandable, I find the GUI getting slower and slower after every update and last but not least, I always had the feeling that DSM makes things difficult just for NAS functionality.

True, the main reason I'm moving to TrueNAS is more control and expandability. The GUI does indeed get slower over time/updates. I personally like the GUI and the offering of functionality in the Synology.

I had a Netgear RN212 before that which was way easier in configuration and usage. I just got rid of it because I installed a rack in the basement and want to stash everything inside rather than a desktop version.

Never had a Netgear. I use their switches though and have always been impressed by price vs functionality.

I am very happy with TrueNAS. It is solid, fast and whilst some tinkering might be needed once it runs, it runs. I particularly like the combination of data safety and storage functionality.

Thnx. That's good to hear, still in the tinkering phase though. And of course I have to make it as difficult as possible on myself :P
 

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105
Hi guys, after running into some unexpected issues with the R720XD (the SATA ports as well as the mini-SAS port on the mainboard are disabled), I got another HBA, which now has 4 SSDs connected to it.

I installed the latest TrueNAS SCALE on 2x 128GB SSDs.

On boot it halts after the CDROM and some errors come by: the zpool cannot be mounted! After some googling I figured out that it basically boots too quickly and the HBAs haven't been detected yet :eek:

"Solved" this by running zpool import 'boot-pool' and then exit. After this it boots! The UI looks amazing and comming from DSM it is crazy responsive. Also comming from Synology it's wierd having my NAS staying at 1% CPU...

I decided to create my first VDEV (well not counting the one for the boot-pool).
[screenshot]


I give my OK for losing all data and off it goes.
[screenshot]


At least for a number of seconds that is :mad:
[screenshot]


The more info has:
Code:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 378, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 414, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1001, in nf
    return await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 673, in do_create
    formatted_disks = await self.middleware.call('pool.format_disks', job, disks)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1239, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1196, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 56, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 32, in format_disk
    devname = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1239, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1207, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1111, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3/dist-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/disk_info_linux.py", line 97, in gptid_from_part_type
    raise CallError(f'Partition type {part_type} not found on {disk}')
middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sdb


Any help would be appreciated. It seems to be expecting a partition. However it asks if all data can be erased. The disks came from a Synology and were previously used as a write cache.

PS: As a test I created a VDEV using the two NVMe drives and that was successful.
 
Last edited:

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You may need to wipe the disks. If they are used, the old information has been known in the past to cause problems. In some ways I prefer this, as a kind of extra safety step. But, it's also annoying.

As for how you wipe the disks: sometimes just wiping the partition table works. Other times removing more of the file system signature is needed. So, in essence, I can't tell you. I'd normally run a "dd if=/dev/zero" against the disk(s) to clear them.
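On Linux/SCALE, something along these lines usually does it; sdX is a placeholder, and all of these are destructive, so triple-check the device name first:

Code:
wipefs -a /dev/sdX                            # remove known filesystem/RAID signatures
sgdisk --zap-all /dev/sdX                     # wipe the GPT and protective MBR
dd if=/dev/zero of=/dev/sdX bs=1M count=100   # blunt option: zero out the first 100 MiB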

Perhaps someone else can give better info.
 

nemesis1782

Contributor
Joined
Mar 2, 2021
Messages
105
You may need to wipe the disks. If they are used, the old information has been known in the past to cause problems. In some ways I prefer this, as a kind of extra safety step. But, it's also annoying.

As for how you wipe the disks: sometimes just wiping the partition table works. Other times removing more of the file system signature is needed. So, in essence, I can't tell you. I'd normally run a "dd if=/dev/zero" against the disk(s) to clear them.

Perhaps someone else can give better info.
Thnx. Expected something like this to be the answer. A bit flabbergasted that this is not something that the system just does out of the box though.

I'll update when I have time to try it.
 