> Hi, sorry!

Welcome to the TrueNAS Forums.
Please read the forum rules on posting questions. You have not provided some key information to help you out. Specifically a listing of your hardware (Motherboard, CPU, RAM, Drives, where the drives are connected, boot drive and how it's connected, etc.). Without that information my first guess is that you have run out of RAM based on the messages I read. The other option is your system is extremely unstable.
Also, exactly which version of CORE/SCALE are you trying to install?
We don't need a novel but we do need some basic information to give you good help.
Cheers,
-Joe
> Drives : 4 To NVMe (+ 16 x 20 To HDD)

Please explain. I think "4 To NVMe" means you have four NVMe drives. Are these plugged into an add-on card? If yes, what make/model?
Hi, thanks for the information. I need to download the user manual, but while I'm researching all of that, have you 'burned in' your system? Specifically: run MemTest86 or MemTest86+ for a few days to ensure you have no issues, and then run a CPU stress test like Prime95 or similar for at least 30 minutes. Some people will run the CPU stress test for a day or even up to a week if the machine will be used in a production environment. For this troubleshooting effort, 30 minutes is good enough.

And for this troubleshooting effort, I'd recommend you use TrueNAS CORE, simply because it is the more mature product. Once the problem is solved, use whichever you desire.

It appears you have enough RAM: 16 GB is the minimum I'd recommend and you have 32 GB. Just test the RAM now; one pass is not enough. However, one failure means you can stop testing.
I would prefer it if you could boot from an SSD or even a small hard drive.
Please explain. I think "4 To NVMe" means you have four NVMe drives. Are these plugged into an add-on card? If yes, what make/model?
And "(+ 16x20)" means what? Be descriptive. In your mind you know what you mean, but out here we have to guess at what you mean.
Also, if you do have any add-on cards, unplug them, and disconnect all your drives. You should be able to bootstrap TrueNAS with just the USB Boot Drive alone. You will not be able to create a pool but when you troubleshoot a problem, you need to break things down to isolate the issue.
If you start MemTest86 and it looks to be running, toss a message here that you are running the test and will report back once it's finished. If it fails, report that too. If it fails, ensure you have the RAM in the correct slots; you may even have to reseat them. And run MemTest86 for at least 24 hours. That should be many complete passes. It is very important to have system stability.
That is about all I can tell you right now and should get you started. Also, if you desire, you could remove any add-on cards first to see if you can boot up to the USB drive.
> I can't boot on any disk if there is nothing on them... So I'm not sure what you're asking me? To install my NVMe on another machine in order to install something on it which should allow me to boot on it?

I was asking for you to disconnect all the drives in your system except the boot drive. You can bootstrap TrueNAS without any pools attached and it should start up just fine, but without any pools of course. This is one test. HOWEVER, I think you have found your problem.
> Memory controller is on the CPU so it could be the issue. Highly unlikely but still a possibility.

The CPU interfaces with the RAM and is just another piece, but I rarely see the CPU as being the problem.
> I have one (1) SSD NVMe of 4 To/TB and I have sixteen (16) HDD of 20 To/TB (I'm European, over here we say To for téraoctet, which is the equivalent of the Terabyte).

More specifically, you're French.
> 16… SATA ports? What is this card?

I'll try to run the MemTest and keep you informed. Also, I'm using 2 PCIe cards: one with 16 SATA slots which is not a RAID card (12 HDD are connected to it and the remaining 4 are connected to the motherboard), and the other is a 10 Gbps Ethernet card.
> More specifically, you're French.

Yes, I know that Tb is Terabits and TB is Terabytes, no worries about that :)
When writing in the English language section, please use "TB" with a capital 'B'.
Your W680 board and i9 CPU would be better used as a desktop than as a NAS. If they were working, that is.
Test the 32 GB RAM modules, if you have them. And/or test a different CPU.
Once you have identified the failing component and replaced it, boot a live Linux distro to "burn-in" your system. After that you'll be ready to install TrueNAS SCALE Cobia. (I hope that the 4 TB drive is not intended for boot!)
16… SATA ports? What is this card?
And what is the 10G NIC by the way?
> The motherboard is too new, and hence too expensive for what it is (PCIe 5.0? no use for that in a NAS). And the last generation Core i9 is way too powerful for what a NAS requires; unless you plan to run CPU-intensive VMs/apps, this is massive overkill.

Why wouldn't the i9 be great for a NAS? I wanted this CPU because 1) it should be powerful enough for what I'm aiming to do with it and 2) I'll likely have a use for the UHD 770 iGPU for transcoding. And what's wrong with the motherboard? It was the cheapest compatible with ECC RAM that I found, and it's still about 600 € just for a motherboard.
> The 10G card is the TP-Link TX401.

Urgh! I more or less feared the Aquantia NIC (dubious driver support in TrueNAS; server-grade Chelsio T520 or Solarflare 5122F/6122F/7122F cards go for $50 on eBay), but the "SATA card"… HOLY CRAP! 16 ports on a 4-port controller (ASM1064, PCIe 3.0 x1) and multiple port multipliers… Do NOT use that with ZFS! This is literally guaranteed to destroy your data. Get a SAS HBA instead (LSI 9200/9300 series, or the Dell/HP/IBM equivalents).
This is the SATA card.
> 16… SATA ports? What is this card?
> And what is the 10G NIC by the way?

I picked that up as well - what PCIe card? It doesn't sound like a proper HBA, which means it's almost certainly an issue, although probably not the current issue.
> The motherboard is too new, and hence too expensive for what it is (PCIe 5.0? no use for that in a NAS). And the last generation Core i9 is way too powerful for what a NAS requires; unless you plan to run CPU-intensive VMs/apps, this is massive overkill.

About the motherboard, it was the cheapest that I was able to find which was compatible with DDR5 ECC RAM, but perhaps DDR4 wouldn't be worse for my use case, I don't know.
For a storage platform with light apps, and with a 600 € budget "for the motherboard", you could have an A2SDi-H-TF instead. And this one already includes the CPU, 12 SATA ports and a server-grade 10 GbE NIC—and, if you buy second-hand RAM, you can fit it with 128 GB for 120 € (RDIMMs are GREAT).
If you do need an iGPU for transcoding, the recommendation would be an earlier generation: C2x6 chipset and Xeon E-2000, or even a Core i3-8100/9100, going back to Coffee Lake. Less sexy, but powerful enough for a NAS and hopefully lighter on your wallet (although C246 motherboards are now hard to find, and so more expensive than they should be and than they used to be).
Edit. The above began as a generic recommendation. Then I went back up and remembered you want 16 x 20 TB… 320 TB before taking out redundancy. At this scale, the "1 GB RAM per TB of storage" rule does not apply strictly, but 128 GB RAM feels like a bare minimum. On UDIMM platforms, an L2ARC would help; on RDIMM platforms, consider 192-256 GB RAM.
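For what it's worth, the back-of-the-envelope arithmetic behind that, sketched with the (loose, old) rule of thumb:

```shell
# Back-of-the-envelope: 16 drives x 20 TB each, and the loose
# "1 GB of RAM per TB of storage" rule of thumb:
raw_tb=$(( 16 * 20 ))
echo "raw: ${raw_tb} TB"                  # raw: 320 TB
echo "rule-of-thumb RAM: ${raw_tb} GB"    # far beyond the 128 GB floor
```

Nobody is saying you need 320 GB of RAM, but it shows why 128 GB is a floor rather than a luxury here.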
You're in a territory where a 1st (or 2nd) generation Xeon Scalable would be quite a reasonable option—add a GPU for transcoding, or just handle it on the CPU.
Besides, with so many drives, a server enclosure with hot-swap trays on a SAS backplane would make your life easier than cramming 16 drives into a consumer-grade ATX tower (Define 7XL or the like?).
Don't be afraid to be SAS-sy ... a primer on basic SAS and SATA
> Also, I intend to do 2 pools of 8 HDD in RAIDZ2. Yet, I'm really new to the NAS "world", so thank you for your help!

8-wide raidz2 is good (it's recommended not to go beyond 10 to 12-wide). But having two pools with the same layout raises the question of whether you could just do a single pool with two vdevs.
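If it helps, here is a rough sketch of what the single-pool layout would look like, plus the capacity arithmetic. The pool name and device names are placeholders, not your actual disks:

```shell
# Hypothetical single pool with two 8-wide RAIDZ2 vdevs
# (device names da0..da15 are placeholders for your 16 disks):
#   zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
#                     raidz2 da8 da9 da10 da11 da12 da13 da14 da15
#
# Usable space is the same either way: each 8-wide RAIDZ2 vdev keeps
# 6 data disks (2 go to parity), so with 20 TB drives:
echo $(( 2 * (8 - 2) * 20 ))   # TB, before ZFS overhead and free-space slop
```

The practical difference is that one pool with two vdevs gives you a single filesystem namespace and stripes writes across both vdevs, while two pools keep the failure domains fully separate.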
> 64 GB (DDR ECC) isn't enough considering the amount of storage? I didn't know about that. If 128 GB is a minimum, what would be the ideal / recommendation (and why, if I may)?

ZFS uses RAM for everything, so more RAM generally means more performance. I do not have personal experience with over 200 TB of storage, so I don't know how RAM requirements scale at that level, but I would expect that 128 GB is a reasonable start. Only using the system and running

arc_summary

will tell whether there's enough for your workload or whether you'd benefit from more RAM or from an L2ARC.

> So I shall change the 10G card, OK (to get something like this one to avoid driver issues? but from what I've read the TP-Link is supported on TrueNAS SCALE)?

Preferably, yes. I note that you're not afraid of refurbished hardware (good for your wallet!) or of SFP+.
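To make the arc_summary advice concrete, here is a made-up excerpt (the numbers are invented; only the line labels matter) filtered down to the lines worth watching:

```shell
# Made-up arc_summary excerpt piped through grep, to show which lines
# to look at once your real workload has been running for a while:
cat <<'EOF' | grep -i 'ratio'
ARC size (current):        61.2 GiB
Cache hit ratio:           98.7 %
Cache miss ratio:          1.3 %
EOF
```

A hit ratio that stays well above 90% under your real workload suggests the ARC is large enough; a persistently low ratio is the hint to add RAM or an L2ARC.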
> Would you also be afraid for my Linux server that uses the same card?

Depends what it is and what it does… ZFS was designed for enterprise requirements and assuming enterprise-grade hardware; ZFS is not that good at downgrading to consumer-grade hardware. (Think of ZFS as the "Elon Musk boss-from-Hell" of filesystems, requiring all components to be "hardcore" or quit.)
> And I shall also change the SATA card, but for what? One card with 16 ports, or two with 8 ports? Or one 8-port card plus the ports on the motherboard? Perhaps this model on ebay at 228 $ (even if I'll be charged by the customs) instead of 585 € on Amazon FR?

Definitely an LSI 9200 (even the PCIe 2.0 generation is fine for HDDs) or 9300 (PCIe 3.0 generation; would be fine to even move up to SATA/SAS SSDs). A -8i plus motherboard ports is fine. Two -8i are fine if you have the slots (and I suggest sending back your current motherboard, CPU and RAM for a refund and moving to the class of hardware which has the multiple slots). A 9305-16i is fine (but it is the most expensive option).
> Yes, I'm using the Define 7XL, what would you suggest? It is a silent case, with good airflow and space.

16 spinning drives are not going to be "quiet". Having once tried to temporarily convert my Define 7 (not XL), I did not like the experience of sliding the trays in their little notches, and there are reports that the trays somewhat isolate the drives from airflow and hinder cooling (my ambient temperature is generally low, so that was not an issue in my case). But if you're happy with the prospect of installing and individually wiring 16 drives in there, I'm not going to insist on 3U/4U storage racks, their convenient SAS backplanes (one -8i HBA and expanders to bind all drives) and their wind tunnel fans to keep everything cool.
> (I'm ~~European~~ a French speaker, over here we say To for téraoctet, which is the equivalent of the Terabyte)

Fixed that for you.
> the "SATA card"… HOLY CRAP! 16 ports on a 4-port controller (ASM1064, PCIe 3.0 x1) and multiple port multipliers… Do NOT use that with ZFS! This is literally guaranteed to destroy your data. Get a SAS HBA instead (LSI 9200/9300 series, or the Dell/HP/IBM equivalents).

Not guaranteed to destroy data, just likely. But it is guaranteed to suck away years of your life with all sorts of issues, from flaky disks to sloooooooow disk I/O. And that's if you get far enough along to set things up (far from certain when port multipliers are involved).
> For the LSI card - ideally you would buy a used card from a dismantler in Europe, rather than a new card from China.

To be fair, there are honest sellers and reliable dismantlers in China, and there are crooks and traders in fake goods in Europe/UK/US/wherever. The general issue is sorting the good ones from the bad apples.
> But if you're happy with the prospect of installing and individually wiring 16 drives in there, I'm not going to insist on 3U/4U storage racks, their convenient SAS backplanes (one -8i HBA and expanders to bind all drives) and their wind tunnel fans to keep everything cool.

I already did the installation...
> I use a Fractal Design 5 - I added a 3rd Fractal Design fan (1 rear, 2 front), stuffed in 7 HDDs, and the airflow wasn't enough to keep the disks cool. The fans may be quiet, but they don't shift much air. So I replaced all 3 fans with some high static pressure fans on a separate fan controller and turned them down till the case was nearly quiet.

I replaced the fans without prior testing, as per your recommendation. I use Thermaltake Toughfans (17 € each IIRC); they were relatively cheap for high static pressure fans.