But how does it work in a huge NAS? For example, LinusTechTips has multiple racks of computers with lots of hard drives. How does he manage that with TrueNAS?
Asking how LTT does something is often pointless. It never seems to be the right way; anyone who knows anything either watches only for amusement/amazement/love-of-trainwrecks, or cannot even tolerate the sheer idiocy. I'm in the latter group; I'd rather watch shards of glass get flung at my eyes than watch LTT.
But the question's great.
Big storage happens two ways (ignoring SAN strategies):
1) You can have a large server. You may be used to thinking of a "server" as a single chassis with a bunch of pieces in it, but this only goes so far. Once you need to hook up dozens or hundreds of drives, you use shelves of JBOD disks, connected to the computer via SAS expanders and cables. This is discussed in the SAS Primer. You can easily get 90 HDDs into a 4U space with JBOD enclosures such as this Supermicro jobbie. Putting 10 of those into a 42U rack means 900 HDDs, and with general availability of 18TB HDDs, that works out to about 16 petabytes in a single rack. ZFS can theoretically scale far beyond that, although you'll probably run into practical issues unless you have expert design and implementation guidance.
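To make option #1 a bit more concrete, here's a minimal sketch of what carving one shelf's worth of disks into a single big ZFS pool could look like. The pool name, vdev layout, and da0..da89 device names are all hypothetical placeholders, not a recommendation for any specific build:

```sh
#!/bin/sh
# Hypothetical sketch: assemble 90 JBOD-attached disks into one ZFS
# pool, laid out as 9 RAIDZ2 vdevs of 10 drives each. The da0..da89
# device names are FreeBSD-style placeholders; substitute your own.
vdevs=""
for start in 0 10 20 30 40 50 60 70 80; do
    vdevs="$vdevs raidz2"
    disk=$start
    while [ "$disk" -lt $((start + 10)) ]; do
        vdevs="$vdevs da$disk"
        disk=$((disk + 1))
    done
done
# Each RAIDZ2 vdev tolerates two simultaneous drive failures.
zpool create tank $vdevs
zpool status tank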
2) You can have smaller servers. Back in the early 2000's, my shop was one of the pioneers of 24-drives-in-4U chassis designs, and this worked out well because we needed storage for a protocol to which I had added hashing distribution extensions. There's a strong upside here in that a bunch of smaller servers is more resilient to failures, but you don't get a natural "single view" presentation as you would with a larger ZFS pool.
For #2, the past 10 or 15 years have seen a lot of evolution in this direction, as the sheer volume of data being managed in the computing world has exploded. Distributed file storage projects, such as GlusterFS, allow the creation of larger storage systems that do not involve a single massive server acting as a single point of failure.
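Purely as an illustration of that approach (the hostnames and brick paths below are made up), building a replicated Gluster volume across a few small servers looks roughly like this:

```sh
# Hypothetical sketch of a GlusterFS volume spanning three small
# servers. Hostnames (node1..node3) and brick paths are invented.
# Run on one node after the others are up:
gluster peer probe node2
gluster peer probe node3
# "replica 3" keeps a copy of every file on all three bricks,
# so losing one server doesn't lose data.
gluster volume create bigvol replica 3 \
    node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
gluster volume start bigvol
# Clients then mount the aggregate volume over the network:
mkdir -p /mnt/bigvol
mount -t glusterfs node1:/bigvol /mnt/bigvol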
I would like to be able to link both to the same NAS in order to have more power for my cron jobs and other VMs.
A NAS is by definition network-attached storage, and the NAS is happy to let another computer, or several other computers, mount its storage and take advantage of it. These don't have to be end-user computers; they can be servers as well.
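On the NAS side, sharing a dataset is typically a one-line export. This is just a sketch with made-up paths and addresses, in FreeBSD /etc/exports syntax since FreeNAS is FreeBSD-based; on FreeNAS/TrueNAS itself you'd configure the equivalent through the NFS sharing screen rather than editing the file by hand:

```
# Hypothetical /etc/exports entry on the NAS: share the video
# dataset (read/write by default) with one local subnet.
/mnt/tank/video -network 192.168.1.0/24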
For example, lots of people run Plex Server in a jail on top of FreeNAS, but you can just as readily use a second computer: mount your video fileshare via NFS and run Plex Server there. That second machine could be another physical box, or a VM on a computer set up as a hypervisor.
This actually makes the NAS a lot simpler, because it is just doing NAS functions like sharing files. You just load your favorite FreeBSD or Linux on the second computer, add an NFS mount to /etc/fstab, and go to town. Sharing files via NAS is extremely common on larger networks because it is so practical and easy to do.
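Concretely, the client side might look like this on the second computer. The "nas" hostname, share path, and mount point are all invented names for illustration:

```sh
# Hypothetical setup on a Linux box that will run Plex.
# "nas" is a made-up hostname for the FreeNAS machine.
mkdir -p /media/video
echo 'nas:/mnt/tank/video /media/video nfs rw,hard 0 0' >> /etc/fstab
mount /media/video   # mounts the entry fstab defines for this path
ls /media/video      # the NAS share is now visible locally
```

From there you'd simply point Plex at /media/video as its library folder; as far as Plex is concerned, it's just a local directory.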