3x2 Mirror or 1x6 Z2

Mugga

Dabbler
Joined
Feb 19, 2020
Messages
25
What do you think of the idea of swapping the Optane drive for more RAM and disabling sync?
 

patrickjp93

Dabbler
Joined
Jan 3, 2020
Messages
48
What do you think of the idea of swapping the Optane drive for more RAM and disabling sync?
As long as the NAS is on a UPS and can do a graceful shutdown, that sounds perfectly fine to me. If not, having the SLOG disk or a scratch disk is recommended.
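For reference, sync is a per-dataset ZFS property, so the trade-off can be limited to the datasets that need the speed. A minimal sketch, assuming a pool called tank with a dataset called projects (both names are just placeholders):

[CODE]
# Show the current sync policy (standard / always / disabled)
zfs get sync tank/projects

# Disable sync writes for that dataset; without a SLOG this trades the last
# few seconds of in-flight writes on a crash or power loss for throughput
zfs set sync=disabled tank/projects

# Revert to the default behaviour later if needed
zfs inherit sync tank/projects
[/CODE]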
 

Mugga

Dabbler
Joined
Feb 19, 2020
Messages
25
Okay, yeah, a UPS is available, so power loss shouldn't be a problem.

I will definitely let you guys know how it performs for us once it's assembled.
 

ccav

Dabbler
Joined
Apr 28, 2019
Messages
15
Okay, yeah, a UPS is available, so power loss shouldn't be a problem.

I will definitely let you guys know how it performs for us once it's assembled.
I want to see it, happy for you!
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
Unless the PSU of the system fails.
 

patrickjp93

Dabbler
Joined
Jan 3, 2020
Messages
48
Unless the PSU of the system fails.
Which is why you buy quality components with 10- or 12-year warranties, such as the EVGA SuperNOVA Platinum/Titanium lines (10-year) or the Seasonic Prime Titanium line (12-year).

When the Seasonic fanless 700W becomes available, that's going to be my go-to for all new PC builds.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
Seasonic Titanium
In all the years (30 plus) that I have owned and run computers in my home, I have lost a PSU only once (about 18 months ago). Ironically it was the PSU of my FreeNAS box, and it was a Seasonic (Prime 650W). It was replaced under warranty and the replacement is running fine, though now in my ESXi box. So failures can happen, no matter how good a product is. It's naive to think otherwise. And when it does happen, there will be no graceful shutdown. It's good to have a UPS. I have one myself. But it won't protect against every possible failure in the power delivery.
 

Mugga

Dabbler
Joined
Feb 19, 2020
Messages
25
Hey guys,
just wanted to update this here. The build is now running. Had some issues regarding the CPU, because the 2000 series doesn't work with the chipset on the Supermicro board. You have to stick to the older 1000 series CPUs, even though it's the same socket.

We discussed the possible RAID configurations internally and ended up using striped mirrors instead of RAID-Z2 (layout sketched below). First tests show decent speed: a simple file transfer from a Windows PC to the share runs at about 550 MB/s writing and 1 GB/s reading. We skipped the Optane drive and are using no cache or SLOG, but upgraded the RAM to 64 GB instead.

The only thing that doesn't work at the moment are the SATA connectors attached to the backplane of the case via an SFF-8087-to-SATA cable. The drives are not getting picked up, not even in the BIOS. Do you need any special cables?
EDIT: I think I figured out the problem. There are two types of cables, forward and reverse. It seems that mostly forward cables are sold (SATA as target and SFF as host), but I need it the other way around (SATA host and SFF target). Ordered another cable and hope this works.
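For anyone following along, this is roughly what that striped-mirror layout looks like when created by hand; a minimal sketch, with tank as a placeholder pool name and da0..da5 standing in for the six drives:

[CODE]
# Three mirrored pairs striped together (the "3x2 mirror" layout from the title)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5

# Verify the vdev layout and pool health
zpool status tank
[/CODE]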
 

Bozon

Contributor
Joined
Dec 5, 2018
Messages
154
I really hope that in the near future we can go to SSD-only storage, but at the moment it's just too pricey. Two years ago we tried the approach of splitting project folders so we could use an SSD-only pool on our QNAP, but the performance wasn't that great (it seems to be a QNAP issue, which is one of the reasons we are trying to move to the FreeNAS route). The bigger problem, though, was that this splitting into active and inactive projects was quite tedious. That's the reason we abandoned that approach and want to move to one big pool.

Regarding the data usage: the team members are not working on the same data at the same time. But when rendering, multiple workstations load the same data simultaneously (up to 8 workstations), with mixed file types and file sizes, generally ranging from 100 kilobytes up to 1 GB per file. Writing also happens on different files. The bigger files generally range from 300 MB to 3 GB. That can also happen at the same time from 8 users, but it wouldn't happen that often.

Would a rendering pool of SSDs solve that problem? When you are ready to render, copy the files to the SSD_POOL and let the 8 machines beat it to death. That pool could be built purely for speed, since you still have a copy on the main pool. Of course, that would cost more money, but you wouldn't have to make the SSD_POOL that big, since it would only need to contain whatever is currently being rendered.
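To make the idea a bit more concrete, a minimal sketch of the staging step, assuming the main pool is mounted at /mnt/tank and the hypothetical SSD pool at /mnt/SSD_POOL (paths and project names are made up):

[CODE]
# Stage the current project onto the fast pool before the render slaves start
rsync -a --delete /mnt/tank/projects/current_scene/ /mnt/SSD_POOL/render/current_scene/

# Point the render jobs at /mnt/SSD_POOL/render/current_scene,
# then delete the copy once rendering is finished
[/CODE]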
 

ropm

Cadet
Joined
Feb 17, 2020
Messages
6

Mugga

Dabbler
Joined
Feb 19, 2020
Messages
25
@ropm
Yeah, I just searched for a Xeon mainboard with an integrated LSI controller and then looked for a CPU matching the socket. Normally choosing the right socket is the only thing you have to keep in mind, and maybe that some CPUs are not supported below a certain BIOS level. That was the first system build in my life that didn't work, because it's quite a stupid idea from Intel to do a refresh of a socket without backwards compatibility.

@Bozon
Not really. I already invested some time in possible ways to use dedicated fast SSD storage, but it just doesn't work without drawbacks or a huge amount of work to get some kind of sync from "slow" to "fast" storage. It just doesn't fit the way our file structure works.
 

Bozon

Contributor
Joined
Dec 5, 2018
Messages
154
@Bozon
Not really. I already invested some time in possible ways to use dedicated fast SSD storage, but it just doesn't work without drawbacks or a huge amount of work to get some kind of sync from "slow" to "fast" storage. It just doesn't fit the way our file structure works.

How long does it take to render, and how much of that is due to a hard-disk bottleneck? I'm just curious; the answer to that would help you understand the issue you are facing. How large are the source files, and how large are the rendered files?

Thanks.
 

Mugga

Dabbler
Joined
Feb 19, 2020
Messages
25
This differs from scene to scene.
The procedure is the following:
- When you render, the 3D scene is saved and then transferred to the render slaves
- The scene is then loaded by every render slave at the same time; the file itself is around 300 MB
- While the scene loads, the linked files (textures, 3D proxy files, etc.) are loaded as well. This can be around 200-2000 files of varying size; I would say the maximum file size is around 60 MB. For big scenes the combined size can be up to 3-4 GB.
- Once everything is loaded and the rendering begins, there are no real file transfers to the storage going on.
- The file(s) that get saved are around 300-600 MB in general
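As a rough back-of-envelope sketch (all numbers are assumptions taken from the list above and the earlier transfer test), the load phase when all 8 slaves start at once looks roughly like this:

[CODE]
#!/bin/sh
# Worst-case burst when every render slave loads the same scene simultaneously
SLAVES=8          # render slaves
SCENE_GB=4        # ~300 MB scene plus 3-4 GB of linked assets
LINK_GBS=1        # ~1 GB/s usable on the server's single 10GbE link

TOTAL=$((SLAVES * SCENE_GB))
echo "data pushed out by the NAS: ${TOTAL} GB"
echo "load phase if limited by the 10GbE uplink: about $((TOTAL / LINK_GBS)) seconds"
# Since all slaves read the same files, the 64 GB of ARC should serve most of the
# repeat reads, so the NIC rather than the disks is the more likely bottleneck here.
[/CODE]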
 

patrickjp93

Dabbler
Joined
Jan 3, 2020
Messages
48
Hey guys,
just wanted to update this here. The build is now running. Had some issues regarding the CPU, because the 2000 series doesn't work with the chipset on the Supermicro board. You have to stick to the older 1000 series CPUs, even though it's the same socket.

We discussed the possible RAID configurations internally and ended up using striped mirrors instead of RAID-Z2. First tests show decent speed: a simple file transfer from a Windows PC to the share runs at about 550 MB/s writing and 1 GB/s reading. We skipped the Optane drive and are using no cache or SLOG, but upgraded the RAM to 64 GB instead.

The only thing that doesn't work at the moment are the SATA connectors attached to the backplane of the case via an SFF-8087-to-SATA cable. The drives are not getting picked up, not even in the BIOS. Do you need any special cables?
EDIT: I think I figured out the problem. There are two types of cables, forward and reverse. It seems that mostly forward cables are sold (SATA as target and SFF as host), but I need it the other way around (SATA host and SFF target). Ordered another cable and hope this works.
Glad to hear the performance is good. I had a feeling RAID-Z2 would hurt your I/O, and even if it was only a 5% hit, your write speeds are already less than ideal, I'd wager.
 

Mugga

Dabbler
Joined
Feb 19, 2020
Messages
25
Yeah, write speed could be better. But maybe it will improve in the future when we expand the storage and add more vdevs (quick sketch below).

At the moment I'm more concerned about the 10GbE cards in the clients. They struggle badly with heavy copy jobs: every 10-15 seconds the network speed drops to 0, like a big timeout, and then climbs back up to maximum. Those Aquantia chips are just bad, no matter what I set in the configs. Should have gone the Intel route.
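(For reference, growing the pool later by another mirrored pair is a one-liner; a minimal sketch, with da6 and da7 as placeholder drive names:)

[CODE]
# Add one more mirror vdev; new writes then stripe across four vdevs instead of three
zpool add tank mirror da6 da7
zpool status tank
[/CODE]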
 

patrickjp93

Dabbler
Joined
Jan 3, 2020
Messages
48
Yeah, write speed could be better. But maybe it will improve in the future when we expand the storage and add more vdevs.

At the moment I'm more concerned about the 10GbE cards in the clients. They struggle badly with heavy copy jobs: every 10-15 seconds the network speed drops to 0, like a big timeout, and then climbs back up to maximum. Those Aquantia chips are just bad, no matter what I set in the configs. Should have gone the Intel route.
Send bug reports to Aquantia. There's a good chance a firmware fix will solve your issues. They are working extremely hard to get into the market and be a real competitor. Frankly, Intel and Qualcomm need the competition. Both have a love of high profit margins.
 

Mugga

Dabbler
Joined
Feb 19, 2020
Messages
25
Don't really know how I should debug the cards on Windows. I could make the timeouts much less frequent by setting the receive buffers on the card to 1024 instead of the standard 512. Upping it to the maximum of 4096 introduced heavy timeouts again.
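One way to separate the NIC/driver from the pool and SMB would be a raw network test; a minimal sketch with iperf3, assuming it is installed on both ends and 192.168.1.10 is a placeholder for the NAS address:

[CODE]
# On the FreeNAS box: start an iperf3 server
iperf3 -s

# On the Windows client: push data for 60 seconds, reporting every second;
# if the throughput still drops to zero every 10-15 seconds, the NIC or driver
# is the problem rather than the pool or the share
iperf3 -c 192.168.1.10 -t 60 -i 1
[/CODE]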
 