Questions about a full-SSD storage server.

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
Hi everyone. I plan to build an all-SSD TrueNAS CORE server in my office, but I have a few questions.
1: How is my hardware? Will it run well for a TrueNAS CORE all-SSD pool?
Single Intel Xeon 8163 24c/48t
Supermicro X11SPL-F
128 GB to 1 TB DDR4-2400 registered ECC memory
480 GB SATA SSD for boot
24 × Micron 5200/5300 1.92 TB SATA SSDs for the pool (RAID-Z2)
No L2ARC and no SLOG
3 × LSI 9300-8i HBAs (IT mode)
1 or 2 × Intel XL710 dual-port 40 Gb NICs

2: People say you need at least 1 GB of memory per 1 TB of pool size; does that still hold for an all-SSD pool, or is more memory needed?
3: Do I need to set up SLOG and L2ARC devices for an all-SSD pool?

4: Should I upgrade the HBAs from LSI 9300 to 9400 or higher? They have different controller chips and offer different IOPS performance.

Thanks in advance.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
1, 2. Should be fine. 1 GB of RAM per TB of storage is rough guidance; 128 GB should be fine for a ca. 200 TB pool. (3 × 8-wide RAIDZ2, I suppose?)
3. SLOG only if there are sync writes (and even then the ZIL on 24 SSDs should be fast). Only actual use would tell whether an L2ARC might be useful… but with SSDs, an L2ARC is unlikely to help much.
4. I see no need for that. If you move the boot drive to M.2 NVMe, you could use the motherboard SATA ports and save one HBA. (No backplane?)
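To put rough numbers on the 1 GB/TB rule of thumb and on an assumed 3 × 8-wide RAIDZ2 layout, here is a minimal back-of-the-envelope sketch in Python. The drive size and count come from the build list above; the layout and the rule of thumb are only rough assumptions, and ZFS metadata/slop overhead is ignored.

```python
# Rough numbers for the proposed build.
# Assumptions: 24 x 1.92 TB SATA SSDs arranged as 3 x 8-wide RAIDZ2
# (the layout suggested above), and the "1 GB RAM per TB of storage"
# rule of thumb, which is only a rough guideline, not a hard requirement.

DRIVE_TB = 1.92
DRIVES = 24
VDEVS = 3
WIDTH = 8          # drives per RAIDZ2 vdev
PARITY = 2         # RAIDZ2 = 2 parity drives per vdev

raw_tb = DRIVES * DRIVE_TB
usable_tb = VDEVS * (WIDTH - PARITY) * DRIVE_TB   # before metadata/slop overhead

ram_rule_gb = raw_tb * 1          # ~1 GB RAM per TB of raw storage

print(f"Raw capacity:          {raw_tb:.1f} TB")
print(f"Usable (pre-overhead): {usable_tb:.1f} TB")
print(f"Rule-of-thumb RAM:     ~{ram_rule_gb:.0f} GB (128 GB leaves plenty of headroom)")
```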
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
@skyyxy ?
1. Is that a 24-wide Z2 pool you are suggesting? Please don't.

What's your use case?
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
1, 2. Should be fine. 1 GB of RAM per TB of storage is rough guidance; 128 GB should be fine for a ca. 200 TB pool. (3 × 8-wide RAIDZ2, I suppose?)
3. SLOG only if there are sync writes (and even then the ZIL on 24 SSDs should be fast). Only actual use would tell whether an L2ARC might be useful… but with SSDs, an L2ARC is unlikely to help much.
4. I see no need for that. If you move the boot drive to M.2 NVMe, you could use the motherboard SATA ports and save one HBA. (No backplane?)
Big thanks. I want the server to run at full speed, so I don't plan to use a backplane, because I think it would limit the speed. Connecting all drives directly to the HBAs should be better.
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
@skyyxy ?
1. Is that a 24-wide Z2 pool you are suggesting? Please don't.

What's your use case?
The case is a 4U storage case with two options: one without a backplane and one with.
Why not use a 24-disk Z2?
Thanks a lot.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The suggestion is "don't put all 24 disks into a single Z2 vdev" - rather, break them up into multiple smaller vdevs. The number of vdevs and their geometry (RAIDZ vs mirrors) depends on your intended storage goal and desired redundancy levels.
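To make that concrete, here is an illustrative comparison (in Python) of a few ways 24 × 1.92 TB SSDs could be split into vdevs. These layouts are examples rather than recommendations, and ZFS metadata/allocation overhead is ignored, so real usable space will be lower.

```python
# Illustrative comparison of ways to split 24 x 1.92 TB SSDs into vdevs.
# Example layouts only, not recommendations.

DRIVE_TB = 1.92

layouts = [
    # (description, vdev count, drives per vdev, redundant drives per vdev)
    ("1 x 24-wide RAIDZ2", 1, 24, 2),
    ("2 x 12-wide RAIDZ2", 2, 12, 2),
    ("3 x 8-wide RAIDZ2",  3,  8, 2),
    ("12 x 2-way mirrors", 12, 2, 1),
]

for name, vdevs, width, redundancy in layouts:
    usable = vdevs * (width - redundancy) * DRIVE_TB
    print(f"{name:22s} usable ~{usable:5.1f} TB, "
          f"survives {redundancy} failure(s) per vdev")
```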
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Big thanks. I want the server to run at full speed, so I don't plan to use a backplane, because I think it would limit the speed. Connecting all drives directly to the HBAs should be better.
If you're desperately seeking maximal performance, ditch SATA and go all NVMe…
Why do you think that a backplane would limit speed?
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
Thanks for the reply.
I want only one SMB share for all my clients (that's why I planned a single Z2 vdev). But if I create more vdevs, does that mean I need more SMB share names for my clients? Is it possible to combine all the vdevs into one?
The suggestion is "don't put all 24 disks into a single Z2 vdev" - rather, break them up into multiple smaller vdevs. The number of vdevs and their geometry (RAIDZ vs mirrors) depends on your intended storage goal and desired redundancy levels.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
No.
A dataset, which is what you share with SMB, is a part of the pool. You can create any number of datasets in a pool; they all share the total space available in the pool (though you can limit this with quotas).

A vdev is a virtual device, one or more of which make up the pool. All data vdevs are striped together, so if you have a 10 TB vdev and another 10 TB vdev, the pool will be 20 TB. Every dataset will have access to the whole 20 TB (obviously you cannot have three datasets each containing 20 TB of actual data), subject to the maximum size of the pool.

A 24-wide Z2 is waaaaay too wide to be sensible. A 3 × 8-wide Z2 would be far more sensible from an operational PoV. It does, however, mean 6 disks of parity in total.
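As a purely conceptual illustration of that relationship (not how ZFS is actually implemented), here is a toy Python model: all data vdevs contribute capacity to one pool, and every dataset, each of which could back one SMB share, draws from that same shared space, optionally capped by a quota.

```python
# Toy model of the pool / vdev / dataset relationship described above.
# Conceptual only; not ZFS code.

class Pool:
    def __init__(self, vdev_sizes_tb):
        self.capacity = sum(vdev_sizes_tb)   # vdevs are striped together
        self.used = 0.0
        self.datasets = {}

    def create_dataset(self, name, quota_tb=None):
        self.datasets[name] = {"used": 0.0, "quota": quota_tb}

    def write(self, name, tb):
        ds = self.datasets[name]
        if ds["quota"] is not None and ds["used"] + tb > ds["quota"]:
            raise ValueError("dataset quota exceeded")
        if self.used + tb > self.capacity:
            raise ValueError("pool is full")
        ds["used"] += tb
        self.used += tb

# Three RAIDZ2 vdevs of ~11.5 TB usable each -> one ~34.5 TB pool,
# shared by however many datasets (SMB shares) you like.
pool = Pool([11.52, 11.52, 11.52])
pool.create_dataset("office-share")            # a single share still sees the whole pool
pool.create_dataset("scratch", quota_tb=5.0)   # quota example (names are hypothetical)
pool.write("office-share", 20.0)
print(f"{pool.used:.1f} / {pool.capacity:.2f} TB used")
```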
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
Just a remark for your consideration regarding a 24-wide array:
Imagine one of your drives fails, you plug in a new one, and it starts to resilver. Given the fairly small size of <2 TB per drive, the resilver time should not be that long; however, the likelihood that 2 more drives out of the remaining 23 fail during that window is considered rather high.
If that happens, your data is most likely gone.

Traditional risk management:

Likelihood:
Resilver time depends on the size per disk.
The likelihood of another disk failing during resilvering is increased.
The more disks you have in the intended RAIDZ2 setup, the higher the probability of more than 2 failures (a rough probability sketch follows below).

Impact:
You might lose all your data.

Mitigation:
As stated in earlier posts.

Eat it!:
If you can accept downtime, or the data is not important, or you have a good and working (test it!) backup and recovery method, the risk can be accepted.
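The likelihood point above can be made concrete with a small probability sketch. The per-drive failure probability used here is an assumed placeholder, not a measured SSD failure rate, and the drives are treated as failing independently.

```python
# Rough illustration: with more drives left in the vdev, the chance that
# several of them fail during the same resilver window grows.
# p below is a made-up placeholder, not a real SSD failure rate.
from math import comb

def prob_at_least(k, n, p):
    """Probability that at least k of n drives fail, each independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.005  # assumed chance a given drive fails during one resilver window

for remaining in (7, 11, 23):   # 8-wide, 12-wide, 24-wide vdev after one failure
    # A RAIDZ2 vdev with one drive already resilvering is lost if 2 more drives fail.
    print(f"{remaining:2d} remaining drives: "
          f"P(>=2 more failures) = {prob_at_least(2, remaining, p):.4%}")
```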
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
No.
A dataset, which is what you share with SMB, is a part of the pool. You can create any number of datasets in a pool; they all share the total space available in the pool (though you can limit this with quotas).

A vdev is a virtual device, one or more of which make up the pool. All data vdevs are striped together, so if you have a 10 TB vdev and another 10 TB vdev, the pool will be 20 TB. Every dataset will have access to the whole 20 TB (obviously you cannot have three datasets each containing 20 TB of actual data), subject to the maximum size of the pool.

A 24-wide Z2 is waaaaay too wide to be sensible. A 3 × 8-wide Z2 would be far more sensible from an operational PoV. It does, however, mean 6 disks of parity in total.
Got it, Thanks :smile:
 

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
Just a remark for your consideration regarding a 24-wide array:
Imagine one of your drives fails, you plug in a new one, and it starts to resilver. Given the fairly small size of <2 TB per drive, the resilver time should not be that long; however, the likelihood that 2 more drives out of the remaining 23 fail during that window is considered rather high.
If that happens, your data is most likely gone.

Traditional risk management:

Likelihood:
Resilver time depends on the size per disk.
The likelihood of another disk failing during resilvering is increased.
The more disks you have in the intended RAIDZ2 setup, the higher the probability of more than 2 failures.

Impact:
You might lose all your data.

Mitigation:
As stated in earlier posts.

Eat it!:
If you can accept downtime, or the data is not important, or you have a good and working (test it!) backup and recovery method, the risk can be accepted.
Got it, thanks!
 
