Apex Storage X21

T_PT

Dabbler
Joined
Mar 20, 2023
Messages
20
Has anyone looked at the Apex Storage X21 yet?

It only lists "Windows, Windows Server, Linux" on the product page, but it looks like the kind of device that could lend itself incredibly well to L2ARC and a metadata vdev. I guess, depending on the drives and with sufficient endurance, it could be a benefit for SLOG too.

Are there any similar devices that are fully supported in TrueNAS 13 that could meet a similar objective of easily providing a large amount of NVMe connectivity to a server?

Thanks :)
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It's highly unlikely that this thing will work the way you think it will.

Most notably, it seems to be putting some kind of hardware RAID controller (or some other kind of abstraction) in front of the NVMe devices, given the huge number of PCIe lanes it offers to the attached NVMe drives (100) versus the limited number it has to the host system (16).

I think that right there is enough to say that you don't want to run ZFS on that in any way.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
This is a 100-lane PCIe switch on a x16 card, leaving 84 lanes for 21 x4 drives. No technical issue with any OS, or with ZFS.
But obvious bandwidth issues if multiple drives are accessed at the same time—not to mention cooling issues.
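
As a rough back-of-the-envelope sketch of that bottleneck (assuming PCIe 4.0 at about 2 GB/s per lane per direction, which isn't confirmed anywhere in this thread):

```python
# Oversubscription estimate for a 21-drive card behind a single x16 uplink.
# PCIe 4.0 (~2 GB/s per lane, per direction) is assumed for illustration only.
GB_PER_LANE = 2.0                   # approx. GB/s per PCIe 4.0 lane

drive_lanes = 21 * 4                # 84 lanes on the drive side of the switch
host_lanes = 16                     # x16 uplink to the host

drive_side = drive_lanes * GB_PER_LANE   # ~168 GB/s aggregate drive bandwidth
host_side = host_lanes * GB_PER_LANE     # ~32 GB/s uplink bandwidth

print(f"drive side ~{drive_side:.0f} GB/s, host side ~{host_side:.0f} GB/s")
print(f"oversubscription ~{drive_lanes / host_lanes:.2f}:1")   # about 5.25:1
```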

There's no use for insane capacity for L2ARC, even less so for SLOG. And any extra latency is unwelcome for SLOG. I don't understand how this contraption would be useful for L2ARC or SLOG.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Are there any similar devices that are fully supported in TrueNAS 13 that could meet a similar objective of easily providing a large amount of NVMe connectivity to a server?

Sure, any of the PLX based storage chassis out there. What makes you think this would be useful, though? TrueNAS is a NAS product and you only need about six NVMe devices at 7,000 MBytes/sec, which in mirror configuration and assuming JUST one side (i.e. write) would be 21,000 MBytes/sec, or 168 Gbits/sec, or in other words far faster than your average network could sustain.
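
To spell out that arithmetic (taking the 7,000 MBytes/sec per-drive figure at face value), a quick sketch:

```python
# Six NVMe drives at 7,000 MB/s each, arranged as three 2-way mirrors,
# counting only one side of each mirror (i.e. the write path).
drive_mb_s = 7_000                 # per-drive sequential throughput, MB/s
mirror_vdevs = 6 // 2              # three 2-way mirrors

pool_mb_s = mirror_vdevs * drive_mb_s      # 21,000 MB/s
pool_gbit_s = pool_mb_s * 8 / 1000         # 168 Gbit/s

for nic_gbit in (10, 25, 100):             # common NAS network speeds
    print(f"pool is ~{pool_gbit_s / nic_gbit:.1f}x a {nic_gbit}GbE link")
```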

You can actually get very adequate performance out of SATA as long as you have enough devices.

So, again, why would anyone want this for a NAS?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
obvious bandwidth issues if multiple drives are accessed at the same time
That's the problem I was highlighting... I would have thought it could be quite problematic for ZFS if many drives are in one pool and addressed by the same transaction groups.

84 lanes of PCIe are of little help to you if everything you want to use those lanes for (RAM and network) is on the other side of the x16 slot.
 

T_PT

Dabbler
Joined
Mar 20, 2023
Messages
20
It's highly unlikely that this thing will work the way you think it will.

Most notably, it seems to be putting some kind of hardware RAID controller (or some other kind of abstraction) in front of the NVMe devices, given the huge number of PCIe lanes it offers to the attached NVMe drives (100) versus the limited number it has to the host system (16).

I think that right there is enough to say that you don't want to run ZFS on that in any way.
Thanks for taking the time to respond.

I wasn't sure whether it was going to present the drives individually or put a controller between the drives and TrueNAS. While I'm still relatively new to TrueNAS, I'm already well aware that we want TrueNAS/ZFS to handle the drives itself without anything else in the way.

The feature that piqued my interest was the ability to add a lot of fast storage in a small footprint, but I do understand your point that if this is doing any RAID at all then it's undermining the advantages of TrueNAS/ZFS.
 

T_PT

Dabbler
Joined
Mar 20, 2023
Messages
20
This is a 100-lane PCIe switch on a x16 card, leaving 84 lanes for 21 x4 drives. No technical issue with any OS, or with ZFS.
But obvious bandwidth issues if multiple drives are accessed at the same time—not to mention cooling issues.

There's no use for insane capacity for L2ARC, even less so for SLOG. And any extra latency is unwelcome for SLOG. I don't understand how this contraption would be useful for L2ARC or SLOG.
Thanks for your thoughts. As I mentioned in my last reply, I was interested in getting a lot of ports in a small footprint; while it seemed to a novice like me a suitable home for SLOG/L2ARC, those would certainly only use a small fraction of the capacity. A highly resilient special metadata vdev wouldn't be too outlandish though, would it? Or do you have a better idea/recommendation for that?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Even without hardware RAID (as @Etorix suggested, it's possibly just a PCIe switch), you're going to be in much the same position as with a SATA port multiplier... also not good news for ZFS.
 

T_PT

Dabbler
Joined
Mar 20, 2023
Messages
20
Sure, any of the PLX based storage chassis out there. What makes you think this would be useful, though? TrueNAS is a NAS product and you only need about six NVMe devices at 7,000 MBytes/sec, which in mirror configuration and assuming JUST one side (i.e. write) would be 21,000 MBytes/sec, or 168 Gbits/sec, or in other words far faster than your average network could sustain.

You can actually get very adequate performance out of SATA as long as you have enough devices.

So, again, why would anyone want this for a NAS?
I think this may be the better route for me to be looking at - thanks.
 

T_PT

Dabbler
Joined
Mar 20, 2023
Messages
20
Even without hardware RAID (as @Etorix suggested, it's possibly just a PCIe switch), you're going to be in much the same position as with a SATA port multiplier... also not good news for ZFS.
OK - back to reading around, trying to decide how best to pack drives into a system and still have some headroom for growing capacity later on :)
Thanks for taking the time to respond :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
OK - back to reading around, trying to decide how best to pack drives into a system and still have some headroom for growing capacity later on :)

If you are going SSD and you want capacity, you almost certainly want SAS. This basically works out to building a 24-bay 2U enclosure, and I feel like it's probably okay to use SAS expanders in this role unless you are shooting to support 100GbE or something like that. A single SFF-8643/8644 link can handle 48Gbps, so a 2U server plus a 2U JBOD linked by 8644 gets you 96Gbps of theoretical transfer speed between the two shelves of disks. That is with only a single HBA; adding another HBA can get you double the speed AND double the capacity. You can of course use cheap SATA SSDs in this role as outlined above.
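
For reference, here's roughly how those numbers fall out, assuming SAS-3 at 12Gbps per lane and one 4-lane wide port per shelf from a single HBA (the exact port layout is my assumption):

```python
# Rough check of the SAS figures above. Assumes SAS-3 (12 Gbit/s per lane)
# and one 4-lane wide port (SFF-8643 internal / SFF-8644 external) per
# shelf from a single 8-lane HBA; the exact topology will vary.
SAS3_GBIT_PER_LANE = 12
LANES_PER_WIDE_PORT = 4

per_shelf = SAS3_GBIT_PER_LANE * LANES_PER_WIDE_PORT   # 48 Gbit/s per shelf
shelves = 2                                            # 2U server + 2U JBOD
total = per_shelf * shelves                            # 96 Gbit/s aggregate

print(f"{per_shelf} Gbit/s per wide port, {total} Gbit/s across {shelves} shelves")
```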

I don't have a clear idea of what you consider "fast", so it is possible that this arrangement is not sufficiently fast if you start out with just the 2U server. In that case, start right away with the JBOD as well and split the drives between the server and JBOD. That gets you in the neighborhood of 100Gbps of capability to your pool.
 

T_PT

Dabbler
Joined
Mar 20, 2023
Messages
20
If you are going SSD and you want capacity, you almost certainly want SAS. This basically works out to building a 24-bay 2U enclosure, and I feel like it's probably okay to use SAS expanders in this role unless you are shooting to support 100GbE or something like that. A single SFF-8643/8644 link can handle 48Gbps, so a 2U server plus a 2U JBOD linked by 8644 gets you 96Gbps of theoretical transfer speed between the two shelves of disks. That is with only a single HBA; adding another HBA can get you double the speed AND double the capacity. You can of course use cheap SATA SSDs in this role as outlined above.

I don't have a clear idea of what you consider "fast", so it is possible that this arrangement is not sufficiently fast if you start out with just the 2U server. In that case, start right away with the JBOD as well and split the drives between the server and JBOD. That gets you in the neighborhood of 100Gbps of capability to your pool.
I'm not sure we can go all SSD, but I'll see what pricing looks like closer to the time of purchase. My original question here really comes from trying to understand what possibilities are out there first and then thinking about real-world constraints later. I'm definitely thinking SAS rather than SATA, and was wondering how useful a fusion pool would be.

We don't currently have the infrastructure to go 100Gbps, I'm looking at dual SFP28 to start with; so it appears my definition of "fast" for our use-case isn't anything like as quick as you've been considering here.
I think what you've described would certainly be more than adequate for the short to medium term, but when buying a significant piece of infrastructure I want to cover our bases for at least as long as the hardware warranty lasts - 5 years initially, but likely extendable beyond that when the initial term is up. I'd like to have capacity on the HBA to add drives and vdevs as required, but we'll have a mixed workload, so it's really still a blank page as to what our solution is going to look like. I've been trying to weigh the cost/complexity of a fusion pool - large capacity on rust with performance supported by SLOG/L2ARC and possibly a metadata vdev - against just going all SSD, but I'm going in circles a little bit on this.

We're straying further and further away from my initial question here; I think that's been well answered and led me to the conclusion that the Apex card is not the solution I'm looking for!

Maybe it's best if I set out a separate question about hardware spec and pool structure rather than go down a different rabbit hole here?
I'm more than happy to continue the discussion here if you don't mind, but as a newcomer I thought I'd ask whether it's better to follow that tangent in this thread or in its own (more appropriately titled) post?

I appreciate the time taken for all of the responses here - thanks for all the feedback and thoughts.
 