fusion io is not recognized, looking for alternative

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
I just installed a new FreeNAS on my Supermicro. Here is the hardware info:
  • SuperMicro Server 1U 1027R-72BRFTP
  • CPU: 2x E5-2640v2
  • RAM: 64GB ECC
  • Boot: Viking Technology SATADIMM 100GB
  • Storage: 8x 800GB SAS 6Gb/s SSD
  • Controller: AOC-S2308L-L8e
  • Mellanox ConnectX-3 40 Gigabit
  • Fusion-io ioDrive Duo MLC 1.2TB (link from the seller with some more info about it)
The system is working (currently building the zpool for the 8x SSD).

Any idea what I should do to make the system recognize the Fusion-io SSD?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

iliak

Contributor
Joined
Dec 18, 2018
Messages
148

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Not here; the seller that sold me the hardware provided it.

I found that the driver can be found here: http://dell-support.sandisk.com/
but which one should I try?

I also have a Micron RealSSD P320 512GB.
Well, there are a few things to cover. First, FreeNAS is appliance software, which means you can't just install a driver you downloaded off a website. Second, FreeNAS is a FreeBSD-based appliance, and none of the drivers in the link you posted are for FreeBSD. Third, all the drivers I've found for Fusion-io cards (IBM, HPE, Dell...) are for Windows/Linux/VMware. So the card doesn't appear to be supported.
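
If you want to double-check what FreeBSD itself sees, a quick sanity check from the FreeNAS shell is to list the PCI devices and the loaded kernel modules. This is just a sketch; the grep patterns are guesses at how the card identifies itself on the bus:

  # list PCI devices with vendor/device strings and look for the card
  pciconf -lv | grep -B 3 -i -e fusion -e sandisk

  # list loaded kernel modules (there will be no Fusion-io driver among them)
  kldstat

If the card shows up in pciconf but nothing attaches to it, that confirms the hardware is fine and it is purely a driver problem.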
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
I'll try to get something else that will work with FreeNAS. Will any PCIe card with a FreeBSD driver work?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
What's the purpose of this card you want to get?
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
So you want to make a pool out of this PCIe SSD? Just a single disk? This is a strange setup and probably not going to work.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
Kinda; just to share and manage it, mostly via NFS.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
What models are supported and have good value? I'm after 600GB-1.2TB with at least 1GB/s write.
Would an Intel P3500 or P3600 work?
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
The Fusion IO card is a neat piece of kit, but I don't see how it'd help you given your use-case. You mention 1GB/s - do you mean gigabit/sec?
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
Nope, I meant 1 gigabyte/sec.
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
You mention shared storage for 30 servers. How are they accessing this shared storage? Are you thinking of using FreeNAS as a NAS (i.e. the 30 systems mount NFS or SMB to FreeNAS), or as a storage back-end for VMs residing on FreeNAS (i.e. one or more ESXi hosts connecting via NFS/iSCSI and hosting VMDKs on FreeNAS)?

How are you laying out your vdevs with those 8 SSDs?
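For reference, the two layouts I'd normally weigh for 8 SSDs are one RAID-Z1 vdev versus four mirrored pairs; a rough sketch (the pool name and da0-da7 device names are placeholders, not your actual disks):

  # one RAID-Z1 vdev: most usable capacity, IOPS of a single vdev
  zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6 da7

  # four mirrored vdevs: less capacity, roughly four vdevs' worth of IOPS
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7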

Assuming you cannot use that Fusion IO in this build (sadly), have you considered something like NVMe for log/cache? If you don't have enough (or any) NVMe ports, you might look into something like a quad NVMe-to-PCIe card. Dell makes one, as do several other vendors. FreeBSD support would need to be verified. I'd wager you'd see excellent latency, IOPS, and throughput from that. Depending on your SSD vdev layout, it may be unnecessary... 1GB/sec isn't that much.

Have you tried building and benchmarking? If this is for a prod environment, it'd be worth the time investment.

Lastly, how are you backing up?
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
You mention shared storage for 30 servers. How are they accessing this shared storage? Are you thinking of using FreeNAS as a NAS (i.e. the 30 systems mount NFS or SMB to FreeNAS), or as a storage back-end for VMs residing on FreeNAS (i.e. one or more ESXi hosts connecting via NFS/iSCSI and hosting VMDKs on FreeNAS)?

How are you laying out your vdevs with those 8 SSDs?
The 8 SSDs are set up in a single RAID-Z1 pool and are used mostly as a shared NFS export in read-only mode.

As for the Fusion-io (still looking for an alternative; trying to figure out if an Intel P3600 will work):
our servers will run computational tasks and write their output files (a lot of small ones) to it. I assume 1-2GB/s will be our limit,
but we might later put a 500GB SQL DB on it.
Assuming you cannot use that Fusion IO in this build (sadly), have you considered something like NVMe for log/cache? If you don't have enough (or any) NVMe ports, you might look into something like a quad NVMe-to-PCIe card. Dell makes one, as do several other vendors. FreeBSD support would need to be verified. I'd wager you'd see excellent latency, IOPS, and throughput from that. Depending on your SSD vdev layout, it may be unnecessary... 1GB/sec isn't that much.
There will be a lot of writes, so 4x NVMe might be faster but not as stable.

Have you tried building and benchmarking? If this is for a prod environment, it'd be worth the time investment.
not yet still configuring all the servers and the systems
this is not for production only for research
but if it will work later we migh add more dedicated freenas servers

Lastly, how are you backing up?
If you mean backup, it is not needed; it holds a copy of our data for research only.
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
You've got one Fusion IO card, right? You can recover a pool after losing your log device (theoretically), but I wouldn't want to have to deal with that pain. For that reason I mirror my log devices. Unless you have two Fusion IOs you can't really do that... and seeing as the Fusion IO isn't even recognized, it's a moot point. Also, 1.2TB is overkill and wasteful for a log device (you'd never want a log device that large, and your memory is already very small at just 32GB).

There will be a lot of writes, so 4x NVMe might be faster but not as stable.

I'm curious why you'd suggest NVMe has stability issues.

If you want to drive your performance up, I'd sell that Fusion IO and buy a lot more RAM.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
The chassis has room for only one more PCIe card (one slot is used for the 40Gb network card and one for the LSI 2308 card).
The drive will be used for data such as tmp (any data older than one week will be deleted), and no redundancy is needed at all.

I can replace the Fusion-io card and get something else (I still don't know which will work with FreeNAS; my best guess is an Intel P3600, but no one has confirmed it yet).


I'll have to declare that I'm not an IT or DevOps specialist; I am mostly a programmer tasked with buying, configuring, and integrating a small-to-medium-scale server system (it will grow to up to 30 servers, each with 20-40 cores). Our last issue is the storage servers (such as this one).
I have good knowledge of Linux and most other systems, but FreeBSD (as the core of FreeNAS) is a first for me.

As far as I know, a normal NVMe card will not sustain the speed well, for two main reasons: heat, and that once its cache is full the speed drops very quickly. Second, we buy most of the stuff refurbished from a seller and get good prices on high-end enterprise hardware.

We buy SSDs with 95%-100% of their life left (good as new), and new consumer-grade hardware is still not as good as 3-5 year old high-end enterprise hardware,
so I'd prefer to stick with an enterprise PCIe SSD.
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
Is this a file server with network clients? Are VMs stored on FreeNAS? Do you plan to enable sync on your shares, or are you forced to because of how your clients will use them?

ZFS doesn't cache reads or writes the way you may assume it does. There's no read-ahead cache, for example. Do some reading to make sure you understand log vs. cache, the impact they have on RAM requirements, the impact a cache or log device failure can have on your entire pool, and use-cases that benefit from cache or log, particularly around sync workloads.
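
For reference, log and cache are just extra devices you attach to an existing pool; a rough sketch with a placeholder pool name ("tank") and placeholder FreeBSD NVMe device names:

  # mirrored SLOG (separate log device) - only ever helps sync writes
  zpool add tank log mirror nvd0 nvd1

  # L2ARC cache device - only helps repeated reads, and costs RAM to index
  zpool add tank cache nvd2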

ZFS caches writes in-memory unless sync is enabled. It flushes writes to disk every 5 seconds. To avoid waiting on slow SSD, you need enough RAM to contain those writes. You also need enough RAM left over for your ARC for read caching and general OS. Assume 8GB just for the OS. That leaves you 24GB for ARC and ZIL, and if you add a cache, you lose more RAM from those. The bigger your cache, the more RAM you need.
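
On FreeBSD you can watch those numbers directly; a few read-only sysctls (just a sketch, nothing here changes any settings):

  # current ARC size in bytes
  sysctl kstat.zfs.misc.arcstats.size

  # configured ARC ceiling
  sysctl vfs.zfs.arc_max

  # how often dirty data is flushed to disk, in seconds (default 5)
  sysctl vfs.zfs.txg.timeout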

- Can a PCIe SSD out-perform an array of 8 SAS6 SSDs in terms of read or write throughput?
- Can a PCIe SSD out-perform an array of 8 SAS6 SSDs in terms of write latency?

If the answer is "no" to either of those, you're wasting money and losing performance by adding PCIe SSD as log or cache.

PCIe NVMe will easily beat a PCIe SSD in latency and throughput. And remember, you're only pushing 1GB/sec at it. Set up a

If your workload actually forces you to use sync, then you may benefit from a log device, but you don't have the slots to do that as PCIe SSD, and PCIe SSD is likely less performant than your 8xSAS6 SSDs. You could go PCIe NVMe with multiple NVMe drives in a single PCIe slot and mirror them, but you're right that you'll need enough NVMe with enough buffer to contain all those writes you're doing without exhausting the buffer and falling off the NAND write-performance cliff.

If I were you, I'd:
1. Make sure I'm not using sync unless I absolutely have to, and then adjust my design to accommodate that (see the sketch after this list)
2. Get a lot more RAM. 128GB is practical and under a no-sync workload would easily handle 1GB/sec.
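
To illustrate point 1, sync is a per-dataset property you can check and, if you can live with losing the last few seconds of writes on a power failure, turn off; a sketch with a placeholder dataset name:

  # see what the share's dataset is currently doing
  zfs get sync tank/scratch

  # standard = honor client sync requests, disabled = treat everything as async
  zfs set sync=disabled tank/scratch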

Just my $0.02.. and again.. build, benchmark with real-world workloads, then adjust.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
Is this a file server with network clients? Are VMs stored on FreeNAS? Do you plan to enable sync on your shares, or are you forced to because of how your clients will use them?
No VMs, just basic file sharing; no sync.

ZFS doesn't cache reads or writes the way you may assume it does. There's no read-ahead cache, for example. Do some reading to make sure you understand log vs. cache, the impact they have on RAM requirements, the impact a cache or log device failure can have on your entire pool, and use-cases that benefit from cache or log, particularly around sync workloads.
Cache won't help me because of the large rotation of the data; each run, all the data will be read, approx 2-4TB each time.
The SSD pool updates once a day (adding one new file) and all the clients just read it (read-only mount).
And for the PCIe drive (the Fusion-io alternative), no sync or protection is needed.

ZFS caches writes in-memory unless sync is enabled. It flushes writes to disk every 5 seconds. To avoid waiting on slow SSD, you need enough RAM to contain those writes. You also need enough RAM left over for your ARC for read caching and general OS. Assume 8GB just for the OS. That leaves you 24GB for ARC and ZIL, and if you add a cache, you lose more RAM from those. The bigger your cache, the more RAM you need.
At first I'll try to work without cache and write directly to the drive.

- Can a PCIe SSD out-perform an array of 8 SAS6 SSDs in terms of read or write throughput?
- Can a PCIe SSD out-perform an array of 8 SAS6 SSDs in terms of write latency?
I did not understand; the 8 SSDs are used as read-only,
and the one PCIe SSD as read-write.

If the answer is "no" to either of those, you're wasting money and losing performance by adding PCIe SSD as log or cache.

PCIe NVMe will easily beat a PCIe SSD in latency and throughput. And remember, you're only pushing 1GB/sec at it. Set up a

If your workload actually forces you to use sync, then you may benefit from a log device, but you don't have the slots to do that as PCIe SSD, and PCIe SSD is likely less performant than your 8xSAS6 SSDs. You could go PCIe NVMe with multiple NVMe drives in a single PCIe slot and mirror them, but you're right that you'll need enough NVMe with enough buffer to contain all those writes you're doing without exhausting the buffer and falling off the NAND write-performance cliff.
As I said, I won't use sync for now.

If I were you, I'd:
1. Make sure I'm not using sync unless I absolutely have to, and then adjust my design to accommodate that
2. Get a lot more RAM. 128GB is practical and under a no-sync workload would easily handle 1GB/sec.
After testing I'll check the RAM usage and whether more would be worth it; I'll be able to add up to 320GB more RAM.

Just my $0.02.. and again.. build, benchmark with real-world workloads, then adjust.
I'll take all the help I can get...
I still have some issues with configuring the network, but that is asked about in another post.
 

acquacow

Explorer
Joined
Sep 7, 2018
Messages
51
At Fusion-io, we used to have a FreeBSD driver and people used to use our cards with FreeNAS, but that was AGES ago at this point and you'll have zero luck getting it going in the current versions.

Your best use of the ioDrive would be to just drop it in another box and NFS share it out on your 40gig network as scratch space.
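
For example, if that other box runs Linux (where the ioDrive driver is still available), the share itself is a one-line export; a sketch with a placeholder mount point and subnet:

  # /etc/exports on the box holding the ioDrive
  /mnt/iodrive  10.0.0.0/24(rw,async,no_root_squash)

  # reload the export table
  exportfs -ra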

You could put it in your FreeNAS box, possibly pass it through to a VM, and then share it out that way.
 