Recommendations for a Retired EMC Isilon X400


Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I have a retired EMC Isilon X400 that has FreeNAS:latest on it. I am looking for some advice on how to build this machine out properly for maximum performance. It will be used for a couple of different purposes: it will be a VM backing store for oVirt, and it will also handle backups of various computers on the network.

2x hex-core Intel CPUs
96GB of RAM
2x 8GB SSDs in a mirror for the OS
36x Hitachi 7200RPM 3TB drives
2x Chelsio 10GbE NICs

I am just getting started, so I am open to hearing anything.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I assume that you have some time to play with it and so forth before it goes into production.

Your RAM/pool size ratio looks fine. I think the right way to play this is just to go ahead and build the FreeNAS, and see if you encounter any issues. One question I have is how you are going to drive those 36 drives; what is the HBA here? (I am not familiar with the Isilon X400.) FreeNAS is notoriously touchy about which HBAs can be used reliably in configurations like this.

But assuming all of that is fine, you have 36 drives. You probably don't want vdevs wider than 12 drives, so one reasonable configuration would be 3 RAID-Z3 vdevs of a dozen drives each. That would essentially give you a single pool of, quick mental math here, maybe 73TiB or so (3 vdevs x 9 data drives x 3TB, less overhead). That might be a good place to start. Your use case may benefit from L2ARC and/or a separate log (SLOG/ZIL) device as well, but I think it's reasonable to see how your performance looks first.
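For illustration, a rough sketch of what that layout would look like from the shell, assuming the 36 drives enumerate as da0 through da35 and using a placeholder pool name "tank" (on FreeNAS you would normally build this through the GUI Volume Manager, which also handles partitioning and swap for you):

# three 12-wide RAID-Z3 vdevs striped into one pool
zpool create tank \
  raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
  raidz3 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 \
  raidz3 da24 da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35
zpool status tank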

Perhaps @cyberjock or @anodos or @jgreco would have more specific expertise than I would and would care to comment.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If there's going to be live VM storage, then RAIDZ isn't really appropriate and mirrors would be better. You might be able to get away with a modest amount of non-stressy VM storage on RAIDZ because the system's large-ish.

For better performance, use smaller RAIDZ2 vdevs, such as six 6-wide RAIDZ2. If you prefer space over performance, you can try 9-wide or 12-wide RAIDZ2 (or Z3).
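A rough sketch of the mirrored layout, for comparison, again with placeholder device names (da0 through da35) and a placeholder pool name "tank"; the narrower RAIDZ2 variants are the same idea with raidz2 groups in place of the mirror pairs:

# 18 striped two-way mirrors out of 36 disks
vdevs=""
i=0
while [ $i -lt 36 ]; do
  vdevs="$vdevs mirror da$i da$((i+1))"
  i=$((i+2))
done
zpool create tank $vdevs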
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I went with mirrors, 18 vdevs. I can saturate a 10GbE link. I'm very impressed with this setup: $3k for 50TB of usable space, and performance is top notch. I'm a happy camper.

Would an SSD help with performance when this gets busy?
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I didn't answer the questions.

It is used as a VM store for oVirt.
It is also used as Cinder storage for OpenStack.
I have a Dell M1000e fully populated with blades, half OpenStack and half oVirt.

It has an LSI SAS2008 controller that I had to upgrade to firmware v20.
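(For reference, a couple of ways to double-check the firmware and driver versions on a SAS2008 card, assuming it attaches as mps0 and that the LSI sas2flash utility is available on the box:)

dmesg | grep mps0
sas2flash -listall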

Any other suggestions to maximize performance would be great.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You should be able to put in a ~500GB SSD for L2ARC. Do not use something stupid-expensive like the Intel DC series, but rather something cheapish like a Samsung 850 Pro 512GB, or if you want that last little bit of speed, get a PCIe-to-M.2 converter and a Samsung 950 Pro 500GB. When the pool gets busy, this will give it an extra turbo charge for reads.
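For reference, wiring one in by hand looks roughly like this, assuming the SSD shows up as ada6 and the pool is named "emc" (both placeholders; the FreeNAS GUI can do the same thing and will partition the device for you):

zpool add emc cache ada6     # cache (L2ARC) vdevs can be removed later with zpool remove
zpool iostat -v emc 5        # watch the cache device fill as the working set warms up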
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I have some SanDisk SSDs sitting around; would they be OK?
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I am looking for a good way to benchmark this system. Any pointers?
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
Should I stay away from getting something for a write cache? Write speeds seem pretty good, about 1GB/sec using dd.
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
[root@freenas] /mnt/emc/home/cifs# dd if=/dev/zero of=test bs=4096 count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes transferred in 9.661067 secs (423969733 bytes/sec)
[root@freenas] /mnt/emc/home/cifs# dd if=/dev/zero of=test bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 94.744656 secs (432319899 bytes/sec)
[root@freenas] /mnt/emc/home/cifs#


That's about 3.5 Gbit/sec (roughly 430 MB/sec) of writes.

I do understand that this is not a realistic test of performance; however, I have set up a couple of FreeNAS machines before and never have I gotten anywhere near those numbers.

Thanks for any input anyone has to get this box purring along even faster.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I've got an HP P410 sitting around with a 512MB write cache. Could I hook that up with a couple of SSDs in RAID 1 for SLOG and L2ARC?


You're right, compression is on. At least I know I can push my lagg at full speed.
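For anyone else benchmarking this way, a quick sketch of checking what compression is doing, assuming the dataset is emc/home/cifs as in the paste above:

zfs get compression,compressratio emc/home/cifs
# or make a throwaway dataset with compression off just for dd tests:
zfs create -o compression=off emc/bench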

Thanks for the tips guys, I really appreciate your time.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You'd probably be better off directly connecting the L2ARC to a mainboard or HBA port. For SLOG, *if* the RAID controller actually works well with FreeNAS, I used to favor a RAID write cache plus a conventional hard drive as one of the paths to high-endurance, low-latency SLOG. But you need the battery backup and the cache to be configured properly. These days, just getting a cheap Intel 750 is more likely to be a practical solution unless you need massive write endurance.
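If you do try a dedicated SLOG, the by-hand version is short, assuming the device shows up as nvd0 and the pool is named "emc" (placeholders again):

zpool add emc log nvd0      # separate log (SLOG) vdev; only helps sync writes (e.g. NFS/iSCSI VM traffic)
zpool remove emc nvd0       # log vdevs can be removed again if it doesn't help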
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I have another one of these servers arriving tomorrow to do some benchmarking with.
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
OK, here are some simple dd tests, because I don't really know how to do any better ones. If anyone wants to point me to a good article on real-world testing, I am happy to run them.

EMC Isilon X400 with 34 disks in mirrors (17 vdevs), no SSDs for L2ARC. *Compression is disabled*

Write
dd if=/dev/zero of=img.test bs=1024k count=25k
25600+0 records in
25600+0 records out
26843545600 bytes transferred in 20.279746 secs (1323662823 bytes/sec)
Or about 10.6 Gbit/sec (≈1.3 GB/sec)
dd if=/dev/zero of=img.test bs=4k count=100k
102400+0 records in
102400+0 records out
419430400 bytes transferred in 0.910241 secs (460790430 bytes/sec)
About 3.7 Gbit/sec (≈460 MB/sec)

dd if=/dev/zero of=img.test bs=4k count=1M
1048576+0 records in
1048576+0 records out
4294967296 bytes transferred in 9.221067 secs (465777693 bytes/sec)
About the same as above

More than enough to saturate a 10GbE link

Read
dd of=/dev/zero if=img.test bs=1024k count=25k
25600+0 records in
25600+0 records out
26843545600 bytes transferred in 4.969406 secs (5401761279 bytes/sec)
Or about 43.2 Gbit/sec (≈5.4 GB/sec)

dd of=/dev/zero if=img.test bs=4k count=100k
102400+0 records in
102400+0 records out
419430400 bytes transferred in 0.366118 secs (1145615491 bytes/sec)
About 9.2 Gbit/sec (≈1.1 GB/sec)



I have an NVMe SSD en route and I am going to give that a spin. My SSDs are not very good performers. I think I might be better off just sticking with the disks the appliance came with.

Not bad for a $3k server with 50TB usable. I could get more usable space with a different RAID layout.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Make sure to write and read more data than you have RAM.
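For example, something like the following, assuming 96GB of RAM and a dataset with compression off, writes and then reads roughly twice as much data as fits in ARC (the 1m block size also gives dd less syscall overhead than 4k):

dd if=/dev/zero of=img.test bs=1m count=200k    # ~200GiB write
dd if=img.test of=/dev/null bs=1m               # read it back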
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
I am working on that now. I have about 100GB of RAM to get past, so it might take a few minutes.
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
Write
dd if=/dev/zero of=img.test bs=4k count=10000000000000
42369433+0 records in
42369432+0 records out
173545193472 bytes transferred in 402.148444 secs (431545108 bytes/sec)

173.5 GB file in 6.7 minutes
About 3.5 Gbit/sec (≈430 MB/sec)

That works out to roughly 25 MB/sec per mirror vdev across the 17-vdev stripe, which seems about right for a single-threaded 4k dd.

Read

dd of=/dev/zero if=img.test bs=4k count=10000000000000
28217056+0 records in
28217056+0 records out
115577061376 bytes transferred in 145.448422 secs (794625749 bytes/sec)

115.6 GB file in 2.5 minutes
About 6.4 Gbit/sec (≈795 MB/sec)
 