Add vdev to existing server vs. adding 2nd server


HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Hey Everyone,

Well, it is that time of year again: I am (again) running out of available drive space on my plexnas server (a FreeNAS server dedicated to Plex). I am currently at about 78% utilization on my volume, and I know that above 80% FreeNAS will start sending out capacity warnings.

I am trying to decide whether to add another 6-drive vdev to vol1, using 4TB or 6TB drives, or to roll out a second server instead.

Here is my current configuration:

Supermicro Superserver 5028R-E1CR12L
Supermicro X10SRH-CLN4F Motherboard
1 x Intel Xeon E5-2640 V3 8 Core 2.66GHz
4 x 16GB PC4-17000 DDR4 2133MHz Registered ECC
12 x 4TB HGST HDN724040AL 7200RPM NAS SATA Hard Drives
LSI3008 SAS Controller - Flashed to IT Mode V5 to match FreeNAS driver
LSI SAS3x28 SAS Expander
Dual 920 Watt Platinum Power Supplies
16GB USB Thumb Drive for booting
FreeNAS-9.10.1 (d989edd)


I have the 12 drives set up like this:

Code:
[root@plexnas] ~# zpool status
  pool: vol1
 state: ONLINE
  scan: scrub repaired 0 in 9h16m with 0 errors on Thu Sep  1 10:16:51 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/f46fb4ec-ed62-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/f69f4e21-ed62-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/f8cde372-ed62-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/faeb3d6d-ed62-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/fd087ff0-ed62-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/ff28300a-ed62-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/013d5491-ed63-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/0357b342-ed63-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/05811f51-ed63-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/079f5f22-ed63-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/09b81318-ed63-11e4-a956-0cc47a31abcc  ONLINE       0     0     0
            gptid/a82dda8c-ef5f-11e4-bb0a-0cc47a31abcc  ONLINE       0     0     0

errors: No known data errors



My NAS is connected directly to my Plex server via a 10GbE connection, and my Plex server usually has a max of about 7 people on it at any given time, but averages more like 3 or 4. Plexnas is strictly used for Plex and has no jails, etc., on it.

Looking at the graphs, it seems like the system is basically idle even when my Plex server is under its highest load, so I am thinking I should be able to drop another vdev or two on the box by adding in another LSI controller. Initially I am thinking another 6 x 4TB drives in the same raidz2 config.

To me, this would be easier and simpler than managing a completely separate server, but I am looking for advice from those running larger systems than mine as to the best approach given my particular server configuration.
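
For reference, here is a minimal sketch of what adding a third raidz2 vdev would look like from the shell. This is just an illustration: the gptid names are placeholders, and on FreeNAS the Volume Manager in the GUI does the equivalent (and handles partitioning and swap), so that is what I would actually use.

Code:
# sketch only - the gptid names below are placeholders, not real disks
zpool add vol1 raidz2 \
    gptid/<new-disk-1> gptid/<new-disk-2> gptid/<new-disk-3> \
    gptid/<new-disk-4> gptid/<new-disk-5> gptid/<new-disk-6>

zpool status vol1   # should now show raidz2-2 alongside raidz2-0 and raidz2-1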


10GbE interface traffic to the Plex server: [graph]

Average disk I/O (partial list; they all look the same): [graph]

Memory and swap: [graph]

CPU and system load: [graph]
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I vote for a 3rd vdev of RAIDZ2. Of course, it looks like you may be out of drive slots on that case, so is a JBOD under consideration? That, or get a bigger case that houses more drives, has bigger PSUs, and can take your parts.

As far as JBODs go, I prefer to have them use their own pool/volume, just for the safe feeling that no part of any vdev is housed somewhere else. But that is just me...
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
I vote for a 3rd vdev of RAIDZ2. Of course, it looks like you may be out of drive slots on that case, so is a JBOD under consideration? That, or get a bigger case that houses more drives, has bigger PSUs, and can take your parts.

As far as JBODs go, I prefer to have them use their own pool/volume, just for the safe feeling that no part of any vdev is housed somewhere else. But that is just me...

Yes, either way I am out of bays on the current server, so a second device of some type is in order.

I did think about a separate pool/volume as well, but then I would need to reevaluate how I have my Plex server configured. I run Plex, NZBGet, CouchPotato, Headphones, and Sonarr on it, and everything is configured for /mnt/media. Not that it would be the end of the world to change the settings...

I am also looking for good JBOD enclosures. Servers are easy; lots of good used SuperMicros on eBay!
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
Vote for another case, e.g. an 846. Same hardware and another vdev. You don't even need another HBA; just get an 846 with a backplane that connects via an 8087 port.


Sent from my iPhone using Tapatalk
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I did think about a separate pool/volume as well, but then I would need to reevaluate how I have my Plex server configured.
Yeah, I would avoid that personally. You seem to have things running sweet, so no sense in mucking with a good thing. I think they make 36-drive cases; perhaps that is in your future. ;)
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Get an SC847E16 JBOD (45 drives in 4RU). If you are concerned about splitting your pool, just put your existing drives in the JBOD enclosure.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I would slap a JBOD on that current FreeNAS box, as it has more than enough resources to manage a ton more disks. I'd go with a SuperMicro SC847E16 JBOD (45 drives) connected with an LSI 9200-8e.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
For those of you using tapatalk and can't see my signature:

-- freenas1 Specs: SuperMicro SuperStorage Server 5028R-E1CR12L | Xeon E5-2637V3 | 128GB DDR4 2133MHz ECC RDIMM | X10SRH-CLN4F | On-board LSI 3008 (Flashed w/Avago v9-IT) | 12x 4TB WD-Re SAS| 2x Intel S3700 (SLOG and L2ARC)
-- freenas1 expansion: SuperMicro SC847E16 JBOD (45 drives) | connected via an LSI 9200-8e flashed to v20 IT | 20 x WD Red 2TB + 6 WD Red 3TB
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
For those of you using tapatalk and can't see my signature:

-- freenas1 Specs: SuperMicro SuperStorage Server 5028R-E1CR12L | Xeon E5-2637V3 | 128GB DDR4 2133MHz ECC RDIMM | X10SRH-CLN4F | On-board LSI 3008 (Flashed w/Avago v9-IT) | 12x 4TB WD-Re SAS| 2x Intel S3700 (SLOG and L2ARC)
-- freenas1 expansion: SuperMicro SC847E16 JBOD (45 drives) | connected via an LSI 9200-8e flashed to v20 IT | 20 x WD Red 2TB + 6 WD Red 3TB
Nice, you've got the JBOD in production! Did you have to increase hw.mps.max_chains, since you've got 8 SAS channels for all the disks in the JBOD? Or no bottlenecks so far with it?
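
In case anyone wants to check their own box, here is a quick sketch, assuming the 9200-8e shows up as the first mps(4) adapter (dev.mps.0):

Code:
# see whether the driver is running low on chain frames
sysctl dev.mps.0.chain_free dev.mps.0.chain_alloc_fail

# if chain_alloc_fail climbs, raise the limit with the hw.mps.max_chains
# loader tunable (e.g. 4096); on FreeNAS add it under System -> Tunables
# as a "Loader" type variable so it persists, then reboot.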
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Nice, you've got the JBOD in production! Did you have to increase hw.mps.max_chains, since you've got 8 SAS channels for all the disks in the JBOD? Or no bottlenecks so far with it?
I haven't had any issues (nor seen any out-of-chain-frames errors), but I only use that JBOD to house a backup pool of the main pool, so I don't really worry too much about the performance. I wasn't even aware of that tunable.
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Thanks, everyone, for the input. I think that is the way to go: get a JBOD chassis and run it back to my main FreeNAS server. I guess the only other question I have, for those of you who really know hard drives, concerns the RPM.

I currently have 7200RPM drives because I thought I might need the extra speed, but given my overall load on the vdevs, can I get by with 5400RPM drives for my new vdev, and would it cause any issues adding it to a volume that already has 2 vdevs with 7200RPM drives? Again, it does not look like I am using anywhere near the capabilities of the drives I currently have, so going to 5400RPM would save some $$ or allow me to go to a larger overall drive.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Thanks, everyone, for the input. I think that is the way to go: get a JBOD chassis and run it back to my main FreeNAS server. I guess the only other question I have, for those of you who really know hard drives, concerns the RPM.

I currently have 7200RPM drives because I thought I might need the extra speed, but given my overall load on the vdevs, can I get by with 5400RPM drives for my new vdev, and would it cause any issues adding it to a volume that already has 2 vdevs with 7200RPM drives? Again, it does not look like I am using anywhere near the capabilities of the drives I currently have, so going to 5400RPM would save some $$ or allow me to go to a larger overall drive.
Random IOPS will be a bit lower with 5400RPM drives, but WD Reds are supposed to have a higher platter density, so the sequential throughput isn't much different. With your workload I doubt you'd see an effective drop in performance by adding 5400RPM drives. On the plus side, they are easier to keep cool; depending on where your server is housed, that might make a difference.
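
As a rough sanity check, here is a back-of-envelope using assumed stream bitrates (not measurements from your box):

Code:
# assuming ~20 Mbit/s per high-bitrate Plex stream
7 streams x 20 Mbit/s  =  140 Mbit/s  ~=  17.5 MB/s total read
# a single 5400RPM NAS drive sustains roughly 100+ MB/s sequential,
# so even one 6-disk raidz2 vdev has plenty of headroom for that load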
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Thanks, Mlovelace - my biggest concern was impacting performance, but since I barely utilize the 7200RPM drives I didn't think there would be an issue. I wanted input from others as well!
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Personally, I wouldn't worry about the difference either.
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Personally, I wouldn't worry about the difference either.

Thanks!! I appreciate the input and think that is the way I am going to go. I can stick with 5400RPM and spend the $$ on going from 4TB to 6TB instead!
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Exactly! Or you could buy additional 4TB drives. :smile:
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Exactly! Or you could buy additional 4TB drives. :)

That is true, but I would really need to buy at least 4 more drives to stay raidz2, correct (4, 6, or 10 drives for raidz2)? I was thinking I'd stick with 6 drives now, then add another 6 drives when the 8TB drives really drop in price, driving down the 6TB prices in the process :smile:
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
That is true, but I would really need to buy at least 4 more drives to stay raidz2, correct (4, 6, or 10 drives for raidz2)?
Not really. The number of drives isn't really applicable anymore. The only real constraint would be to keep the performance of the multiple vdevs in a pool somewhat similar.
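
To put rough numbers on it (raw space only, ignoring metadata overhead and the ~80% fill guideline), any width works; you just give up two disks' worth to parity:

Code:
5 x 6TB raidz2  ->  3 data disks  ~  18 TB raw  (~16.4 TiB)
6 x 4TB raidz2  ->  4 data disks  ~  16 TB raw  (~14.5 TiB)
6 x 6TB raidz2  ->  4 data disks  ~  24 TB raw  (~21.8 TiB)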
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
I must have missed something... Is the number of drives in a vdev not important anymore as a result of a software change in FreeBSD/ZFS/FreeNAS, or for some other reason?
 