started build in 2012... can it be resurrected?

sadpanda

Dabbler
Joined
May 8, 2020
Messages
19
I accumulated some 2nd-hand server hardware back in 2012 for a home FreeNAS/Plex server, and then life happened. 8 years later, I'm clearing out the basement and trying to revive the build.

Here is what I have:

Dell 1950 III, E5420 2.5ghz processors, 8gb ram (specs here, 1x internal SATA, 2x 10gb nic, PERC 6i SAS)
2x M1015 / LSI 9220-8i
2x IBM 46M0997
2x Supermicro X5DPE-G2 3U 16 BAY servers - gutted and only used for backplane/caddies
SuperMicro CSE-PTJBOD-CB2
2x Supermicro PDB-PT825-8824 power distribution
4x Supermicro PWS-920P-SQ switching power supply
19x Seagate Barracuda ES 750GB (9BL148) (looks like I lost one somewhere!)
6x Samsung HD103SJ 1TB

left over/other: 2x Seagate Cheetah 15k SAS 73GB, 1x Samsung SSD, 2x WD 1tb (1 blue, 1 black)

What do you recommend for the rest of the buildout / setup? I'm reading up on current guides now but what I'm seeing is:

-8GB RAM was fine back in the day, but maxing out the mobo with 64GB can be had for reasonable money
-a dual-port 10GbE card or SLOG device would be a better use for the PCI slot the PERC is taking up, but which?

My anticipated usage is:
media server (up to 3 video streams)
long-term archival storage (a VHS-to-digital home video project was the original reason for the build)
surveillance video storage

Thanks!

 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
You do not need or want a SLOG, that's sorted. A SLOG only helps with sync writes, which means it's narrowly useful when FreeNAS backends a VM host via NFS or iSCSI. Writing without sync is always the fastest you can get, so: no SLOG for you, keep it simple.
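If you ever want to see how this behaves, the knob is the per-dataset `sync` property; a minimal sketch, assuming a hypothetical pool/dataset named `tank/media`:

```shell
# Check the current sync behavior (default "standard": honor sync requests)
zfs get sync tank/media

# "disabled" is fastest but unsafe for VM/database backends;
# "always" treats every write as sync, which is where a SLOG would help
zfs set sync=standard tank/media
```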

You have really light file-serving usage. You don't even need 10Gb for that use case, but go for it if it tickles you. Since you already have 2x 10Gb NICs, why add another, though?

More RAM is always nice for more ARC. 16GiB is fine; 32GiB allows you to run a few VMs; with 64GiB you have more RAM than you'll need for that use case.
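If you want to watch what extra RAM actually buys you, a quick sketch for checking ARC usage on FreeBSD/FreeNAS (values are system-specific):

```shell
# Current ARC size vs. its target maximum, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c_max
```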
 

sadpanda

Dabbler
Joined
May 8, 2020
Messages
19
Thanks for the response

Typo... it's a dual-port 1GbE (Broadcom NetXtreme II).

I was thinking of an x-over SFP+ connection to my workstation as an iSCSI connection because of all the video recording. I don't have any 10Gb hardware yet.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Sure, why not. You have enough rust there to build a pool that can be useful for that. You'll want lots of vdevs if you want to play with 10Gb throughput. I'd build it a few different ways and test performance with each - keep in mind an empty pool will benchmark much better than a full one - and then settle on the configuration you like best.
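A sketch of that try-a-few-layouts loop, assuming placeholder device names da0...da5 (substitute your own; `zpool destroy` erases everything on the pool):

```shell
# Layout A: striped mirrors (better IOPS, easy to grow two disks at a time)
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# Quick-and-dirty sequential write test; compression off so zeros aren't "free"
zfs create -o compression=off tank/bench
dd if=/dev/zero of=/mnt/tank/bench/test.bin bs=1M count=10240

# Tear down and try Layout B: raidz2 (more capacity, fewer vdevs = fewer IOPS)
zpool destroy tank
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
```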
 
Joined
Jul 2, 2019
Messages
648
I don't think you will be able to do any virtualization. The E5420 lacks Intel's VT-x and Extended Page Tables (EPT).
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Good point. File storage and jails it is, then.
 

sadpanda

Dabbler
Joined
May 8, 2020
Messages
19
I don't think you will be able to do any virtualization. The E5420 lacks Intel's VT-x and Extended Page Tables (EPT).

It has VT-x but no EPT. I'm not sure where that puts it for VMs. More reading required. The only chip in that series that supports EPT is the L5248, but it seems to be a complete unicorn, so the L5430 looks like the only useful processor upgrade (low voltage).

Any thoughts on topology? I've not seen much talk of how best to distribute vdev members between two HBAs...

I'm thinking 4-drive raidz2 or 3-way mirrors, mainly for ease of upgrade.
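For comparison, the two candidates sketched as zpool commands (device names da0... are placeholders; with two HBAs you could split each vdev's members across them, so one controller dying doesn't take a whole vdev with it):

```shell
# 3-way mirrors: 750GB usable per vdev, any 2 of its 3 disks can fail
zpool create tank mirror da0 da1 da2 mirror da3 da4 da5

# 4-drive raidz2: ~1.5TB usable per vdev, any 2 of its 4 disks can fail
zpool create tank raidz2 da0 da1 da2 da3
```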

Thanks
 
Joined
Jul 2, 2019
Messages
648
I used to have a PE2950 III for experimentation. I seem to recall that I could get jails to work, but not VMs...
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
It has VT-x but no EPT. I'm not sure where that puts it for VM's.
The hypervisor used by FreeNAS for VMs is "bhyve", which requires EPT.
 

AdrianB1

Dabbler
Joined
Feb 28, 2017
Messages
29
The only problem with a very old build is that it will act as a secondary heater for your house. Those 12-year-old CPUs are extremely inefficient by today's standards; for example, a single $120 Ryzen 3 3300X should be a lot faster at a quarter of the power consumed. Similarly for RAM: two 16GB DDR4 sticks will be a lot more efficient (perf per watt) while still affordable. Same for the disks, etc. You can use most of what you have, but it will cost you in electricity, noise and performance.
 

sadpanda

Dabbler
Joined
May 8, 2020
Messages
19
Thanks for the replies.

It LIVES! 64GB RAM, low-voltage processors in, crazy turbo fans replaced with 'normal' Noctua fans, HBAs and expanders flashed. I'll get some wattage numbers soon.

How crazy is a pool of 3-way mirrors with one drive of each mirror offline/powered down and resilvered once a week?

Is bonnie++ the best way to benchmark?
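bonnie++ works; a hedged sketch of how it's commonly invoked against a scratch dataset (fio is another option; paths and sizes here are assumptions, adjust to your system):

```shell
# File size should exceed RAM so the ARC can't serve the test from cache;
# -n 0 skips the small-file tests, -u runs as the given user
bonnie++ -d /mnt/tank/bench -s 128G -n 0 -u root

# fio alternative: sequential 1MiB writes, 16GiB total
fio --name=seqwrite --directory=/mnt/tank/bench \
    --rw=write --bs=1M --size=16g --ioengine=posixaio
```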

cheers!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
It LIVES!

 

sadpanda

Dabbler
Joined
May 8, 2020
Messages
19
Switching power supplies seem to mess with my old Kill A Watt meter, but the JBOD box is idling at 2.38 A and the server is sitting at 1.67-1.87 A, so about 445 watts or $217/year.
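For what it's worth, the arithmetic behind an estimate like that, as a sketch (the ~$0.056/kWh rate is implied by the numbers; substitute your own):

```shell
watts=445
kwh_per_year=$(( watts * 24 * 365 / 1000 ))          # 3898 kWh
awk -v kwh="$kwh_per_year" -v rate=0.0557 \
    'BEGIN { printf "$%.0f/year\n", kwh * rate }'    # prints $217/year
```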

I set up 8x 3-way mirrors and am pulling this through my home network, with other traffic, over an NFS share:

Code:
------------------------------------------------------------------------------
CrystalDiskMark 7.0.0 x64 (C) 2007-2019 hiyohiyo
                                  Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
Sequential 1MiB (Q=  8, T= 1):   409.970 MB/s [    391.0 IOPS] < 20408.82 us>
Sequential 1MiB (Q=  1, T= 1):   388.992 MB/s [    371.0 IOPS] <  2694.37 us>
    Random 4KiB (Q= 32, T=16):   254.231 MB/s [  62068.1 IOPS] <  8235.91 us>
    Random 4KiB (Q=  1, T= 1):    28.979 MB/s [   7075.0 IOPS] <   140.76 us>

[Write]
Sequential 1MiB (Q=  8, T= 1):   231.089 MB/s [    220.4 IOPS] < 35945.48 us>
Sequential 1MiB (Q=  1, T= 1):   230.899 MB/s [    220.2 IOPS] <  4531.19 us>
    Random 4KiB (Q= 32, T=16):   144.600 MB/s [  35302.7 IOPS] < 14443.22 us>
    Random 4KiB (Q=  1, T= 1):    45.723 MB/s [  11162.8 IOPS] <    88.91 us>

Profile: Default
   Test: 1 GiB (x5) [Interval: 5 sec] <DefaultAffinity=DISABLED>
   Date: 2020/05/22 11:49:52
     OS: Windows 10 Professional [10.0 Build 16299] (x64)

 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
Switching power supplies seem to mess with my old Kill A Watt meter, but the JBOD box is idling at 2.38 A and the server is sitting at 1.67-1.87 A, so about 445 watts or $217/year.

I set up 8x 3-way mirrors and am pulling this through my home network, with other traffic, over an NFS share
What do you mean by 8x3 mirrors? How much capacity did you end up with? That seems like a lot of redundancy for your anticipated usage.
 

sadpanda

Dabbler
Joined
May 8, 2020
Messages
19
I set up 8 vdevs, 3 drives each, all mirrored. My plan is to manually disconnect the 3rd drive of each and keep them as cold spares, then reconnect and let them resilver once a week or after major changes, whichever makes sense. I ended up with 5TB with the current disks, which is fine for now. I dumped 3TB of data on the pool last night; I didn't time it, but resilvering seemed to take an hour or so. I still have 8 open bays.
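The weekly rotation could look like this (pool and device names are placeholders); note that ZFS tracks what changed while a mirror member is out, so bringing it back triggers an incremental resilver rather than a full copy:

```shell
# Take the 3rd member of a mirror out of service
zpool offline tank da2

# ...a week / major-change later, bring it back
zpool online tank da2    # resilvers only blocks written while it was out
zpool status tank        # watch resilver progress
```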

edit: -units hard-
 
Last edited:

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
Resilvering time is dependent upon the amount of data you have and the size of your disks. Small disks will resilver relatively quickly.

There is nothing wrong with lots of mirrored vdevs, except that you give up a lot of capacity. Are all the vdevs configured into one pool? If yes, remember that if you lose one vdev completely, then you lose the pool.
 

sadpanda

Dabbler
Joined
May 8, 2020
Messages
19
Are all the vdevs configured into one pool? If yes, remember that if you lose one vdev completely, then you lose the pool.

Everyone mentions this anytime mirrors are involved... Granted I'm a n00b but this caution seems unnecessary and counter to other observations regarding drive failure.

If I have a typical raidz2 vdev with 5 disks, you can lose two drives... In most DIY situations, all 5 drives would have been commissioned at the same time and likely come from the same lot. So as soon as one drive fails, you are sweating bullets waiting for the whole vdev/pool to start dropping like flies.

With a 2-way mirror + a mostly-resilvered 3rd mirror on standby, as soon as one drive fails in a vdev, the third can be plugged in. Assuming resilvering has some intelligence and isn't just a full clone operation each time (I'm still not 100% on this), the vdev is back up to two healthy drives in 'no time' compared to starting from a new bare drive. The swapped-in 3rd drive is essentially new compared to its vdev mates. Between that and the reduced resilver time, the dropping-like-flies risk is reduced.

Assuming I'm buying larger-capacity disks as I go: when the 2nd old disk in a vdev starts to go, I can purchase two more disks and shuffle the original lower-capacity 3rd mirror to the shelf, waiting for the next old disk to drop. Of my current 24-drive pool, only 16 are going to age out near each other; the other 8 will still have lots of life. If you had a raidz setup with the same 24 drives, they would ALL be the same rank on the dead-pool list.

Is it a lot of 'wasted' space? Sure. However, my chassis is WAY overkill for my use case, so space is not an issue. I'm trading some electricity (cheap in my area) for ease of upgrade and peace of mind.

Am I wrong?
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
Everyone mentions this anytime mirrors are involved... Granted I'm a n00b but this caution seems unnecessary and counter to other observations regarding drive failure.

If I have a typical raidz2 vdev with 5 disks, you can lose two drives... In most DIY situations, all 5 drives would have been commissioned at the same time and likely come from the same lot. So as soon as one drive fails, you are sweating bullets waiting for the whole vdev/pool to start dropping like flies.
Please don't misunderstand my comment. There is nothing inherently wrong with mirrors; in fact, they will generally give better performance than a large RaidZ2 pool. However, in a RaidZ2 configuration one can lose any two disks before losing data. In a pool of mirrored vdevs, if you lose the wrong two disks, you can lose a vdev, which will in turn cause the loss of the pool. Is this very likely? No. But it's worthwhile to point out because some folks don't understand the difference.
 