Hard Drive Troubleshooting - Massive Failures - Need Help Isolating the Problem(s)

Status
Not open for further replies.

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
And of course I agree too, in order to troubleshoot a power supply problem you really need another power supply or some test equipment. Trust me, a new power supply is cheaper. And we here like Seasonic. You don't need a Gold or Platinum power supply, a Bronze rated one is good too. Look for a sale and buy one. Right now I'm not thinking it's your power supply, likely your HBA, but one never knows until you prove it. So if your hard drives are doing what is expected and the pool recovers and operates fine and looks proper, then I'd move that pool to the HBA connectors and use it for a while, testing it out to ensure it works fine. I suspect it will start throwing errors again.

If you need access to your "Main" pool, you can connect it to the motherboard SATA connectors again but it doesn't help the troubleshooting efforts at all right now. If you have another computer that has 8GB RAM, you could use that to host your main pool, but I don't know if you have one.

Sure, I would like to have my main pool online, but I can go without it for a longer time. I planned on having copies of stuff, making it possible to keep the server data offline for longer periods in case of troubleshooting, just like this case. So that is OK for some more time, until we figure out the problem.
I had another PC, with non-ECC RAM, but sold it recently.

I understand Seasonic is the PSU to get here :)
Why would I not need a Gold or Platinum one? I don't mind spending a little more money to save me some issues in the future. Sure, Bronze is still good, but there's no need for me to save a little money and run into issues sooner, thinking of the fact that all hardware fails sooner or later.
I don't know if I would say my drives are doing what they are supposed to. You can take a look at my update; it doesn't seem better.
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
Some update, pool status:

Code:
  pool: Secondary_Raidz3																											
state: DEGRADED																													
status: One or more devices is currently being resilvered.  The pool will														 
		continue to function, possibly in a degraded state.																		
action: Wait for the resilver to complete.																						
  scan: resilver in progress since Fri Oct 13 21:56:33 2017																		
		452G scanned out of 34.8T at 535M/s, 18h43m to go																		 
		28.9G resilvered, 1.27% done																								
config:																															
																																	
		NAME											STATE	 READ WRITE CKSUM												
		Secondary_Raidz3								DEGRADED	 0	 0	 0												
		  raidz3-0									  DEGRADED	 0	 0	 0												
			gptid/8275e396-a83c-11e7-9cee-002590f5b804  ONLINE	   0	 0	 0  (resilvering)								 
			gptid/3a44142c-931c-11e7-b895-002590f5b804  ONLINE	   0	 0	 0												
			gptid/33c047e7-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			gptid/34749735-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			6370505857967419013						 UNAVAIL	  0	 0	 0  was /dev/gptid/3536bf51-2292-11e7-9626-002590f5b804
			gptid/35e2d6ec-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			gptid/368b679d-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			gptid/3730ee56-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			gptid/37de7e53-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			replacing-9								 UNAVAIL	  0	55	 4												
			  5660221525628801207					   UNAVAIL	  0	 0	 0  was /dev/da8p2								
			  da0p2									 FAULTED	  0	94	 0  too many errors  (resilvering)				
			da1p2									   ONLINE	   0	 0	 1  (resilvering)								 
																																	
errors: Permanent errors have been detected in the following files:																
																																	
		Secondary_Raidz3:<0x0>																									
		/mnt/Secondary_Raidz3/A...																				
		/mnt/Secondary_Raidz3/M...																 
[root@freenas ~]#


It seems the resilvering never ends; I don't know how many times it has resilvered. Something strange that happened a while ago, and has even happened a few times before: when I shut down the server through the GUI, the server restarts instead of shutting down. This happened just a while ago when I wanted to shut down and change the power cables in the server. I don't know if the restart instead of a shutdown could mean anything, but I find it strange.
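As a quick sanity check, the ETA in that zpool output is consistent with the scanned/total/rate figures it reports; a sketch of the arithmetic:

```python
# Sanity-check the resilver ETA reported by `zpool status` above.
# Figures taken from the output: 452G scanned of 34.8T at 535M/s.
total_gib = 34.8 * 1024        # 34.8 TiB expressed in GiB
scanned_gib = 452
rate_mib_s = 535

remaining_mib = (total_gib - scanned_gib) * 1024
eta_s = remaining_mib / rate_mib_s
hours, rem = divmod(eta_s, 3600)
minutes = rem // 60
# within a minute of the reported "18h43m to go"
print(f"{int(hours)}h{int(minutes)}m to go")
```

So the estimate itself is fine; the problem is that the resilver keeps restarting, not that the math is off.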
 
Last edited by a moderator:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Not sure if I followed you with "3 A total for both 5 and 12 V for a few seconds is common for the drives used in our NAS". But do you think too many hard drives and some fans on one PSU cable could be a problem and be causing this?

It's the current a drive will draw during start-up, and depending on the rest of your config, with 16 drives you can be pretty close to the max of your PSU; but as it's a good PSU it shouldn't be a problem.
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
Now a PSU report.

The power was fairly evenly distributed among the 5 cables of the PSU; apparently I already did a good job there.
No fans are connected to any cable together with any hard drive. Some fans are connected to the motherboard, while most of them are connected to the case, and the case gets its power from the motherboard through a separate power cable. So the PSU cables are not powering any fans.
The power distribution for the drives looks like this:
Cable 1 powers 3 drives; those drives have these power specifications: +5 V/+12 V, 0.55 A (like most other drives in the system)
Cable 2 powers 3 drives
Cable 3 powers 4 drives
Cable 4 powers 2 hot-swap bays and a fan; one of the hot-swap bays holds a drive, the other is empty, so one drive and one fan.
Cable 5 powers the 5 drives of the main pool. But this pool is currently not connected and not drawing any power at all. Worth mentioning: these drives, all on one cable, had no issues, despite being the biggest drives, at 7200 RPM, and 5 of them :eek:

Despite no fans being connected to any PSU cable, I did unplug 3 fans that were being powered from the case.
And I removed the HBA physically from the motherboard; it's outside the case now, in an ESD bag.
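For reference, the per-cable startup load implied by the drive counts listed above can be sanity-checked with a quick script. The 25 W/drive figure is a rounded-up startup estimate (not a measured number for these exact models), so treat the totals as rough:

```python
# Rough per-cable startup load, using the drive counts listed above
# and an assumed, rounded-up 25 W startup draw per drive.
WATTS_PER_DRIVE = 25

drives_per_cable = {
    "cable 1": 3,
    "cable 2": 3,
    "cable 3": 4,
    "cable 4": 1,  # one populated hot-swap bay (plus a fan, ignored here)
    "cable 5": 5,  # main pool, currently disconnected
}

for cable, n in drives_per_cable.items():
    print(f"{cable}: {n} drives, ~{n * WATTS_PER_DRIVE} W at spin-up")

total = sum(drives_per_cable.values()) * WATTS_PER_DRIVE
print(f"total: ~{total} W at spin-up")  # worst case, all drives connected
```

Note that, as pointed out later in the thread, cables and rails are not the same thing on a multi-rail PSU, so a per-cable breakdown is only half the picture.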

Some remarks: my boot setup had lost one of the USB sticks and wasn't dual-booting anymore. I tried to replace it with different USB sticks but got several messages about FreeNAS not being able to GPT-format the drive. I did try to erase the USB stick, but it didn't help, so I just gave up on that. Now, after the system restart, the second USB stick seems to be back and the system is dual-booting again :confused:

FreeNAS has started resilvering; is there now anything more I can test or try?
I guess there is no point in doing any tests until the latest resilver is done, in about a day.
 
Last edited by a moderator:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Hmm... The more I think about it, the more I think it's not the PSU. Maybe it's the USB drives where the system is, or maybe it's something else (my first guess would be the MB, but I really hope it's not that)... Let's start with something easy and cheap to try: do you have any other USB drive or, better, an SSD to try as a new system drive? (Back up the config file, install the same version of FreeNAS on the new drive, skip the wizard (if it's still there; I don't know the details of the last versions of FreeNAS as I was pretty much away) and restore your config with the config file.)
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
Hmm... The more I think about it, the more I think it's not the PSU. Maybe it's the USB drives where the system is, or maybe it's something else (my first guess would be the MB, but I really hope it's not that)... Let's start with something easy and cheap to try: do you have any other USB drive or, better, an SSD to try as a new system drive? (Back up the config file, install the same version of FreeNAS on the new drive, skip the wizard (if it's still there; I don't know the details of the last versions of FreeNAS as I was pretty much away) and restore your config with the config file.)

I have 15 SanDisk Cruzer Blade 16 GB USB sticks that I recently bought. The reason I bought so many was that I got tired of FreeNAS complaining about the many older USB sticks I used. So I invested in a lot of them and made sure there is always a new stick to insert once one of the dual-boot sticks starts to get rejected by FreeNAS. I say rejected because sometimes they seem fine to FreeNAS and work without any issues.
I had issues with the USB sticks before the pool issues started. But considering those sticks were older, and some were memory cards, I blamed the USB sticks rather than something else like the USB port. And so I invested in lots of "good" USB sticks.
But at least 1 USB port on the front of the case is behaving strangely. I noticed, when trying to replace one failing USB stick, that the replacement wasn't working in one port but worked in another port. But then again, I rather suspect the USB stick itself, especially after testing many of the drives on a Windows machine with Check Flash (read/write tests) and getting errors on 15 USB sticks out of 16. So it ended up not making sense; surely not 15 of 16 sticks could all have issues.
Anyway, I do have a spare SSD here. Sure, I could install it in the system, but then it would just add another factor to the equation, and that is mostly not good :)
Are you suggesting running FreeNAS for a period on the SSD to eliminate USB as the problem here? (I hope that won't kill my SSD in a week with all the FreeNAS activity :))
Another option is that I could move away from the case USB ports and use the ones on the motherboard, on the backside of the case of course. I have been trying to spare those from constant plugging and unplugging; they are the last USB ports I want to have malfunctioning.
 
Last edited:

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
One thing I don't get, considering the state of the pool: isn't it easier to destroy the pool and create a new one when troubleshooting?
How can someone tell that the state of the pool is not caused by drive(s) malfunctioning? Why do so many suspect something other than the hard drives?
Can an interrupted resilver not be a sign of a drive issue?
And looking at the pool status I showed earlier, is there anything indicating that this is not only a hard drive issue, but something else instead?
Again, we shouldn't forget that the main pool had no issues while only this pool had issues :rolleyes:

Code:
  pool: Secondary_Raidz3																											
state: DEGRADED																													
status: One or more devices is currently being resilvered.  The pool will														 
		 continue to function, possibly in a degraded state.																		
action: Wait for the resilver to complete.																						
  scan: resilver in progress since Fri Oct 13 23:37:52 2017																		
		 588G scanned out of 34.8T at 287M/s, 34h47m to go																		 
		 27.6M resilvered, 1.65% done																								
config:																															
																																	
		 NAME											STATE	 READ WRITE CKSUM												
		 Secondary_Raidz3								DEGRADED	 0	 0   374												
		   raidz3-0									  DEGRADED	 0	 0 1.46K												
			 gptid/8275e396-a83c-11e7-9cee-002590f5b804  ONLINE	   0	 0	 0  (resilvering)								
			 gptid/3a44142c-931c-11e7-b895-002590f5b804  DEGRADED	 0	 0	 0  too many errors								
			 gptid/33c047e7-2292-11e7-9626-002590f5b804  DEGRADED	 0	 0	 0  too many errors								
			 gptid/34749735-2292-11e7-9626-002590f5b804  DEGRADED	 0	 0	 0  too many errors								
			 6370505857967419013						 UNAVAIL	  0	 0	 0  was /dev/gptid/3536bf51-2292-11e7-9626-002590f5b804
			 gptid/35e2d6ec-2292-11e7-9626-002590f5b804  DEGRADED	 0	 0	 0  too many errors								
			 gptid/368b679d-2292-11e7-9626-002590f5b804  DEGRADED	 0	 0	 0  too many errors								
			 gptid/3730ee56-2292-11e7-9626-002590f5b804  DEGRADED	 0	 0	 0  too many errors								
			 gptid/37de7e53-2292-11e7-9626-002590f5b804  DEGRADED	 0	 0	 0  too many errors								
			 replacing-9								 UNAVAIL	  0	 0	 0												
			   5660221525628801207					   UNAVAIL	  0	 0	 0  was /dev/da8p2								
			   10093850100708201031					  UNAVAIL	  0	 0	 0  was /dev/da0p2								
			 da0p2									   FAULTED	  0	56	 0  too many errors								
																																	
errors: Permanent errors have been detected in the following files:																
																																	
		 Secondary_Raidz3:<0x0>																									
[root@freenas ~]#																									
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
If you are willing to destroy the pool then by all means I think you should do that. Unfortunately there is no quick fix but here is what I'd do in your situation:

1) Destroy the pool.
1a) Connect up all your hard drives (power only for the "main" pool) in order to load down your power supply.
2) Run Memtest86 on the system for at least 2 full passes (for peace of mind)
3) Run a CPU stress test for 30 minutes (for peace of mind)

If both tests pass then odds are your power supply is fine.

4) Disconnect your "main" pool hard drives again. You won't need them for the rest of this troubleshooting. The testing hard drives should all be plugged into your motherboard SATA connectors.
4a) Remove your HBA card. Let's just take this out of the equation.
5) If you have an SSD laying around, use it, or use only a single USB Flash Drive for the FreeNAS OS. Install FreeNAS. Do not restore your configuration file; just make a clean build.
6) Did all of your SMART Long/Extended tests come out okay? If yes, then skip step 7.
7) Run a SMART Long Test on all your hard drives. If any fail, replace the failed ones, or run badblocks on them and rerun the Long test.
8) Create a pool as you desire it to be configured.
9) Put data on your new pool, lots of data, you need something to SCRUB.
10) Perform a SCRUB, ensure it works fine.
11) Shutdown, Power Up, Reboot. We are testing to ensure your system responds properly, since you had issues before. You may want to do this right after you build up the system rather than waiting so long.
12) If everything works then I think you are all good with the motherboard, PSU, and the rest of your system.
13) Plug in your HBA and move the hard drives SATA connections to it.
14) Run a SCRUB, does it still work fine?
15) Transfer some data around and ensure nothing bad happens.
16) Shutdown, Power Up, Reboot. This time we are just checking that your system doesn't corrupt the drives. Then run another SCRUB.

So all of this was just off the cuff and will hopefully get you to a point where you can identify a problem item. Let's say you do not encounter any problems; then I'd say it may have been the USB Flash drives you were using. My advice is to only use a single boot device and just keep a backup configuration file somewhere should you need it. USB Flash drives are terrible; any cheap SSD is much better from a reliability point of view. Using multiple mirrored boot devices is not a great idea either, IMO; just use the config file, it will save you the complications of the mirror.

17) Attach your "Main" pool drives and power up that system.
18) Copy files from the "main" pool to the other pool and ensure things are going well. Run another scrub on your suspect pool after you have done what you feel is enough data copying. Cycle power again. Basically, test out your system. If you have no further issues then I think you are done. I would run badblocks on all your previously failed hard drives to confirm whether they actually have a problem.

One last test you could pull, if you really want to piss off your hard drives, is to start a SMART Long test on all the drives in the suspect pool and then start a SCRUB. This will place a heavy workload on all of these hard drives. If they all pass, then those are good hard drives. Remember that doing both operations at the same time will make both take a very considerable time to complete, more than double the usual time; you just need to sit back and wait a day or so.
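The SMART step above can be scripted. A minimal sketch, assuming FreeBSD-style `daN` device names (adjust the disk list to your system; the `subprocess` call is left commented so nothing runs by accident):

```python
# Sketch: kick off a SMART Long self-test on every pool disk, as in
# steps 6-7 above. The da0..da10 names are illustrative for an
# 11-disk RAIDZ3; substitute your actual devices.
import subprocess

def smart_long_test_cmds(disks):
    """Build one `smartctl -t long` command per disk."""
    return [["smartctl", "-t", "long", f"/dev/{d}"] for d in disks]

disks = [f"da{i}" for i in range(11)]
for cmd in smart_long_test_cmds(disks):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually start the tests
```

Afterwards, `smartctl -a /dev/daN` shows the self-test log so you can see which drives passed.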
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
...
The 8TB which complains about spin-up could be an indicator, as the larger drives usually have high spin-up power requirements.
...

This was true in the days of 14" platters, but with 3.5" drives you will find that startup power usage does not vary much from 2TB to 14TB drives.

Check out the spreadsheet for further numbers....

First column is size in TB and the second is max startup watts (5 V and 12 V combined)

WD Red

Code:
1  14.40
2  20.76
3  20.76
4  21.00
5  21.00
6  21.00
8  21.48
10 21.48


WD Red PRO

Code:
2  22.80
3  22.80
4  21.48
5  22.80
6  21.48
8  21.48
10 21.60


Here is a graph to make it easier to see the results. I'll add this pivot table/graph to the spreadsheet for next week's update.

upload_2017-10-13_19-25-3.png
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
...
Try to balance drives, avoiding too many drives on one line. Actually, this is something I have been thinking about before. Even the fact, which I didn't mention yet, that almost all the 8TB NAS drives I recently tried to add were causing different issues while temporarily replacing 4TB drives. This somehow points towards your theory that it could be related to the PSU.
At the same time, larger drives, and drives in general, don't require much power, not even during spin-up, but again, I am not the expert here
...

I quickly graphed the data for Seagate IronWolf and IronWolf Pro for you... According to Seagate, there is no difference between 4TB and 8TB startup power usage....

Ignore the graph titles. I was too lazy to change them.

Here is Ironwolf max startup watts:

upload_2017-10-13_19-38-39.png


Here is Ironwolf Pro max startup watts:

upload_2017-10-13_19-39-14.png
 


farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
...

As for a bigger-wattage PSU, what wattage do you recommend? Considering this system in normal operation has one pool of 5 x 8TB Seagate IronWolf 7200 RPM drives and another pool of 11 x 4TB Seagate IronWolf 5400 RPM drives (those may in future get upgraded to the 8TB drives, meaning more power and higher RPM; it seems hard to find bigger disks with lower RPM).
Anyway, with that disk setup and my hardware in the signature, what PSU wattage do you suggest, keeping in mind that it should be able to handle even bigger drives in the future?

Your 16 drives use roughly 400 watts of STARTUP power, using a rounded-up 25 watts per drive.

Do not go to a SLOWER RPM, as the power usage difference is small: 21.6 vs. 24 watts. It is an old wives' tale that larger, newer drives use a lot more power; the numbers show otherwise. The difference between 7200 and 5400 RPM is not that much.

The startup power usage from 4 TB to 10 TB is the same! This is true for both IronWolf and IronWolf Pro within their respective lines.
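The sizing logic above can be sketched as a small calculation. The non-drive system draw and the headroom factor are placeholder assumptions (substitute your own CPU/board/HBA numbers), not figures from this thread:

```python
# Rough PSU sizing: worst case is spin-up, when every drive draws its
# max startup power at once. 25 W/drive is the rounded-up figure above.
DRIVE_STARTUP_W = 25          # IronWolf ~21.6 W, IronWolf Pro ~24 W, rounded up
N_DRIVES = 16
SYSTEM_BASE_W = 150           # placeholder for CPU/board/HBA/fans (assumption)
HEADROOM = 1.25               # ~25% margin so the PSU never sits at its limit

startup_w = N_DRIVES * DRIVE_STARTUP_W + SYSTEM_BASE_W
recommended_w = startup_w * HEADROOM
print(f"peak at spin-up: ~{startup_w} W, suggested PSU: ~{recommended_w:.0f} W")
```

With these assumptions the 16-drive worst case lands around 550 W at spin-up, which is why a quality PSU in the 650-750 W class is a comfortable fit here.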

The graphs just keep coming...

Done for Seagate: size in TB, RPM, warranty (3 | 5) -- 3 is IronWolf and 5 is IronWolf Pro. (If I wasn't listening to REO, I don't think I would crank out these graphs! LOL)

upload_2017-10-13_19-56-18.png
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
Ok, I saw some reviews and it's a pretty good PSU. Unless it's old or there is a defect of some sort, it should not be the PSU. The thing is, unless you can test with another PSU and/or do some measurements with an oscilloscope, you cannot be sure it's not the PSU.

I agree with @danb35. I've made the measurements myself; 3 A total on both 5 and 12 V for a few seconds is common for the drives used in our NAS. The details are in this thread if you want more :)

For Seagate's IronWolf, the max startup power from 4TB to 10TB is 21.6 watts. For the IronWolf Pro, 4TB to 10TB, it is 24 watts.

Seagate actually shows the startup power usage strip chart in their manuals online. Other manufacturers have the numbers but do not share them online.

Very few drives use more than 24 watts max startup power any more. HGST is the exception, as are some datacenter drives from various manufacturers, and even those are disappearing.
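To connect the watt figures above with the amp figures quoted earlier in the thread: converting is just P = V x I per rail. A sketch, under the simplifying assumption that nearly all of the spin-up draw sits on the 12 V rail (where the spindle motor is):

```python
# Convert max startup watts to approximate 12 V startup amps per drive,
# assuming (simplification) the spin-up load is drawn almost entirely
# from the 12 V rail.
startup_watts = {"IronWolf 4-10TB": 21.6, "IronWolf Pro 4-10TB": 24.0}

for model, w in startup_watts.items():
    amps_12v = w / 12.0
    print(f"{model}: ~{amps_12v:.1f} A on 12 V at spin-up")
```

This is how a 21.6-24 W spec and a "roughly 2 A per disk on 12 V" planning number end up describing the same drives.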
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
And of course I agree too, in order to troubleshoot a power supply problem you really need another power supply or some test equipment. Trust me, a new power supply is cheaper. And we here like Seasonic. You don't need a Gold or Platinum power supply, a Bronze rated one is good too. Look for a sale and buy one. Right now I'm not thinking it's your power supply, likely your HBA, but one never knows until you prove it. So if your hard drives are doing what is expected and the pool recovers and operates fine and looks proper, then I'd move that pool to the HBA connectors and use it for a while, testing it out to ensure it works fine. I suspect it will start throwing errors again.

If you need access to your "Main" pool, you can connect it to the motherboard SATA connectors again but it doesn't help the troubleshooting efforts at all right now. If you have another computer that has 8GB RAM, you could use that to host your main pool, but I don't know if you have one.

And I like the Corsair RMx line ;)

Single rail, 10 year warranty, nice efficiency at low utilization, silent and can be significantly cheaper in some markets.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Some update, pool status:

Code:
  pool: Secondary_Raidz3																											
state: DEGRADED																													
status: One or more devices is currently being resilvered.  The pool will														
		continue to function, possibly in a degraded state.																		
action: Wait for the resilver to complete.																						
  scan: resilver in progress since Fri Oct 13 21:56:33 2017																		
		452G scanned out of 34.8T at 535M/s, 18h43m to go																		
		28.9G resilvered, 1.27% done																								
config:																															
																																	
		NAME											STATE	 READ WRITE CKSUM												
		Secondary_Raidz3								DEGRADED	 0	 0	 0												
		  raidz3-0									  DEGRADED	 0	 0	 0												
			gptid/8275e396-a83c-11e7-9cee-002590f5b804  ONLINE	   0	 0	 0  (resilvering)								
			gptid/3a44142c-931c-11e7-b895-002590f5b804  ONLINE	   0	 0	 0												
			gptid/33c047e7-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			gptid/34749735-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			6370505857967419013						 UNAVAIL	  0	 0	 0  was /dev/gptid/3536bf51-2292-11e7-9626-002590f5b804
			gptid/35e2d6ec-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			gptid/368b679d-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			gptid/3730ee56-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			gptid/37de7e53-2292-11e7-9626-002590f5b804  ONLINE	   0	 0	 0												
			replacing-9								 UNAVAIL	  0	55	 4												
			  5660221525628801207					   UNAVAIL	  0	 0	 0  was /dev/da8p2								
			  da0p2									 FAULTED	  0	94	 0  too many errors  (resilvering)				
			da1p2									   ONLINE	   0	 0	 1  (resilvering)								
																																	
errors: Permanent errors have been detected in the following files:																
																																	
		Secondary_Raidz3:<0x0>																									
		/mnt/Secondary_Raidz3/A...																				
		/mnt/Secondary_Raidz3/M...																
[root@freenas ~]#


It seems the resilvering never ends; I don't know how many times it has resilvered. Something strange that happened a while ago, and has even happened a few times before: when I shut down the server through the GUI, the server restarts instead of shutting down. This happened just a while ago when I wanted to shut down and change the power cables in the server. I don't know if the restart instead of a shutdown could mean anything, but I find it strange.

Could be caused by PSU...
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Cable 1 powers 3 drives; those drives have these power specifications: +5 V/+12 V, 0.55 A (like most other drives in the system)
Cable 2 powers 3 drives
Cable 3 powers 4 drives
Cable 4 powers 2 hot-swap bays and a fan; one of the hot-swap bays holds a drive, the other is empty, so one drive and one fan.
Cable 5 powers the 5 drives of the main pool. But this pool is currently not connected and not drawing any power at all. Worth mentioning: these drives, all on one cable, had no issues, despite being the biggest drives, at 7200 RPM, and 5 of them :eek:

It’s not enough to just check how many drives are on each cable/string/strand; you also need to check which strands/cables are connected to which rail, out of the 4 12 V rails you have! Ouch.

This is why I prefer single rail designs.

Of course, some multi-rail ‘designs’ are only multi-rail on the spec sheet and are actually single rail in reality.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Two amps per disk on 12 V for startup would be a better planning number.
That's good to know, and it makes a big difference.

I think the secondary_raidz3 pool should be destroyed and re-created for testing. The attempts to resilver it and bring it back to any redundancy seem to have failed. This could be because there actually is corrupt data in the pool, cause unknown. At this point, start a new pool using the HBA, and you know that you aren't dealing with pre-existing corruption.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Well, for testing purposes it’s useful to have a bunch of data in a pool, and it’s non-trivial to get that data in with realistic fragmentation.
 

chipped

Dabbler
Joined
May 2, 2016
Messages
29
I’m 99% certain you have drives with intermittent issues: dying, but not dead yet. Download Parted Magic and boot from that, use the Disk Health app, double-click each drive individually, and check whether any of the drives have attributes that are Pre-Fail or near end of life.

Also check the Error Log for each drive and see if there are any errors.

I’m sure you will find problems. The reason I say this is that I personally used FreeNAS and Rockstor on the same machine with bad drives, and neither of them warned me.

I then installed Xpenology (Synology bootloader) which has much better disk health reporting and bam!! Warnings about drives.

Fired up Parted Magic and found all the duds: half-working or nearly dead drives with intermittent issues.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Are you suggesting running FreeNAS for a period on the SSD to eliminate USB as the problem here? (I hope that won't kill my SSD in a week with all the FreeNAS activity :))

Yes, exactly; an SSD is far more reliable than USB sticks. FreeNAS is a very light load for an SSD, far less than Windows and other OSes.

For the rest, @joeschmuck was quicker than me and his advice is very good ;)

@farmerpling2 Are those numbers from the web or are they real measurements?
 