Can't install TrueNAS (Scale OR Core)

WoisWoi

Dabbler
Joined
Nov 20, 2023
Messages
32
Hi,

I keep trying to install TrueNAS Scale (or Core, to then try to upgrade to Scale), but I can't; I keep running into multiple issues... Here are screenshots of the different problems:

IMG_7940[1].jpg


Capture d'écran 2023-12-16 151020.png
Capture d'écran 2023-12-16 151203.png
Capture d'écran 2023-12-16 151302.png

Thank you!
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Welcome to the TrueNAS Forums.
Please read the forum rules on posting questions. You have not provided some key information needed to help you out. Specifically, a listing of your hardware (motherboard, CPU, RAM, drives, where the drives are connected, boot drive and how it's connected, etc.). Without that information, my first guess is that you have run out of RAM, based on the messages I read. The other option is that your system is extremely unstable.

Also, exactly what version of CORE/SCALE are you trying to bootstrap?

We don't need a novel but we do need some basic information to give you good help.

Cheers,
-Joe
 

WoisWoi

Dabbler
Joined
Nov 20, 2023
Messages
32
Welcome to the TrueNAS Forums. Please read the forum rules on posting questions. You have not provided some key information […]
Hi, sorry !

The very latest stable versions (Core: 13.0-U6.1 and Scale: 23.10.0.1).

Motherboard: ASUS Pro WS W680-ACE IPMI
CPU: i9-14900K
RAM: 2x 16 Go DDR5 (will move to 2x 32 Go DDR5 ECC later)
Drives: 4 To NVMe (+ 16 x 20 To HDD)

Everything is brand new.

I'm using a USB key to boot the system in order to install TrueNAS, and I used Rufus and balenaEtcher to write it.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Thanks for the information. I need to download the user manual but while I'm researching all of that, have you 'burned in' your system? Specifically, run MemTest86 or MemTest86+ for a few days to ensure you have no issues, and then run a CPU stress test like Prime95 or similar for at least 30 minutes. Some people will run the CPU stress test for a day or even up to a week if the machine will be used in a production environment. For this troubleshooting effort, 30 minutes is good enough.

And for this troubleshooting effort, I'd recommend you use TrueNAS CORE, just because it is the more mature product. Once the problem is solved, then use whichever you desire.

It appears you have enough RAM: 16 GB is the minimum I'd recommend and you have 32 GB. Just test the RAM now; one pass is not enough. However, one failure means you can stop testing.

I would prefer it if you could boot from an SSD or even a small hard drive.

Drives : 4 To NVMe (+ 16 x 20 To HDD)
Please explain; I think "4 to NVMe" means you have four NVMe drives. Are these plugged into an add-on card? If yes, what make/model?
And "(+ 16 x 20)" means what? Be descriptive. In your mind you know what you mean, but out here we have to guess at what you mean.

Also, if you do have any add-on cards, unplug them, and disconnect all your drives. You should be able to bootstrap TrueNAS with just the USB Boot Drive alone. You will not be able to create a pool but when you troubleshoot a problem, you need to break things down to isolate the issue.

If you start MemTest86 and it looks to be running, toss a message here that you are running the test and will report back once it's finished. If it fails, report that too. If it fails, ensure you have the RAM in the correct slots; you may even have to reseat it. And run MemTest86 for at least 24 hours. That should be many complete passes. It is very important to have system stability.

That is about all I can tell you right now and should get you started. Also, if you desire, you could remove any add-on cards first to see if you can boot up to the USB drive.
 

WoisWoi

Dabbler
Joined
Nov 20, 2023
Messages
32
Thanks for the information. I need to download the user manual but while I'm researching all of that, have you 'burned in' your system? […]
Hi,

First of all, thank you for the feedback!

I have one (1) NVMe SSD of 4 To/TB and I have sixteen (16) HDDs of 20 To/TB (I'm European; over here we say To for teraoctet, which is the equivalent of the terabyte).
I can't boot from any disk if there is nothing on it... So I'm not sure what you're asking me? To install my NVMe in another machine in order to install something on it which would allow me to boot from it?

I'll run MemTest and keep you informed. Also, I'm using 2 PCIe cards: one with 16 SATA ports, which is not a RAID card (12 HDDs are connected to it and the remaining 4 are connected to the motherboard), and the other is a 10 Gbps Ethernet card.

EDIT: I just ran MemTest and got this within a few seconds:
IMG_7947.jpg

IMG_7948.jpg


Now I'm testing the RAM sticks individually.

One of the individual tests:
IMG_7949.jpg


And the other one:
IMG_7956.jpg
 
Last edited:

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I can't boot from any disk if there is nothing on it... So I'm not sure what you're asking me? To install my NVMe in another machine in order to install something on it which would allow me to boot from it?
I was asking you to disconnect all the drives in your system except the boot drive. You can bootstrap TrueNAS without any pools attached and it should start up just fine, but without any pools of course. This is one test. HOWEVER, I think you have found your problem.

My advice for troubleshooting the RAM failure is:
1) In the BIOS, restore the BIOS defaults. Retest and hope it passes.
2) Any overclocking can result in this type of failure, as can a poor power supply. Recheck your power connections, and if you have a spare power supply, try that out.
3) Unplug any add-on cards, disconnect the hard drives' power, and get down to a very basic setup: motherboard, keyboard, monitor, and boot drive (MemTest86), nothing else. Run MemTest86 again; if it still fails, you may have RAM that is incompatible with your motherboard, a bad motherboard, or a bad power supply.
4) One other possible failure is the CPU. The CPU interfaces with the RAM and is just another piece but I rarely see the CPU as being the problem.

These things are not fun to troubleshoot; however, the one thing going for you is that the failure happens quickly. Imagine if a failure didn't happen until 10 hours into the test. Make sure your RAM is on the QVL for your motherboard.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
The CPU interfaces with the RAM and is just another piece but I rarely see the CPU as being the problem.
Memory controller is on the CPU so it could be the issue. Highly unlikely but still a possibility.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I have one (1) NVMe SSD of 4 To/TB and I have sixteen (16) HDDs of 20 To/TB (I'm European; over here we say To for teraoctet, which is the equivalent of the terabyte).
More specifically, you're French. :wink:
When writing in the English language section, please use "TB" with a capital 'B'.

Your W680 board and i9 CPU would be better used as a desktop than as a NAS. If they were working, that is.
Test the 32 GB RAM modules, if you have them. And/or test a different CPU.
Once you have identified the failing component and replaced it, boot a live Linux distro to "burn-in" your system. After that you'll be ready to install TrueNAS SCALE Cobia. (I hope that the 4 TB drive is not intended for boot!)

I'll run MemTest and keep you informed. Also, I'm using 2 PCIe cards: one with 16 SATA ports, which is not a RAID card (12 HDDs are connected to it and the remaining 4 are connected to the motherboard), and the other is a 10 Gbps Ethernet card.
16… SATA ports? What is this card?
And what is the 10G NIC by the way?
 

WoisWoi

Dabbler
Joined
Nov 20, 2023
Messages
32
More specifically, you're French. :wink: When writing in the English language section, please use "TB" with a capital 'B'. […] Your W680 board and i9 CPU would be better used as a desktop than as a NAS. […]

16… SATA ports? What is this card?
And what is the 10G NIC by the way?
Yes, I know that Tb is terabits and TB is terabytes, no worries about that :)

Why wouldn't the i9 be great for a NAS? I wanted this CPU because 1) it should be powerful enough for what I'm aiming to do with it, and 2) I'll likely make use of the UHD 770 iGPU for transcoding. And what's wrong with the motherboard? It was the cheapest compatible with ECC RAM that I found, and it's still about 600 € just for a motherboard.

The 10G card is the TP-Link TX401.
This is the SATA card.
 
Last edited:

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I do not personally see any issue with the motherboard and CPU you selected. It should be fine. That is a very expensive motherboard but I know that you know that already.

The SATA card: if it works properly, it will be your bottleneck. 16 drives connected to an x1 PCIe slot? Nope. But if you do not need fast access to the hard drives then you may be fine. The real question is, will TrueNAS have the driver support?
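To put a rough number on that bottleneck, here is a back-of-the-envelope sketch; the ~985 MB/s usable figure for a PCIe 3.0 x1 link and the ~250 MB/s sequential rate per HDD are typical ballpark assumptions, not measurements from this system:

```python
# Back-of-the-envelope: 16 HDDs sharing one PCIe 3.0 x1 link.
# Both throughput figures below are typical ballpark values (assumptions).
PCIE3_X1_MBPS = 985   # approximate usable bandwidth of a PCIe 3.0 x1 link
HDD_SEQ_MBPS = 250    # rough sequential throughput of one modern 20 TB HDD
DRIVES = 16

per_drive = PCIE3_X1_MBPS / DRIVES
print(f"~{per_drive:.0f} MB/s per drive when all {DRIVES} are busy, "
      f"vs ~{HDD_SEQ_MBPS} MB/s a single HDD can stream on its own")
# -> roughly 62 MB/s per drive, about a 4x sequential bottleneck
```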

Well, let us know how the RAM testing is going.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Why wouldn't the i9 be great for a NAS? I wanted this CPU because 1) it should be powerful enough for what I'm aiming to do with it, and 2) I'll likely make use of the UHD 770 iGPU for transcoding. And what's wrong with the motherboard? It was the cheapest compatible with ECC RAM that I found, and it's still about 600 € just for a motherboard.
The motherboard is too new, and hence too expensive for what it is (PCIe 5.0? no use for that in a NAS). And the latest generation Core i9 is way too powerful for what a NAS requires; unless you plan to run CPU-intensive VMs/apps, this is massive overkill.
For a storage platform with light apps, and with a 600 € budget "for the motherboard", you could have an A2SDi-H-TF instead. Except this one already includes the CPU, 12 SATA ports and a server-grade 10 GbE NIC—and, if you buy second-hand RAM, you can fit it with 128 GB for 120 € (RDIMMs are GREAT).
If you do need an iGPU for transcoding, the recommendation would be an earlier generation: C2x6 chipset and Xeon E-2000, or even a Core i3-8100/9100 going back to Coffee Lake. Less sexy, but powerful enough for a NAS and hopefully lighter on your wallet (although C246 motherboards are now hard to find, and so more expensive than they should be and than they used to be).

Edit. The above began as a generic recommendation. Then I went back up and remembered you want 16*20 TB… 320 TB before taking out redundancy. At these heights, the "1 GB RAM per TB of storage" rule does not apply strictly, but 128 GB RAM feels like a bare minimum. On UDIMM platforms, an L2ARC would help; on RDIMM platforms, consider 192-256 GB RAM.
You're in a territory where a 1st (or 2nd) generation Xeon Scalable would be quite a reasonable option—put in a GPU for transcoding, or just handle that on the CPU.
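To make that rule of thumb concrete (a guideline only, and as said above, stricter than needed at this scale):

```python
# The classic "1 GB of RAM per TB of storage" guideline applied to this build.
# It is a rule of thumb, not a hard requirement; it reads strictest at large sizes.
raw_tb = 16 * 20                 # sixteen 20 TB drives -> 320 TB raw
strict_rule_gb = raw_tb * 1      # strict reading: 320 GB of RAM
print(f"{raw_tb} TB raw -> {strict_rule_gb} GB by the strict rule; "
      "128 GB is the practical floor suggested above, 192-256 GB on RDIMM platforms")
```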

The 10G card is the TP-Link TX401.
This is the SATA card.
Urgh! I more or less feared the Aquantia NIC (dubious driver support in TrueNAS; server-grade Chelsio T520 or Solarflare 5122F/6122F/7122F cards go for $50 on eBay), but the "SATA card"… HOLY CRAP! 16 ports on a 4-port controller (ASM1064, PCIe 3.0 x1) and multiple port multipliers. Do NOT use that with ZFS! This is literally guaranteed to destroy your data. Get a SAS HBA instead (LSI 9200/9300 series, or the Dell/HP/IBM equivalents).
Besides, with so many drives, a server enclosure with hot-swap trays on a SAS backplane would make your life easier than cramming 16 drives into a consumer-grade ATX tower (Define 7XL or the like?).
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
16… SATA ports? What is this card?
And what is the 10G NIC by the way?
I picked that up as well - what PCIe card is it? It doesn't sound like a proper HBA, which means it's almost certainly an issue, although probably not the current issue.
 

WoisWoi

Dabbler
Joined
Nov 20, 2023
Messages
32
The motherboard is too new, and hence too expensive for what it is […] the latest generation Core i9 is way too powerful for what a NAS requires […] 128 GB RAM feels like a bare minimum. […]

Urgh! I more or less feared the Aquantia NIC […] but the "SATA card"… HOLY CRAP! […] Get a SAS HBA instead […]
Besides, with so many drives, a server enclosure with hot-swap trays on a SAS backplane would make your life easier […]
About the motherboard: it was the cheapest I was able to find that is compatible with DDR5 ECC RAM, but perhaps DDR4 wouldn't be worse for my use case, I don't know.

I think the i9 isn't overkill for what I'll do with it, and in any case, the price difference wouldn't really justify the downgrade for me. More than just an iGPU, I need something as good as the UHD 770 with Quick Sync. But an i7-14700K could be an option.

Also, I intend to do 2 pools of 8 HDDs in RAIDZ2. Yet I'm really new to the NAS "world", so thank you for your help!

64 GB (DDR ECC) isn't enough considering the amount of storage? I didn't know about that. If 128 GB is a minimum, what would be the ideal / recommended amount (and why, if I may ask)?

So I should change the 10G card, OK (to something like this one to avoid driver issues? though from what I've read the TP-Link is supported on TrueNAS SCALE)? Would you also be worried about my Linux server that uses the same card? And I should also change the SATA card, but for what: one card with 16 ports, or 2 cards with 8 ports, or 1 card with 8 ports plus the ports on the motherboard? Perhaps this model on eBay at $228 (even if I'll be charged customs) instead of 585 € on Amazon FR?

Yes, I'm using the Define 7XL; what would you suggest? It is a silent case, with good airflow and space.
 
Last edited:

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
You don't need the latest and greatest for a NAS. DDR4 generally has the advantage of being cheaper, if a bit slower (which doesn't matter).

I use a Fractal Design 5 - I added a 3rd Fractal Design fan (1 rear, 2 front), stuffed in 7 HDDs, and the airflow wasn't enough to keep the disks cool. The fans may be quiet, but they don't shift much air. So I replaced all 3 fans with some high static pressure fans on a separate fan controller and turned them down till the case was nearly quiet. You may or may not have the same issue with the FD 7.

For the LSI card - ideally you would buy a used card from a dismantler in Europe, rather than a new card from China
With 16 HDDs, a 9200-series LSI HBA would do just fine. If you were using many SSDs then you would need a 9300, which is more expensive. Some examples are:
LSI HBA 16i
But a better choice for 16 or 24 ports is the 9305-16i or 9305-24i; it's a more efficient card than the 9300.

Nothing to do with me - it's just the result of some searches.

As for 10 Gb cards: Chelsio or Intel are generally considered the best and most reliable (Intel 520, 720; Chelsio 520). Also available second-hand on eBay - again, try to buy second-hand, not from China.

Why not new from China? There is an issue with fake cards.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
What's your use case, and what will you be doing with the CPU?

Also, I intend to do 2 pools of 8 HDDs in RAIDZ2. Yet I'm really new to the NAS "world", so thank you for your help!
8-wide RAIDZ2 is good (it's recommended not to go beyond 10 to 12-wide). But having two pools with the same layout raises the question of whether you could just build a single pool with two vdevs.
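For a quick sense of the capacity math either way (a sketch only; it ignores ZFS metadata, padding, and the usual ~80% fill guideline, so real usable space will be lower):

```python
# Usable-capacity sketch: 16 x 20 TB drives as two 8-wide RAIDZ2 vdevs.
# Each RAIDZ2 vdev gives up 2 drives' worth of capacity to parity.
DRIVE_TB = 20
VDEVS = 2
WIDTH = 8
PARITY = 2            # RAIDZ2

raw_tb = VDEVS * WIDTH * DRIVE_TB
usable_tb = VDEVS * (WIDTH - PARITY) * DRIVE_TB
print(f"Raw: {raw_tb} TB, pre-overhead usable: {usable_tb} TB")
# -> Raw: 320 TB, pre-overhead usable: 240 TB
# One pool of two vdevs pays the same parity cost as two one-vdev pools,
# but lets all datasets share a single reserve of free space.
```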

64 GB (DDR ECC) isn't enough considering the amount of storage? I didn't know about that. If 128 GB is a minimum, what would be the ideal / recommended amount (and why, if I may ask)?
ZFS uses RAM for everything, so more RAM generally means more performance. I do not have personal experience with over 200 TB of storage, so I don't know how RAM requirements scale at that level, but I would expect that 128 GB is a reasonable start. Only using the system and running arc_summary will tell whether there's enough for your workload, or whether you'd benefit from more RAM or from an L2ARC.
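If you want to peek at the raw counters behind arc_summary, here is a minimal sketch, assuming TrueNAS SCALE's standard ZFS-on-Linux kstat path (CORE/FreeBSD exposes the same counters via the kstat.zfs.misc.arcstats sysctl instead):

```python
# Minimal ARC hit-ratio check on TrueNAS SCALE (ZFS on Linux).
# Assumes the usual kstat file; arc_summary reports the same counters.
def arc_hit_ratio(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            # data lines look like "hits  4  123456"; header lines don't match
            if len(parts) == 3 and parts[1].isdigit():
                stats[parts[0]] = int(parts[2])
    hits, misses = stats["hits"], stats["misses"]
    return 100.0 * hits / (hits + misses)

print(f"ARC hit ratio since boot: {arc_hit_ratio():.1f}%")
```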

So I should change the 10G card, OK (to something like this one to avoid driver issues? though from what I've read the TP-Link is supported on TrueNAS SCALE)?
Preferably, yes. I note that you're not afraid of refurbished hardware (good for your wallet!) or of SFP+.
Being somewhat "supported" by the underlying OS is not the same thing as having a known good driver. Known good options are Chelsio, Intel server NICs (i210, i350 and then the 10 GbE NICs and higher, not the i225/226), and Solarflare. With your amount of storage, I would expect multiple clients requesting/sending lots of data simultaneously, and this is the situation where the lesser stuff is known to collapse under load.

Would you also be worried about my Linux server that uses the same card?
Depends what it is and what it does… ZFS was designed for enterprise requirements and assumes enterprise-grade hardware; ZFS is not that good at downgrading to consumer-grade hardware. (Think of ZFS as the "Elon Musk boss-from-Hell" of filesystems, requiring all components to be "hardcore" or quit.)

And I should also change the SATA card, but for what: one card with 16 ports, or 2 cards with 8 ports, or 1 card with 8 ports plus the ports on the motherboard? Perhaps this model on eBay at $228 (even if I'll be charged customs) instead of 585 € on Amazon FR?
Definitely an LSI 9200 (even the PCIe 2.0 generation is fine for HDDs) or 9300 (PCIe 3.0 generation; fine even for moving up to SATA/SAS SSDs). A -8i plus motherboard ports is fine. Two -8i are fine if you have the slots (and I suggest sending back your current motherboard, CPU and RAM for a refund and moving to the class of hardware which has the multiple slots). A 9305-16i is fine (but the most expensive option).

Yes, I'm using the Define 7XL; what would you suggest? It is a silent case, with good airflow and space.
16 spinning drives are not going to be "quiet". Having once tried to temporarily convert my Define 7 (not XL), I did not like the experience of sliding the trays into their little notches, and there are reports that the trays somewhat isolate the drives from airflow and hinder cooling (my ambient temperature is generally low, so that was not an issue in my case). But if you're happy with the prospect of installing and individually wiring 16 drives in there, I'm not going to insist on 3U/4U storage racks, their convenient SAS backplanes (one -8i HBA and expanders to bind all drives), and their wind tunnel fans to keep everything cool.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
(I'm [s]European[/s] a French speaker; over here we say To for teraoctet, which is the equivalent of the terabyte)
Fixed that for you.
the "SATA card"… HOLY CRAP! 16 ports on a 4-port controller (ASM 1064, PCIe 3.0x1) and multiple port multipliersDo NOT use that with ZFS! This is litterally guaranteed to destroy your data. Get a SAS HBA instead (LSI 9200/9300 series, or the Dell/HP/IBM equivalents).
Not guaranteed to destroy data, just likely. But it is guaranteed to suck away years of your life with all sorts of issues, from flaky disks to sloooooooow disk I/O. And that's if you get far enough along to set things up (far from certain when port multipliers are involved).
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
For the LSI card - ideally you would buy a used card from a dismantler in Europe, rather than a new card from China
To be fair, there are honest sellers and reliable dismantlers in China, and there are crooks and traders in fake goods in Europe/UK/US/wherever. The general issue is sorting out the good ones from the bad apples.
Thanks for pointing to a UK dismantler. Even with Brexit and the resulting customs hassle, it's a good step forward.

Regarding fakes, fake Intel NICs have been reported or strongly suspected here (the Yottamark is there for a reason), and there's even a resource from our ever-helpful Resident Grinch about fake HBAs. I've never heard of fake Chelsio or fake Solarflare cards, and I doubt that there could be a viable industry right now producing plausible fakes of these venerable NICs, to be sold on eBay for under $50 apiece; to me, any listing may be presumed a genuine item, freshly refurbished from an upgraded data centre.
But the NIC is the lesser issue, especially if the OP ends up with a genuine server motherboard which may well have on-board 10 GbE.
 

WoisWoi

Dabbler
Joined
Nov 20, 2023
Messages
32
OK, thank you for all your feedback!
I was thinking of getting this Chelsio card and either this (Dell) LSI 9305-16 card or this one, but I'm not sure if there is a difference between those two (except the references). Or maybe an LSI 9300-8i, and I'll connect the rest of the disks to the motherboard? I'll keep the same motherboard because I can't find a cheaper one compatible with ECC and LGA 1700, and I managed to find it 200 € cheaper.

But if you're happy with the prospect of installing and individually wiring 16 drives in there, I'm not going to insist on 3U/4U storage racks, their convenient SAS backplanes (one -8i HBA and expanders to bind all drives), and their wind tunnel fans to keep everything cool.
I already did the installation...
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Then, if the drives are inside the case, you want a -16i or (-8i) HBA, with 'i' as in 'internal'. These -16e are for connecting an external disk shelf.
The Chelsio card is good—but get the long bracket.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
I use a Fractal Design 5 - I added a 3rd Fractal Design fan (1 rear, 2 front), stuffed in 7 HDDs, and the airflow wasn't enough to keep the disks cool. The fans may be quiet, but they don't shift much air. So I replaced all 3 fans with some high static pressure fans on a separate fan controller and turned them down till the case was nearly quiet.
I replaced the fans without prior testing, as per your recommendation. I use Thermaltake Toughfans (17 € each, iirc); they were relatively cheap for high static pressure fans.
2 in the front and the stock fan in the back, not even bothering with the default fan controller (iirc it only accepted 3-pin fans, not PWM).
Temperatures for my HDDs are low 30s to high 30s, and that is on the standard fan speed setting (around 50% speed; I checked, 500 rpm it is).
I may look for cheap 4-pin to 3-pin adapters and use the built-in fan control to ramp up the fan speed on them for the summer. That's the only downside of my newfound love for IPMI - the fan speed control is not very customizable.
 