BUILD Building a file server for an SMB. Could use opinions and advice.

Status
Not open for further replies.

Chicken76

Dabbler
Joined
Jan 14, 2013
Messages
46
As the title says, I'm building a file server for a small business to replace the storage role of an old Windows Server machine.

The budget is pretty low, nowhere near TrueNAS prices, but I estimate I have enough to put together a decent FreeNAS machine with room to grow.

My choices for hardware so far are:

SERVER:
Supermicro SYS-6028R-TR
It's a 2U dual-socket server with 8 3.5" bays

CPU:
1 x Xeon E5-2620 v3
A single (for now; I might upgrade later to maximize RAM) 2.4 GHz Haswell hexa-core that turbos to 3.2 GHz.
Can't go higher price-wise, but I'm guessing it will be enough for a good while.

RAM:
4 x MEM-DR416L-HL01-ER21
That code is on the Supermicro Compatible Memory list.
64 GB should fit my 'working set' for now and upgrading later shouldn't be a problem.
I'm not going to use deduplication, compression or encryption.

DRIVES:
2 x WD RE 4TB WD4000FYYZ
Configured as ZFS mirror.
I have 1.something TB right now, growing by (rough estimate) a few hundred GB per year. I'm thinking these drives should be enough for the first two years or so without experiencing slowdowns as ZFS fills up.

Usage:
SMB shares (active directory integrated) with snapshots.
No NFS or iSCSI. No jails/VMs planned.

I still need some recommendations for a boot device. I'd like to avoid using the drive bays for this, so I'm leaning toward USB sticks. I might get two and mirror them if they're not too expensive.


I also plan on buying a second-hand server and replicating the datasets to it, but I should probably open a separate thread for that, right?
 
Joined
Apr 9, 2015
Messages
1,258
Should be good to grow into, as you said. You can grab a second CPU and more RAM easily, as well as add more drives. Mirrored USB drives are perfectly fine, but that server has two fixed bays, so it would be easy to toss in a couple of SSDs, which will boot much faster.

Just remember that adding more mirrors always leaves you with single-drive redundancy on each set. Expanding your zpool this way is more of a risk each time, because the pool is only as good as a single vdev: lose one vdev and you lose the pool. So I would plan for any expansion to be multiple drives in a RAIDZ2, with the mirrored drives removed after copying the data over. Good backups are always recommended, and it sounds like you plan to have them, but having everything go down will still throw a major wrench in the works, and it always happens at the worst time.
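To make the difference concrete, here's a rough command-line sketch of the two approaches (device names are made up; adjust for your controller):

Code:
# (a) striping another mirror into the existing pool -- the pool still dies if any one vdev dies
zpool add tank mirror da2 da3

# (b) building the expansion as a RAIDZ2 instead (shown here as a fresh pool you would copy the
#     data into before retiring the old mirror) -- survives any two drive failures in the vdev
zpool create tank2 raidz2 da2 da3 da4 da5 da6 da7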

As far as turning off compression goes, you will likely be hurting yourself more than anything: it has almost no overhead, and the benefits will be worth it when you have compressible data.

How many users will be using the server at any given time and will they be working with multiple small files or larger ones? Media related or just workflow/documents?
 
Joined
Mar 22, 2016
Messages
217
One thing that was recommended to me, and I've seen echoed elsewhere, was to invest in a speedier CPU if you plan to use CIFS, since it is single-threaded. If you went with a single-CPU board, like the X10SRL, you could then get an E5-1630 v3, or, if you wanted to spend the saved money, upgrade to an E5-1650 v3.

That's just some advice I've seen given out here.
 

Chicken76

Dabbler
Joined
Jan 14, 2013
Messages
46

Thank you for your reply.

Using two SSDs as a boot pool might be an option. Will see if they fit the budget.

Regarding more vdevs in a pool, I don't think I'll go that route. I'm starting with a 4TB pool, and I'm going to add other pools in the future, each at least this size. I'll spread my shares between pools to keep usage as even as possible. This way, if both drives in a mirror go down, I'll only be missing a few shares, which I'll restore from backups or from the replica.

Using compression would provide very small benefits, if any. Besides the usual Office documents and PDFs, we have many CAD files, which range from a few kilobytes to half a gigabyte in size, and they don't compress that well. I'm also thinking that when writing big files to a pool with compression there might be higher latency, which I'm trying to avoid.

By the way, how bad is the added overhead of having Active Directory (a Windows domain controller) validate access rights, as opposed to having local users on FreeNAS? Is it noticeable?
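(For context, once the box is joined to the domain I'd expect to sanity-check name/ID resolution from the shell with the standard Samba tools, something along these lines; the domain and user names below are placeholders, and I realize FreeNAS normally handles the join through the GUI.)

Code:
wbinfo -t                         # verify the machine account's trust secret with the DC
wbinfo -u | head                  # list a few domain users to confirm winbind enumeration works
getent passwd 'DOMAIN\someuser'   # confirm a domain account resolves to a UID mapping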
 

Chicken76

Dabbler
Joined
Jan 14, 2013
Messages
46

I've read that Samba is single-threaded, but how bad can that 2.4-3.2 GHz Haswell E5 Xeon with 15 MB of L3 cache be?
Going with socket 1150 would limit the memory to 32 GB, which would only partially hold the working set, so I'm thinking the higher single-thread performance would be negated by having to go to the pool more often.
 
Joined
Apr 9, 2015
Messages
1,258


It won't be bad at all. I can easily max a 1 Gb/s connection with some old E5640s. Being single-threaded makes much more of a difference if you are trying to use one of the ultra-low-power CPUs. You are better off using a board where you can maximize the RAM than worrying about Samba being single-threaded, especially with a decent workload.
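Also keep in mind that smbd spawns a separate process per client connection, so 20 users don't all funnel through a single thread; single-thread speed mostly matters for one client's peak throughput. You can see it on the box itself (rough sketch, output will obviously vary):

Code:
# roughly one smbd per active SMB session, plus the parent listener
ps aux | grep '[s]mbd'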

The compression will not affect anything as far as you will notice; the bottleneck will probably be the drives and then the Ethernet connection. That is actually one problem with using multiple separate pools rather than, say, two vdevs in a RAIDZ2. Your single mirror will probably average around 90 MB/s of throughput: even though it is two spindles, you will only get the write speed of a single drive. A single 4-drive RAIDZ2 vdev will have roughly double the average throughput, and you get two-drive redundancy out of the same number of drives that two mirror sets would take, with a lot less headache about remembering where a file lives. Adding a second vdev will effectively double the throughput again while keeping the redundancy high. Take a look at some of the testing here: https://calomel.org/zfs_raid_speed_capacity.html. It will also give you a reason to pick up a shiny little 10Gb Ethernet card and a switch, so that multiple clients can access the server at top speed at any given time. Separate pools only help if no two clients are hitting the same pool at the same time; if they are both accessing the same pool, the drives become the limiting factor, and it is even worse if one is reading while the other writes.
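If you want to see where the pool itself tops out before blaming the network, a quick-and-dirty streaming test run locally on the FreeNAS box looks something like this (paths are examples; /dev/urandom keeps LZ4 from compressing the test data away, but generating random data can itself be a bottleneck, and the read may be served from ARC unless the file is bigger than RAM, so treat the numbers as rough):

Code:
dd if=/dev/urandom of=/mnt/tank/ddtest.bin bs=1m count=4096   # ~4 GiB sequential write
dd if=/mnt/tank/ddtest.bin of=/dev/null bs=1m                 # read it back
rm /mnt/tank/ddtest.bin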

Plus you can turn off compression for a dataset that holds the CAD and PDF files, if they are kept separate from the other office documents, while leaving it on for the rest of the pool and gaining speed and a bit of space. And honestly, if this will only be acting as a file server, you will have more than a few CPU cycles available for the compression to do its work. LZ4 is written in such a way that when it encounters data that will not compress well it aborts early on, so there is really no reason not to use it.
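If you do decide to carve the CAD data out, it's just a per-dataset property, something like this (pool/dataset names made up):

Code:
zfs set compression=lz4 tank                 # keep LZ4 on for the pool as a whole
zfs create tank/cad                          # separate dataset just for the CAD output
zfs set compression=off tank/cad             # opt that dataset out if you really want to
zfs get -r compression,compressratio tank    # check what compression is actually buying you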

On the Active Directory front I have no clue what overheads or performance hits you will incur, so someone else will need to chime in.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

I'm sure that would require a mighty fast CPU.
Not by any sort of modern standard.

It's far more likely that performance will increase with compression because the drives have to work less.
 
Joined
Mar 22, 2016
Messages
217

I read a post from Cyberjock back from 2014:

Yep.. my e1230v2 saw a decrease significant enough to lower my NFS writes by almost 50%. So when all of these people at iX and elsewhere say "you won't notice" I have to laugh.

ZFS uses a block size that is variable from 512 bytes to 128 KB depending on the load. You cannot force a given larger size; you can only set a maximum allowed block size. 1 MB would be nice, but that's currently only implemented in zpools at version 32, which is Oracle's implementation only.

For example:
  • When people spew shit like "around 3x faster on compression of incompressible data" you've gotta talk about *what* it's 3x faster than. 3x faster than gzip-9 compression, even on the most expensive CPU money can buy, is still unacceptable speeds.

  • Your google code link mentions single thread performance of 422MB/sec compression on an i5-3340M. I can't find *solid* evidence that LZ4 is actually multi-threaded in the ZFS implementation. But, 422MB/sec is pretty crappy for an i5-3340M in the big picture. Sure, home users won't likely have a problem. But 422MB/sec is 1/2 of a single 10Gb link. And many users with G3220s and the new Intel Atoms are using CPUs that are slower than an i5-3340M, so that's not necessarily a "good thing" anyway. But it may be multi-threaded. It may not. It also may be multi-threaded only in certain circumstances.
In short, trying to sell me on a compression routine where there's very little transparency is pretty disappointing. This is pretty par-for-the-course for ZFS though. It's not a FreeNAS problem in my opinion, except that it's the default.

Also, I firmly believe that compression should NEVER be enabled by default. And even more so, if it is going to be enabled by default, it should be 100% *blatantly* obvious to the user. For example, during pool creation there's a dropdown to choose compression with the default being lz4. Sneakily enabling it and not saying a word about it is extremely dishonest. Just look at the 9.2.1 announcement. No comment at *all* about the change in default compression. Conspiracy theorists can now rejoice.

If I'm reading that right, it means the "abort" feature that is supposed to avoid the compression penalty of LZ4 isn't really there, due to the block-level write nature of ZFS.

I don't know if anything has changed since then, but I was hoping it had.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
How many users? What are usage patterns like? Just a few Excel and Word files, or video editing or CAD work off of the shares? Does it have to be rackmount?

A single mirrored vdev isn't going to have a whole lot of IOPS to throw around; you might want to consider using the same money to get more, smaller-capacity drives. Or you could always go all-flash for your storage. For instance, put 6 x 500GB SSDs in a RAIDZ2 array. That should give you plenty of capacity for now.

Make sure that you keep the blanks in the hotswap bays you aren't using. My experience with the hard drive cooling on an 8-bay 2U Supermicro chassis has been meh to ugh (I needed to take steps to make sure that airflow was going through the right areas). Personally, I'd rather have a 12-bay chassis if I'm taking up 2U.

You could easily get by with a single-cpu motherboard, but LGA2011 is still a good idea (because of the amount of RAM you can have).
 

Chicken76

Dabbler
Joined
Jan 14, 2013
Messages
46

I have about 20 users right now. There's no video editing going on, and the Office work is not worth mentioning from a load-on-the-fileserver perspective. It's the CAD work on these shares that's going to be 95% of the load. Besides the random opening of single-file drawings (a few MB in size each, which shouldn't be a problem for 20 users), we have two applications that depend on storage speed:
  • one, upon opening a section/assembly, reads hundreds or even thousands of small files (tens to hundreds of KB each), in a serial manner of course. Upon saving, it's the same thing but with writes. This currently takes up to a couple of minutes from an old machine with 4 GB of RAM.
  • the other uses single files, but big ones, up to 500 MB that I've seen so far. That's not so bad for reads, but users have begun changing the autosave interval to insane values because it disrupts their workflow when the application freezes for a few seconds during an autosave. They also do regular saves less often, which leads to a lot of time lost when there's a power failure, application crash, etc.
As for going all-flash, I'm afraid that is way outside the budget. I understand your point about a single mirrored vdev not having the IOPS to handle a lot of random accesses, but I'm hoping it won't have to. As I said in a reply above, my working set should fit in 64 GB of RAM, so most requests should be served from RAM and not from disk. As for writes, I don't think any of them are synchronous, so they should land in memory first and then be flushed to disk asynchronously.

The only problems I see right now are:
  • slower speeds after booting the machine, since the ARC is not filled. If this becomes a problem (it shouldn't; I won't restart the machine for months), I'll look into adding an SSD as L2ARC. Is the recommendation about adding L2ARC only on machines with 128 GB of RAM or more still standing? And speaking of SSDs, are the Supermicro 3.5" caddies capable of taking 2.5" drives?
  • since this is an Active Directory environment, I'm not sure how much latency checking access rights with a third machine adds. I've used FreeNAS before, but not integrated into a Windows domain. I could use some info from people who are doing this on their networks and can share what problems they encountered.
 

Joined
Apr 9, 2015
Messages
1,258
Yeah, the ARC is only going to help with reads, not writes, and writes seem to be the main area of complaint for the users; L2ARC will not help with writing at all.

The options are numerous and will depend on how much money you want to throw at it versus how badly the problem needs to be dealt with. The first recommended option is ALWAYS to throw more RAM at it; max the board out and FreeNAS will handle it as best it can. But since you will be using that RAM as volatile storage, the ZIL would come in handy in a power outage.

A pair of SSDs for only the CAD files is an option, as is a set of fast SAS drives in a RAIDZ2. It would be pretty cheap to get a BUNCH of old ~140GB 10,000 or 15,000 rpm SAS drives. I was looking at them while kicking around an idea, and they can be found used for about ten bucks each on eBay. Being used, you would probably want a couple as hot spares, but you could easily get massive performance out of two vdevs and end up with a decent amount of storage for the cost. It would also be possible to run multiple mirrors in one pool, but that would also require hot spares or very good backups.

There is also the option of SLOG to speed up the writes.

I also have to say that with multiple users you will either want to aggregate the network connections or, better yet, make sure that the server has a 10Gb card and a switch that complements it. All the speed in the world in the machine will not help when the users are maxing out the connection, and 20 users saving multiple files at the same time will see some hefty slowdowns if they all hit the machine near quitting time. I don't know what your network looks like, but if you have a single 24-port switch running the clients and the server, it would be ideal to swap in a 24-port switch with a 10Gb uplink. Imagine each user around 5pm starting to save, each with 400 MB of data to write to the FreeNAS: a single 1Gb connection can handle around 100 MB/s, and you have a total of 8 GB to be written from 20 users. It's a worst case right now, but your files will probably only increase in size, and I am guessing the number of users will not decrease either. Making the network change shifts the bottleneck, and that is what we are really doing with these systems: ideally the bottleneck sits with each client, but the more clients we have, the more likely the bottleneck becomes the server.
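For reference, on the FreeBSD side link aggregation is just a lagg interface, roughly like the sketch below (in practice you'd configure it through the FreeNAS GUI, your switch has to speak LACP, and the interface names and address here are made up). Keep in mind LACP balances per-flow, so a single client still tops out at about 1 Gb/s; it helps many clients at once, not one big transfer.

Code:
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1 192.168.1.10 netmask 255.255.255.0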

I would honestly think about a 15- or 24-drive case or larger, and multiple drives even if they are small; with multiple spindles you will get SSD-like speeds or better. With 500GB drives in a single 10-drive RAIDZ2 vdev you will be looking at around the same capacity as your two-drive mirror, with write speeds around 400 to 500 MB/s and reads around 800 to 900 MB/s. At around $30 per drive you will be around the same cost as the two drives you are showing, with the ability to expand to another vdev and eventually grow the pool by replacing drives. It's the most cost-effective way I can come up with that will not risk data, gives a lot of speed, and is halfway affordable. But it will rely on either having link aggregation and the hardware to support it in the switch, or an upgrade to 10Gb networking for the server and a switch that has a 10Gb uplink.

Everyone has their own way of coming up with a solution, and to me that is the fun of it all. There are ten different ways to solve the same problem; you just have to open your mind to them and see which one works best for your situation.

I have an SSD in my desktop and I love it for booting and reads, but once the writes go over about 50MB it plain SUCKS, so that is why I skip SSDs for now. Yes, SLC SSDs are much faster, but they are also very expensive and will be out of your budget for a decent amount of storage. Plus, it sounds as if you expect to have a lot of data coming out of the CAD programs, and my guess is that 1TB would get you by for only a very short time. Spinning rust is slower but cheaper, and you can aggregate drives easily to make a fast vdev. Also, to take full advantage of some mirrored SSDs you would still need the network upgrade, and the initial cost will be higher to meet your storage requirements.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554

I'm not sure how much a SLOG device will help with Samba performance. I have some modestly active Samba servers, and zilstat shows basically zero ZIL usage. L2ARC won't help write speeds at all. More ARC can improve write performance because of the way that the ZFS write cache (not the ZIL) works: ZFS aggregates writes into transaction groups, by default sized at 1/8th of your system's memory, which it then flushes out to your disks every 5 seconds. More RAM, bigger transaction groups. Of course, if your pair of disks is struggling to keep up with this rate, ZFS will pause I/O to let your pool catch up. For more info on this, see jgreco's post here: https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
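If you want to poke at the knobs involved, they show up as sysctls on FreeBSD (names and defaults shift a bit between FreeNAS/ZFS versions, so treat this as a pointer rather than gospel):

Code:
sysctl vfs.zfs.txg.timeout            # seconds between transaction group syncs (default 5)
sysctl vfs.zfs.dirty_data_max         # ceiling on dirty write data held in RAM before throttling
sysctl kstat.zfs.misc.arcstats.size   # current ARC size, for comparison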

Filling a chassis with 10-15K RPM drives will be noisy, generate a lot of heat, and consume quite a lot of power.

If I have time next week, I'll try to benchmark write performance for moving thousands of files in the 1KB - 16KB range (maybe generate a 3-5GB set of them).

I'm thinking of doing the following tests (with zilstat running):
1) 6 * mirrored vdevs with 7200RPM drives on server with 48GB RAM
2) 1 * 8-drive RAIDZ2 vdev with 7200RPM drives on server with 32GB RAM
3) Repeat tests (1)-(2) with case sensitivity off.
4) Repeat tests (1)-(2) with extra vfs modules disabled.
5) Repeat tests (1)-(2) with "aio write size = 1" - force samba to use its aio code path.

This should give a rough idea of the benefit of both setups in a variety of different situations with teensy files and common performance tweaks (as well as give an idea about whether the ZIL is touched at all).
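For the test set itself I'll probably just script the file generation, something like the sketch below (sizes and paths are arbitrary, and it could equally be done on the Windows client):

Code:
#!/bin/sh
# generate ~95,000 files of 1-16 KB each (a few GB total) for the small-file copy tests
DEST=/mnt/tank/benchmark/smallfiles   # example path
mkdir -p "$DEST"
i=1
while [ "$i" -le 95000 ]; do
    size_kb=$(( (i % 16) + 1 ))       # cycle sizes through 1..16 KB
    dd if=/dev/urandom of="$DEST/file_$i.bin" bs=1k count="$size_kb" 2>/dev/null
    i=$((i + 1))
done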
 
Joined
Apr 9, 2015
Messages
1,258
The thing I was thinking about with the ZIL is a system with a very large amount of RAM: since it seems to max out at 2TB, 1/8th of that, or approximately 250GB, is quite a bit of data to deal with on an unfortunate power loss, and if the disks can't keep up it will only get bigger, or there will be a total system slowdown.

Yes, 10k and 15k drives are going to produce more heat and noise and use more power. With it not being in a home, I don't think noise will be a huge deal, and while heat can be a problem, there are ways to deal with it, like placing a vent fan in the server room so that the hottest air is vented outside rather than cooled, and in winter venting it into the office will probably be a boon in a cooler climate. The 500GB drives at around $30 new apiece, however, are SATA drives at 7200 rpm; it's still a pain to find a SAS drive at that size and price point, so it remains a tradeoff of space versus speed: less space for faster speed, or twice the number of disks. I tend not to nitpick the power so much, since it is part of the now-versus-later question: cheap now and more to run, or expensive now and cheaper to run, it all ends up the same in the end. Plus, a swap to larger, slower drives down the road will lower power consumption while increasing space, but this gets things up and running ASAP.

Very interested in the tests though; they should help feed more information into the loop for everyone's benefit. I have been dealing mainly with large files and base a lot of my speed assumptions on testing that is much the same. If the systems used have 1Gb and 10Gb Ethernet, I would be very interested in seeing whether you end up network-capped when two or three clients test at the same time, each doing a portion of the workload. That may very well be an issue with the system the OP is putting in place.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554

I performed my tests with a 1.5GB folder containing about 95,000 files, copied using robocopy (the best transfer took 8 minutes 27 seconds). Killing case sensitivity may have yielded a speed increase of 5-10%. Speeds were pretty much the same across both pools. Enabling the aio_pthread vfs module and setting Samba to use AIO resulted in the transfer taking about 30% longer, which I think supports Jeremy Allison's statement to the effect that it doesn't work right on FreeBSD.
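(If anyone wants to reproduce this, a robocopy invocation along these lines will do it; paths and the share name below are placeholders, and the thread count is worth tuning.)

Code:
robocopy C:\bench\smallfiles \\freenas\testshare\smallfiles /E /R:1 /W:1 /MT:8 /NP /LOG:C:\bench\run1.log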

I have a sinking suspicion I may have only benchmarked my client system (SSD, 8GB RAM, Windows 10, Pentium G3250), which is okay because I was curious about maximizing the speed of a single client.
 