Extend 4 SATA ports to 6 ports

Status
Not open for further replies.

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
Hi all,

I have following hardware:
  • Intel Atom D525 1.8GHz (dual core)
  • 4GB DDR3
  • 4 x Samsung Spinpoint F4EG HD204UI (uses all 4 SATA ports available on the motherboard)
  • 1 free PCI slot
  • 1 free IDE controller

Originally I was going for Ubuntu Server and 4 drives in software RAID5. But then I came across FreeNAS and read about ZFS. As I liked what I read about FreeNAS and ZFS, I was going for a RAIDZ with 4 disks. But after reading more on this forum about redundancy and possible data loss due to disk failures, I'm leaning more towards RAIDZ2 and would like to have 6 disks instead of 4.
So my question is, what would be the best way to extend my system to 6 disks?
 

headconnect

Explorer
Joined
May 28, 2011
Messages
59
I think the simplest option would be using a cheap 2-port PCI SATA card? I'm sure you can find some links if you search around for it. There are also some threads relating to performance complications, but they are related to using multiple PCI cards simultaneously, so that shouldn't be a problem for you. You might want to consider getting a 4-port card instead, just to have the flexibility to go for an 8-disk solution in the future (you may decide to add two new disks as a mirror, for example) - but that's up to your budget :)
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
If the budget would allow it, the case wouldn't. I only have room for 6 disks...

Since you mention performance, won't I take a performance hit? As far as I know PCI has a speed of 133MB/s, and if I put 2 drives on there I would only have half that speed for each drive. I also read somewhere that new drives are capable of almost hitting 100MB/s. So would my array be much slower because my other disks will be waiting on those 2? Or am I way off and the difference will be unnoticeably small (I have gigabit LAN)?

As an alternative solution, I found there are also IDE-to-SATA adapters, so I could add 1 SATA drive to the IDE controller and 1 SATA drive through a PCI card like you suggested. Or would this have the same performance hit?
 

headconnect

Explorer
Joined
May 28, 2011
Messages
59
I think you'd have at least the same performance hit on IDE->SATA as you would on PCI->SATA. I'm personally not experienced with using PCI expansion cards for SATA, but I'm fairly certain some other forum members are (come on guys, help a brotha out), so they should be able to let you know about that.

There may be other alternatives as well, but I'm not so certain about the feasibility (i.e. maybe USB2.0->SATA, etc., but _do_ _not_ attempt that unless someone with actual experience, and hopefully a dev, says it's 'not a horrible idea' ;)).

Anyway, I would most probably go for PCI over IDE for additional drives.
 

freeflow

Dabbler
Joined
May 29, 2011
Messages
38
Do the sums. Gigabit network supports a theoretical maximum of 125 megabytes/sec. In practice you won't get this speed. For your six-disk RAIDZ2 array you will need to read at a speed of 125/4 (you can ignore the parity disks for this calculation), or about 31 megabytes/sec/disk. This should be achievable by a PCI card (roughly 66 megabytes/sec per disk when two disks share the bus). However, you will need to check that the gigabit network card doesn't also sit on the PCI bus. If it does, then disk I/O and network traffic will all be sharing the same 133 megabytes/sec theoretical maximum of the PCI bus. The same advice applies to the IDE port.
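The sums above can be sketched as a quick back-of-envelope check. A minimal sketch, using the thread's own rough figures (125MB/s gigabit ceiling, 133MB/s shared PCI bus, 4 data disks in a 6-disk RAIDZ2) rather than measured numbers:

```python
# Back-of-envelope check: can two disks on a shared PCI bus keep up
# with their share of a gigabit link? (figures are rough assumptions)
GIGABIT_MB_S = 125.0   # theoretical gigabit Ethernet maximum, MB/s
PCI_BUS_MB_S = 133.0   # theoretical 32-bit/33MHz PCI bus maximum, MB/s
DATA_DISKS = 4         # 6-disk RAIDZ2: 2 of the 6 disks hold parity

# Each data disk only needs to deliver its share of the network ceiling.
per_disk_needed = GIGABIT_MB_S / DATA_DISKS   # 31.25 MB/s

# The two disks on the PCI SATA card split the bus between them.
per_disk_on_pci = PCI_BUS_MB_S / 2            # 66.5 MB/s

print(f"needed per disk:  {per_disk_needed:.2f} MB/s")
print(f"available on PCI: {per_disk_on_pci:.2f} MB/s")
print("PCI card keeps up:", per_disk_on_pci >= per_disk_needed)
```

As noted, this only holds if the NIC is not also hanging off the same PCI bus; otherwise disk I/O and network traffic share the 133MB/s between them.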
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
Here is a block diagram of my motherboard.


If I read it correctly, the network interface is on a different bus than the PCI slot, so I have the full speed of the PCI bus available for the 2 SATA disks. According to your calculations this should be enough to saturate a gigabit network, so on my client computer (the one connecting to the NAS) I won't notice any difference.

Purely theoretical now: according to the above block diagram, would it be possible to obtain better throughput for the disk array if I put 1 disk on the IDE channel (using some sort of adapter) and 1 disk on the PCI bus (using a SATA controller card)?

Thx for all the info already provided. I'm trying to understand the ins and outs of the system I'm building :)
 

Fornax

Dabbler
Joined
Jun 22, 2011
Messages
12
Hi Sjieke,

Do notice that PCI and PCIe (PCI Express) are two totally different architectures.

Regarding this Gigabyte motherboard (GA-D525TUD and its brothers):

If I read it correctly, the network interface is on a different bus than the PCI slot, so I have the full speed of the PCI bus available for the 2 SATA disks.

You did indeed read that diagram correctly. The PCI slot where you would put your SATA card is the only device on that bus, so it has the entire 133MB/s to itself. It is not shared with the onboard NIC.

A quick Google appears to tell me that the NM10 chipset supports a maximum of 4 PCIe lanes. One of them is used by the NIC, so the NIC is effectively on its own bus as well, which is good. Another PCIe lane connects to the chip that provides the 3rd and 4th SATA ports and the PATA port; if this is a 'simple' 1st-generation PCIe lane, it has 200MB/s bandwidth, again plenty to saturate the NIC. (That looks like it leaves 2 PCIe lanes unused on this motherboard, which is not too surprising: it uses onboard video, already has an extra chip providing 2 SATA + 1 ATA connectors, it is small, and it already offers more than most other boards out there.)

I have been looking into this particular board myself and it seems to be close to a perfect choice (feature-wise), except that it can hold only 4GB of RAM. More RAM would have been better because of the disk cache (ARC), but that is a chipset limit for this platform.
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
Thx for the info,
It makes me feel good about my choice :cool:

I initially was going to install Ubuntu Server and software RAID as mentioned in my first post, so for that the 4GB of RAM would have been plenty.
I indeed read everywhere that more RAM would have been better for the ZFS disk cache (ARC).
If I understand it correctly, the disk cache is used to avoid accessing the disks for commonly requested data. But I think that 4GB might be enough for my needs (a home user) because:
  • Most of the time there will be only 1 user.
  • It will be used to access some text files or stream music and video. To fill 4GB with text files, you need to work very fast I think :). Streaming a Blu-ray movie wouldn't fit in 8GB either, so for that the cache won't matter.
  • Another task it will be performing is downloading using BitTorrent (once the plugins are available). And since this is new data from the net, caching doesn't apply.
  • And last, the read and write speeds of the array should be able to saturate a gigabit network, so if the cache misses, it shouldn't be noticeable.
So if ZFS doesn't use a lot of RAM for other stuff, I think most home users won't notice the difference between 4 and 8GB of RAM.
If you have a lot of concurrent users, the situation changes of course.
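My reasoning above, put into rough numbers. The ARC size, movie size, and array speed here are all assumptions I made up for illustration, not measurements:

```python
# Rough sanity check of the ARC reasoning for a single home user.
ARC_MB = 3000.0          # assumed portion of 4GB RAM available to the ARC
BLURAY_MB = 40000.0      # assumed size of a Blu-ray movie, ~40GB
ARRAY_READ_MB_S = 125.0  # assumed array read speed (enough for gigabit)
GIGABIT_MB_S = 125.0     # theoretical gigabit Ethernet maximum

# A movie is far too big to be served from the cache...
movie_cached = BLURAY_MB <= ARC_MB              # False

# ...but a cache miss still saturates the network link, so the
# client never notices the difference.
miss_is_fine = ARRAY_READ_MB_S >= GIGABIT_MB_S  # True

print("movie fits in ARC:", movie_cached)
print("cache miss still saturates LAN:", miss_is_fine)
```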

I'm not able to test the performance as I have some trouble with the delivery of my hard disks.
So could you give your opinion on my thoughts? Is my thinking reasonable, or am I missing something crucial here?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,877
PCIe and SATA ports are a big deal in the SSD world right now, especially when discussing 6Gb/s SSDs. A normal PCIe x1 slot cannot provide the full ~1200MB/s that two such drives could demand (it's just shy of it), but for a standard hard drive, not a problem at all. In fact you need a PCIe x4 card to reap the maximum benefit of 6Gb/s SSD speeds.

Remember, even fast traditional hard drives are typically well under 150MB/sec except for some burst speeds, so find a PCIe x1 to SATA card and you will never have to worry about throughput. Buy a 4-port if you find a nice one, for future expandability in case you buy a new case. Do not buy a RAID card for FreeNAS; it's not needed and costs significantly more.
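The arithmetic behind this can be sketched quickly. Assuming roughly 500MB/s usable for a PCIe 2.0 x1 lane and ~600MB/s payload per 6Gb/s SATA port after 8b/10b encoding (both ballpark figures, not from this thread):

```python
# Why a single PCIe x1 lane is fine for spinning disks but not for
# two 6Gb/s SSDs running flat out. (figures are assumptions)
PCIE2_X1_MB_S = 500.0  # assumed usable bandwidth of one PCIe 2.0 x1 lane
SATA3_MB_S = 600.0     # ~6Gb/s SATA payload after 8b/10b encoding
HDD_MB_S = 150.0       # generous sustained rate for a fast hard drive

two_ssds_demand = 2 * SATA3_MB_S   # 1200 MB/s
two_hdds_demand = 2 * HDD_MB_S     # 300 MB/s

print("x1 lane covers two SSDs:", PCIE2_X1_MB_S >= two_ssds_demand)  # False
print("x1 lane covers two HDDs:", PCIE2_X1_MB_S >= two_hdds_demand)  # True
```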

-Mark
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,877
I wonder how well that board works. It's a Dell, which to me means it's probably a very capable board. If it had 6Gb/s connections I'd be seriously looking at it for my main computer.
 

Fornax

Dabbler
Joined
Jun 22, 2011
Messages
12
That card is not an option in this case. The motherboard only has a PCI expansion slot, but that card is PCIe.
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
Indeed, I only have a PCI slot. But thx for the suggestions :)
 

Tekkie

Patron
Joined
May 31, 2011
Messages
352
My mistake for missing that bit in the question.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,877
I didn't see it either, because the line diagram shows PCIe, but when looking on the internet for a photo it is definitely a normal PCI connector.

Here is one that should work fine, and there is a 4-port if you want to spend 10 more bucks. I have this card and used it for FreeNAS 0.7 just over a year ago when I was playing with it.
http://www.amazon.com/dp/B00552PL3E/?tag=ozlp-20
 