What is State tx->tx

Status
Not open for further replies.

bernardc

Dabbler
Joined
Jan 10, 2013
Messages
31
and why does it correlate with pauses in my large transfers to the FreeNAS? Here are two screenshots showing the transfer condition and the pause condition (attached below as Copy running.jpg and Copy stalled.jpg). The transfer rate is excellent compared to the existing QNAP snail server, but it would be a lot better if it didn't pause half the time. My system is well cooled (plus it's freezing in here).

I'm trying to get a new FreeNAS server going in a mostly Windows environment. I've gone through the checklists and still have the following difficulties:
1. FreeNAS is not showing up in Windows Explorer even though the Workgroup name is correct and CIFS is running. BUT, if I just type in \\freenas it does show up.
2. When I add a second CIFS Share, it simply mirrors all of the contents of the first share. Anything I do in one, such as delete a file, also shows up in the other.
3. I don't quite get the distinction between a mount point, a volume, and a dataset. Duh. Sorry.

On another topic:
I have an LSI hardware RAID array that is formatted as a single ZFS volume. Can I add drives to the array without having to rebuild everything?

I know I'm totally lost, but I'm making good time!
 

Attachments

  • Copy running.jpg (20.4 KB)
  • Copy stalled.jpg (20 KB)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
tx->tx is, I believe, transaction group flushing. IIRC it was primarily seen when the transaction group size was too big, which is consistent with "pauses". Your transfer speeds are probably a little "too good" and your disks a little too slow. If so, this is a variation of bug 1531 and could be addressed the same way.
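
If you want to poke at it yourself, the knobs are loader/sysctl tunables along these lines. This is a sketch only: the exact names and sane values depend on your FreeBSD/FreeNAS version, and the numbers below are placeholders, not recommendations.

  # /boot/loader.conf or /etc/sysctl.conf -- illustrative values only
  # cap how much dirty data one transaction group may accumulate before it
  # has to be flushed out to the disks (here 256 MiB, a placeholder)
  vfs.zfs.write_limit_override=268435456
  # force transaction groups to be flushed at least this often (seconds)
  vfs.zfs.txg.timeout=5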

1) I don't do Windows enough to know.

2) You need to specify a different directory to share for each share (see the sketch below).

3) One of the other posters here has a FAQ at the very top of the forum explaining these things.
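
To illustrate 2) and 3), a sketch only, with "tank" standing in for whatever your pool is called: the volume is the pool itself, datasets are filesystems carved out of it, and each dataset gets its own mount point, which is what you hand to a CIFS share.

  # create two separate datasets on the pool
  zfs create tank/media
  zfs create tank/backups

  # confirm each has its own mount point (on FreeNAS they land under /mnt)
  zfs list -o name,mountpoint

  # then point one CIFS share at /mnt/tank/media and the other at /mnt/tank/backups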

On the other topic: if your LSI controller supports growing a volume by adding disks, then ZFS can probably be convinced to work with that. It is a bad idea, though. The LSI controller should be used to pass the disks straight through ("IT mode").
 

bernardc

Dabbler
Joined
Jan 10, 2013
Messages
31
jgreco,
Wow. You've done a lot of work on this. Your description of the catatonic states is exactly what I've got. I'm disappointed to gather that there has been barely a trace of acknowledgement of the problem. I'll need to dabble with the tuning as you have explained.

Regarding the LSI RAID card, I'm curious about this issue. The manual seems to be of two minds. On the one hand, section 1.4.4 says, "If you need reliable disk alerting, immediate reporting of a failed drive, and/or swapping, use a fully manageable hardware RAID controller such as an LSI MegaRAID controller..." OK, so I'll use a RAID card. But then section 1.4.6 advises dumbing down your hardware RAID card. So what does all this mean? Do I have the wrong OS? Or did I throw away $900 on the LSI card when I could have just plugged the drives into the motherboard?
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi bernardc,

If you plan on using ZFS then I'm sorry to say that you pretty much threw away $900.00 on the controller.

-Will
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
With 32GB you are more likely to need to do a little work, but once you do, I think you'll really like your setup.

The important part to understand here is that the benefit of a RAID controller is not in the RAID capabilities. Your typical mainboard controller has the advantage of being cheap, but probably doesn't support hot swap and almost certainly can't support more than 4, 6, or 8 drives (depending). Therefore people often use a different attachment technology ... and that'll generally be a RAID controller.

So there are two ways to go with that.

1) The most straightforward ZFS setups are one controller channel per drive. This is guaranteed to give good performance but can get a little expensive once past eight ports; an M1015 off of eBay cross-flashed to IT mode runs about $75, but 16-port controllers are pricier and 24-port ones are probably north of $1K. LSI is particularly hard to shop for, as they OEM a lot of stuff for other name brands.

2) The more complex way is to use SATA port multipliers or SAS expanders, a route that carries with it some unknowns and risk, even for those of us who do this professionally, because the technology may make it more complicated to identify the physical location of a drive, or to understand the dynamics of the choke points in the storage topology, etc. But once you go this route, you can attach a large number of drives using a small number of controller channels, as long as you're not slagging out those channels with too much traffic, etc.
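
Either way, a quick sanity check (a sketch; device names will differ on your box) is to look at what the OS actually sees and which controller each disk hangs off of:

  # list every disk FreeBSD sees and the bus/controller it is attached to
  camcontrol devlist

  # ZFS only cares that the disks show up as plain devices (da0, ada0, ...)
  zpool status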

So you didn't really tell us much about your system; more detail would help us understand your issues a bit better.
 

bernardc

Dabbler
Joined
Jan 10, 2013
Messages
31
Thank you, survive. When there's bad news, I want to hear it right away, and unambiguously. Luckily, I can return the $900 card.

So then if ZFS is so good, why are there hardware-RAID cards (as opposed to cheaper HBAs)? Are they bought by people stuck in the '90s? Ones who have not kept up with the drifting realities of technological advancement? Were they taught this by Microsoft? And why are they so sure? I ask you.

Regarding my system, it's a Supermicro X9SRA motherboard with 10 SATA ports, 32 GB of ECC RAM, and a 2-GHz 6-core Xeon E5-2620, currently housing four 3-TB SATA drives with room for 11 more. Someone asked me, "What makes this a 'server' MB?" Now I know: 10 disk ports.

NEW QUESTION: I've taken your advice and connected another 4 SATA disks to four ports listed in the MB manual as SAS ports, as I understand that SATA disks can be used in them. In the BIOS, they show up fine, but after booting, View Disks doesn't show them. What am I missing? I suppose I need to show the boot-up log, but how do I capture that? (I know it has something to do with pipes. "The internet is a series of tubes," the late Alaska senator said.)
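
For reference, this is roughly how I've been trying to grab it, on the assumption that the kernel keeps the boot messages around somewhere:

  # dump the kernel message buffer to a file I can attach
  dmesg > /tmp/bootlog.txt

  # or read the copy saved at boot time
  less /var/run/dmesg.boot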

I know it's wrong to joke when RFeynman is having a crisis. But his story has made me redouble my effort.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
So then if ZFS is so good, why are there hardware-RAID cards (as opposed to cheaper HBAs)? Are they bought by people stuck in the '90s? Ones who have not kept up with the drifting realities of technological advancement? Were they taught this by Microsoft? And why are they so sure? I ask you.

I think it's a combination of people buying enterprise-class hardware because it's "enterprise class", people being ignorant of ZFS (I had heard about it but didn't know about all of the advantages until early 2012), and the fact that FreeBSD really isn't that popular compared to Microsoft. Too many people know MS stuff and assume there can't possibly be anything better when you have a multi-billion-dollar worldwide software developer making software.
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
I think jgreco covered this once, but sometimes "enterprise class" isn't just marketing speak. Sometimes the products really are better, as defined by better components (longer life), performance, and/or support. For instance, I know some RAID cards send out a piercing alarm when a problem is detected. FreeNAS's software RAID doesn't yet have that level of seamless real-time alerting.

It's easy to say ZFS is good. But out of curiosity, if a catastrophic bug were found tomorrow, exactly how would enterprises get patches? Who exactly "owns" ZFS at this point? Who's "responsible" for fixing it (saying it's provided for free and it gets fixed whenever it gets fixed isn't exactly comforting)? Now, we can all say we think ZFS has been torture-tested and is reliable at this point, but I'm not going to call someone who has concerns about prematurely shifting to it stupid or out of date. It's all fun and games until stuff doesn't work.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think it's a combination of people buying enterprise-class hardware because it's "enterprise class", people being ignorant of ZFS (I had heard about it but didn't know about all of the advantages until early 2012), and the fact that FreeBSD really isn't that popular compared to Microsoft. Too many people know MS stuff and assume there can't possibly be anything better when you have a multi-billion-dollar worldwide software developer making software.

A slightly different perspective: there is a demand for cheap hardware, driven by cheap consumers who want to get the most for the least. From this pool spawns the kinda-works stuff like Realtek ethernet chipsets. There is a demand for quality hardware, driven by businesses who just want to get on with getting business done, and are willing to pay a reasonable premium to obtain it. From this pool spawns the reliably-works stuff like Intel ethernet chipsets. There may be a third pool, the prosumer, a little bit in the middle, but probably leaning towards business/enterprise.

So consider storage:

- Your average consumer stores their photos on the cheapest PoS external drive they can get their hands on at WalMart, fully expecting that it will be with them for the rest of their lives. The slightly more "power user" ones will get the drives built into their PC by the local PC shop, maybe even with the crappy and chintzy software RAID written by intern monkeys of the SATA chipset's manufacturers, which keeps the data in RAID until one of the drives dies and the recovery software doesn't work quite right.

- Your average business just wants their data to Be There(tm). So years ago, long before ZFS, it became common for businesses to utilize RAID and "enterprise grade" hard drives to build fast, supposedly resilient storage systems, and for manufacturers of these to charge a premium for things like RAID and BBU because the market was real tolerant of that sort of thing. Businesses also had the need to store massive amounts of data, so technologies to allow dozens or hundreds of drives to be attached exist for that. Usually data protection of such large caches of data is considered mandatory! And as long as it all worked with Netware, Windows, NT, and later things like ESXi, it was all good stuff, but basically the hardware was targeted towards those use models.

Now along comes ZFS, a revolutionary storage technology championed by Sun, who recognized that building specialized silicon for storage was extremely expensive and rapidly outdated every few years, while CPU advances with multiple cores and multiple busses meant that you could potentially run your RAID code in kernel-land on the host system. Sun also had a history with similar products, particularly Online: DiskSuite (later named Solstice DiskSuite). Attaching the disks directly and letting the host CPU deal with them gave Sun a unique edge in a variety of ways, not to mention the opportunity to sell massive-CPU, massive-memory servers.

Next, along comes FreeBSD on Intel, which integrates ZFS. And now you wonder why there aren't cheaper HBAs. Well, there are, but for the most part the "HBAs" you will find are the ones that let you add two or four SATA ports to the system. If they have a decent chipset on them, they probably even work fine for ZFS. But they lack port density. So then we have to look up. There are HBAs that support larger numbers of drives, but they're often built on a similar or identical platform to RAID controllers from the same manufacturer. Look at the LSI2008, which can be flashed to IR or IT mode.

So this is where the PC world is completely awesome; many businesses work on three year cycles, and many manufacturers include things in base packages that aren't needed. So you can look both at the gear that's being retired as "out of date" and also stuff that's current but being pulled out to be upgraded. I don't remember offhand the story of where the flood of M1015's is coming from, but you know that for every M1015 you see on eBay for $75, some business probably paid more than twice that for it. And the M1015 is built on a generic platform LSI designed to be able to build a range of storage controllers, both HBA and RAID. So you can get your cheap multiport HBA for ZFS off eBay, but it'll also be capable of being a RAID card, because designing it to support both possibilities means more sales of a single product, which is in turn easier to support.

Now let me flip this on its ear, because fair is fair, and while I agree ZFS has great advantages, it also sucks, at least right now, in some ways, on FreeBSD.

ZFS won't let you know when a drive fails, at least not right away. By way of comparison, when we have a drive in RAID die on one of our ESXi hosts, we get an instant flag and alert.
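
The usual workaround is a periodic check that nags you, something like the cron sketch below (the schedule and address are placeholders):

  # /etc/crontab entry (sketch): mail me whenever a pool is not healthy
  */15  *  *  *  *  root  zpool status -x | grep -qv 'all pools are healthy' && zpool status -x | mail -s 'zpool alert' admin@example.com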

ZFS needs some significant manual puttering to swap in a new drive. By way of comparison, when one of our RAID drives dies, most RAID gear these days begins the rebuild without further prompting when a new drive is inserted.
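
For the curious, the manual dance looks roughly like this; pool and device names here are placeholders:

  # see which device ZFS thinks has failed
  zpool status tank

  # tell ZFS to rebuild ("resilver") onto the newly inserted disk
  zpool replace tank da3 da5

  # watch resilver progress
  zpool status -v tank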

And so on. I fully expect ZFS to be able to do those things in the near future. But right now? The RAID controllers still have some advantages if you need simple and reliable.
 