AsRock now showing an interesting 1U product...

scurrier

Patron
Joined
Jan 2, 2014
Messages
297
Here's some info on the processor:
http://www.zdnet.com/intel-juices-up-microserver-speeds-with-thrifty-avoton-chip-7000020173/

Sounds like this processor has a power consumption advantage over a full-size processor when it's constantly doing lots of small tasks? It supports AES-NI and ECC, so those aren't an issue.

I wonder what specific workloads this would excel at. I guess it has 8 cores because each core is slower; maybe that makes it good for serving a few concurrent CIFS shares, where each CIFS process gets its own core?

That's a lot of drives.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Interesting. I'd be REALLY concerned with overheating drives in the second and third rows.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Nah. A 1U could have a fair amount of airflow over the drives. You have 1.75" of height to work with. The cooling could well be less problematic than in a 4U/24. The back drives would be warmer, but ... SSD? :)
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Noob question maybe, but isn't this kind of laborious to service?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yup. But some of us work in environments where space and power are both very expensive. Quite frankly, putting a dozen drives in a box where you only need seven active is entirely within the realm of possibility, which means five replacement drives are available without a shutdown.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
In my line of work, you get radiation exposure (nuclear power), so that's a big, big, big factor for some jobs. We'd often rather replace a $10k part that isn't bad than spend hours in a high-radiation environment. We call that "risk assessment". ;)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have sent AsRock an inquiry about the product. I would certainly like more information, and the overall design is of a sort that I can really appreciate - significant compromises like a Backblaze Pod, but also innovative like one. HP may have gone awry with the MicroServer Gen8, and I've been accused of favoring Supermicro too heavily. But the truth is, I'm a pragmatist and do what works. Supermicro screwed up their Avotons (laptop memory, modest SATA). I'm perfectly fine with AsRock owning the title of "best low power NAS board."

If I can get my hands on one, it'll be interesting.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I tend to agree. Supermicro screwed the pooch with their Avoton offerings. I'm okay with AsRock owning the title if it's earned. Right now my fear (and observation) is that everyone is jizzing in their pants over it, but it's got its own problems. It doesn't do FreeBSD/FreeNAS very well (if at all). I can't even keep track of the latest news on the AsRock stuff. I figure when it's all worked out it'll be well known all over these boards.

Just like jgreco, if I could afford one I'd totally be bench testing it for various performance numbers and I'd have tons of info for people on how well it performs. Unfortunately, being "forum admin" doesn't qualify me for free hardware just because I want it. ;)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There are two problems I see with Avoton:

1) It seems to be much more expensive than a basic socket 1155 board plus a modest CPU, which may resolve itself eventually, or may not if the pricing is intentionally high to target data center operations.

2) Single-core performance is not that great, which could affect ZFS+NAS operations, where clock speed matters more than core count in deciding the winner.

From my point of view, I've basically been sitting around for the past three years trying to figure out the most practical way to put our VM environment on low-power storage. All the conventional SAN options are watt-guzzling expensive shelves with FC or whatever. It is pretty trivial to go out and get some SoC Linux-based NAS units that do iSCSI for a hundred bucks, but I wanted something that could go faster and better if pushed, and I wanted it to be cheap enough to run redundantly, so two of them.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
Very interesting...

And suddenly the IPMI DHCPs. So apparently it is set by default to connect the IPMI to LAN1, despite having a dedicated port for IPMI.

Yeah, my pet hate with ASRock IPMI so far... By default they bridge both LAN1 and the dedicated port, yet the default port used is LAN1. Why bother having a dedicated port!?

So... how loud is it? Lots of tiny powerful fans, it seems.

How easy is it to remove a drive while the machine is running? Is there enough space to slide the drive back to unhook it from the SATA port and then lift it out?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The dedicated port can presumably be configured as a separate item. This is not exactly unusual, but the choice to default to one of the system ports violates POLA.

It was moderately loud when I first turned it on for a few seconds, then the fans revved down. But quite frankly the shop is not a quiet environment so that's a total crap assessment.

Also worth noting: I do very little work from the shop. I much prefer sitting on my arse in my office or working from remote. Unfortunately I am in the process of deploying new Win8.1-based laptops, and part of the process is that installs need to be scriptable (== reproducible). I am also very busy with a client these past few months, so the new deployment is going extra glacially slowly. No Java yet. So I have to wait until I'm back in the office on a more mature OS, and then I can hopefully make some useful assessments.

It would be tricky, not impossible, but tricky to swap a drive. You would need a rail set capable of 110% extension. It is unusual but by no means unheard-of for servers to have such rails. In the old days, we'd just go to Accuride and select a suitable set of slides. These days, with the servers typically being a bit wider, it has often been necessary to rely on manufacturer-provided slides in order to obtain something sufficiently narrow that is also no more than 1.75" (1U) tall. The 4-drive 1U storage servers are usually the worst in this regard. Rail sets that extend 110% are typically track-in-track, and since this thing is 32" deep, I am ... well, skeptical that this has been addressed by ASRock. All I can say for sure is that they didn't include rails and I haven't inquired.

If the top lid were cut, maybe two or three inches in front of the fan bulkhead, it would probably be more feasible to have a set of standard slides and then be able to remove the front lid. However, this might compromise the chassis rigidity. I suspect that the reason for the rear screws is to strengthen and add rigidity.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Other notes:

Chassis with 32GB RAM, nothing else: it peaks around 62 watts early in boot (fans screaming at full speed) before settling down to around 41 watts for the rest of the boot.
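
As a rough illustration of what that settled draw adds up to over a year (a hypothetical sketch; only the 41 W figure comes from the measurement above - the 24x7 duty cycle and the $0.12/kWh rate are assumptions):

```python
# Rough annual energy estimate for the ~41 W settled figure above.
# Assumptions (illustrative only, not from the thread): 24x7 operation, $0.12/kWh.

IDLE_WATTS = 41
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.12  # assumed electricity rate; varies widely by region/facility

kwh_per_year = IDLE_WATTS * HOURS_PER_YEAR / 1000  # ~359 kWh/year
cost_per_year = kwh_per_year * RATE_PER_KWH        # ~$43/year at the assumed rate

print(f"~{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.0f}/year at the assumed rate")
```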

The IPMI solution is a customized AMI MegaRAC, which I hadn't really seen lately, but generally speaking it appears to be very well done ("frustration-free packaging"). Right away it provides you with a summary of its estimation of your browser's compatibility and of settings that might interfere with correct operation.

And then the joy stops: either the board is unhappy with the Kingston RAM, or this matched set of four Kingston 1600s is one of the recent batch of "bad RAM" ... not sure which! But pulling two of the modules has resulted in it being able to run longer than ten minutes. Now what's really strange is that it had been running memtest86 on the full 32GB for several days with no trouble... and given some of the panics I was getting, it does look like it hated the memory subsystem.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So anyway, the testing is a bit behind due to the memory problems, but the system appears stable on 16GB now. I've got four older WD drives set up as two mirrored vdevs. The underlying devices are capable of about 90MB/sec; the pool reads locally at about 250MB/sec and writes at about 145MB/sec. The system has mostly remained very responsive while doing so; I've caught it hiccuping for a few seconds a few times under heavy write pressure after first boot, but then it seems to go away. I'm guessing this means a better transaction group sizing algorithm is in use, one that learns quickly from its mistakes. I only seem to be getting about 300 megabits on a CIFS read, though; I'm not sure where the bottleneck is.
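
For what it's worth, here's a quick back-of-the-envelope sketch of those numbers (the per-drive rate, the two-mirror layout, and the observed figures are from the paragraph above; the ideal-scaling model itself is just an assumption):

```python
# Back-of-the-envelope check of the pool numbers above.
# Assumptions (not measured): ideal scaling, sequential I/O, no ZFS overhead.

DRIVE_MBPS = 90          # per-drive sequential throughput, as reported above
MIRROR_VDEVS = 2         # two mirrored vdevs (4 drives total)
DRIVES_PER_MIRROR = 2

# Reads can be serviced by every drive; writes must hit both sides of each mirror.
ideal_read = DRIVE_MBPS * MIRROR_VDEVS * DRIVES_PER_MIRROR   # ~360 MB/s
ideal_write = DRIVE_MBPS * MIRROR_VDEVS                      # ~180 MB/s

observed_read, observed_write = 250, 145                     # local pool numbers above

# CIFS read was reported as ~300 megabits/sec; convert to MB/s for comparison.
cifs_read_mbytes = 300 / 8                                    # 37.5 MB/s

print(f"ideal read  ~{ideal_read} MB/s vs observed {observed_read} MB/s")
print(f"ideal write ~{ideal_write} MB/s vs observed {observed_write} MB/s")
print(f"CIFS read   ~{cifs_read_mbytes:.1f} MB/s - far below local pool speed,")
print("so the bottleneck looks like the network/SMB path rather than the disks.")
```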
 