Is AFP Support broken?

Status
Not open for further replies.

sirjorj

Dabbler
Joined
Jun 13, 2015
Messages
42
I just got my first FreeNAS-based NAS up and running. I created a new dataset (Mac-type), made it my own (user and group), made it an AFP share, and dumped ~80 GB of data to it. Awesome! Just like I hoped.

I made another dataset (Mac-type, max compression this time since it will hold archive data), made it mine, made it an AFP share, and started dumping data to it. After several files were written, it failed, saying it didn't have the right permissions. After another try or two, I couldn't even mount the share, and the web interface was down. I rebooted the machine and tried again - same thing. I tried to delete the dataset, but that failed because the mount was "in use". After another reboot I was able to delete the dataset and try again. Another try or two later (this time using Mac file permissions instead of UNIX under advanced permissions, though the first dataset that worked used UNIX just fine) I got the same results, but it would fail at different files, sometimes failing on ones that it got past on previous attempts.
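For reference, the rough CLI equivalent of what I did in the web UI is below (pool, dataset, and user names here are placeholders, not my actual setup):

zfs create -o compression=gzip-9 tank/archive    # second dataset, max compression
zfs get compression tank/archive                 # confirm the property took
chown -R someuser:somegroup /mnt/tank/archive    # "made it mine"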

I am using OS X 10.11.2.
Is there some obvious mistake I am making?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
So your NAS stops working during transfers and you need help troubleshooting it, but you won't tell us about your configuration, as the forum rules ask? What hardware? What version of FreeNAS? What is your network layout? Details. They are important.
 

sirjorj

Dabbler
Joined
Jun 13, 2015
Messages
42
So your NAS stops working during transfers and you need help troubleshooting it, but you won't tell us about your configuration, as the forum rules ask? What hardware? What version of FreeNAS? What is your network layout? Details. They are important.

Sorry.

My build is detailed here, but to sum it up, it's a Supermicro A1SAi-2750F with 32GB RAM and 6x 6TB WD Reds. The motherboard and RAM have been running various things (mostly Linux) for over a year, so I have no reason to think it's bad hardware. The drives just completed the HDD burn-in test (smartctl with various settings and badblocks) described in a different post here.
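For completeness, the burn-in per drive was roughly this (device name is a placeholder; repeat for each of the six disks):

smartctl -t short /dev/ada0    # quick SMART self-test
smartctl -t long /dev/ada0     # extended SMART self-test
badblocks -ws /dev/ada0        # destructive read/write surface test
smartctl -A /dev/ada0          # check for reallocated/pending sectors afterward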

I am running FreeNAS-9.3-STABLE-201601181840

Network layout: the iMac (late '09 model, maxed out and then some) and the NAS are connected via gigabit Ethernet through a Cisco switch.
 

sirjorj

Dabbler
Joined
Jun 13, 2015
Messages
42
I gave this another test this evening. I dumped a large (~30 GB) file to it without a problem. I then tried to copy a ~20 GB directory containing 63 files to it. It failed.

/var/log/messages:

Jan 20 18:23:40 freenas afpd[37036]: transmit: Request to dbd daemon (volume emc) timed out.
Jan 20 18:24:37 freenas cnid_dbd[37037]: read: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: error reading message header: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: read: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: error reading message header: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: read: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: error reading message header: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: read: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: error reading message header: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: read: Connection reset by peer
Jan 20 18:24:37 freenas cnid_dbd[37037]: error reading message header: Connection reset by peer
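
Nothing fancy was needed to catch this live; I just watched from an SSH session while the copy ran:

tail -f /var/log/messages           # the afpd/cnid_dbd errors above appear as the copy dies
ps aux | grep -E 'afpd|cnid_dbd'    # confirm the AFP daemons are still running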
 

sirjorj

Dabbler
Joined
Jun 13, 2015
Messages
42
So - more interesting discoveries. I tried copying the folder that always caused the failure to the share/dataset that always worked, and it worked fine. Perhaps the problem is caused by using gzip-9 compression instead of the default lz4. I will test this by deleting the gzip-9 dataset and recreating it, but I have to reboot the server first, as I get "device is busy" errors when I try now...
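
(If anyone knows a cleaner way than a reboot: on FreeBSD, something like the following should show what is holding the mount and then remove the dataset; the dataset name is a placeholder.)

fstat -f /mnt/tank/archive    # list processes with files open on that filesystem
zfs destroy tank/archive      # destroy once nothing is holding it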
 

sirjorj

Dabbler
Joined
Jun 13, 2015
Messages
42
More info and a possible cause.

I tried remaking the dataset with max compression (gzip-9) three or four times, and each time, writing the ~20 GB directory to it failed.
I then remade it with default compression (lz4), and writing the same ~20 GB directory worked perfectly.

It looks like there's an issue with using AFP to write to a dataset that uses max compression.
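
For anyone who wants to reproduce this, the whole test boils down to the following (pool name is a placeholder; the copies themselves were done over AFP from the Mac, not locally):

zfs create -o compression=gzip-9 tank/afptest-gz9    # copying the ~20 GB directory to this one fails
zfs create -o compression=lz4 tank/afptest-lz4       # the same copy to this one completes fine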
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338

sirjorj

Dabbler
Joined
Jun 13, 2015
Messages
42
That was just part of my intended use. I was planning on using gzip-9 datasets for backup/archival purposes - stuff I will not be frequently accessing. I figured that if I'm not going to access it that much, why not take the initial performance hit and possibly gain some space in the long run.
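
The nice part is that it's easy to measure whether that gamble pays off once some data is on the dataset (dataset name is a placeholder):

zfs get used,logicalused,compressratio tank/archive    # shows how much gzip-9 actually saves on this data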
 