Bad performance - FreeNAS 9.1 iSCSI and VMware ESXi 5.1


datamining

Dabbler
Joined
Feb 28, 2013
Messages
10
Hello everyone,

I'm not really into FreeNAS or FreeBSD, but I've been reading the forums for about four months. Currently I run a small FreeNAS system with the following setup:

CPU: Dual Core AMD Opteron(tm) Processor 280 (2393.23-MHz K8-class CPU)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs

HDD: 4x 2TB 7.2K RAID5 DATA
1x 160GB 7.2K RAID1 SYSTEM
RAM: 4x 1GB ECC RAM

RAID CTRL: 3ware 9550SX

I've connected a VMware ESXi cluster with 15 servers to the storage using iSCSI (just a test environment). The whole infrastructure is connected at 1Gbit full duplex. The backbone switch is a high-end HP switch, so that should not be the problem.

This is a screenshot of the bandwidth report:

And this is a screenshot of the CPU report:


As you can see, I get a bandwidth of about ~60Mbit/s ... not really awesome :(
So I have read a lot, but I could not find the "right" answer. Is there something wrong with my configuration? Is it the SAN hardware? Is it the iSCSI configuration? I have no clue :(
Could this be the "right" way:
Delete the RAID5 (hardware RAID), present the 2TB drives as single drives, create a ZFS pool and let ZFS handle the disks, and then create a new iSCSI device on the ZFS-based pool?
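Roughly, I imagine the ZFS side of that plan would look something like this (the pool name and the da1-da4 device names are just placeholders, and it assumes the 9550SX can actually export the drives individually):

# build a raidz pool from the four 2TB drives exposed as single units
zpool create tank raidz da1 da2 da3 da4
# carve out a zvol to use as the iSCSI device extent (size is only an example)
zfs create -V 4T tank/esxi-iscsi

In FreeNAS this would of course be done through the GUI, but that is the general idea.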
Thanks for any help or advice!
Regards,
miner
 

Attachments

  • bge01.PNG (bandwidth report, 15.4 KB)
  • cpu.PNG (CPU report, 14.2 KB)

Andres Biront

Dabbler
Joined
Sep 3, 2013
Messages
17
Hi!

I'm also new here, and new to FreeNAS. I'm also having trouble getting iSCSI with VMware to perform acceptably. First of all, you really should consider upgrading your RAM; 4GB is kinda on the bare minimum side.

Anyway, based on my experience, iSCSI with VMware is not working as it should (or I'm missing something, or it's just as good as it gets), so you shouldn't expect miracles. But anyway, 60Mbps is really awful.

My "cheapo-3rd-world-compatible" configuration is the following.

FreeNAS 9.1.1 Server:
Pentium G2030 (2 cores, 3GHz, Ivy Bridge based, 3MB)
2x4GB DDR3-1333 c7 1T
4x1TB WD Blue 7200RPM in RAIDZ1
2x1Gbps Realtek 8111E (before anybody says something against them, they hit 110MB/s and maintain 100MB/s constantly)

ESXi 5.1 Server 1:
Athlon II X3 455 (3 cores, 3.7GHz, K10 based, 3x512k)
2x4GB DDR3-1600
160GB HDD
1x1Gbps Intel PRO1000MT
1x1Gbps Realtek 8168

ESXi 5.1 Server 2:
Core i5 650 (2c/4t, 3.2GHz, Nehalem based, 3MB)
2x4GB DDR3-1600
160GB HDD
2x1Gbps Intel PRO1000MT
1x1Gbps Realtek 8168
(one NIC and the FXO are directly connected to a VM using DirectPath, so 2x1Gbps are available for testing)

Windows 7 Client (main rig):
Core i7 3820 (4c/8t, 4.5GHz, Sandy Bridge-E, 10MB)
4x4GB DDR3-2333
128GB Crucial M4, 64GB Crucial C300 Cache of 500GB Samsung F3, 1.5TB WD Green
1x1Gbps Intel NIC OB
Other unimportant things.

Everything is connected to 2 TP-Link gigabit switches that cost less than 50 bucks :P I can't recall the models, and I'm too lazy to get up and look.
The RAIDZ1 is configured with one 2TB dataset for CIFS and one 500GB ZVOL for iSCSI. I've created an additional 50GB ZVOL for testing iSCSI with Windows initiators.
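For reference, the layout from the shell would be roughly the following (I actually set it up through the GUI; "tank" and the child names are placeholders, and the 2TB is just a quota on the dataset):

# dataset for CIFS, capped at 2TB
zfs create -o quota=2T tank/cifs
# zvols used as iSCSI extents
zfs create -V 500G tank/iscsi-esxi
zfs create -V 50G tank/iscsi-win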

Ok, to the tests!
dd on local: 210MB/s Write, 290MB/s Read. Nice. It should saturate the gigabit Network.
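(Those dd numbers come from simple sequential tests along these lines; the path is a placeholder, and since /dev/zero is endlessly compressible the write number is optimistic if compression is enabled on the dataset:)

# sequential write, ~10GB of zeros
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=10000
# sequential read of the same file
dd if=/mnt/tank/ddtest of=/dev/null bs=1m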

Testing CIFS with Robocopy:
~92MB/s Read / ~92MB/s Write. Average.

Testing iSCSI on Windows, 1Gbps connection, no multipath available. Robocopy again:
~102MB/s Read / 102MB/s Write. Average.
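(Both runs were basically of this form, with the iSCSI zvol mounted as a local drive letter; the paths and share names are placeholders:)

robocopy C:\testdata \\freenas\cifs\testdata /E
robocopy C:\testdata T:\testdata /E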

It's obviously faster than CIFS. Even though iSCSI and ZFS don't get along that well, CIFS still sucks donkey balls.

Ok, so, what does this tell me?
My NAS can saturate 1Gbps networks pretty easily (I didn't expect more than 100MB/s on my home equipment, homemade cabling, etc.)
Intel and Realtek NICs work just fine on FreeNAS 9.1.1
I'm starting to like my first experience with FreeNAS :D

Ok, now, onto my Virtual Environment. I have 2 ESXi, non clustered, with only 4 VMs "on production network". Lol. This is at home, and I have an Asterisk VM, my Active Directory, a File/Print Server (soon to be retired, the File Server part at least), an OpenVPN, and nothing more. Working all day. I also have 6 virtual ESXi, different versions, 'cause I work with VMware, but they are mostly offline.

What did I do? I know it's not the "best case scenario", but I've created 2 iSCSI vmkernel ports, one on each NIC. I KNOW I only have 2 NICs, but the traffic on my network, and over those NICs, is minimal.
Ok, moving on. Before I configured Round Robin, I migrated my VMs to the iSCSI disk and did the same tests...

iSCSI performance on VMware: 25 to 35MB/s, with the occasional hiccup (bug 1531?). I DON'T get those 2-3s freezes with Windows.
Always testing with Robocopy from local to iSCSI disk.

I made some changes to the network and the iSCSI configuration. Now I get a max transfer of ~70MB/s and an average of 55MB/s, but performance is nowhere near as steady or as fast as on Windows.

I've also configured RR, and it's clearly balancing over the 2 available NICs, but performance didn't change a bit.
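(For anyone curious, the multipath setup boils down to something like this from the ESXi shell; the vmhba, vmk and naa identifiers below are placeholders for whatever your host actually shows:)

# bind both iSCSI vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# switch the FreeNAS LUN to the Round Robin path selection policy
esxcli storage nmp device set --device=naa.xxxxxxxxxxxx --psp=VMW_PSP_RR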

Example Robocopy on ESXi VM:
Speed : 47796633 Bytes/sec.

Example Robocopy on Windows:
Speed : 100190799 Bytes/sec.

On a lower level, with HD Tune on Windows:

[screenshot: Clipboard01.png]


And on a VM:

[screenshot: Clipboard02.png]


Keep in mind that it's 1 path for Windows, and 2 paths for the ESXi server. 45MB/s write on ESXi vs 100MB/s on Windows.
Also, that VM pic is the best-case scenario; another VM is getting about 36MB/s average.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, for starters, you need a lot more than 8GB of RAM for good FreeNAS performance. 8GB is the minimum!

You might need an L2ARC and/or ZIL, but both of those should come after you have at least 32GB of RAM.

Other than that, you are on your own. Lots of people have this problem, and each system has to be tuned to work for its workload. There is no silver bullet or it would already be stickied.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hello everyone,

I'm not really into FreeNAS or FreeBSD, but I've been reading the forums for about four months. Currently I run a small FreeNAS system with the following setup:

CPU: Dual Core AMD Opteron(tm) Processor 280 (2393.23-MHz K8-class CPU)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs

HDD: 4x 2TB 7.2K RAID5 DATA
1x 160GB 7.2K RAID1 SYSTEM
RAM: 4x 1GB ECC RAM

RAID CTRL: 3ware 9550SX

The 9550SX is pretty old at this point and has some odd problems. 4GB of RAM is too small for ZFS on FreeNAS.

I had some 2005-era storage servers with similar specs, including the 9550SX (except with AMD Opteron 240EE CPUs), and found them unbearable under FreeNAS with ZFS (and pretty awful under Nexenta too). I don't think the 9550SX will let you actually pass the disks through, which is a real pain.

But in general terms, ZFS imposes a massive amount of overhead on a system, and this comes as a shock and disappointment to most people. Those 2005-era storage servers were perfectly fine on FreeBSD 6 with UFS and able to easily saturate a single gigabit link. With FreeNAS and ZFS, they were very slow. Throwing a modern CPU and lots of RAM at the problem isn't a total fix; ZFS is *still* a pig.

For iSCSI, there are also other considerations, like many people will try filling their pool with the absolute largest zvol they possibly can, which will crater things. ZFS is a copy-on-write filesystem, and filling it too full makes things ugly. The conventional wisdom was 80% capacity was "full", based on the old behaviour of switching from first-fit allocation to best-fit at 80%, but that's no longer the allocation switch threshold. Still, the wisdom is generally correct, but I'm tempted to advise people to fill their pools no more than maybe 60% in order to allow ZFS more opportunities to optimize writes. There are many factors to consider though, and no single tuning strategy really covers all the cases.
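A quick way to see where a pool stands is something like the following (where "tank" is whatever the pool is actually called):

# overall pool capacity - watch the CAP column
zpool list tank
# per-dataset breakdown, including space tied up by zvol reservations
zfs list -o space -r tank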

My "cheapo-3rd-world-compatible" configuration is the following.

FreeNAS 9.1.1 Server:
Pentium G2030 (2 cores, 3GHz, Ivy Bridge based, 3MB)
2x4GB DDR3-1333 c7 1T
4x1TB WD Blue 7200RPM in RAIDZ1
2x1Gbps Realtek 8111E (before anybody says something against them, they hit 110MB/s and maintain 100MB/s constantly)

Generally speaking, I'd expect a system like this to work fairly well, but with 3TB of usable storage and 2550GB spoken for, that could be bad. It isn't clear if you've actually used the 2TB CIFS dataset or just how full that is. But be mindful of what I said above about percentage full.

The thing you really have to watch out for, though, is that we tend to think of iSCSI servers as block oriented things. If iSCSI asks to write block #123456, then asks to write it again, as computer guys we sort of think of that being written to the same place on the disk both times. Not with a CoW filesystem storing those bytes! Writes cause fragmentation. The behaviour and performance of ZFS is more similar to NetApp's WAFL in this regard. Writing large amounts of data to a nearly-full pool will result in a lot of performance unpredictability as the system struggles to make reasonable allocations.

So I don't have any specific advice. You seem to be tackling this logically. I'd actually suggest re-trying some of your earlier tests, particularly the local dd's, to see if there is any performance difference that might be attributable to a fuller pool or something like that.
 

Andres Biront

Dabbler
Joined
Sep 3, 2013
Messages
17
I don't know if there is a silver bullet, but there has to be something that dramatically improves performance with ESXi, apart from throwing hardware at the problem. There is something that ESXi is doing differently from the Windows 7 rig.

iSCSI performance between FreeNAS and Windows is excellent on my hardware which, as you can see, is pretty cheap. But with ESXi it craps out.

What we should do is understand what ESXi is doing differently.

I can't install 32GB on my FreeNAS server. Maximum allowed is 16GB. Also, I wouldn't. I'll upgrade to 16GB when I get the money, but there has to be something else to do that doesn't involve spending more on hardware.

It's just a test lab at home, I don't really care about the performance, it's working just fine as it is. I'm trying to improve it just for fun.

Edit:
Both the CIFS dataset and the iSCSI ZVOLs are nearly empty: 233GB in the CIFS dataset, and I'm testing with empty ZVOLs. Usage of the entire RAIDZ is about 15%.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There are so many variables and so many factors. All you can do is step through them one at a time. Challenge your assumptions. Etc.

For ESXi, I'll also note that I wouldn't place that much value on testing speeds on a VM running on an iSCSI ESXi datastore until I understood what sort of performance I was actually seeing from the datastore directly from the ESXi CLI (use tech support mode). This really just devolves down to identifying what all the possible links in the chain are and then stress testing each one; when all of that seems as good as can be had individually, then see how fast all the bits go when everything is working together.
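A crude way to do that from tech support mode is something along these lines, assuming a datastore named "freenas-iscsi" (ESXi's dd is BusyBox, so spell the block size out):

# write ~1GB straight onto the iSCSI-backed datastore, then read it back
time dd if=/dev/zero of=/vmfs/volumes/freenas-iscsi/ddtest bs=1048576 count=1024
time dd if=/vmfs/volumes/freenas-iscsi/ddtest of=/dev/null bs=1048576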

You are definitely encouraged to do testing of smaller units, such as what happens if you use a single path from ESXi over an Intel controller. See if you can dig up iperf for ESXi (I only have an oldish link for 4.0, since that's what we use) to help verify that your network is fundamentally capable of what you hope it is.
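The idea being something like this, with FreeNAS as the server and the ESXi host (or a VM) as the client; the IP address is obviously a placeholder:

# on the FreeNAS box
iperf -s
# on the client: 30 seconds each way (-r runs the reverse direction afterwards)
iperf -c 192.168.1.10 -t 30 -r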

I've said it before, a properly resourced ZFS system ought to be totally awesome, but the definition of "properly resourced" is very likely to be shocking, mind-blowing even. So you could also try FreeNAS with an iSCSI extent on a UFS filesystem. That will perform pretty much as fast as your hardware is capable of, minus, of course, all the nifty ZFS features.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
iSCSI performance between FreeNAS and Windows is excellent on my hardware which, as you can see, is pretty cheap. But with ESXi it craps out.

What we should do is understand what ESXi is doing differently.

We do. With NFS, ESXi makes every single write a sync write. iSCSI doesn't, but ZFS is a CoW filesystem (unlike any Windows FS).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We do. With NFS, ESXi makes every single write a sync write. iSCSI doesn't, but ZFS is a CoW filesystem (unlike any Windows FS).

It doesn't seem like either of those is the issue here though. He's testing an iSCSI initiator from Windows to the FreeNAS box, right? Or am I totally hosed?
 

Andres Biront

Dabbler
Joined
Sep 3, 2013
Messages
17
That's right. I'm testing with the Windows iSCSI initiator to the FreeNAS box.

I'm searching for iperf for ESXi. But in the meantime, I know it's not the same, but it uses the same physical adapters.

iperf -s on the VM with -c on FreeNAS, and then -s on FreeNAS with -c on the VM. There is something wrong there.
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.0 sec   972 MBytes   815 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  7]  0.0-10.0 sec   550 MBytes   460 Mbits/sec
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
I have the same problem.
32GB RAM, SSD ZIL and L2ARC, but FreeNAS 8.3.
Local write is over 200MB/s.
A VM in ESXi writing over iSCSI only reaches 70MB/s.

Something is wrong with iSCSI, either in FreeNAS or in ESXi.
Tuning iSCSI parameters like max data length... doesn't really improve anything.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
Could you convert the VM to thick provisioning and see if it writes faster?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Thick provisioning is useless for a CoW file system like ZFS. There's really no reason you shouldn't do thin provisioning for ZFS, since fragmentation is a fact of life for CoW file systems (and ZFS has no option to defrag at this time). Your data will be created in a fragmented state (which thick provisioning is supposed to prevent for non-CoW file systems). Lots of this stuff requires deep knowledge, understanding, and the application of that knowledge to derive the final answers to your problems. It's not magic.

I have the same problem.
32GB RAM, SSD ZIL and L2ARC, but FreeNAS 8.3.
Local write is over 200MB/s.
A VM in ESXi writing over iSCSI only reaches 70MB/s.

Something is wrong with iSCSI, either in FreeNAS or in ESXi.
Tuning iSCSI parameters like max data length... doesn't really improve anything.

And why do you say something is wrong with iSCSI in FreeNAS or ESXi? I'd bet a large sum of money that you don't grasp the complexity of ZFS. I've been studying it on and off for more than 18 months, and I feel like I've barely scratched the surface and I'm still an idiot regarding the topic.

Tuning iSCSI parameters like max data length... doesn't really improve anything.

I'd bet you don't fully understand how that iSCSI parameter works in combination with ZFS. It's like I said above: you have to have the knowledge, the understanding, and the skill to apply that knowledge appropriately. You'd be amazed at how many people never get all of those (and how many just want an "easy button" and don't really give a crap). Well, the "easy button" is Windows Server. :)
 

Andres Biront

Dabbler
Joined
Sep 3, 2013
Messages
17
And why do you say something is wrong with iSCSI in FreeNAS or ESXi? I'd bet a large sum of money that you don't grasp the complexity of ZFS. I've been studying it on and off for more than 18 months, and I feel like I've barely scratched the surface and I'm still an idiot regarding the topic.


It's not about the complexity of anything. We are trying to understand it; we are shooting blanks to try to get something fixed, even though, as you said, we do not understand. As we keep trying and playing with this stuff, we will learn something in the end. That's the idea of the post, and of the forum, I would guess.

IMHO, he said, and I have to agree, that something is wrong (or not performing as well as it could) when you mix FreeNAS and ESXi.
I'd say that he jumped to that conclusion based on the fact that he has sufficient hardware to expect more out of the configuration.

In my case, I have sufficient hardware to saturate a gigabit link (and I'd bet 2 gigabit links as well, but Windows 7 doesn't support MPIO) using iSCSI with FreeNAS if, and ONLY IF, I connect with Windows. That throws away a lot of the questioning about the NICs used (Realtek), the amount of memory (8GB), and the used space of the RAID (15%). Because it's a fact that with Windows it works flawlessly.

Now, when I connect with ESXi, I see a completely different situation. I can't stress a single gigabit link. Every now and then I see a second where bandwidth drops to 0, and it's not as stable as it is (with the exact same hardware, zvol, and RAID) when connected with Windows. So it's not so crazy to conclude that something in that mix is working completely differently and underperforming.

It's a fact that ESXi (based on several experiences I've read on the forum, and comparing with what has already been said) doesn't work too well with FreeNAS. Some said "ZFS does not get along too well with iSCSI", which is half true. Maybe it's not the ideal scenario, but iSCSI with Windows just-simply-works.

Could it be a design decision we can't do anything about? Could some tuning and configuration change the results? We are trying to figure that out.

We surely don't understand how iSCSI works in combination with ZFS. We are trying to. But, as I said, this isn't a problem of iSCSI+ZFS; that just works. The problem shows up with ESXi.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ESXi and FreeNAS work fine together; what gets complicated when mixing them is usually ZFS.

As previously noted, actual testing to eliminate potential problems ought to be done. The iperf tests to validate network capabilities, turning off multipath (I mean what the hell's the point of adding complexity when you don't even have basic functionality working well), and testing with UFS-backed iSCSI would go a long way to providing useful clues.

Random bashing of major component subsystems (iSCSI, ESXi, VMware, Realtek, etc) isn't actually going to help.
 

datnus

Contributor
Joined
Jan 25, 2013
Messages
102
Local write is good => the iSCSI protocol side is not good.
If the Windows iSCSI initiator is good => maybe the problem is with the ESXi software initiator?

We are trying to ping-pong the problem back and forth.
We don't need to know how to build a car;
we need to pinpoint the problem and focus our investigation on it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It's not about the complexity of anything. We are trying to understand it; we are shooting blanks to try to get something fixed, even though, as you said, we do not understand. As we keep trying and playing with this stuff, we will learn something in the end. That's the idea of the post, and of the forum, I would guess.

I think most people (including myself) are shooting blanks when it comes to tweaking ZFS. You practically need a damn 4-year degree in ZFS to understand it.

IMHO, he said, and I have to agree, that something is wrong (or not performing as well as it could) when you mix FreeNAS and ESXi.
I'd say that he jumped to that conclusion based on the fact that he has sufficient hardware to expect more out of the configuration.

Now, is it that something is wrong, or is it that ESXi is doing things to protect your data? For example, NFS with ESXi is horrible because ESXi makes every single write a sync write. That's a performance killer by itself. But it does have the virtue of being the safest with your data.
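If you want to see whether sync writes are even in play on your zvol, you can check (and, strictly for testing, override) the sync property. The dataset name here is a placeholder, and sync=disabled is unsafe for real data because it throws away the guarantees on power loss:

# see how the zvol currently treats synchronous requests
zfs get sync tank/iscsi-esxi
# TESTING ONLY: ignore sync requests (unsafe - for comparison benchmarks only)
zfs set sync=disabled tank/iscsi-esxi
# put it back when you're done
zfs set sync=standard tank/iscsi-esxi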

In my case, I have sufficient hardware to saturate a gigabit link (and I'd bet 2 gigabit links as well, but Windows 7 doesn't support MPIO) using iSCSI with FreeNAS if, and ONLY IF, I connect with Windows. That throws away a lot of the questioning about the NICs used (Realtek), the amount of memory (8GB), and the used space of the RAID (15%). Because it's a fact that with Windows it works flawlessly.

Now, when I connect with ESXi, I see a completely different situation. I can't stress a single gigabit link. Every now and then I see a second where bandwidth drops to 0, and it's not as stable as it is (with the exact same hardware, zvol, and RAID) when connected with Windows. So it's not so crazy to conclude that something in that mix is working completely differently and underperforming.

It's a fact that ESXi (based on several experiences I've read on the forum, and comparing with what has already been said) doesn't work too well with FreeNAS. Some said "ZFS does not get along too well with iSCSI", which is half true. Maybe it's not the ideal scenario, but iSCSI with Windows just-simply-works.

Did you by chance try an iSCSI device from Linux? You're drawing the conclusion that it works well with Windows but ESXi has performance problems, so ESXi must be doing something different/wrong. It might be that Windows isn't valuing the safety of your data as highly as ESXi is. I would argue that your assumption that it works flawlessly with Windows is a bit lacking in evidence. Windows might be doing a lot of things to cheat and not make the safety of your data as much of a priority as ESXi does (or every other OS). There's really no way to know either, since Windows is closed source. But it would be interesting to see how a flavor or two of Linux works for you. I know I had an iSCSI device set up for my HTPC Linux box, and it was screaming fast. The only time it was slower than 100MB/sec was when the source/destination couldn't keep up.

But I think we both agree that the behavior between Windows and ESXi is different. And regardless of whether I want to disagree with your assessment that Windows works and ESXi doesn't work right, the bottom line is that people aren't happy with the performance. And by and large, more people either give up on FreeNAS completely and go to something else, or switch to UFS for the iSCSI/NFS shares for ESXi datastores.

Could it be a design decision we can't do anything about? Could some tuning and configuration change the results? We are trying to figure that out.

We surely don't understand how iSCSI works in combination with ZFS. We are trying to. But, as I said, this isn't a problem of iSCSI+ZFS; that just works. The problem shows up with ESXi.

And you are beginning to scratch the surface of how all of this crap works together (or doesn't work together) and how broken some of this stuff can be. My ESXi box (2 whole VMs) keeps losing all network connectivity. I posted to the VMware community forums and nobody has a clue what is wrong. And not 2 hours ago I was resilvering my pool when the damn thing died, forcing me to do a hard shutdown since the box will no longer shut down even if I try to initiate a shutdown locally. I have searched Google for my error and gotten nowhere. I literally have no more ideas of possible things to try. My box isn't doing anything special to warrant a weird error that nobody can explain, nor do I have any clue what the problem is. I just know that the damn thing is broken, I have no more troubleshooting options, I have no more Googling options, and I can't even force the machine to reproduce the problem. It might fail twice in 2 hours, then work fine for a week. So I feel your frustration. I have my own. And I know that both of us have our own uphill battle that is against the wind, in the snow, etc.
 

datamining

Dabbler
Joined
Feb 28, 2013
Messages
10
Hey!

@cyberjock
Thanks for the info. I know 4GB is not that much, but on my setup I'm not running any ZFS. It's just a regular RAID set with an iSCSI disk container, so it shouldn't have that much of an impact... or should it? As you can see in the screenshots, the RAM and CPU aren't used that much.

@andres
Could you give me the "dd" command to check the local performance?

@jgreco
I know that the controller's best days are gone, but I think its regular performance should be fine for what I want to do. If I could reach 80MB/s it would be fine. According to the specs the controller should manage up to 300MB/s; currently I'm crawling along at ~5MB/s?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@cyberjock
Thanks for the info. I know 4GB is not that much, but on my setup I'm not running any ZFS. It's just a regular RAID set with an iSCSI disk container, so it shouldn't have that much of an impact... or should it? As you can see in the screenshots, the RAM and CPU aren't used that much.

Yeah, then scratch my comments about RAM. 4GB is good for UFS. My comment is because 90% of users use ZFS. It is one of the main attractive features for FreeNAS.

@andres
Could you give me the "dd" command to check the local performance?

dd if=/dev/zero of=/mnt/UFS_Volume_Name/testfile bs=1m count=10000
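To get a rough read number as well, you could follow that up with something like this (same placeholder path; note the read may be partly served from cache if the file fits in RAM):

dd if=/mnt/UFS_Volume_Name/testfile of=/dev/null bs=1m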

You're basically running at single disk speeds with a 2 disk RAID1. So don't expect speeds to be breakneck. You'll need to do a RAID10 or something for higher speeds.

@jgreco
I know that the controller's best days are gone, but I think its regular performance should be fine for what I want to do. If I could reach 80MB/s it would be fine. According to the specs the controller should manage up to 300MB/s; currently I'm crawling along at ~5MB/s?

The 5MB/sec may be because of the number of transactions going on. Again, since you are effectively running at single-disk speeds, you have some physical limitations on the disks.

In all honesty, 160GB Intel SSDs are going for about $140 or so. If 160GB is all you need I'd get 2 Intel SSDs in a RAID1. That will definitely increase the performance of your system. ;)

@Andres Biront-

I realize you are a Robocopy user, but we've had users in the past whose performance was less than expected with Robocopy (in some cases FAR less). One of them spent a few days on it, then I spent a few hours in TeamViewer and Skype only to learn that the problem was Robocopy. Also, I'm not sure what your source for data was, but you might have been limited by the local disk when testing CIFS speeds. I'd recommend you not use Robocopy to measure transfer speeds. I know it says it's supposed to make network transfers faster, but that hasn't proven to be the case in the past for quite a few people. I'm not sure exactly why, since I've never used Robocopy.

ZFS really does need a lot of RAM. I'd max it out at 16GB before trying to do much else. In my server I had horrid performance for single-user loads (sometimes the server couldn't stream a single movie). I upgraded from 12GB to 20GB and now I get over 500MB/sec local access. RAM definitely makes a HUGE impact on performance.

I try to recommend that nobody build a FreeNAS server with new components that can't support 32GB of RAM, because you never know when 16GB won't cut it. At that point you're going to be pissed, because it'll be a very expensive and very time-consuming upgrade. So why not leave yourself open to just dropping in a few sticks of RAM in the future if you need more than 16GB?

Normally I'd say you'd be fine with 8GB if you were only using CIFS or basic file sharing. But as soon as you start asking for things like iSCSI, or NFS for ESXi, you are also demanding a particular performance level. With a simple CIFS copy, who cares if it stalls for a second. But if your VMs start stalling for 1-second intervals, you'll be unhappy.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It doesn't seem like either of those is the issue here though. He's testing an iSCSI initiator from Windows to the FreeNAS box, right? Or am I totally hosed?

I was confused as to what he was testing and from where. At least, I think I am... :confused:
 