There was a closed thread here that always appeared at the top of my Google searches for a Windows 7 iSCSI target using MCS. Unfortunately, it only said that it should be possible; there was no follow-up confirming that it definitely could be done, or how.
I wanted exactly this configuration to mount an iSCSI share from my HP MicroServer on my Windows 7 PC. Both machines have dual Intel NICs, but with my cheapish Netgear gigabit switch, link aggregation was not going to be easy, so iSCSI MCS was the best option as far as I could tell.
The PC and MicroServer network ports had to be configured on different subnets for MCS, so one port of each is on 192.168.1.x/24 and the other on 192.168.2.x/24.
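If you want to sanity-check the addressing plan before touching the initiator, a minimal sketch with Python's ipaddress module does it; the specific addresses here are assumptions, substitute your own:

```python
import ipaddress

# Hypothetical addresses for illustration -- substitute your own.
# Each tuple is one MCS path: (PC port, server port).
paths = [
    ("192.168.1.10", "192.168.1.20"),
    ("192.168.2.10", "192.168.2.20"),
]

nets = []
for pc, server in paths:
    net_pc = ipaddress.ip_network(f"{pc}/24", strict=False)
    net_sv = ipaddress.ip_network(f"{server}/24", strict=False)
    # Both ends of a path must sit on the same subnet...
    assert net_pc == net_sv, f"path {pc} -> {server} straddles subnets"
    nets.append(net_pc)

# ...and the two paths must not share a subnet.
assert len(set(nets)) == len(nets), "MCS paths must use distinct subnets"
print("subnet plan OK:", ", ".join(str(n) for n in nets))
```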
I set up the server with a couple of 3TB WD Red drives mirrored as a ZFS volume.
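The FreeNAS volume manager does this from the GUI, but for reference, a hedged sketch of the equivalent step from a shell on the server (pool name and device names are assumptions, check your own device list first):

```python
import subprocess

# "tank", "ada1" and "ada2" are placeholders -- use your real
# pool name and disk device names.
subprocess.run(["zpool", "create", "tank", "mirror", "ada1", "ada2"],
               check=True)
subprocess.run(["zpool", "status", "tank"], check=True)
```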
I then created the iSCSI share as follows:
- created a file extent at /mnt/<volume>/zvol, sized as large as I needed for my iSCSI target (see the sketch after this list)
- accepted the defaults for initiators so that any client could connect (ALL/ALL)
- under Portals, created a single portal with both IP addresses listed (not two portals!)
- created a target with the default permissions and an easy-to-identify name
- mapped the target to the file extent
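The file extent is just a big (sparse) file on the pool. A minimal sketch of pre-creating or inspecting the backing file, assuming a hypothetical pool named tank and a 2 TiB size; normally the extent dialog creates this for you:

```python
import os

# Pool name "tank" and the 2 TiB size are assumptions -- match
# whatever you enter in the extent dialog.
path = "/mnt/tank/zvol"
size = 2 * 1024**4  # 2 TiB

with open(path, "wb") as f:
    f.truncate(size)  # sparse on ZFS: no space consumed until written

print(path, os.stat(path).st_size, "bytes")
```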
Then on the PC, in the iSCSI Initiator, entered the IP address of the iSCSI target and refreshed the list of targets.
Selected the target and clicked Connect, then under Advanced selected the Microsoft iSCSI Initiator as the local adapter and the source and target IP addresses for the first path, then clicked OK/Connect.
Highlighted the target and clicked Properties.
Ticked the session identifier and then clicked MCS.
Made sure Round Robin was selected and clicked Add.
Selected the IP addresses on the other subnet and connected.
The iSCSI target was then active.
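For anyone who prefers to script the first connection, Windows ships an iscsicli tool that covers the portal and login steps; a minimal sketch, with the portal address and target IQN as assumptions (I still added the second MCS connection in the GUI as above):

```python
import subprocess

# Portal address and target IQN are assumptions for illustration.
portal = "192.168.1.20"
target = "iqn.2011-03.example.org.istgt:mytarget"

# Register the portal, list what it advertises, then log in.
subprocess.run(["iscsicli", "QAddTargetPortal", portal], check=True)
listing = subprocess.run(["iscsicli", "ListTargets"], check=True,
                         capture_output=True, text=True)
print(listing.stdout)
subprocess.run(["iscsicli", "QLoginTarget", target], check=True)
```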
Do not add the mount point under the Volumes and Devices tab unless the mount will always be available at boot, as this will cause the Windows boot to stall for a couple of minutes on a black screen.
After making sure jumbo frames were enabled at both ends, the iSCSI RAID1 ZFS device gave 126 MB/s throughput according to Windows Resource Monitor.
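"Making sure" is worth doing properly: a do-not-fragment ping sized to fill a 9000-byte frame proves jumbo frames survive the whole path. A quick sketch from the Windows side, with the server address as an assumption:

```python
import subprocess

# 8972 bytes of ICMP payload + 28 bytes of headers = one 9000-byte
# frame. Server address is an assumption.
result = subprocess.run(
    ["ping", "-f", "-l", "8972", "-n", "2", "192.168.1.20"],
    capture_output=True, text=True,
)
print(result.stdout)  # "Packet needs to be fragmented" => no jumbo path
```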
Swapping the disks to RAID0 ZFS increased the reported figure to 148 MB/s while transferring the same large files, which made me feel a bit better about wasting so much time on this.
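If you want a rough cross-check of Resource Monitor's figure, a minimal sequential-read timing sketch works; the drive letter and file name are assumptions, and the file should be bigger than RAM (or freshly written) so Windows caching doesn't inflate the number:

```python
import time

# "E:" and the file name are placeholders for the mounted iSCSI disk.
path = "E:/bigfile.bin"
chunk = 8 * 1024 * 1024  # 8 MiB reads

total = 0
start = time.perf_counter()
with open(path, "rb", buffering=0) as f:
    while True:
        block = f.read(chunk)
        if not block:
            break
        total += len(block)
elapsed = time.perf_counter() - start
print(f"{total / 1e6:.0f} MB in {elapsed:.1f} s = "
      f"{total / 1e6 / elapsed:.0f} MB/s")
```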
I intend to get a couple more drives, at which point I can choose the best configuration, so there was no point benchmarking further at this stage.