Windows MPIO to FreeNAS iSCSI Target

Intro

Well, I made some mistakes; the system worked, but it was not utilizing its full capabilities.

I had been successfully using FreeNAS as an iSCSI target for a disk mounted in Windows Server, but only one path was being used at all times…

Windows Side

Source

I first needed the MPIO feature installed:

  1. Click Manage > Add Roles And Features.
  2. Click Next to get to the Features screen.
  3. Check the box for Multipath I/O (MPIO).
  4. Complete the wizard and wait for the installation to complete.

Noice.
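
If you prefer PowerShell over the wizard, the feature can also be installed from an elevated prompt – a minimal sketch, assuming Windows Server with the ServerManager module available:

    # Install the Multipath I/O feature
    Install-WindowsFeature -Name Multipath-IO

    # Confirm it is installed
    Get-WindowsFeature -Name Multipath-IO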

Then we need to configure MPIO to use iSCSI:

  1. Click Start and run MPIO.
  2. Navigate to the Discover Multi-Paths tab.
  3. Check the box to Add Support For iSCSI Devices.
  4. Click OK and reboot the server when prompted.

In my case I didn’t get prompted for a reboot, and reopening MPIO showed the checkbox unchecked. I had to click the Add button instead, and then I got the prompt to reboot.
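
The same iSCSI claim setting can also be flipped from PowerShell – just a sketch using the built-in MPIO module, and it still wants a reboot afterwards:

    # Equivalent of ticking "Add Support For iSCSI Devices" on the Discover Multi-Paths tab
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Check the current claim settings, then reboot for the change to take effect
    Get-MSDSMAutomaticClaimSettings
    Restart-Computer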

Now, before I continue getting MPIO working on the Windows side, I need to fix some mistakes I made on the target side. To make sure it was safe to make the required changes on the target, I first did the following:

  1. Completed any tasks that were using the disk for I/O
  2. Validated there was no I/O on the disk via Resource Monitor
  3. Stopped any services that might use the disk for I/O
  4. Took the disk offline in Disk Management
  5. Disconnected the disk in the iSCSI Initiator (a rough PowerShell equivalent of the last two steps is sketched below)
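
For reference, here is a rough PowerShell take on steps 4 and 5 above – the disk number and IQN are placeholders, so check Get-Disk and Get-IscsiTarget on your own system first:

    Get-Disk                              # identify the iSCSI disk number first
    Set-Disk -Number 2 -IsOffline $true   # step 4: take the disk offline (2 is a placeholder)

    Get-IscsiTarget                       # list connected targets and their IQNs
    # step 5: disconnect the target (IQN below is made up)
    Disconnect-IscsiTarget -NodeAddress "iqn.2011-03.org.example.freenas:target0" -Confirm:$false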

We are now safe to make the changes on the target before reconnecting the disk to this server, now on to FreeNAS.

FreeNAS Side

Source

Much like the source specified, I had added a second IP to the existing portal… which I apparently shouldn’t have done.

Stop the iSCSI service so the changes can be made.

Now delete the secondary IP from the existing portal.

Now click Add Portal to create the secondary portal with the alternative IP.

There we go; now we just have to edit the target.

Now that you have multiple portals/Group IDs configured with different IP addresses, these can be added to the targets.

Editing the existing targets to add iSCSI Group IDs

Once you have a target defined, you can click the Add extra iSCSI Group link to add the additional Portal Group IDs.


Make sure you have the iSCSI service running. It doesn’t hurt at this point to bounce the service to ensure everything is reading the latest configuration, though with FreeNAS the configuration should take effect immediately.


Now we can go back to Windows to get the final configurations done. 🙂

Back on Windows

Configuring iSCSI

Launch the iSCSI Initiator on the application server and, when prompted, set the iSCSI service to start automatically. Browse to the Discovery tab. Do the following for each iSCSI interface on the storage appliance:

  1. Click Discover Portal.
  2. Enter the IP address of the iSCSI appliance.
  3. Click OK.
  4. Repeat the above for each IP address on the iSCSI storage appliance.
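
The discovery part can also be done in PowerShell – a sketch below, where the two portal IPs are placeholders for the FreeNAS interfaces:

    # Make sure the iSCSI Initiator service is running and starts automatically
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Register each portal IP on the storage appliance (placeholder addresses)
    New-IscsiTargetPortal -TargetPortalAddress 10.0.0.10
    New-IscsiTargetPortal -TargetPortalAddress 10.0.1.10

    # Equivalent of the Targets tab listing
    Get-IscsiTarget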

Browse to Targets. An entry will appear for each available volume/LUN that the server can see on the storage appliance.

Configure Each Volume

For each volume, do the following:

  1. Click Connect to open the Connect To Target dialogue.
  2. Check the box to Enable Multi-Path.
  3. Click Advanced. This is where we choose how to connect the first iSCSI session: from the first NIC on the server to the first interface on the iSCSI appliance.
  4. In the Advanced Settings box, select Microsoft iSCSI Initiator in Local Adapter, the first NIC of the server in Initiator IP, and the first NIC of the storage appliance in Target Portal IP.
  5. Click OK to close Advanced Settings.
  6. Click OK to close Connect To Target.

The volume is now connected. However, we only have one session, between the first NIC of the server and the first NIC of the storage appliance. We do not yet have a fault-tolerant connection, so:

  1. Click Properties in the Targets dialogue to edit the properties of the volume connection.
  2. Click Add Session.
  3. Check the box to Enable Multi-Path.
  4. Click Advanced.
  5. Select Microsoft iSCSI Initiator in Local Adapter. Select the second iSCSI NIC of the server in Initiator IP and the second NIC of the storage appliance in Target Portal IP.

Click OK a bunch of times.
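
If you would rather script the two sessions than click through the dialogues twice, something along these lines should work – all addresses here are made up, and one server NIC per storage NIC is assumed:

    $iqn = (Get-IscsiTarget).NodeAddress   # assumes a single target; filter if you have more

    # First session: server NIC 1 -> appliance NIC 1
    Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true `
        -InitiatorPortalAddress 10.0.0.20 -TargetPortalAddress 10.0.0.10

    # Second session: server NIC 2 -> appliance NIC 2
    Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true `
        -InitiatorPortalAddress 10.0.1.20 -TargetPortalAddress 10.0.1.10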

If you open Disk Management, your new volume(s) should appear. You can right-click a disk or volume that you connected, select Properties, and browse to the MPIO tab. From there you should see the paths and the customizable MPIO policies being used by this disk.
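
You can sanity-check the paths from the command line too – a quick sketch using the built-in iSCSI cmdlets and mpclaim:

    # One entry per session; with MPIO working you should see two, one per initiator/target pair
    Get-IscsiSession | Format-Table TargetNodeAddress, InitiatorPortalAddress, IsConnected

    # MPIO's own view: summary of claimed disks, then per-path detail for MPIO disk 0
    mpclaim -s -d
    mpclaim -s -d 0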

I left the load balancing algorithm at Round Robin, as noted here:

MCS

Fail Over Only – This policy utilizes one path as the active path and designates all other paths as standby. Upon failure of the active path the standby paths are enumerated in a round robin fashion until a suitable path is found.
Round Robin – This policy will attempt to balance incoming requests evenly against all paths.
Round Robin With Subset – This policy applies the round robin technique to the designated active paths. Upon failure, standby paths are enumerated round robin style until a suitable path is found.
Least Queue Depth – This policy determines the load on each path and attempts to redirect I/O to paths that are lighter in load.
Weighted Paths – This policy allows the user to specify the path order by using weights. The larger the number assigned to the path, the lower the priority.
MPIO

As above, plus:

Least Blocks – This policy sends requests to the path with the least number of pending I/O blocks.
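
For what it’s worth, the policy can also be set outside the GUI. A sketch using the MPIO module and mpclaim – the disk number is a placeholder, and the policy numbers are the ones mpclaim itself documents:

    # Default policy for newly claimed MPIO disks (RR = Round Robin, which is what I left it at)
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

    # Or per MPIO disk: mpclaim -l -d <mpio disk #> <policy #>
    # 1=Fail Over Only, 2=Round Robin, 3=RR With Subset, 4=Least Queue Depth, 5=Weighted Paths, 6=Least Blocks
    mpclaim -l -d 0 2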

Now did it actually work?

Seems like it… performance is still not as good as I expected. Must keep optimizing!

Hope this helps someone…

Windows MCS and MPIO

I was configuring some iSCSI disks on a Windows server and noticed there were two different options available that seem to provide similar functionality, and I had to know: what’s the difference?

Source

THE DIFFERENCE BETWEEN MCS AND MPIO (IN A NUTSHELL):

First we agree upon the common features within both technologies: both provide multipathing for (iSCSI) I/O operations utilizing multiple hardware (or OSI Layer 1) components, such as Ethernet NICs or iSCSI HBAs. The purpose of multipathing is redundancy and aggregation; how this is implemented depends on the chosen policy, i.e. on the decision of which paths are active and which are passive (or standby, in Microsoft parlance). For the exact definition of the policies, such as round robin, weighted path, fail over only, etc., please refer to the “MS iSCSI UG”, p. 41.

Finally here come the condensed definitions for both technologies:

MCS allows the initiator to establish multiple TCP/IP connections to the same target within the same iSCSI session.

MPIO, in contrast, allows the initiator to establish multiple iSCSI sessions (each having a single TCP/IP connection) to the same target, effectively aggregating the duplicate devices into a single device.

If you are not familiar with the terminology (initiator, target, session, connection, initiator port and network portal) please refer to “Multivendor Post” which provides very informative sketches to the iSCSI network architecture.

Now that we know that MCS effectively means several connections within a session and MPIO means multiple sessions, the question is when to use what. Mainly you will have to concentrate on two perspectives – vendor support and load balance policy inheritance. The question – or rather the schools of thought – about speed and performance differences is factored out here, because in the author’s opinion these are almost equal and you will probably never get to the point of fully utilizing them.

With this said, consider the following simple rule of thumb: you can use MCS only when it is supported by the vendor’s SAN and you are not using hardware iSCSI HBAs. In any other case use MPIO.

The second thought is this: if, considering the above conditions, you are able to use MCS, but want to apply different load balancing policies to different targets (and effectively LUNs or groups of LUNs), you will still be better off using MPIO. This is because load balancing policies are session adherent. In other words, when you apply a policy to MCS it applies to the whole session, no matter how many connections are aggregated “beneath” it. On the other hand, when using MPIO you can set different policies for different LUNs, because the multipathing is using different iSCSI sessions.
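
One practical way to see which of the two you have actually ended up with is to compare sessions against connections – a sketch with the built-in iSCSI cmdlets:

    # MPIO shows up as several sessions, each with NumberOfConnections = 1;
    # MCS would be a single session with NumberOfConnections > 1
    Get-IscsiSession | Format-Table SessionIdentifier, NumberOfConnections, IsConnected
    Get-IscsiConnection | Format-Table ConnectionIdentifier, InitiatorAddress, TargetAddress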

 

I’m still trying to wrap my head around exactly what the source is getting at, but I will update the blog when I do some more testing.