Multipath I/O
Kris Piepho, Microsoft Product Specialist
January 2017
5/29/2013: Updated to include Windows Server 2008 R2/2012 iSCSI initiator setup and appendix listing recommended hotfixes and registry values
2/9/2016: Removed Windows Server 2003 content and updated hotfix recommendations
10/4/2016: Re-ordered document for clarity, added Windows Server 2016 and Nano Server content, updated hotfix recommendations
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright 2010 - 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA [2/3/2017] [Best Practices] [CML1004]
Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.
1.1 Audience
This document was written for system administrators responsible for the setup and maintenance of Windows
servers and associated storage. Readers should have a working knowledge of Windows Server and
SC Series arrays.
SC Series arrays provide redundancy and failover with multiple controllers and RAID modes. However,
servers still need a way to spread the I/O load and handle internal failover from one path to the next. This is
where MPIO plays an important role. Without MPIO, servers see multiple instances of the same disk device in
Windows disk management.
The MPIO framework uses Device Specific Modules (DSM) to allow path configuration. Microsoft provides a
built-in generic Microsoft DSM (MSDSM) for Windows Server 2008 R2 and above. This MSDSM provides the
MPIO functionality for Dell Storage customers.
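On a server where MPIO is already installed, the devices the Microsoft DSM is configured to claim can be inspected from PowerShell. A minimal sketch using cmdlets from the in-box MPIO module:

```powershell
# List the vendor/product hardware IDs the Microsoft DSM is configured to claim
Get-MSDSMSupportedHW

# List MPIO-compatible hardware detected on the system
Get-MPIOAvailableHW
```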
In virtual port mode, each physical port has a WWN (World Wide Name) and a virtual WWN. Servers target
only the virtual WWNs. During normal conditions, all ports process I/O. If a port or storage controller failure
occurs, a virtual WWN moves to another physical WWN in the same fault domain. When the failure is
resolved and ports are rebalanced, the virtual port returns to the preferred physical port.
Virtual port mode provides the following advantages over legacy mode:
- Increased connectivity: Because all ports are active, additional front-end bandwidth is available without sacrificing redundancy.
- Improved redundancy:
  - Fibre Channel: A Fibre Channel port can fail over to another Fibre Channel port in the same fault domain on the storage controller.
  - iSCSI: In a single fault domain configuration, an iSCSI port can fail over to the other iSCSI port on the storage controller. In a two fault domain configuration, an iSCSI port cannot fail over to the other iSCSI port on the storage controller.
- Simplified iSCSI configuration: Each fault domain has an iSCSI control port that coordinates discovery of the iSCSI ports in the domain. When a server targets the iSCSI port IP address, it automatically discovers all ports in the fault domain.
As shown in Figure 2, a dual-controller SC Series array in virtual port mode is connected to a Fibre Channel
(FC) server with a single fault domain. All ports belong to a single fault domain because they are connected to
the same FC switch.
2.1.2.2 iSCSI
iSCSI follows the same wiring and port setup as Fibre Channel with the exception of the control port. iSCSI
uses a control port configured for each of the fault domains. Servers connect to the control port, which then
redirects traffic to the appropriate virtual port. When configuring MPIO, this looks slightly different than with
the legacy mode configuration because only the control port in the iSCSI initiator software needs to be
assigned. These differences are covered below in the OS-specific sections.
2.1.2.3 SAS
With select SC Series models that support SAS front-end connectivity, hosts or servers can access storage
by connecting directly to SC Series SAS ports.
In SAS virtual port mode, a volume is active on only one storage controller, but is visible to both storage
controllers. Asymmetric Logical Unit Access (ALUA) controls the path that a server uses to access a volume.
If a storage controller becomes unavailable, the volume becomes active on the other storage controller. The state of the paths on the available storage controller is set to Active/Optimized and the state of the paths on the other storage controller is set to Standby. When the storage controller becomes available again and the ports are rebalanced, the volume moves back to its preferred storage controller and the ALUA states are updated.
If a SAS path becomes unavailable, the Active/Optimized volumes on that path become active on the other
storage controller. The state of the failed path for those volumes is set to Standby and the state of the active
path for those volumes is set to Active/Optimized.
Note: Failover in SAS virtual port mode occurs within a single fault domain. Therefore, a server must have
both connections in the same fault domain. For example, if a server is connected to SAS port two on one
storage controller, it must be connected to SAS port two on the other storage controller. If a server is not
cabled correctly when a storage controller or SAS path becomes unavailable, access to the volume is lost.
- Fibre Channel ports that are zoned to see the SC Series HBAs
- iSCSI I/O ports that are in a VLAN that can see the SC Series HBAs
- SAS ports directly connected to the SC Series HBAs
With Fibre Channel, the process is the same for virtual ports as it is for legacy ports. However, with legacy
ports, the server cannot see reserve ports. iSCSI virtual ports connect only to an SC Series control port.
3.2 iSCSI
As with Fibre Channel, an iSCSI server can be created automatically or manually. For automatic
configuration, enter the IP address of the SC Series controller HBA ports in the server iSCSI HBA or initiator
software. Use either the HBA BIOS or the software initiator configuration wizard. In virtual port mode, enter
the IP address of the control port. In legacy port mode, enter the IP address of the primary port. This is
covered in more detail in the OS specific sections.
3.3 SAS
To configure a Windows server directly connected to a Dell Storage SC Series array by SAS, it is highly recommended to use the wizard in the Dell Storage Manager client to configure the host to access the array. The wizard automatically creates a server object on the array, detects all available paths, and assigns them to the server, and it also applies the recommended MPIO registry settings to the server. MPIO registry settings are discussed in detail in appendix A.
A directly connected SAS server can also be created automatically or manually using the Dell Storage Manager client. Refer to section 3.6, Configuring a SAS server, for details on creating a SAS server.
Note: If the WWN or IQN is not listed, make sure that the Only Show Up Connections box is not checked.
1. In the Create Server wizard shown in Figure 4, click Manually Define HBA.
2. In the Select Transport Type window, choose Fibre Channel or iSCSI.
3. Enter the WWN or iSCSI name, and click Continue. Repeat steps 1-3 for every WWN or iSCSI HBA
to be associated with the server.
4. Once all of the HBAs are added, check the appropriate HBA and continue the wizard.
Note: The new HBA appears with a white X in a red circle. Once the server is connected the warning state is
removed.
1. Log in to the SC Series array using the Dell Storage Manager client.
2. Select the Storage tab.
3. In the tree view, right-click Servers.
4. From the shortcut menu, select Create Server.
5. Once all HBAs are added, check the appropriate HBA and continue the wizard.
Note: The new HBA appears with a white X in a red circle. Once the server is connected the warning state is
removed.
The Connectivity tab displays the HBAs that are defined for the server object and the SC Series array HBAs/control ports to which the server HBAs are attached.
In the example shown in Figure 6, there are a total of eight possible paths because each of the two Fibre
Channel HBA ports on the server can see four Fibre Channel HBA ports assigned to each fault domain.
In the example shown in Figure 7, there are a total of four possible paths because each of the two iSCSI NICs
on the server can see two iSCSI control ports assigned to each fault domain.
3.9 Restrict volume mapping paths (Fibre Channel and iSCSI only)
An SC Series volume is mapped to all available paths unless the advanced mapping button is used to restrict
mapping paths to FC only, iSCSI only, or specified HBA ports and controller ports. To restrict mapping paths:
Note: The option to limit ports by transport type is only available on systems that have more than one
transport type available, such as both Fibre Channel and iSCSI.
Note: Using mixed transports concurrently on the same Windows server host is not supported with Server
2012 R2 and newer. With Server 2012 R2 and newer, when a LUN is presented to the host that is using
multiple transports such as Fibre Channel and iSCSI, the host will default to one transport or the other
(typically Fibre Channel is chosen by the host) and ignore the other transport. If all paths for one type of
transport go down, the host may not send data using the other transport without a disk re-scan. This is
default Windows Server behavior.
Assuming the default mapping wizard is used and paths are not restricted, a volume is mapped to all
available paths, creating multiple I/O paths from the server to the volume.
To view the mapped paths, select a volume in the tree view and click the Mappings tab. Figure 13 shows volume SAS MPIO Volume mapped to the server with all available SAS ports.
It is recommended that there are multiple front-end controller paths to the servers and that the servers have
multiple connections to the controller.
Note: Windows Server 2012 or later includes the ability to use heterogeneous HBA types with MPIO. In
previous versions of Windows Server, it was a requirement to use HBAs of the same model.
Note: MPIO is required for directly-connected SAS volumes and is only supported on Windows Server 2008
R2 or later. Please refer to appendix A in this document for important Windows Server 2008 R2 MPIO
configuration recommendations.
To access Server Manager, click Start > Control Panel > Administrative Tools > Server Manager, or click
the Server Manager icon in the taskbar.
Note: For server core installations, the above commands are case sensitive.
Note: On Windows Server 2012 and 2012 R2 Core installations, follow the instructions for PowerShell. To
access PowerShell on a Server Core installation, type powershell and press Enter at the command
prompt.
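On Windows Server 2012 and later, the MPIO feature itself can also be installed directly from PowerShell. A minimal sketch (on Windows Server 2008 R2, use Add-WindowsFeature from the ServerManager module instead):

```powershell
# Install the Multipath I/O feature; the result reports whether a reboot is needed
Install-WindowsFeature -Name Multipath-IO

# Verify the install state of the feature
Get-WindowsFeature -Name Multipath-IO
```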
3. When the Add Roles and Features Wizard window opens, click Next.
Note: When the MPIO configuration is complete, refer to appendix A for important MPIO-specific hotfix and
registry settings.
Note: The software iSCSI initiator included in Windows Server 2008 R2 or later provides the necessary performance and stability required for iSCSI connections to an SC Series array. However, Dell also supports the use of iSCSI HBAs.
1. Open the MPIO control panel by clicking Start > Administrative Tools > MPIO.
To associate the SC Series volumes with the DSM through the use of PowerShell, follow these steps.
3. Now that SC Series storage is supported through the Microsoft DSM, claim all available SC volumes
to be used by MPIO by typing:
Update-MPIOClaimedHW -Confirm:$false
This command provides the same result as the MPIO Control Panel and PowerShell options. It associates
SC Series volumes and then restarts the server. To bypass the reboot option (if rebooting later is desired),
use -n in place of -r.
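The command itself is not reproduced above. Assuming it is the in-box mpclaim utility given the hardware ID reported by SC Series arrays (treat the exact COMPELNT string as an assumption to verify against your environment), it would look like:

```
mpclaim -r -i -d "COMPELNTCompellent Vol"
```

Here -r reboots the server immediately after claiming the devices; as noted, -n defers the reboot.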
Once the server reboots, use Disk Management to verify that the configuration is correct. There should only
be one instance of each SAN volume listed in Disk Management.
For SCOS versions 6.5 and earlier, the default load balance policy for SC Series volumes is round robin.
For SCOS version 6.6 and later, Fibre Channel and iSCSI-connected volumes (both single and multi-path) will
default to round robin. SAS-connected volumes default to round robin with subset. SC Series volumes
mapped with both Fibre Channel and iSCSI transports will default to round robin. Volumes mapped with both
SAS and iSCSI will default to round robin with subset.
Failover only, round robin, and least queue depth are the only MPIO load balancing policies supported on Fibre Channel and iSCSI volumes.
Dell Storage SC Series SAS-connected volumes support the following MPIO load balancing policies: round robin with subset, least queue depth, and weighted paths.
1 Failover only
2 Round robin
3 Round robin with subset
4 Least queue depth
5 Weighted paths
6 Least blocks
7 Vendor specific
For example, to change all SC Series volumes to a failover only policy, use the following command:
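The command for this example is not shown above. Assuming mpclaim's -l -m form (which applies a load balancing policy to all MPIO disks), it would be:

```
mpclaim -l -m 1
```

Here 1 is the failover only policy number from Table 1.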
To change the default load balancing policy to failover only, open a PowerShell window with elevated (administrator) privileges and type:

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

To view the current default policy, type:

Get-MSDSMGlobalDefaultLoadBalancePolicy

- If the default policy is set to round robin, the result will return RR.
- If the default policy is set to failover only, the result will return FOO.
Note: The PowerShell MPIO module does not include cmdlets that can change the default load balancing
policy on a specific volume.
To change the default load balancing policy on a single volume, open a command prompt or PowerShell
window with elevated (administrator) privileges (commands will work in both).
mpclaim -s -d
Figure 14 shows that the load balancing policy (LB Policy) is set to RR (round robin) for disks 0 and 1.
The syntax to change the load balancing policy on a specific volume is:

mpclaim -l -d <disk number> <load balancing policy>

Refer to Table 1 for load balancing policies and the associated numbers for the mpclaim command.
To change the load balancing policy of MPIO disk 2 to round robin, type:
mpclaim -l -d 2 2

To verify the change, type:

mpclaim -s -d
Note: For instructions on how to configure Windows Nano Server for iSCSI MPIO, refer to Section 7, MPIO
on Windows Nano Server.
The iSCSI quick connect feature works well for single iSCSI path connectivity. Configuring iSCSI to use MPIO
requires a few more steps, but is still easy to configure.
Figure 15 represents a dual-controller SC Series array that is configured with virtual front-end ports with two
fault domains. Two physical iSCSI ports (one from each controller) are grouped logically as a virtual domain
that is assigned a virtual iSCSI IP address. Each virtual domain physical port is connected to two separate
iSCSI switches to ensure full path redundancy to the dual iSCSI NICs on the server.
To configure the server for iSCSI MPIO, complete the following steps.
9. From the Local adapter drop-down menu, select Microsoft iSCSI Initiator. From the Initiator IP drop-down menu, select the local IP address of the server NIC that is to be associated with the first fault domain (fault domain 100).
10. Click OK, and then OK again to return to the iSCSI Initiator properties window.
11. Verify that the target IP address and adapter IP address are displayed in the Target portals section.
12. Repeat steps 1-11 to add the second target IP for the second virtual fault domain and the server
second iSCSI NIC (in this example, 10.10.128.1 and 10.10.128.101).
14. Select the Targets tab. This should be populated with the discovered iSCSI target ports on the array.
15. Highlight the first target, and then click Connect.
16. On the Connect To Target screen, verify that both Add this connection to the list of Favorite
Targets and Enable multi-path are checked.
17. Click Advanced to display additional options.
20. Click OK, and then OK again to return to the iSCSI Initiator properties window.
21. Repeat steps 15-20 for each additional target listed.
22. When finished, all the targets show a Connected status.
Nano Server includes PowerShell 5.1 Core Edition. PowerShell Core Edition was built to run on reduced-footprint editions of Windows such as Nano Server, and contains fewer modules and cmdlets than the Desktop Edition, which is included with the Windows Desktop and Core editions.
Nano Server can be administered through a remote PowerShell connection, or by using GUI tools on a full
Windows Server 2016 installation. All the commands in this section are issued through a remote PowerShell
session (PSsession) to the Nano Server.
For more information about Nano Server, refer to the Getting Started with Nano Server page on Microsoft
TechNet.
When prompted, answer Yes to restart Nano Server immediately, or No to restart the server later.
After restarting Nano, re-establish a remote PowerShell connection to the Nano Server.
The command will return Enabled if the MPIO feature is working properly.
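The enable and verification commands are not reproduced above; a likely form, assuming the optional-feature cmdlets available on Nano Server:

```powershell
# Enable the MPIO feature (a restart is required afterward)
Enable-WindowsOptionalFeature -Online -FeatureName "MultiPathIO"

# After the restart, check the feature; State should read Enabled
Get-WindowsOptionalFeature -Online -FeatureName "MultiPathIO"
```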
Keep in mind that when running the MPIO claim script, the load balance policy is chosen dynamically and
cannot be modified. All Fibre Channel and iSCSI-connected SC Series volumes will use Round Robin as the
load balance policy. SAS-connected SC Series volumes will use Round Robin with Subset as the default
policy.
By default, the script will configure the server to claim all volumes on all transports (FC, iSCSI, and SAS). If
desired, the script can be configured to claim volumes on a specific transport. A reboot is required when the
script has completed running.
For the examples that follow, the Nano Server will be configured to connect to a dual-controller SC Series
array configured with two iSCSI fault domains. The Nano Server has two dedicated NIC ports that will be
used for iSCSI traffic (one port for each SC Series fault domain).
By default, the iSCSI service is not running on a Nano Server. The service must be started, and it must also be set to start automatically when the server starts.
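A minimal sketch of starting the initiator service and making it automatic (MSiSCSI is the Microsoft iSCSI initiator service name):

```powershell
# Set the Microsoft iSCSI initiator service to start automatically, then start it
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
```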
Get-IscsiTarget
4. Connect to each target listed, enabling multipath and making the connection persistent.
6. When the targets are connected, use Get-IscsiTarget to view the IsConnected status as True
for each node address.
Get-IscsiConnection
9. Obtain a list of targets on the established portal to the second fault domain.
Get-IscsiTarget
11. Repeat this process for any other non-connected targets listed in Get-IscsiTarget.
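The individual commands in the steps above can be combined into one sketch. The first portal address below is an assumption (the document's example gives only the second fault domain addresses); substitute the control port IPs for your own fault domains:

```powershell
# Register the control port of each iSCSI fault domain as a target portal
New-IscsiTargetPortal -TargetPortalAddress "10.10.64.1"    # fault domain 1 (assumed address)
New-IscsiTargetPortal -TargetPortalAddress "10.10.128.1"   # fault domain 2

# Connect each discovered target with multipath enabled and a persistent connection
Get-IscsiTarget | Where-Object { -not $_.IsConnected } |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true

# Verify: every node address should now report IsConnected as True
Get-IscsiTarget
Get-IscsiConnection
```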
At this point iSCSI connectivity between the Nano Server and SC Series array has been established. A server
object can now be created on the SC Series array, allowing volumes to be mapped to the Nano Server using
the iSCSI transport method. For detailed instructions on how to create a server object on an SC Series array,
refer to section 3, Configuring servers.
Note: In some cases, prerequisite updates must be installed on the server before the hotfixes listed below
can be installed. Please read the prerequisite information for each applicable hotfix before proceeding.
Note: Updates and hotfixes are listed in the order in which they should be installed.
Note: The registry settings below should be made on all Windows Server hosts that use the Microsoft DSM
to access LUNs on SC Series arrays in order to ensure proper behavior and performance. This includes
hosts configured to use single-path and MPIO.
Note: Using mixed transports concurrently on the same Windows Server host is not supported with Server
2012 R2 and newer. With Server 2012 R2 and newer, when a LUN is presented to the host that is using
multiple transports such as Fibre Channel and iSCSI, the host will default to one transport or the other
(typically Fibre Channel is chosen by the host over iSCSI) and ignore the other transport. When configured
to use multiple transports, if all paths for one type of transport go down, the host may not send data using
the other transport without a disk re-scan. This is default Windows Server behavior.
(msiscsi.sys - 11/9/2015)
(storport.sys - 11/19/2014)
(msdsm.sys - 5/6/2015)
(mpio.sys - 9/24/2014)
(msdsm.sys - 1/24/2016)
Please refer to section A.4.1 for a PowerShell script that will apply recommended registry settings to all
versions of Windows Server and Nano Server.
Note: Recommended registry settings apply to all versions of Windows Server unless directly specified.
Note: The registry settings in Table 7 only apply to Windows Server 2012 or later.
TimeoutValue: Disk time-out is a registry setting that defines the time that Windows will wait for a hard disk to respond to a command. Installing host bus adapters (HBAs) or other storage controllers can cause this key to be created and configured. (Default value: 60; recommended: no change)
Disable Nagle's algorithm: To disable delayed ACK and Nagle's algorithm, create the following entries for each SAN interface subkey in the Windows Server registry:

Entries: TcpAckFrequency, TcpNoDelay
Value type: REG_DWORD, number
Value to disable: 1
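The two entries can also be created from PowerShell. A sketch that writes them to every interface subkey (enumerating all interfaces is a simplifying assumption here; in practice, restrict this to the SAN-facing interface subkeys):

```powershell
# TCP interface subkeys live under Tcpip\Parameters\Interfaces
$ifRoot = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

Get-ChildItem -Path $ifRoot | ForEach-Object {
    # TcpAckFrequency = 1 disables delayed ACK; TcpNoDelay = 1 disables Nagle's algorithm
    New-ItemProperty -Path $_.PSPath -Name "TcpAckFrequency" -Value 1 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $_.PSPath -Name "TcpNoDelay" -Value 1 -PropertyType DWord -Force | Out-Null
}
```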
Disable interrupt moderation: To disable interrupt moderation on each iSCSI NIC:
1. Go to Adapter Settings.
2. Right-click the adapter and select Properties.
3. Under the Networking tab, click Configure.
4. Under the Advanced tab, select Interrupt Moderation and choose Disabled.
Note: A reboot is required for any registry changes to take effect. Alternatively, unloading and reloading the
initiator driver will also cause the change to take effect. In the Device Manager GUI, look under SCSI and
RAID Controllers, right-click Microsoft iSCSI Initiator, and select Disable to unload the driver. Then select
Enable to reload the driver.
# Assign variables
$MpioRegPath = "HKLM:\SYSTEM\CurrentControlSet\Services\mpio\Parameters"
$IscsiRegPath = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\"
$IscsiRegPath += "{4d36e97b-e325-11ce-bfc1-08002be10318}\000*"
# General settings
# iSCSI settings
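The body of the script is not reproduced above. A minimal sketch of how settings are applied with these variables; the value names and numbers below are illustrative assumptions only, so use the recommended values from the tables in this appendix:

```powershell
$MpioRegPath = "HKLM:\SYSTEM\CurrentControlSet\Services\mpio\Parameters"
$IscsiRegPath = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\"
$IscsiRegPath += "{4d36e97b-e325-11ce-bfc1-08002be10318}\000*"

# General (MPIO) settings -- example value name only
Set-ItemProperty -Path $MpioRegPath -Name "PDORemovePeriod" -Value 120

# iSCSI settings -- each adapter instance keeps its values in a Parameters subkey
Get-Item -Path ($IscsiRegPath + "\Parameters") | ForEach-Object {
    Set-ItemProperty -Path $_.PSPath -Name "LinkDownTime" -Value 35
}
```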
A.7 Resources
Appendix A resources include:
Dell TechCenter is an online technical community where IT professionals have access to numerous resources
for Dell software, hardware and services.
Storage Solutions Technical Documents on Dell TechCenter provide expertise that helps to ensure customer
success on Dell Storage platforms.
Dell EMC SC Series Storage: Microsoft Windows Server 2016 and Nano Server Best Practices
Windows Server 2012 R2 Best Practices for Dell Compellent Storage Center