TECHNICAL REPORT

Configuring iSCSI Connectivity with VMware vSphere 5 and Dell EqualLogic PS Series Storage
ABSTRACT

This Technical Report explains how to configure and connect a Dell™ EqualLogic™ PS Series SAN to a VMware® vSphere™ 5 environment using the software iSCSI initiator.

TR1075 V1.0

Copyright © 2011 Dell Inc. All Rights Reserved. EqualLogic is a registered trademark of Dell Inc. Dell is a trademark of Dell Inc. All trademarks and registered trademarks mentioned herein are the property of their respective owners. Information in this document is subject to change without notice. Reproduction in any manner whatsoever without the written permission of Dell is strictly forbidden. [Nov 2011]

WWW.DELL.COM/PSseries

Preface
PS Series arrays optimize resources by automating performance and network load balancing. Additionally, PS Series arrays offer all-inclusive array management software, host software, and free firmware updates.

Audience
The information in this guide is intended for VMware vSphere administrators configuring SAN access to a Dell EqualLogic PS Series SAN.

Related Documentation
For detailed information about PS Series arrays, groups, volumes, array software, and host software, log in to the Documentation page at the customer support site.

Dell Online Services
You can learn about Dell products and services using this procedure:
1. Visit http://www.dell.com or the URL specified in any Dell product information.

2. Use the locale menu or click on the link that specifies your country or region.

Dell EqualLogic Storage Solutions
To learn more about Dell EqualLogic products and new releases being planned, visit the Dell EqualLogic TechCenter site: http://delltechcenter.com/page/EqualLogic. Here you can also find articles, demos, online discussions, technical documentation, and more details about the benefits of our product family.

Table of Contents

Executive Summary
Introduction
Features of the vSphere Software iSCSI Initiator
Configuring vSphere iSCSI Software Initiator with PS Series Storage
Establishing Sessions to the SAN
VMkernel Storage Heartbeat
Example Installation Steps
Section 1: vSwitch Configuration
    Standard vSwitch Configuration
        Step 1: Configure Standard vSwitch and Storage Heartbeat
        Step 2: Add iSCSI VMkernel Ports
        Step 3: Associate VMkernel Ports to Physical Adapters
        Step 4: Configure Jumbo Frames
    vSphere Distributed Switch Configuration
        Step 1: Configure vSphere Distributed Virtual Switch
        Step 2: Add Port Groups
        Step 3: Configure Storage Heartbeat and iSCSI VMkernel Ports
        Step 4: Associate VMkernel Ports to Physical Adapters
        Step 5: Configure Jumbo Frames
Section 2: Configure VMware iSCSI Software Initiator
    Step 1: Install iSCSI Software Initiator
    Step 2: Binding VMkernel Ports to iSCSI Software Initiator
Section 3: Connect to the Dell EqualLogic PS Series SAN
    Step 1: Configure Dynamic Discovery of PS Series SAN
    Step 2: Create and Configure Volume
    Step 3: Connect to a Volume on PS Series SAN
    Step 4: Enabling VMware Native Multipathing - Round Robin
    Step 5: Create VMFS Datastores and Connect More Volumes
FAQ
Summary
Technical Support and Customer Service

Revision Information

The following table describes the release history of this Technical Report.

Report    Date             Document Revision
1.0       November 2011    Initial Release

The following table shows the software and firmware used for the preparation of this Technical Report.

Vendor     Model                             Software Revision
VMware®    vSphere 5.x                       5.0
Dell       Dell™ EqualLogic™ PS Series SAN   4.3.8 or higher

The following table lists the documents referred to in this Technical Report. All PS Series Technical Reports are available on the Customer Support site at: support.dell.com

Vendor    Document Title
VMware    iSCSI SAN Configuration Guide
VMware    vSphere System Administration Guides
Dell      Dell EqualLogic PS Series Array Administration Guide
Dell      Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere 5 and PS Series SANs


EXECUTIVE SUMMARY
VMware® vSphere™ 5 is VMware's newest flagship product for advanced server virtualization and management. Many of the advanced features VMware provides, including the ability to move running Virtual Machines (VMs) between active servers, High Availability (HA) clustering, and advanced load balancing, require some form of shared storage accessible to each of the servers. The Dell™ EqualLogic™ PS Series SAN is a highly virtualized shared storage platform that works with VMware vSphere 5 to provide these advanced features. This Technical Report discusses how to configure your VMware ESXi 5™ environment to communicate with the Dell EqualLogic PS Series SAN.

INTRODUCTION
VMware® vSphere™ 5 offers intelligent and advanced enhancements to the iSCSI software initiator for iSCSI SAN connectivity. Many of these new features require advanced configuration in order to work properly. This Technical Report addresses some of the new features in vSphere and shows administrators how to connect a vSphere 5 environment to a Dell™ EqualLogic™ PS Series iSCSI SAN. These steps are documented in VMware's iSCSI SAN Configuration Guide, which can be found on VMware's website; this Technical Report summarizes the steps specific to connecting to a PS Series SAN. It covers the steps for utilizing the software iSCSI initiator inside the ESXi host. Users connecting their vSphere environment with iSCSI HBAs should not follow these steps and should instead configure their environment as outlined in the VMware SAN Configuration Guide.

FEATURES OF THE VSPHERE SOFTWARE ISCSI INITIATOR
VMware vSphere 5 supports several advances in iSCSI SAN connectivity. This Technical Report covers the new features in the iSCSI software initiator as well as how to configure them to connect to the SAN.

iSCSI Software Initiator – Beginning with ESXi 4 and continuing in ESXi 5, the iSCSI software initiator was rewritten from the ground up for better performance and functionality.

Jumbo Frames – With ESXi 5, Jumbo Frames can be enabled on the iSCSI software initiator. Jumbo Frame support allows larger packets of data to be transferred between the ESXi 5 hosts and the SAN for increased efficiency and performance. In ESXi 5, Jumbo Frames can be configured and enabled from the vCenter GUI, a change from vSphere 4, which required the CLI.

Note: Jumbo Frames are optional, not required. Your network infrastructure must fully support them to achieve any benefit.


MPIO – With ESXi 5 and vSphere 5, customers can benefit from Multi-Path I/O (MPIO) from the ESXi 5 hosts to the SAN. This allows multiple connections to be used concurrently for greater bandwidth, enabling ESXi 5 to take full advantage of the scale-out networking in the PS Series SAN.

Third Party MPIO Support – VMware provides an architecture that enables storage vendors to deliver new and advanced intelligent integration. Dell offers an MPIO plug-in that enhances MPIO with the existing iSCSI software initiator for easier management, better performance, and greater bandwidth.

CONFIGURING VSPHERE ISCSI SOFTWARE INITIATOR WITH PS SERIES STORAGE
Taking advantage of all of these features requires advanced configuration by vSphere administrators. In ESX 4 this configuration was done through a combination of CLI commands and GUI processes. ESXi 5 streamlines this so that the entire process can be done through the vCenter GUI. The rest of this Technical Report focuses on the installation and configuration of an iSCSI software initiator connection to a PS Series SAN. Each of these steps can be found in the VMware iSCSI SAN Configuration Guide; where names and IP addresses are used, they will differ for each environment. This is an example and demonstration of how to configure a new vSphere ESXi 5 environment correctly and connect it to the EqualLogic SAN. The following assumptions are made for this example:

1. The host is running ESXi 5.
2. The Dell EqualLogic PS Series SAN is running firmware 4.3.8 or later.
3. More than one Network Interface Card (NIC) is set aside for iSCSI traffic.

Not every environment will require all of the steps detailed in this Technical Report. The rest of this document assumes the environment will use multiple NICs and attach to a Dell EqualLogic PS Series SAN utilizing Native Multipathing (NMP) from VMware.

ESTABLISHING SESSIONS TO THE SAN
Before continuing, we first must discuss how VMware ESXi establishes its connection to the SAN using the vSphere iSCSI Software Adapter. VMware uses VMkernel ports as the session initiators, so we must configure each port that we want to use as a path to the storage. This configuration uses a one-to-one (1:1) VMkernel port to NIC relationship: each session to the SAN comes from one VMkernel port, which goes out a single physical network interface card (NIC). Once these sessions to the SAN are initiated, both the VMware Native Multi-Path (NMP) and the Dell EqualLogic network load balancer take care of load balancing and spreading the I/O across all available paths. Each volume on the PS Series array can be used by ESXi as either a Datastore or a Raw Device Map (RDM). To do this, the iSCSI software adapter utilizes the VMkernel ports that were created and establishes a session to the SAN and to that volume.


Administrators can use additional NICs for failover, but this document focuses on enabling NMP with Round Robin or on preparation for 3rd-party multipathing with the Dell EqualLogic Multipathing Extension Module. With the improvements to vSphere and MPIO, administrators can take advantage of multiple paths to the SAN for greater bandwidth and performance. This does require some additional configuration, which is discussed in detail in this Technical Report. Each VMkernel port is bound to a physical adapter. Depending on the environment this can create a single session to a volume or up to 8 sessions (the ESXi maximum number of paths to a volume). Use a one-to-one (1:1) ratio of VMkernel ports to physical network cards: if there are 2 physical NICs, establish 1 VMkernel port per physical NIC, associating a separate NIC with each VMkernel port. In the following example this establishes 2 sessions to a single volume on the SAN, and the pattern can be expanded depending on the number of NICs in the system.
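As an aside not covered in the original steps: once the configuration in the remainder of this report is complete, the resulting sessions and paths can be inspected from the ESXi shell or vCLI. These are standard esxcli calls; with 2 bound VMkernel ports you would expect two sessions per connected volume.

    # List active iSCSI sessions; one session per bound VMkernel port
    # should appear for each connected volume
    esxcli iscsi session list

    # Show every path NMP knows about, including its run-time state
    esxcli storage nmp path list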

Figure 1: Conceptual image of iSCSI sessions using 1:1 VMkernel mapping with 2 physical NICs for iSCSI traffic

VMKERNEL STORAGE HEARTBEAT
In the VMware virtual networking model, certain types of VMkernel network traffic are sent out on a default VMkernel port for each subnet. The iSCSI multipathing network configuration requires that the iSCSI VMkernel ports use a single physical NIC as an uplink. As a result, if the physical NIC being used as the uplink for the default VMkernel port goes down, network traffic using the default VMkernel port will fail. This includes vMotion traffic, SSH access, and ICMP ping replies. Although iSCSI traffic is not directly affected by this condition, a side effect of the suppressed ping replies is that the EqualLogic PS Series group cannot accurately determine connectivity during the login process, and a suboptimal placement of iSCSI sessions will occur. In some scenarios, depending upon array, server, and network load, logins may not complete in a timely manner. To prevent this, Dell recommends that a highly available VMkernel port be created on the iSCSI subnet to serve as the default VMkernel port for such outgoing traffic. When properly configured, this heartbeat sits outside of the iSCSI software initiator and does not consume any additional iSCSI storage connections. It is simply used as the lowest VMkernel port for vmkping and other iSCSI network functions.


This heartbeat has to be the lowest VMkernel port on the vSwitch and is not bound to the software initiator. It is always recommended to separate iSCSI traffic from standard management traffic, and this Storage Heartbeat should not be on the same subnet as the ESXi management traffic. This Technical Report guides the user through establishing a Storage Heartbeat during the vSwitch configuration.
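To illustrate the heartbeat's role, the following check can be run from the ESXi shell once the vSwitch configuration in Section 1 is complete. This is a sketch only; the group IP address 10.10.5.10 is an example.

    # vmkping sources traffic from the lowest vmk port on the subnet.
    # With the Storage Heartbeat configured, replies keep flowing even
    # if one of the iSCSI uplink NICs goes down.
    vmkping 10.10.5.10

    # Confirm which VMkernel interfaces exist and their IP assignments
    esxcli network ip interface ipv4 get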

EXAMPLE INSTALLATION STEPS
Each environment will be different, but the following is a list of example installation steps for configuring a new ESXi 5 host to connect to a PS Series SAN. Throughout these examples the names and IP addresses assigned will need to be changed to be relevant in your environment. These examples assume physical switch hardware with Jumbo Frames support. This Technical Report focuses on one-to-one VMkernel mapping with 2 physical NICs and 2 VMkernel ports, a typical solution for many environments that utilizes all of the bandwidth available to the ESXi host's network interfaces. There are some suggested configurations depending on the number of NICs that will be used for iSCSI traffic; every environment will differ depending on the number of hosts, the number of EqualLogic members, and the number of volumes. In a default configuration, assign one VMkernel port for each physical NIC in the system: if there are 2 NICs, assign 2 VMkernel ports. This is referred to in the VMware iSCSI documentation as 1:1 port binding. Keep in mind that it is the VMkernel port that establishes the iSCSI session to the volume; the physical NIC is just the means it uses to get there. Because the PS Series SAN automatically load balances volumes across multiple members and iSCSI connections across multiple ports, this configuration gives both redundancy and performance gains when configured properly.

Sample Configurations

NICs for iSCSI traffic      VMkernel Ports
2 physical 1GbE NICs        2 VMkernel Ports (1 per physical NIC)
4 physical 1GbE NICs        4 VMkernel Ports (1 per physical NIC)
2 physical 10GbE NICs       2 VMkernel Ports (1 per physical NIC)

This provides scalability and performance as the SAN environment grows without having to make changes on each ESXi host.


If more iSCSI connections are desired, follow the sample configurations above to obtain the number of VMkernel ports that matches the environment and the number of paths needed to the PS Series SAN. Always keep in mind the entire infrastructure of the virtual datacenter when deciding on network path and volume count, and view the Release Notes of the PS Series firmware for the current connection limits of pools and groups for the Dell EqualLogic PS Series SAN. All of these configurations are done at the iSCSI vSwitch level. This means that once the configuration is completed, the ESXi 5 host will create multiple iSCSI connections to the PS Series SAN, and every new volume will have more iSCSI connections as well. Once this is configured, changes are only needed if more NICs are being added or if more or fewer paths to the storage are needed.

Example Environment

SECTION 1: VSWITCH CONFIGURATION
This Technical Report discusses the two ways to configure virtual switches in ESXi 5: vSphere Standard Switches (vSwitch) or vSphere Distributed Switches (vDS). Either method is viable; the choice will depend on the administrator's familiarity with the method and the VMware license structure in the environment. Administrators should choose one method and apply it to the entire ESXi cluster for ease of configuration and management. The steps are very similar but are described in detail for each method.

Standard vSwitch Configuration

If you are using vSphere Distributed Switches for iSCSI connectivity, skip this section and move to the vSphere Distributed Switch section.

Step 1: Configure Standard vSwitch and Storage Heartbeat

This step creates a new standard vSwitch with the Storage Heartbeat VMkernel port.


1. From the vCenter GUI select the ESXi host to configure and click the Configuration tab.
2. Select Networking from the Hardware pane.
3. Verify the View is set to vSphere Standard Switch and click Add Networking.
4. This brings up the Add Network Wizard. Select VMkernel and click Next.

5. Select all of the physical network adapters that will be used for PS Series SAN connectivity and click Next.
6. For the Network Label type in Storage Heartbeat and click Next.


7. Enter the IP Address and Subnet Mask for the Storage Heartbeat. This must be on the same network subnet as the PS Series Group IP Address. Because this subnet is non-routed, the VMkernel Default Gateway can be ignored; it is the gateway of the management VMkernel and does not come into play during iSCSI connectivity. Enter the values and click Next.
8. Verify the settings and click Finish to complete the vSwitch creation.
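For reference, the GUI steps above can also be approximated from the ESXi shell. This is a sketch only; the vSwitch name vSwitchISCSI, NIC names vmnic6/vmnic7, VMkernel name vmk1, and the IP addressing are examples that must be adapted to your environment.

    # Create the vSwitch and attach both iSCSI uplinks
    esxcli network vswitch standard add --vswitch-name=vSwitchISCSI
    esxcli network vswitch standard uplink add --vswitch-name=vSwitchISCSI --uplink-name=vmnic6
    esxcli network vswitch standard uplink add --vswitch-name=vSwitchISCSI --uplink-name=vmnic7

    # Create the Storage Heartbeat port group and its VMkernel port
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitchISCSI --portgroup-name="Storage Heartbeat"
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name="Storage Heartbeat"
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.5.20 --netmask=255.255.255.0 --type=static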


Step 2: Add iSCSI VMkernel Ports

This next step assigns VMkernel ports to the new vSwitch and assigns the IP addresses to the iSCSI# VMkernel ports. Each VMkernel port needs its own IP address, and they must all be on the same subnet as each other and as the PS Series Group IP Address.

1. You will see the new vSwitch in the Configuration screen. Click Properties next to the newly created vSwitch. This opens the Properties pane of the switch.

2. Now add a VMkernel port for each physical network adapter to correspond to the 1:1 VMkernel binding discussed earlier. Click Add, select VMkernel, and click Next.


3. For the Network Label type in iSCSI1 and click Next.

4. Enter the IP Address and the Subnet Mask. This address must be on the same subnet as the Storage Heartbeat and the PS Series Group IP Address. Click Next.
5. Verify the settings and click Finish to configure the VMkernel port.
6. Continue adding iSCSI# VMkernel ports for each physical network adapter that will be communicating with the SAN. In this example there are two physical NICs, so iSCSI1 and iSCSI2 are created. When finished you will see something similar in the Properties pane.
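The shell equivalent of this step is sketched below, again with example names only (port groups iSCSI1/iSCSI2, VMkernel ports vmk2/vmk3, and addresses on an example 10.10.5.0/24 SAN subnet).

    # One port group and one VMkernel port per physical NIC (1:1)
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitchISCSI --portgroup-name=iSCSI1
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI1
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.5.21 --netmask=255.255.255.0 --type=static

    esxcli network vswitch standard portgroup add --vswitch-name=vSwitchISCSI --portgroup-name=iSCSI2
    esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI2
    esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.10.5.22 --netmask=255.255.255.0 --type=static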

Step 3: Associate VMkernel Ports to Physical Adapters

The next step creates the individual 1:1 bindings of each VMkernel port to a NIC. This is required in order to take advantage of advanced features such as Round Robin MPIO or the 3rd-party MPIO plug-ins available from Dell. From the previous step there are the Storage Heartbeat, two iSCSI# VMkernel ports, and two NICs. The Storage Heartbeat will have both NICs assigned to it, and each iSCSI# VMkernel port will be assigned one NIC. Again, each environment will differ, and these numbers can change based on the number of NICs and the number of paths assigned.

1. Click Properties next to the Standard vSwitch being used for iSCSI communication.
2. The Storage Heartbeat has both physical network adapters assigned to it for high availability.
3. Select the first iSCSI# VMkernel port and click the Edit button.
4. Click the NIC Teaming tab.
5. Change the NIC Teaming so that only a single vmnic is in each uplink to create a 1:1 binding. Click the checkbox next to Override switch failover order. Select the adapters that are not going to be assigned to the VMkernel (vmnic7 in this example) and click the Move Down button until they are listed under Unused Adapters.


6. When this is completed click OK. 7. Select the next iSCSI# VMkernel Port and click the Edit button. 8. Just as before, click the NIC Teaming tab and select the check box for Override switch failover order. This time select another adapter that has not already been bound to an Active Adapter. In this example iSCSI2 is bound to vmnic7 so vmnic6 is moved to Unused Adapters.


9. Verify the action and click OK.
10. Once all of the iSCSI# VMkernel ports are bound 1:1 with physical network adapters, click Close to exit the properties of the vSwitch. Do this for each of the iSCSI# VMkernel ports so that each VMkernel port is mapped to only one adapter. In this example we assigned iSCSI1 to vmnic6 and iSCSI2 to vmnic7. Be sure to move all but one adapter to Unused Adapters so that each port uses a 1:1 binding.

NOTE: Do not modify the adapters for the Storage Heartbeat. The Storage Heartbeat leverages all of the available physical NICs.
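The same 1:1 uplink override can be made from the shell. A sketch, assuming the example names used above:

    # Pin each iSCSI port group to a single active uplink; the NIC left
    # unlisted ends up under Unused Adapters, giving the 1:1 binding
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI1 --active-uplinks=vmnic6
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI2 --active-uplinks=vmnic7

    # The Storage Heartbeat port group is deliberately left alone so it
    # inherits the vSwitch teaming policy and keeps both NICs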


Step 4: Configure Jumbo Frames

One of the enhancements in vSphere 5 for iSCSI configuration is the ability to adjust Jumbo Frames support from the GUI instead of through the CLI. In order for Jumbo Frames to work they need to be configured on the vSwitch as well as on the Storage Heartbeat VMkernel port and the iSCSI# VMkernel ports. In addition, the physical switch layer must be configured to support Jumbo Frames. To enable Jumbo Frames, select the vSwitch created for iSCSI connectivity and click Properties. The Properties pane of the Standard vSwitch shows the vSwitch itself, the Storage Heartbeat, and each of the configured iSCSI# VMkernel ports; Jumbo Frames must be configured on each of these items.

1. Select the vSwitch. The Advanced Properties pane on the right shows the MTU defaulted to 1500. To change it to 9000 for Jumbo Frames, click the Edit button.
2. Select the General tab, and under the Advanced Properties change the MTU from 1500 to 9000 and click OK.


3. Jumbo Frames must also be enabled for each of the VMkernel ports. Select the Storage Heartbeat and click the Edit button.
4. Under the General tab in the NIC Settings, change the MTU to 9000 and click OK.
5. Do this for each iSCSI# VMkernel port. All of the VMkernel ports in the vSwitch, as well as the vSwitch itself, must be configured for Jumbo Frames in order for Jumbo Frames to work properly.
6. When this is complete, click Close to exit the vSwitch Properties page.
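The MTU changes in this step map to two esxcli namespaces; a sketch using the same example names:

    # Raise the MTU on the vSwitch itself...
    esxcli network vswitch standard set --vswitch-name=vSwitchISCSI --mtu=9000

    # ...and on every VMkernel port it carries (heartbeat and iSCSI#)
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000
    esxcli network ip interface set --interface-name=vmk3 --mtu=9000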


vSphere Distributed Switch Configuration

Some environments utilize vSphere Distributed Switches (vDS) for network connections and management. One of the benefits of a vDS is the ability to create and configure a single network profile and then attach multiple hosts to that configuration. Administrators using the vSphere Standard Switch can skip these steps and move on to Section 2. These steps follow the same premise as creating a vSphere Standard Switch.

Step 1: Configure vSphere Distributed Virtual Switch

This step creates a new vDS and is done at the cluster level.

1. From the vCenter GUI Home screen click Networking in the Inventory section.
2. Click Add a vSphere Distributed Switch.
3. For this example we are going to create a vSphere Distributed Switch version 5. Select this and click Next.

4. Name the vDS iSCSI. Select the number of uplink ports to match the number of VMkernel ports and physical NICs that will be used for iSCSI, and click Next. In this example we are creating 3 uplink ports: 1 for the Storage Heartbeat and 1 for each iSCSI connection.


5. Select all of the ESXi hosts that are going to participate in this vDS, which should be the whole cluster. For each ESXi host, select the physical network adapters they will use for iSCSI traffic and click Next.


6. Uncheck the "Automatically create a default port group" checkbox. Verify the settings and click Finish.

Step 2: Add Port Groups

This next step creates and configures the port groups to which the VMkernel ports will be assigned.

1. From the Home screen in the vCenter GUI click Networking.
2. Select the iSCSI vDS that was just created and click Create a new port group.
3. Change the Name to Storage Heartbeat. Click Next.


4. Repeat the above steps and create an iSCSI# Port Group for each physical NIC that will be used for iSCSI connectivity. In this example we created port groups iSCSI1 and iSCSI2.

Step 3: Configure Storage Heartbeat and iSCSI VMkernel Ports

Now that the port groups for the vDS have been configured, we need to configure the VMkernel ports on each separate ESXi 5 host. This assigns the IP addresses for the iSCSI# VMkernel ports as well as the Storage Heartbeat. Each VMkernel port needs its own IP address, and they must all be on the same subnet as each other and as the PS Series Group IP Address. This step needs to be completed on each ESXi host.


1. From the vCenter GUI select an ESXi host and click the Configuration tab. Click Networking under the Hardware pane. Change the View to vSphere Distributed Switch.
2. On the Distributed Switch: iSCSI click Manage Virtual Adapters.
3. Click Add.
4. Choose New virtual adapter and click Next.
5. Select VMkernel and click Next.
6. Click Select port group, assign it to the Storage Heartbeat port group, and click Next.

7. Enter the IP Address and Subnet Mask for the Storage Heartbeat. This must be on the same network as the iSCSI PS Series Group IP Address. Because this subnet is non-routed, the VMkernel Default Gateway can be ignored; it is the gateway of the management VMkernel and does not come into play during iSCSI connectivity. Enter the values and click Next.
8. Verify the settings on the host and click Finish.
9. Continue adding VMkernel ports to match the number of iSCSI# VMkernels. In this example there are iSCSI1 and iSCSI2, so two more VMkernel ports are added. Click Close when all of the VMkernel ports are added.


10. Repeat this on each ESXi host that is participating in the vDS switch. 11. In this example there are two hosts and each has a Storage Heartbeat and two iSCSI# vmkernel ports configured. This can be seen from the vCenter GUI by clicking on Networking from the Home screen and then clicking on the new vDS iSCSI.
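Although the vDS itself is managed from vCenter, the per-host VMkernel ports created in this step can be spot-checked from each host's ESXi shell:

    # Each host should list the Storage Heartbeat vmk plus one vmk per
    # iSCSI# port group, all on the SAN subnet
    esxcli network ip interface list
    esxcli network ip interface ipv4 get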


Step 4: Associate VMkernel Ports to Physical Adapters

This step configures the 1:1 binding of VMkernel ports to physical NIC adapters. This is done at the cluster vDS level, not the individual ESXi host level, and only for the iSCSI# VMkernel ports, not the Storage Heartbeat VMkernel port.

1. From the vCenter GUI click Networking from the Home page.
2. Select the new iSCSI vDS and click the Configuration tab.
3. Next to iSCSI1 click the Edit Settings icon.
4. Under Policies click Teaming and Failover.
5. Change the teaming so that only a single dvUplink is in each uplink to create a 1:1 binding. Select the dvUplinks that are not going to be assigned to the VMkernel (dvUplink2 in this example) and click the Move Down button until they are listed under Unused Uplinks.
6. Click OK.
7. Do the same for iSCSI2 by moving dvUplink1 to Unused Uplinks.


8. As shown in the following example, the highly available Storage Heartbeat is available on all of the physical adapters, while iSCSI1 is bound to one specific adapter (dvUplink1) and iSCSI2 is bound to the other (dvUplink2). One of the benefits of configuring a vDS is that newly added hosts can leverage the configuration settings that are already in place, including the 1:1 binding. The same vmnic does not have to be used on every ESXi host as long as the appropriate NIC is attached to the proper dvUplink.


Step 5: Configure Jumbo Frames

One of the enhancements in vSphere 5 for iSCSI configuration is the ability to adjust Jumbo Frames support from the GUI instead of through the CLI. In order for Jumbo Frames to work they need to be configured on the vDS as well as on each of the VMkernel ports. In addition, the physical switch layer must be able to support Jumbo Frames. Jumbo Frames must be configured at both the cluster vDS level and at each ESXi host level.

1. From the vCenter GUI click Networking on the Home page.
2. Select the vDS iSCSI and click Edit Settings.
3. Under the Properties tab click Advanced.
4. Change the Maximum MTU to 9000 and click OK.


Now that Jumbo Frames have been enabled on the vDS, each ESXi host has to be configured.

1. From the vCenter GUI select an ESXi host and click the Configuration tab. Click Networking under the Hardware pane. Change the View to vSphere Distributed Switch.
2. On the Distributed Switch: iSCSI click Manage Virtual Adapters.
3. Select the first vmk# VMkernel port; on the right side of the pane you will see the MTU value under the NIC Settings.
4. Click Edit.
5. Under the NIC Settings change the MTU to 9000 and click OK.
6. Do this for all of the vmk# VMkernel ports on all of the hosts in order to enable Jumbo Frames across the environment.
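A quick end-to-end Jumbo Frames check can be run from any host's ESXi shell (the group IP 10.10.5.10 is an example):

    # Confirm the new MTU took effect on each VMkernel port
    esxcli network ip interface list

    # Send a 9000-byte frame (8972 bytes of ICMP payload plus 28 bytes
    # of ICMP/IP headers) with fragmentation disallowed; a reply proves
    # the whole path to the group supports Jumbo Frames
    vmkping -d -s 8972 10.10.5.10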

SECTION 2: CONFIGURE VMWARE ISCSI SOFTWARE INITIATOR
Now that the virtual switches are configured and the VMkernel ports are bound to physical NICs in a 1:1 fashion, the next thing to configure is the iSCSI initiator. This section details the installation and configuration of the VMware iSCSI Software Initiator. These steps are done on each ESXi host that needs connectivity to the SAN.

Step 1: Install iSCSI Software Initiator

VMware ESXi 5 does not come with the iSCSI Software Initiator added by default.

1. From the vCenter GUI select the ESXi host and click the Configuration tab. In the Hardware pane click Storage Adapters.
2. In the upper right hand corner click Add. Select Add Software iSCSI Adapter and click OK.
3. Click OK on the dialog box to add the iSCSI Adapter.
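As a sketch of the shell equivalent (the vmhba name is assigned by the host; vmhba33 used throughout these examples is a placeholder):

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Find the vmhba name assigned to the new software adapter
    esxcli iscsi adapter list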


Configure CHAP (Optional)

CHAP authentication for access control lists can be very beneficial. In fact, for larger cluster environments, CHAP is often the preferred method of volume access authentication from an ease-of-administration point of view.

1. Click the newly installed iSCSI Software Adapter. Click Properties.
2. Under the General tab you can configure CHAP if your PS Series SAN is configured for volume access through CHAP. To do this, click the CHAP button, enter the appropriate information, and click OK.
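If CHAP is in use, it can also be set from the shell. A sketch only; the adapter name, account name, and secret below are examples, and the available options should be confirmed against the command's help on your build.

    # Require unidirectional CHAP on the software adapter
    esxcli iscsi adapter auth chap set --adapter=vmhba33 --authname=vmchap --secret=ExampleSecret1 --direction=uni --level=required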

Step 2: Binding VMkernel Ports to iSCSI Software Initiator

The next step is to bind each of the iSCSI# VMkernel ports to the iSCSI Software Adapter. In previous ESX 4.x versions this could only be done via CLI commands, but in ESXi 5 it is configured through the vCenter GUI. This tells the iSCSI Software Adapter which VMkernel ports to use for connectivity to the SAN.

1. Click the Network Configuration tab in the iSCSI Initiator.
2. Click Add.


3. You will see the iSCSI# VMkernel port groups along with the VMkernel adapter and the physical network card each is assigned to. Note that the Storage Heartbeat is not listed here: it cannot be assigned to the iSCSI adapter because it is not bound in a 1:1 fashion. If adapters that should appear here are missing, verify that each of the iSCSI# VMkernel ports was bound 1:1 to a physical network adapter.
4. Select one of the iSCSI# VMkernel ports and add it by clicking OK.

5. Continue adding all of the available iSCSI# VMkernel ports. When all of the iSCSI# VMkernel Ports are added to the iSCSI Software Adapter you will see each of the Port Group Policies show up as Compliant if they are correctly configured. You can also see which physical NIC each one is assigned to. Path status will show Not Used until volumes are actually attached.


6. When all of the iSCSI# port groups are assigned to the software iSCSI adapter, click Close.
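In ESXi 5 the same binding can be scripted from the shell; a sketch using the example vmk and vmhba names from earlier:

    # Bind each iSCSI VMkernel port to the software adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3

    # Verify the bindings; vmk1 (Storage Heartbeat) is intentionally absent
    esxcli iscsi networkportal list --adapter=vmhba33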

SECTION 3: CONNECT TO THE DELL EQUALLOGIC PS SERIES SAN
Now that the advanced configuration of the vSphere iSCSI Software Initiator is complete, the next stage is to connect to the Dell EqualLogic PS Series SAN and to the volumes it contains. More information on complete administration of the Dell PS Series SAN can be found in the PS Series Group Administration Guide. In this example we will attach the iSCSI Software Initiator to the SAN and to a single volume.

Step 1: Configure Dynamic Discovery of PS Series SAN

The first step is to add the PS Series Group IP Address to the dynamic discovery settings of the ESXi host's iSCSI Software Initiator. This enables rescans to find new volumes to which the ESXi host has access rights.


1. From the vCenter GUI select an ESXi host. Click the Configuration tab and select Storage Adapters under the Hardware pane.
2. Click the iSCSI Software Adapter and click Properties.
3. Click the Dynamic Discovery tab.
4. Click Add. In the Add Send Target Server box type in the Group IP Address of the PS Series SAN and click OK.

5. Click Close. You will be prompted to rescan the host bus adapter. If there are no volumes configured on the PS Series array for this ESXi host, click No; otherwise click Yes to rescan for new volumes.
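The discovery address and rescan can also be handled from the shell; a sketch in which the group IP and vmhba name are examples:

    # Add the PS Series group IP as a Send Targets discovery address
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.5.10

    # Rescan the adapter so newly permitted volumes appear
    esxcli storage core adapter rescan --adapter=vmhba33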

Step 2: Create and Configure Volume

The next step is to create a new volume and assign it to the ESXi host. This can be done in multiple ways, so refer to the Group Administration Guide for more information. In this example we will create a 500GB volume and assign it to this ESXi host via the iqn name. If CHAP was previously configured, you could also use CHAP in the Access Control List (ACL).

1. From the Dell EqualLogic PS Series Group Manager GUI click the Volumes button, then Create Volume. Create a new volume, in this example ESXVOLDEMO, and click Next.

2. Set the volume size, select the options, and click Next.


3. Under iSCSI Access you can choose to use CHAP, IP Address, Initiator Name, or any combination of the three. Keep in mind that as a vSphere environment grows, being able to scale the number of connections to each volume is important.
4. If using IP Address, only the IP addresses of the iSCSI# VMkernel ports need to be added. For initial creation just use one IP address, then add the additional IP addresses to the volume via the Access tab.
5. To find the iSCSI Initiator Name, from the vCenter GUI click the ESXi host, click the Configuration tab, and select Storage Adapters under the Hardware pane. The iqn can be copied and pasted into the Group Manager interface for the Initiator Name.

6. There is a checkbox option for "Allow simultaneous connections from initiators with different IQN names". This option is necessary to enable all of the advanced vSphere capabilities that rely on shared storage. It must be checked, and the iqns of additional ESXi hosts added to the Access tab, when configuring access for your remaining ESXi hosts.


7. Click Next to continue the volume creation.
8. Review the volume creation information on the next screen and click Finish.
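The same volume can also be created from the PS Series group CLI over SSH. This is a rough sketch only, since exact syntax varies by firmware (see the PS Series CLI Reference); the volume name, size, and iqn are examples.

    # On the group (not the ESXi host): create the volume, grant the
    # host's iqn access, and allow multiple initiators (this mirrors
    # the checkbox in step 6 above)
    volume create ESXVOLDEMO 500GB
    volume select ESXVOLDEMO access create initiator iqn.1998-01.com.vmware:esxhost1-12345678
    volume select ESXVOLDEMO multihost-access enable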

Step 3: Connect to a Volume on PS Series SAN

The next step is to connect to the volume on the SAN and verify the connection status. Since iSCSI access was configured in the last step, the only thing left to do is rescan the adapters and make sure the volume appears correctly.

1. In the vCenter GUI select the ESXi host and click the Configuration tab. Select Storage Adapters under the Hardware pane and click the iSCSI Software Adapter.
2. Right click the iSCSI Software Adapter and select Rescan. When this is done, if everything has been configured properly, a new EQLOGIC iSCSI Disk with the correct size will appear under Devices.

Step 4: Enabling VMware Native Multipathing - Round Robin

One of the advanced features enabled by configuring the iSCSI Software Initiator this way is VMware's native MPIO with Round Robin. Combined with the intelligent fan-out design of the PS Series group, this allows for greater and better bandwidth utilization. To configure Round Robin multipathing on a volume, right click the volume and click Properties. Click the Manage Paths button. This displays the path information with a default of Fixed Path. To enable Round Robin, select the drop-down next to Path Selection and choose Round Robin (VMware). This reconfigures the volume to use a load balancing policy across all available paths.

NOTE: This needs to be done for every existing and new volume that you want the Round Robin policy to apply to.
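For many volumes, setting the policy one at a time in the GUI becomes tedious; the shell offers the same controls. A sketch, where the naa identifier is a hypothetical placeholder for a real device ID:

    # Set Round Robin on one existing EqualLogic device
    esxcli storage nmp device set --device=naa.6090a0xxxxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

    # Optionally make Round Robin the default for the SATP that claims
    # EqualLogic devices (run 'esxcli storage nmp device list' first to
    # confirm which SATP is actually in use on your build)
    esxcli storage nmp satp set --satp=VMW_SATP_EQL --default-psp=VMW_PSP_RR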


To verify that all of the configuration settings were made correctly, in the PS Series Group Manager GUI select the volume and click the Connections tab. You will see the iSCSI# VMkernel port IP addresses for each ESXi host that has connectivity to the volume.

Step 5: Create VMFS Datastores and Connect More Volumes

Each existing volume can be modified to allow multiple ESXi hosts to attach to it by adding the Initiator Name (iqn) in the Access tab inside the PS Series Group Manager GUI. See the PS Series Group Administration Guide for more information on adding more access control connections to a volume. In order for ESXi to leverage the new volume for virtual machines, it needs to be formatted with VMFS. For a more detailed explanation of VMFS see the VMware Administrator's Guide; the following is a summary of the steps taken.

1. From the Configuration tab click Storage.
2. Click Add Storage.
3. Select Disk/LUN and click Next.
4. Select the newly scanned volume and click Next. The entire volume name can be seen by expanding the Path ID.
5. Choose the File System Version (VMFS-5 or VMFS-3) and click Next.
6. Review the information and click Next.


7. Give the Datastore a name. It is recommended to name the Datastore the same as the PS Series volume name, in this case ESXVOLDEMO. Click Next.
8. Select the capacity (Maximum Available Space) and click Next.
9. Verify the entire configuration and click Finish. This will format the volume and make it available for the cluster to install VMs on.
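Once the wizard finishes, the new datastore can be confirmed from the ESXi shell:

    # The VMFS volume should be listed as mounted with its full capacity
    esxcli storage filesystem list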


FAQ
Q: Can I use Host Profiles to configure iSCSI for new ESXi hosts?

A: Unlike ESX 4, Host Profiles in ESXi 5 allow the iSCSI configuration to be applied from an existing profile. The only change the administrator needs to make is to re-verify Compliance in the Network Configuration tab of the iSCSI Software Adapter, as the Storage Heartbeat will be incorrectly added. Remove the Storage Heartbeat and add the appropriate iSCSI# VMkernel ports.

Q: What is the maximum size of a VMFS Datastore?

A: VMware ESXi 5 supports a VMFS Datastore size of 64TB. As of the writing of this document, the largest volume that can be created on the Dell EqualLogic PS Series SAN is 15TB. Always see the readme for the PS Series firmware to determine the maximums for each firmware version.

SUMMARY
This Technical Report is intended to guide vSphere 5 administrators through the proper configuration of the VMware iSCSI Software Initiator and its connection to the Dell EqualLogic PS Series SAN. With all of the advanced vSphere features that rely on shared storage, it is important to follow these steps to enable them in the vSphere environment. Always consult the VMware iSCSI SAN Configuration Guide for the latest full documentation on configuring vSphere environments.


TECHNICAL SUPPORT AND CUSTOMER SERVICE
Dell support service is available to answer your questions about PS Series SAN arrays.

Contacting Dell
1. If you have an Express Service Code, have it ready. The code helps the Dell automated support telephone system direct your call more efficiently.
2. If you are a customer in the United States or Canada in need of technical support, call 1-800-945-3355. If not, go to Step 3.
3. Visit support.equallogic.com.
4. Log in, or click "Create Account" to request a new support account.
5. At the top right, click "Contact Us," and call the phone number or select the link for the type of support you need.

