Configure: Storage arrays for VPLEX

 

 


Contents

Configuring Arrays for Use with VPLEX
Before you begin
Discovering arrays
For metadata volumes
For logging volumes
Initiator settings on back-end arrays
EMC Symmetrix
Procedure to enable OS2007 (Required for operation on 5.2 and later)
Notes on thin provisioning support in GeoSynchrony 4.x
EMC CLARiiON
Notes on thin provisioning support in GeoSynchrony 4.x
HP 3PAR V/T/S/F/Pxxx storage arrays
HPXP 24000/20000/12000/10000/1000/512/128/48
HP P6300/P6500
HDS-VSP/HP P9500
HDS AMS 25xx
Hitachi USP V series
Sun/HDS 99xx
IBM DS4700
IBM DS4800/DS5100/DS5300
IBM DS5020
IBM v7000
IBM DS8xxx
IBM SVC
IBM XIV
Fujitsu ETERNUS
HP EVA 4/6/8000, 4/6/8100 and 4/6/8400
NetApp FAS/V 3xxx/6xxx or IBM N6xxx/N7xxx Series arrays
Special notes for working with NetApp arrays
Creating a name mapping (or hints) file for VPLEX for third-party arrays
Registering VPLEX initiators with CLARiiON VNX arrays
Creating a name mapping (or hints) file for VPLEX for third-party arrays

 


 

Configuring Arrays for Use with VPLEX

The procedures in this document describe the steps required to configure an array for use with VPLEX.

Before you begin

Consider the following conditions before configuring your arrays.

Discovering arrays

Note:  In releases before GeoSynchrony Release 5.1 Patch 2, when allocating LUNs to a VPLEX from a storage array that is already being actively used by the VPLEX, no more than 10 LUNs should be allocated at a time.

After a set of no more than 10 LUNs have been allocated (before Release 5.1 Patch 2), check the VPLEX to confirm that all 10 have been discovered before the next set is allocated. Attempting to allocate more than 10 LUNs at one time, or in rapid succession, can cause VPLEX to treat the array as if it were faulted. This precaution does not need to be followed when the array is initially introduced to the VPLEX, before it is an active target of I/O.
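
For example, after allocating a batch of LUNs you can confirm from the VPlexcli that all of them have been discovered before allocating the next batch. This is a minimal sketch using the same VPlexcli commands shown in the array procedures later in this document; cluster-1 and array_name are placeholders for your cluster and array:

cd /clusters/cluster-1/storage-elements/storage-arrays

array re-discover array_name

ll array_name/logical-units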

 

Note:  VPLEX supports only block-based storage devices that use 512-byte sectors for allocation and addressing; ensure that any storage array connected to VPLEX supports or emulates 512-byte sectors. A storage device that does not use 512-byte sectors can be discovered by VPLEX, but it cannot be claimed for use within VPLEX and cannot be used to create a meta-volume. If you try to use a discovered storage volume with an unsupported block size within VPLEX (either by claiming it or by creating a meta-volume with the appropriate VPLEX CLI commands), the command fails with this error: the disk has an unsupported disk block size and thus can't be moved to a non-default spare pool.
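
If you want to confirm a device's sector size before allocating it to VPLEX, you can check it from a Linux host to which the device is mapped. This is a generic Linux check (not a VPLEX command), shown here as a sketch; /dev/sdbg is a placeholder device name, and a result of 512 indicates a supported logical sector size:

blockdev --getss /dev/sdbg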

For metadata volumes

For all array types, and for VPLEX Local, VPLEX Metro, and VPLEX Geo configurations, volumes that will be used as metadata volumes must meet the requirements specified in the EMC VPLEX Configuration Guide. Those volumes must be clean (have zeros written to them) before they can be used.

An example of how to erase all data from a given disk:

                   

  1. [   ]    Expose the disk that will be used for the metadata volume to a Linux host.

  2. [   ]    Write zeros to the disk using the following command:

WARNING:    This command will erase all data on the disk.

dd if=/dev/zero of=device_name conv=notrunc

Example:

dd if=/dev/zero of=/dev/sdbg conv=notrunc
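
Writing zeros 512 bytes at a time can take a long time on a large volume. As a sketch, assuming GNU dd, a larger block size speeds up the operation; the device name below is a placeholder, and the same approach applies to the logging-volume procedure that follows:

dd if=/dev/zero of=/dev/sdbg bs=1M conv=notrunc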

 

For logging volumes

Volumes that will be used as logging volumes must meet the requirements specified in the EMC VPLEX Configuration Guide. Those volumes must be clean (have zeros written to them) before they can be used.

An example of how to erase all data from a given disk:

                   

  1. [   ]    Expose the disk that will be used for the logging volume to a Linux host.

  2. [   ]    Write zeros to the disk using the following command:

WARNING:    This command will erase all data on the disk.

dd if=/dev/zero of=device_name conv=notrunc

Example:

dd if=/dev/zero of=/dev/sdbg conv=notrunc

 

Initiator settings on back-end arrays

The EMC Simple Support Matrix on the EMC® Support website lists the storage arrays that have been qualified for use with VPLEX.

The following table identifies the initiator settings for these arrays when configuring for use with VPLEX.

Storage array family | Model | Vendor | Product ID | Initiator settings
EMC Symmetrix® |  | EMC | SYMMETRIX | See “EMC Symmetrix” on page 7
EMC CLARiiON® |  | EMC | CLARIION | See “EMC CLARiiON” on page 9
HDS USP/HPXP |  | Hitachi | OPEN | Default (Standard)
HDS VSP/HP P9500 |  | Hitachi | OPEN | Default (Standard)
Hitachi 9900 series (Lightning) | HDS 9910, HDS 9960, HDS 9970, HDS 9980 | Hitachi | OPEN | Default (Standard)
Hitachi USP series (TagmaStore) | HDS TagmaStore NSC55, USP100, USP600, USP1100 | Hitachi | OPEN | Default (Standard)
Hitachi USP VM series | HDS USP VM | Hitachi | OPEN | Default (Standard)
Hitachi AMS 2xxx series | HDS AMS 2100, HDS AMS 2300, HDS AMS 2500 | Hitachi | DF600F | Windows
Sun/HDS 99xx series |  | Hitachi | OPEN | Default (Standard)
IBM DS4700 | IBM DS4700 | IBM | OPEN-V | Linux
IBM DS8000 series | IBM DS8100, IBM DS8300 | IBM | 2107900 | Windows 2000/2003
IBM SVC | SVC | IBM | 2145 | Generic
IBM XIV | XIV | IBM | 2810XIV | Default (Standard)
3PAR | 3PAR | 3PARdata | VV | Generic or Generic ALUA (if applicable)
Fujitsu DX8x00, ETERNUS 8000 M1200/M2200 | ETERNUS 8000; ETERNUS DX8000 | Fujitsu | E8000; ETERNUS_DX800 | Linux
HP EVA 4/6/8000, 4/6/8100 and 4/6/8400 | HP EVA 4000 AA (HSV101), 4100 AA (HSV200), 4400 AA (HSV300), 6000 AA (HSV200), 6100 AA (HSV200), 6400 AA (HSV400), 8000 AA (HSV210), 8100 AA (HSV210), 8400 AA (HSV450) | HP or COMPAQ | Listed with each model | Linux
HP StorageWorks XP 48/128/512/1000/10000/12000/20000/24000 | HP XP48, XP512, XP128, XP1024, XP10000, XP12000, XP20000, XP24000 | HP or COMPAQ | OPEN | Default (Standard)
NetApp FAS/V 3xxx/6xxx series |  | NETAPP | LUN | Linux

 

The following sections describe the steps to configure the arrays for use with VPLEX.

EMC Symmetrix

For Symmetrix-to-VPLEX connections, configure the Symmetrix Fibre Channel directors (FAs) as shown in Table 1.

Table 1       Required Symmetrix FA bit settings for connection to VPLEX

Set *

·    SPC-2 Compliance (SPC2)
·    SCSI-3 Compliance (SC3)
·    Enable Point-to-Point (PP)
·    Unique Worldwide Name (UWN)
·    Common Serial Number (C)
·    For Release 5.2 and later: OS-2007 (OS compliance)

Do not set

·    Disable Queue Reset on Unit Attention (D)
·    AS/400 Ports Only (AS4)
·    Avoid Reset Broadcast (ARB)
·    Environment Reports to Host (E)
·    Soft Reset (S)
·    Open VMS (OVMS)
·    Return Busy (B)
·    Enable Sunapee (SCL)
·    Sequent Bit (SEQ)
·    Non Participant (N)
·    For releases before Release 5.2: OS-2007 (OS compliance)

Optional

·    Linkspeed
·    Enable Auto-Negotiation (EAN)
·    VCM/ACLX **

 


*  For the Symmetrix 8000 series, only the PP, UWN, and C bits must be set.

**  Must be set if VPLEX is sharing Symmetrix directors with hosts that require conflicting bit settings. For any other configuration, the VCM/ACLX bit can be either set or not set.

Note:  The EMC Host Connectivity Guides on the EMC Support website provide more information on Symmetrix connectivity to VPLEX.

Procedure to enable OS2007 (Required for operation on 5.2 and later)

Beginning with VPLEX GeoSynchrony Release 5.2, the OS2007 bit must be enabled on the Symmetrix/VMAX FAs that are connected to VPLEX back-end ports. Enabling this bit allows VPLEX to detect (in the presence of host I/O) configuration changes in the array storage view and to react by automatically re-discovering the back-end storage view and detecting LUN re-mapping issues.

                   

  1. [   ]    Ensure that the VPLEX connected to the Symmetrix/VMAX is on GeoSynchrony Release 5.2 or higher.

  2. [   ]    As recommended for VPLEX, ensure that SPC-2 is set on the ports/storage group that has the VPLEX back-end initiators attached/referenced.

  3. [   ]    Follow Symmetrix/VMAX documentation to set the OS2007 bit on the FA. If the FA is also connected (masked) with initiator ports other than VPLEX, ensure that those initiators do not get impacted by this configuration change.

  4. [   ]    Set the OS2007 flag on a Symmetrix target port using symconfigure:

a.   Preview the command for enabling the OS2007 bit on Symmetrix director FA port 10e:0:

symconfigure -sid SymmID -cmd "set port 10e:0 SCSI_Support1=ENABLE;" preview

 

b.   Commit the command for enabling the OS2007 bit on Symmetrix director FA port 10e:0:

symconfigure -sid SymmID -cmd "set port 10e:0 SCSI_Support1=ENABLE;" commit

  5. [   ]    If the OS2007 flag cannot be set on the Symmetrix target port (for example, if the port is shared between VPLEX and non-VPLEX initiators), the following symaccess command can be used to set it per initiator (see the illustrative example after this procedure):

symaccess -sid SymmID -wwn wwn | -iscsi iscsi

 

 set hba_flags [on flag,flag,flag... [-enable |-disable] |

   off [flag,flag,flag...]]

 

   list logins [-dirport Dir:Port] [-v]

...

   flag             Specify the overridden HBA port flags or

                    initiator group port flags from the

                    following values in []:

 

                    Supported HBA port flags:

 

                    - Common_Serial_Number     [C]

                    - Disable_Q_Reset_on_UA    [D]

                    - Environ_Set              [E]

                    - Avoid_Reset_Broadcast    [ARB]

                    - AS400                    [AS4]

                    - OpenVMS                  [OVMS]

                    - SCSI_3                   [SC3]

                    - SPC2_Protocol_Version    [SPC2]

                    - SCSI_Support1            [OS2007]

 

                    Supported initiator group port flags:

 

                    - Volume_Set_Addressing    [V]

 

  6. [   ]    If there are multiple FA ports where the OS2007 bit needs to be enabled, they can be done sequentially.

  7. [   ]    Ensure that the OS2007 bit is enabled on all FA ports connected to VPLEX on the Symmetrix/VMAX array.

This procedure is non-disruptive to Host I/O to VPLEX and requires no specific steps on VPLEX.
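
As an illustration of step 5. [   ] above, and following the symaccess syntax excerpt shown there, commands of the following form could be used to enable the OS2007 flag for a single VPLEX back-end initiator and to list the initiators logged in to an FA port. The Symmetrix ID, WWN, and director port are placeholders; verify the exact syntax against your Solutions Enabler documentation before running the commands:

symaccess -sid 1234 -wwn 5000144260037300 set hba_flags on OS2007 -enable

symaccess -sid 1234 list logins -dirport 10e:0 -v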

Notes on thin provisioning support in GeoSynchrony 4.x

·    VPLEX tolerates thinly provisioned devices. However, VPLEX copy and mobility operations (such as migrations and mirrors) do not preserve thinness. The target device is converted into a fully allocated device. After a copy/mobility operation is complete, use zero block reclaim or a similar array-specific utility to make the target device thin again.

·    System volumes such as metadata and logging volumes are supported on thin devices. However, all extents should be pre-allocated, to prevent out-of-space conditions.

·    Oversubscribed thin devices are not supported as system devices.

Note:  Refer to Symmetrix best practices documentation for more information on thin provisioning.

EMC CLARiiON

Set the following for CLARiiON-to-VPLEX attachment:

Note:  On CLARiiON VNX, you can do this when registering VPLEX initiators on the Host > Connectivity Status screen. Refer to Registering VPLEX initiators with CLARiiON VNX arrays on page 92 for more information on registering VPLEX initiators.

·    Initiator type = CLARiiON Open

·    Failover Mode =  4 for ALUA mode, 1 for non-ALUA

·    (Active-passive array only) Auto-switch = True

Note:  It is recommended that you add no more than 40 LUNs at one time.

To add LUNs to a VNX storage group, follow these steps:

                   

  1. [   ]    Click Storage in the Unisphere GUI.

  2. [   ]    Click LUNs.

  3. [   ]    Select the LUNs to add to a storage group.

  4. [   ]    Click Add to Storage Group.

Note:  The EMC Host Connectivity Guides on EMC Support Online provide more information on CLARiiON connectivity to VPLEX.

Additional requirements for CLARiiON VNX:

·    OE for Block V31: Only block-based CLARiiON arrays with Flare R31 are supported. Filesystem-based mode is not supported.

·    You must activate any SAN Copy LUNs configured on CLARiiON VNX before exporting them to VPLEX.

·    When claiming CLARiiON LUNs through VPLEX, use the naviseccli command getlun -uid -name to create a device mapping file (see the sketch after this list).

Note:  The naviseccli command must be run against the CLARiiON.

Example: naviseccli -h 192.168.47.27 getlun -uid -name > Clar0400.txt
The file name determines the array name; in this example, storage volumes from the CLARiiON get the Clar0400_ prefix.


·    Array interoperability restrictions for VNX2 (Rockies):

        

a.   VPLEX supports both failover modes of NON-ALUA and ALUA.

b.   Failover mode changes from NON-ALUA to ALUA mode are NOT supported.

c.   Failover mode changes from ALUA to NON-ALUA are supported.

d.   When VNX (Rockies) is connected to VPLEX for the first time, select the failover mode BEFORE provisioning LUs and DO NOT change it.
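
As a sketch of the claiming step referenced in the naviseccli bullet above (mirroring the claiming wizard usage shown in the Fujitsu ETERNUS section later in this document), the device mapping file can be supplied to the claiming wizard from the VPlexcli; the file path and cluster name are placeholders:

cd /clusters/cluster-1/storage-elements/storage-volumes

claimingwizard -f /tmp/Clar0400.txt -c cluster-1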

Notes on thin provisioning support in GeoSynchrony 4.x

·    VPLEX tolerates thinly provisioned devices. However, VPLEX copy and mobility operations (such as migrations and mirrors) do not preserve thinness. The target device is converted into a fully allocated device. After a copy/mobility operation is complete, use zero block reclaim or a similar array-specific utility to make the target device thin again.

·    System volumes such as metadata and logging volumes are supported on thin devices. However, all extents should be pre-allocated, to prevent out-of-space conditions.

·    Oversubscribed thin devices are not supported as system devices.

Note:  Refer to CLARiiON best practices documentation for more information on thin provisioning.

HP 3PAR V/T/S/F/Pxxx storage arrays

Starting in GeoSynchrony Release 5.2, HP 3PAR storage arrays can be presented to VPLEX as either Active/Active or ALUA, depending on the host persona setting.

To provision HP 3PAR LUNs to VPLEX:

                   

  1. [   ]    Zone the HP 3PAR storage array to the VPLEX back-end ports. Follow the recommendations in the Implementation and Planning Best Practices for EMC VPLEX Technical Notes.

Note on zoning: To prevent data unavailability, ensure that each host in a storage view has paths to at least two directors in a cluster, and that multi-pathing is configured in such a way that the paths are distributed evenly between directors A and B in each engine.

Note:   When exporting storage LUs from 3PAR to VPLEX, never use logical unit number 0xfe (254 in the 3PAR GUI as listed in decimal notation). Leave this logical unit number unused so that the SES LU can keep that logical unit number uncontested.

Note:   Avoid using the Auto checkbox for selecting the logical unit number (LUN) in the 3PAR export dialog. This checkbox is checked by default, and it chooses the lowest available logical unit numbers, which can include 0xfe/254 for one of the storage LUs. Instead, manually choose logical unit numbers that do not include 0xfe.

  2. [   ]    To log in to the 3PAR InForm Management Console, click 3PAR Management Console, and then enter the IP address, username, and password.

  3. [   ]    To create a common provisioning group (CPG), right-click CPG and select Create common provisioning group.

Figure 1       

  4. [   ]    Follow the CPG wizard to create the common provisioning group with a CPG name, device, and RAID type.

Figure 2       

  5. [   ]    To create virtual volumes, in the left panel, right-click Virtual Volumes and select Create virtual volumes.

Figure 3       

  6. [   ]    In the Create virtual volumes wizard, fill in the information for LUN creation such as Name, Size, Provisioning, CPG, and Count.

Figure 4       

  7. [   ]    To create a host, in the left panel, right click on Hosts, select Create host, and select a host.

Figure 5       

  8. [   ]    To verify that the virtual volumes are exported, in the Storage Systems screen, select Export.

Figure 6       

  9. [   ]    Click on VLUNs to map volumes to servers.

10. [   ]    In the Export Virtual Volume wizard, select the volumes and hosts and click Next

Note:  The Auto box is not selected. This is the manual way to provision LUNs.

Figure 7       

11. [   ]    Click Exported under Virtual Volumes to verify that the LUNs are mapped to the host.

Figure 8       

12. [   ]    To verify that the host is connected to the devices, right click Ports listed under array Storage Systems in the left panel.

Figure 9       

13. [   ]    If you need to change the persona of the host, use the Create hosts wizard.

·    1 (Generic): The 3PAR array presents as an Active/Active array on VPLEX.

·    2 (Generic-ALUA): The 3PAR array presents as an implicit ALUA array on VPLEX. All the back-end paths to each LUN will be Active (AAO) paths only.

Figure 10    

14. [   ]    In the Create Hosts wizard, select an initiator and a port for the host.

Figure 11    

15. [   ]    In the left panel, right-click Virtual Volume set and select Create virtual volume set.

Figure 12    

16. [   ]    List the virtual volumes exported to the host. Select the Virtual Volumes to provision.

Figure 13    

17. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

array re-discover array_name

ll /clusters/cluster-Cluster_ID/storage-elements/storage-arrays/3PAR-Array-name/logical-units

HPXP 24000/20000/12000/10000/1000/512/128/48

Note:  HP XP 24000/20000/12000/10000/1000/512/128/48 arrays must have a LUN0 exported to VPLEX. If no LUN0 exists on the array, the report lun command fails with error code 5/25/0 -- Logical unit not supported.

To provision HDS/HPXP LUNs for VPLEX:

                   

  1. [   ]    Log in to the HDS/XP Remote Web Console.

  2. [   ]    From the menu bar, select Go > LUN Manager  >  LU Path and Security.

  3. [   ]    In the LU Path list, select a port and  ensure that LUN Security is enabled:

(LUN Security:Enable ) Target Standard

 

  4. [   ]    Perform LUN masking on the port to which LUNs will be exposed.

 

  5. [   ]    If you need to change the port’s host group to Standard:

        

a.   Click the pen icon on the toolbar, and select Change Host Group from the drop-down menu.

b.   Change the host group’s Host Mode to 00(Standard) and click Apply.

  6. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

  7. [   ]    At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes/

 

HP P6300/P6500

HP P6300/P6500 supports implicit-explicit ALUA mode.

To provision HP P6300/P6500 LUNs to VPLEX:

                   

  1. [   ]    Zone the HP P6300/P6500 storage to the VPLEX back-end ports. Follow the recommendations in the Implementation and Planning Best Practices for EMC VPLEX Technical Notes.

 

  2. [   ]    Log into HP P6000 Command View GUI by logging into the array management server, opening Internet Explorer to https://localhost:2374/SPoG/, and entering credentials.

  3. [   ]    If necessary, create a Disk Group:

        

a.   On the GUI, navigate to Storage Network > storage_system_name > Disk Groups

b.   Select Create Disk Group.

c.   Enter the name and number of disks.

d.   Modify advanced settings as desired.

e.   Click Create Disk Group.

Figure 14    

  4. [   ]    Create a Host:

        

a.   On the GUI, navigate to Storage Network > storage_system_name > Hosts

b.   Select Add Host.

c.   Enter a name.

d.   Select Fibre channel for the type.

e.   Select port WWNs (please select VPLEX BE port WWNs) in the pull-down menu (or enter them manually if they do not appear).

f.    Select Linux for the operating system.

g.   Modify advanced settings as desired.

h.   Click Add Host.

 

Figure 15    

  5. [   ]    Create Virtual Disk(s):

        

a.   In the GUI, navigate to Storage Network > storage_system_name > Virtual Disks.

b.   Click Create Vdisks. 

c.   Provide a quantity, provide a name, provide a size, select a redundancy, and select a disk group. 

d.   Modify advanced settings as desired.

e.   Click Create Vdisks.

Figure 16    

  6. [   ]    Verify the Vdisk Properties of newly created virtual disks.

Figure 17    

  7. [   ]    At this point, the newly created virtual disks are not presented to a host yet.

Figure 18    

  8. [   ]    Present Virtual Disks to the Host:

        

a.   In the GUI, navigate to Storage Network > storage_system_name > Hosts > host_name.

b.   Switch to Presentation tab

c.   Select Present.

Figure 19    

 

  9. [   ]    Select virtual disk(s) to present and select Confirm Selections.

10. [   ]    Select LUN IDs if desired and select Present Vdisk.

Figure 20    

11. [   ]    Verify that the virtual disks are presented to the host under the presentation tab.

Figure 21    

12. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

13. [   ]    At the VPLEXcli prompt, type the following command to display the new LUNs: 

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes/ 

 

HDS-VSP/HP P9500 

Note:  Hitachi arrays must have a LUN0 exported to VPLEX. If no LUN0 exists on the array, the report lun command fails with error code 5/25/0 – Logical unit not supported.

To provision HDS-VSP / HP P9500 LUNs for VPLEX:

                   

  1. [   ]    Log into the HDS-VSP / HP P9500 Remote Web Console.

  2. [   ]    From the menu bar, select target ports and enable Port security on the target ports:

        

a.   From the menu bar, click Ports/Host Groups > Ports and select target VSP ports.

b.   Click Edit Ports and set Port Security : Enable.

  3. [   ]    Create a host group and add VPLEX ports and target VSP ports to it. You can also use an existing group, as described below.

·    To create a new host group:

a.    From the menu bar, click Ports/Host Groups, and then click the Host Groups tab.

b.   Click Create Host Groups.

c.   Select VSP target ports and VPLEX WWNs, and then click Add.

·    To use an existing Host Group:

        

a.   From the menu bar, click Ports/Host Groups, and then click the Host Groups tab.

b.   Select VSP target ports with a default Host Group.

c.   Add VPLEX ports to the Host Group by selecting VPLEX WWN, then clicking Add.

  4. [   ]    Select LUNs, add them into the host group, and then map them to the VSP target ports:

        

a.   From the menu bar, click Ports/Host Groups > Host Groups.

b.   Select the host group for VPLEX (the group created in Step 3. [   ]), and then click Add LUN Paths.

c.   Select LUNs, and then click the Add button to add LUNs to the host group.

  5. [   ]    Ensure that the Host Mode flag is set to the default setting, “00 (standard)”, on the VSP target ports.

        

a.   From the menu bar, click Ports/Host Groups > Host Groups.

b.   Select the host group for VPLEX, and then click More Actions > Edit Host Groups. Ensure that the Host Mode flag is set to “00 (standard)” for VPLEX.

  6. [   ]    On the VPLEX management server, log into the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

array re-discover  array_name

 

  7. [   ]    At the VPLEXcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes/

 

HDS AMS 25xx

Note:  Hitachi arrays must have a LUN0 exported to VPLEX. If no LUN0 exists on the array, the report lun command fails with error code 5/25/0 – Logical unit not supported.

To provision HDS AMS 25xx LUNs for VPLEX:

                   

  1. [   ]    Set each Fibre Channel port that connects to VPLEX to Point-to-Point.

  2. [   ]    Create a RAID Group, and add LUNs.

  3. [   ]    Create a Host Group.

  4. [   ]    On the Host Group Options tab, set the Platform to Windows. Leave all other settings at the default values.

  5. [   ]    Set a name for the Host Group.

  6. [   ]    Add LUNs to the Host Group.

  7. [   ]    On the Logical Units tab, select the LUNs under Available Logical Units, and click Add.

  8. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

  9. [   ]    At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes

 

Hitachi USP V series

Note:  Hitachi arrays must have a LUN0 exported to VPLEX. If no LUN0 exists on the array, the report lun command fails with error code 5/25/0 – Logical unit not supported.

To provision Hitachi USP V series LUNs for VPLEX:

                   

  1. [   ]    Connect the array controller ports to the SAN (Fabric A and B).

  2. [   ]    Zone the ports to VPLEX backend WWN.

  3. [   ]    Create a HDP pool on the array:

Note:  Users with the Modify permission can create and delete HDP pools, change threshold values and expand capacity.

        

a.   In the Explorer menu, select Resources > All Storage or My Storage.

b.   Expand the All Storage (or My Storage) object tree in the navigation area, and then select the Universal Storage Platform V/VM to be manipulated, the Pools group, and then Dynamic Provisioning. The Dynamic Provisioning window appears in the application area.

c.   In the Dynamic Provisioning window, click Create DP Pool. The Create DP Pool dialog box appears.

d.   In the Create DP Pool dialog box, select the HDP pool creation procedure checkbox, and then click Next. The Create DP Pool dialog box appears.

e.   In the Create DP Pool dialog box, select a pool ID for the new HDP pool you are creating.

Note:   Pool IDs 0 through 127 display. The radio buttons for the pool IDs that are already assigned are disabled, and for each assigned pool ID, the name of the software product that uses that pool ID displays in the Type column. By default, the smallest unassigned pool ID is selected. You can create a maximum of 128 HDP pools.

f.    Select a HDP pool usage threshold value from the drop-down list. The threshold value must be a multiple of 5 in the range 5 to 95. The default value is 70.

g.   Review the specified information in the Create DP Pool dialog box. If the information is correct, click Next. The Create DP Pool dialog box appears.

Note:   If there are no HDP pool volumes that can be registered into the HDP pool, an error message appears and no objects appear in the Selectable LDEVs sortable table.

h.   In the Create DP Pool dialog box, select the HDP pool volumes to be registered in the HDP pool, and then click Next. The Create DP Pool dialog box appears.

i.    In the Create DP Pool dialog box, select the Configuration check box to create the HDP pool based on the specified information, and then click Confirm.

To return to the Create DP Pool dialog box, click Back. To cancel HDP pool creation, click Cancel.

j.    In the Create DP Pool dialog box, click Finish to close the dialog box.

  4. [   ]    Create HDP volumes:

Note:   Users with the Modify permission can create HDP volumes, assign HDP pools to HDP volumes, and specify the HDP volume capacity, the usage rate threshold value, the number of HDP volumes to create, and the emulation type.

        

a.   In the Explorer menu, select Resources > All Storage or My Storage.

b.   Expand the All Storage (or My Storage) object tree in the navigation area, and select the Universal Storage Platform V/VM to be manipulated, the Pools group, and then Dynamic Provisioning.

c.   The Dynamic Provisioning window appears in the application area.

d.   In the Dynamic Provisioning window, click the Pool ID link of the HDP pool to be assigned to HDP volumes. The DP pool-ID window appears.

e.   In the DP pool-ID window, click the DP VOL tab. A list of the HDP volumes assigned to the HDP pool appears.

f.    On the DP VOL tab, click Create DP VOLs. The Create DP VOLs dialog box appears.

g.   In the Create DP VOLs dialog box, select the HDP volume creation procedure checkbox, and then click Next. The Create DP VOLs dialog box appears.

h.   In the Create DP VOLs dialog box, set the volume capacity, usage rate threshold value, number of volumes to be created and emulation type of the new HDP volumes to be created.

Note the following

·     If the storage subsystem microcode version is 60-03 or later, the capacity specified must be from 47 MB to 4,194,304 MB. If the storage subsystem microcode version is earlier than 60-03, the capacity specified must be from 47 MB to 3,145,663 MB. The default value is 47 MB. The threshold value must be a multiple of 5 in the range 5 to 300. The default value is 5.

·     You can create a maximum of 10 HDP volumes at one time. To create more than 10 HDP volumes, perform steps 4. [   ] a. through h. again.

·     The only emulation type that can be set for HDP volumes is OPEN-V.

·     If you are creating multiple HDP volumes, the values specified in the Create DP VOLs dialog box are set for all HDP volumes being created.

i.    Review the specified information in the Create DP VOLs dialog box. If the information is correct, click Next. The Create DP VOLs dialog box appears.

j.    In the Create DP VOLs dialog box, check the properties of the new HDP volumes to create.

·    To change property information, click the Auto link for the applicable HDP volumes and proceed to step k.

·    To create the HDP volumes using the displayed properties, click Next and proceed to step n.

Note the following:

·     If you set the LDEV number to Auto and then create HDP volumes, the Device Manager server automatically assigns LDEV numbers to the HDP volumes.

·     Clicking the Auto link displays the Edit DP VOL dialog box.

·     Clicking Next displays the Create DP VOLs dialog box.

k.   In the Edit DP VOL dialog box, change the LDEV numbers, capacity, usage rate threshold value and emulation type of the selected HDP volumes.

l.    Specify an LDEV number in the following format:

LDKC-number:CU-number:LDEV-number

To automatically assign LDEV numbers to the HDP volumes without specifying individual LDEV numbers, select Auto.

Note the following:

·     If the storage subsystem microcode version is 60-03 or later, the capacity specified must be from 47 MB to 4,194,304 MB. If the storage subsystem microcode version is earlier than 60-03, the capacity specified must be from 47 MB to 3,145,663 MB. The default value is 47 MB.

·     A threshold value must be a multiple of 5 in the range 5 to 300.

·     The only emulation type that can be set for HDP volumes is OPEN-V.

m.  Review the information specified in the Edit DP VOL dialog box. If the information is correct, click OK to return to the Create DP VOLs dialog box.

n.   To create the HDP volumes based on the information specified in the Create DP VOLs dialog box, select the Configuration check box and then click Confirm.

o.   Click Finish to close the dialog box.

p.   If an error occurs while creating HDP volumes, you can click Back to return to the previous step and correct the settings.

  5. [   ]    Manually add hosts:

Note:   Users with the Modify permission can manually add hosts to Device Manager. When you add a host manually, specify its host name and WWNs (or iSCSI names). Use this information to secure LUNs using the WWNs or iSCSI names of this host.

        

a.   In the Explorer menu, select Resources > Hosts.

b.   In the application area, click Add Host. The Add Host dialog box appears.

c.   Enter the name for the new host using a maximum of 50 bytes, for example: Vplex_Host. The host name is not case-sensitive.

Note:   You cannot use the same host name as that of a mainframe host registered to Device Manager. Confirm the name of the mainframe hosts by using the Device Manager CLI. For details on how to check a mainframe host name, refer to the Hitachi Device Manager Software Command Line Interface (CLI) User's Guide.

d.   Register a WWN or iSCSI name for the new host. Click the appropriate Add button, and then enter the WWN or iSCSI name in the dialog box that appears. Repeat this operation for each WWN or iSCSI name you want to register. You cannot register WWNs or iSCSI names that are already registered.

e.   You must register one or more WWNs or iSCSI names using either of the following methods:

·     To enter WWNs, use the following format:

XX.XX.XX.XX.XX.XX.XX.XX (where XX is a two-digit hexadecimal number; separate the two-digit hexadecimal numbers with dots, not colons)

·     To enter iSCSI names, use either the iqn or eui format as follows. The entry is not case-sensitive.

iqn format:

Enter a character string of up to 223 bytes that begins with iqn.. You can use the following characters:

A-Z  a-z  0-9  .  -  :

Note:   Specifying iqn only is not permitted.

eui format:

Enter a 20-byte character string beginning with eui. You can use the following characters:

A-F  a-f  0-9

f.    Click Select to open either the Select WWNs dialog box or the Select iSCSI Names dialog box. These dialog boxes display a list of WWNs or iSCSI names used as LUN security and are not registered for any host.

g.   From the list, select either the WWNs or the iSCSI names you want to register for the host that you are adding. If the WWNs or iSCSI names you are registering are used as LUN security, the LUN security can be inherited. You do not need to set LUN security again.

h.   When you are finished adding WWNs or iSCSI names, review the information on the Add Host dialog box.

i.    Select OK to add the host or Cancel to cancel your request.

j.    If your attempt to add a host fails, an error message appears. You can re-execute the operation by clicking Back to return to the Add Host dialog box.

  6. [   ]    Create a logical group in Device manager:

        

a.   In the Explorer menu, select My Groups > Logical Groups.

b.   Expand the tree in the navigation area as necessary, and then select the parent group for the logical group you want to create.

c.   To create a logical group at the top level, select the Logical Groups object.

d.   In the application area, click Create Group. The Create Group window appears. The parent group selected in the navigation area is displayed in the Parent Group field. If you select the Logical Groups object in the navigation area, None is displayed.

e.   In the Parent Group field on the Create Group panel, select the desired parent group for the new logical group, if not already selected. Select None to create a new group at the top level.

f.    In the Create Group window, enter a new group name in the Group Name field. Note the following:

·     The group name cannot be the same as the name of another group in the same level

·     You cannot use LUN SCAN as a group name

·     The group name can contain spaces and is not case-sensitive.

·     The group name can be a maximum of 50 bytes.

·     You can use the following characters in the name:
A - Z a - z 0 - 9 - _ . @

·     The group name may include spaces, but cannot consist only of spaces. Leading or trailing spaces are deleted.

g.   In the Icon field on the Create Group panel, select the desired icon for the new group.

h.   If the information displayed on the Create Group panel is correct, click Ok to create the specified new group (or Cancel to cancel your request).

  7. [   ]    Add storage to the logical group (storage group) and assign to host port:

        

a.   In the Explorer menu, select My Groups > Logical Groups.

b.   Choose the group you created in step 6. [   ].

c.   Click Add Storage.

d.   Select The usual storage addition option and click Ok.

e.   Follow the instructions in the wizard, and choose the Host:WWN/iSCSI Name.

f.    Select the port assigned and click Add.

g.   Click Next and begin allocating storage. 

h.   Select Browse LDEVs, and then click Next.

i.    Select the LUNs you want to add, and then click Next. These LUNs were part of the HDP (data pool) you created in steps 3. [   ] and 4. [   ].

j.    Assign Host Port connections by selecting the LUN and Host Port, and then clicking Add.

k.   Click Next to continue. The LUNs are automatically numbered. If not, click the Auto Number option and then click Finish to complete.

l.    Review the summary of changes and then click Confirm. This takes a couple of minutes to complete.

  8. [   ]    Confirm that the Host Group is able to see the storage volumes:

        

a.   In the Explorer menu, select Resources > Hosts.

b.   Select the host you created earlier.

c.   Verify that LDEVs are shown.

  9. [   ]    To discover the newly provisioned LUNs in VPLEX:

        

a.   On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

b.   At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes

 

Sun/HDS 99xx

Note:  SUN/HDS 99xx arrays must have a LUN0 exported to VPLEX. If no LUN0 exists on the array, the report lun command fails with error code 5/25/0 – Logical unit not supported.

To provision Sun/HDS 99xx LUNs for VPLEX:

                   

  1. [   ]    Set each Fibre Channel port that connects to VPLEX to Point-to-Point.

  2. [   ]    Create a RAID Group, and add LUNs.

  3. [   ]    Create a Host Group.

  4. [   ]    On the Host Group Options tab, set the Platform to [00] Standard. Leave all other settings at the default values.

  5. [   ]    Set a name for the Host Group.

  6. [   ]    Add LUNs to the Host Group.

  7. [   ]    On the Logical Units tab, select the LUNs under Available Logical Units, and click Add.

  8. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

  9. [   ]    At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes

 

IBM DS4700

To provision IBM DS4700 LUNS for VPLEX:

·    Set the failover mode to Active-Passive.

·    Do not add the default logical drive access to the host group used by VPLEX.

IBM DS4800/DS5100/DS5300

To provision IBM DS4800, DS5100, or DS5300 LUNs for VPLEX:

                   

  1. [   ]    Zone the IBM DS4800, DS5100, or DS5300 back-end storage array to the VPLEX back-end ports. Follow the recommendations in the Implementation and Planning Best Practices for EMC VPLEX Technical Notes.

  2. [   ]    To create the LUNs using the DS Storage Manager Client GUI:

        

a.   Go to the Storage Subsystem menu, click Configuration, and then click Automatic.

Figure 22    

b.   Click Next and select the Create your own configuration radio button.

Figure 23    

c.   Click Next.

d.   Select a RAID level.

e.   Choose the number of arrays and drives.

Figure 24    

f.    Click Next. It displays the summary of the configuration you have chosen.

g.   Click Finish.  Arrays and LUNs are created based on the chosen numbers.

 

Figure 25    

On the Logical tab, all the arrays and LUNs are listed. You can edit the name of an array or LUN.

Figure 26    

   

  3. [   ]    Manually create arrays and LUNs:

        

a.   Go to the Logical tab.

b.   Right-click Total Unconfigured Capacity.

c.   Choose Create Array, follow the wizard, and choose an array name.

d.   Choose drives manually or automatically. Click Next, choose the protection (RAID level), and click Finish.

e.   Select Yes in the pop-up window that asks whether you want to create logical drives (LUNs) using the new array.

f.    Follow the wizard and create the LUNs.

g.   Select the option Map later using the Mapping View when it asks for Logical Drive-to-LUN mapping.

Figure 27    

Note:   If you choose Default Mapping, all the LUNs created will be part of the Default Group. To map them to VPLEX, you must remove the LUNs from the Default Group and then map them to VPLEX.

 

  4. [   ]    Host Mapping: Map the LUNs to VPLEX.

        

a.   Go to the Mappings tab and click Default Group.

b.   In the right pane, all the created LUNs are displayed. Every LUN remains part of the Default Group until it is assigned to a specific host with specific initiators. All of the zoned, unregistered host initiators have access to all the LUNs under the Default Group.

c.   Define a Host group, such as VPLEX.

d.   Right-click the Default Group and select Define.

e.   Select Host Group.  The wizard opens.

f.    Enter the Host group name, such as VPLEX

g.   Click OK.

Figure 28    

Figure 29    

 

Host Group Vplex is created under Default Group.

  5. [   ]    Select Host Group Vplex and right-click it.

  6. [   ]    Select Define

  7. [   ]    Click Host.

Figure 30    

  8. [   ]    On the wizard, choose Add by selecting a known unassociated host port identifier option.

  9. [   ]    Choose the VPLEX initiator and give it an alias, for example: A1-FC00.

10. [   ]    Click Add.

Figure 31    

11. [   ]    Click Next.

12. [   ]    In the Specify Host Type window, in the Host type (Operating System type) list select Base.

13. [   ]    Click Next.

14. [   ]    Click Finish.

 

Note:  All the LUNs created are part of the Default Group if you chose the Default mapping option during manual LUN creation. The LUNs created as part of automatic configuration are also part of the Default Group. Remove the LUNs from the Default Group, and map them to the VPLEX host created above.

 

15. [   ]    To map the LUNs, right click on the VPLEX host created.

16. [   ]    Click on Define and select Additional Mapping.

17. [   ]    Select the LUN.

18. [   ]    Click Add.

Figure 32    

 

19. [   ]    Log in to the VPLEX CLI or the VPLEX GUI.

20. [   ]    To use the VPLEX GUI:

        

a.   Open a browser and type the following:

https://mgmt_server_address

Where mgmt_server_address is the IP address of the management server's public IP port.

b.    Log in with the username service, and the password Mi@Dim7T.

c.   To begin provisioning and exporting storage, select Provision Storage on the VPLEX Management Console’s navigation bar.

21. [   ]    To use the VPLEX CLI:

        

a.   On the VPLEX management server, log in to the VPlexcli and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

b.   From the VPlexcli prompt, type the following command to display the new LUNs:

ll array_name/logical-units

 

IBM DS5020

                   

  1. [   ]    Zone the IBM DS5020 storage to the VPLEX back-end ports.  Follow the recommendations in the Implementation and Planning Best Practices for EMC VPLEX Technical Notes.

  2. [   ]    Install and start the IBM DS Storage Manager 10 Client (10.84.xx.30) in your system.

  3. [   ]    Select the “Setup” tab.

  4. [   ]    Click “Add Storage Subsystems”.

Figure 33    

  5. [   ]    In the Add New Storage Subsystem window, enter the Controller (DNS/Network Name/IPv4 address or IPv6 address).

  6. [   ]    Click Add.

 

Figure 34    

  7. [   ]    In the left panel, select the Devices tab.

  8. [   ]    Right click Storage Subsystem and select Manage Storage Subsystem.

Figure 35    

  9. [   ]    To create new Host group, in the IBM DS Storage manager 10 window, select the Host Mappings tab.

Figure 36    

10. [   ]    Right click on Storage Subsystem.

11. [   ]    Select Host Group.

Figure 37    

12. [   ]    In the Define Host Group window, enter the host group name.

13. [   ]    Click OK.

 

Figure 38    

14. [   ]    To create new host, right-click Host Group.

15. [   ]    Select Host.

Figure 39    

16. [   ]    In the Define Host window, enter the host name.

17. [   ]    Click Next.

Figure 40    

18. [   ]    In the Alias field, enter the new host port identifier.

19. [   ]    Click Add.

Figure 41    

20. [   ]    Select the Host port Identifier.

21. [   ]    Click Next.

22. [   ]    To specify the Host type (Operating System), select Base.

23. [   ]    Click Next.

24. [   ]    Click Finish.

Figure 42    

25. [   ]    To manage Host Port Identifiers, right click on Host.

26. [   ]    Select Manage Host Port Identifiers.

Figure 43    

27. [   ]    In the Manage Host Port Identifiers window, Add, Edit, Replace or Remove host ports as needed.

28. [   ]    Click close.

Figure 44    

29. [   ]    To create a new array select the Storage & Copy Services tab.

30. [   ]    Right click on Total Unconfigured Capacity.

31. [   ]    Select Create Array.

 

Figure 45    

32. [   ]    In the Create Array window, click Next.

Figure 46    

33. [   ]    Enter an Array name.

34. [   ]    Click Next.

Figure 47    

35. [   ]    Select the RAID level as required.

36. [   ]    Click Finish.

Figure 48    

37. [   ]    To create a logical drive from the newly created array, right-click Free Capacity.

38. [   ]    Select Create Logical Drive.

Figure 49    

39. [   ]    Enter the New logical drive capacity and logical drive name.

40. [   ]    Select the host for mapping.

41. [   ]    Click Finish.

Figure 50    

42. [   ]     Map the additional LUNs to a host by giving the host group/host name and the logical drive name.

Figure 51    

43. [   ]    Verify that the newly created LUNs are mapped to the particular Host.

Figure 52    

44. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

array re-discover -a array-name --hard --force

 For example:

VPlexcli:/clusters/cluster-2/storage-elements/storage-arrays> array re-discover -a "IBM-1814      FAStT-60080e50002c3cb60000000050fd17ac" --hard --force

 

Note:   The array name must be enclosed in quotes (" ") because the IBM array name contains spaces.

To enter into the array context, type the command:

VPlexcli:/clusters/cluster-2/storage-elements/storage-arrays>cd "IBM-1814      FAStT-60080e50002c3cb60000000050fd17ac"

 

45. [   ]    Verify the Active/Passive controllers for each LUN

VPlexcli:/clusters/cluster-2/storage-elements/storage-arrays/IBM-1814      FAStT-60080e50002c3cb60000000050fd17ac/logical-units> ll

 

46. [   ]    From the VPlexcli prompt, type the following command to display the LUNs:

VPlexcli:/> cd clusters/cluster-2/storage-elements/storage-volumes/

VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes> ll

 

IBM v7000

IBM v7000 supports implicit ALUA mode.

To provision IBM v7000 LUNs to VPLEX:

                   

  1. [   ]    Zone the IBM v7000 storage to the VPLEX back-end ports. Follow the recommendations in Implementation and Planning Best Practices for EMC VPLEX Technical Notes.

  2. [   ]    Log in to the IBM v7000 Array Management GUI by browsing to https://IBM_v7000_management_server_IP_address in a web browser.

  3. [   ]    Create a Storage Pool:

        

a.   In the GUI, use the left-hand menu icons to navigate to Pools.

b.   Select either Internal Storage or External Storage.

c.   Follow the instructions to create a storage pool.

  4. [   ]    Click the Pools menu icon

Figure 53    

  5. [   ]    Select Internal Storage menu option

Figure 54    

  6. [   ]    On Internal Storage window, click the Configure Storage button

Figure 55    

  7. [   ]    Create a Host:

        

a.   Navigate to Hosts >.

b.   Select the Hosts menu item.

c.   Provide a Host Name.

d.   Select Rescan

e.   In the Fibre Channel Ports pull-down menu, select VPLEX BE port WWNs.

Figure 56    

  8. [   ]    Add the selected Port to the list in Port Definitions section.

  9. [   ]    Choose Generic as Host Type.

10. [   ]    Keep the default settings for I/O Group.

11. [   ]    Click Create Host to create a host. In this case, the host should be a VPLEX.

Figure 57    

12. [   ]    Create a Volume:

        

a.   Click the Volumes icon.

b.   Select Volumes.

c.   Select a Preset such as Generic.

d.   Select a Pool.

Figure 58    

13. [   ]    In the New Volume window:

        

a.   Provide a Volume Name and Size.

b.   Click the Create button to create a new volume on the IBM v7000.

Figure 59    

14. [   ]    Verify the properties of the newly created volume.

Figure 60    

15. [   ]    The newly created volume is not mapped to any host yet.

Figure 61    

16. [   ]    In the Action pull-down menu, select Map to Host option.

Figure 62    

17. [   ]    In the Modify Host Mappings window:

        

a.   Select a Host

b.   Select a volume to map to the host.

c.   Click the Map Volumes button to map the selected volume.

Figure 63  

18. [   ]    Verify that the volume was mapped to the host.

Figure 64    

19. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

20. [   ]    From the VPlexcli prompt, type the following command to display the new LUNs:

ll array_name/logical-units

 

IBM DS8xxx

To provision IBM DS8xxx LUNs for VPLEX:

                   

  1. [   ]    Establish a remote connection to the array, and open the GUI.

  2. [   ]    Select Host Connections.

  3. [   ]    On the Create New Host Connection screen:

        

a.   Set the Port Type to Fibre Channel Point-to-Point.

b.   Set the Host Type to Intel-based Servers (Microsoft Windows 2000) (Win2000).

c.   Set the Host WWPN to the VPLEX WWN.

d.   In the list of WWNs, select the VPLEX WWN.

e.   Click Next.

  4. [   ]    On the next Create New Host Connection screen, select the Volume Group you want mapped from the Volume Group list, and click Next.

  5. [   ]    On the next Create New Host Connection screen:

        

a.   Select the Manual selection of I/O ports.

b.   Select the IBM host ports to which you want the Volume Group mapped.

c.   Click Next.

d.   On the next Create New Host Connection screen, verify the information, and then click Finish.

  6. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

  7. [   ]    At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes

IBM SVC

To provision IBM SVC LUNs for VPLEX:

                   

  1. [   ]    Establish a remote connection to the array, and open the GUI.

  2. [   ]    Navigate to Clusters, select the desired cluster, and launch the SAN Volume Controller Console.

  3. [   ]    Navigate to Virtual Disks, select the create virtual disk option from the drop-down menu, and follow the wizard to create striped virtual disks.

  4. [   ]    Select the I/O group, and then select the preferred node.

  5. [   ]    Select the disk type and the number of devices to create. Then click Next and select the capacity.

  6. [   ]    Verify the attributes and finish VDisk creation.

  7. [   ]    After VPLEX is zoned with SVC, navigate to Hosts, and select Create a Host from the pull-down menu.

  8. [   ]    Select a host name and I/O group, select generic as the type, and add the respective VPLEX ports under Available Ports. Then click OK.

  9. [   ]    After the host is created, navigate to Virtual Disk, and verify that all the vdisks have been created. Select the vdisks you want to map to VPLEX, and then select Map VDisks to Host and click Go.

10. [   ]    Select the host and click ok.

11. [   ]    After vdisks are mapped to the host, navigate to Virtual Disk-to-Host Mapping, and verify the disks are available for VPLEX.

12. [   ]    On the VPLEX cluster, perform an array re-discover, to discover all the provisioned disks.

13. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

14. [   ]    At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes

 

IBM XIV

To provision IBM XIV LUNs for VPLEX:

                   

  1. [   ]    Launch the GUI, and enter the user credentials.

  2. [   ]    On the initial GUI screen, click Full Redundancy.

  3. [   ]    Click the Volumes icon, and select Volumes and Snapshots.

  4. [   ]    Click Add Volumes, and add the volume by choosing the name and size.

  5. [   ]    Select Host and Clusters > Add Host.

  6. [   ]    Select the host name, and default as the host type.

  7. [   ]    After the host is created, right-click it, and select Add Port. (The ports are the initiators that log in from VPLEX.)

  8. [   ]    Select the VPLEX initiators from Port Name list, and add them.

  9. [   ]    Right-click the VPLEX host and select Modify LUN Mapping.

10. [   ]    Select the volumes to map in the left pane, and then click Map.

11. [   ]    Return to the Volumes and Snapshots screen.

12. [   ]    Right-click the volume to be mapped to the VPLEX host, and then select Modify LUN Mapping.

13. [   ]    Select the host cluster (the VPLEX host you created earlier).

14. [   ]    Select the volume to be mapped in the left pane. In the right pane, select the LUN to which the volume will be mapped, and then click Map.

15. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands to discover the provisioned LUNs:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

16. [   ]    At the VPlexcli prompt, type the following command to display the new LUNs:

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes

 

17. [   ]    On the XIV array, type the following command to generate a map file:

xcli -u admin -p adminadmin -c nextra_lab vol_list

 

Note:  You must use the CLI to claim XIV-based storage volumes.
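
As a sketch based on the claiming wizard usage shown elsewhere in this document, the vol_list output can be saved to a file on the VPLEX management server and supplied to the claiming wizard from the VPlexcli; the file path and cluster name are placeholders:

cd /clusters/cluster-1/storage-elements/storage-volumes

claimingwizard -f /tmp/XIV_vol_list.txt -c cluster-1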

 

Fujitsu ETERNUS

ETERNUS DX8700 and DX8400 arrays can be used in active/active or ALUA mode. If the customer is already using active/active mode, remain in active/active mode.

ETERNUS DX440/DX410 and DX440S2/DX410S2 arrays support ALUA mode only; active/active mode is not supported.

To provision ETERNUS LUNs for VPLEX:

                   

  1. [   ]    Log in to the ETERNUS GUI as administrative/root.

  2. [   ]    Navigate to Configuration > Host Interface Management > Set Host Response.

  3. [   ]    For Sense Code Conversion Pattern, select Linux Recommended (When not using GR/ETERNUS MPD).

  4. [   ]    This step depends on the ETERNUS model. The following figures show the host response settings for the different array models.


Figure 65     DX8700/DX8400 Host response pattern (active / active mode)


Figure 66   DX440/DX410 Host response pattern (ALUA mode)


Figure 67   DX440S2/DX410S2 Host response pattern (ALUA mode)

  5. [   ]    To create a mapping file for the VPLEX Claiming Wizard (the following steps apply when connecting Fujitsu ETERNUS arrays for the first time, before any Fujitsu ETERNUS devices are exposed):

   

  6. [   ]    Log in to the VPlexcli on the VPLEX manager server.

  7. [   ]    Type the following commands to list all storage volumes:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-volumes

 

ll

 

  8. [   ]    Save the output to a file named file1.

  9. [   ]    Type the following command to filter out all information except the VPDs:

cat /tmp/file1 |awk '{print $2,  "FUJITSU_"NR" "}' > /var/log/VPlex/cli/Fujitsu_Eternus.txt

 

VPD83T3:60060e801004f2b0052fabdb00000006 FUJITSU_1

VPD83T3:60060e801004f2b0052fabdb00000007 FUJITSU_2

VPD83T3:60060e801004f2b0052fabdb00000008 FUJITSU_3

VPD83T3:60060e801004f2b0052fabdb00000009 FUJITSU_4

 

10. [   ]    Add the heading Generic storage-volumes to the beginning of the file.

Generic storage-volumes

VPD83T3:60060e801004f2b0052fabdb00000006 FUJITSU_1

VPD83T3:60060e801004f2b0052fabdb00000007 FUJITSU_2

VPD83T3:60060e801004f2b0052fabdb00000008 FUJITSU_3

VPD83T3:60060e801004f2b0052fabdb00000009 FUJITSU_4

 

You can use this file as a hint file for the VPLEX Claiming Wizard, using either the VPlexcli or the GUI.

cd /clusters/cluster-2/storage-elements/storage-volumes

 

storage-volume>> claimingwizard -f /var/log/VPlex/cli/Fujitsu_Eternus.txt -c cluster-2

 

11. [   ]    To expose Fujitsu LUNs to VPLEX:

   

        

a.   Create a RAID Group.

b.   Select disks for the RAID Group, and then confirm your action.

c.   Select a RAID Group from which to create the logical volume.

d.   Create one or more logical volumes.

e.   Set host (initiator) WWNs.

f.    To create an Affinity Group, you must set CA parameters. On the Set CA Parameters screen, select a port.

g.   Select Fabric Connection for Connection Topology, and select ON for Affinity Mode.

h.   Create an Affinity Group to associate the host WWN(s) with the logical volumes.

i.    Allocate logical volumes to the Affinity Group.

j.    Confirm Affinity Group creation.

k.   Select a port for the Affinity Group.

l.    Select a host WWN.

m.  Allocate the host to the Affinity Group.

n.   Confirm Affinity Group creation.

o.   Map LUNs to one or more ports.

p.   Allocate LUNs to the port(s).

q.   From the VPlexcli prompt on the VPLEX management server, type array re-discover to discover the Fujitsu ETERNUS array and its LUNs, as shown in the following example.
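For example, using the same command pattern shown elsewhere in this document (the cluster ID and array name are placeholders):

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

array re-discover array_name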

HP EVA 4/6/8000, 4/6/8100 and 4/6/8400

To provision HP EVA 4/6/8000, 4/6/8100 and 4/6/8400 LUNs for VPLEX:

Note:  The HP EVA firmware must be version 5.1 or later.

                   

  1. [   ]    Using a browser, type the following to log in to the HP EVA Command View GUI:

https://IP_address/command_view_eva

 

  2. [   ]    Select the applicable EVA array.

  3. [   ]    Select the hosts folder.

  4. [   ]    Create a new host:

        

a.   Type a name for the VPLEX cluster.

b.   Select Fibre Channel as the type.

c.   Select or type the WWN of one of the VPLEX back-end ports.

Note:   If you type the WWN manually, separate every four characters with a hyphen; colons are not used. (See the example after this list.)

d.   Select Linux as the VPLEX host’s operating system.

e.   Click Add Host.
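For example (the WWN is illustrative), a VPLEX back-end port reported as 50:00:14:42:60:03:75:00 would be typed as:

5000-1442-6003-7500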

  5. [   ]    Select the newly created host, and then select the ports tab.

  6. [   ]    Add the additional VPLEX ports of the connected instance. Ensure that the host has enough ports to meet EMC’s high availability best practices.

  7. [   ]    Construct Vdisks as described in the HP EVA documentation.

  8. [   ]    Present the LUNs to the VPLEX host group.

  9. [   ]    On the VPLEX management server, log in to the VPlexcli, and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

10. [   ]    From the VPlexcli prompt, type the following command to display the new LUNs:

ll array_name/logical-units
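To confirm that the corresponding storage volumes are also visible, you can run the same listing used in the other procedures in this document (the cluster ID is a placeholder):

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes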

NetApp FAS/V 3xxx/6xxx or IBM N6xxx/N7xxx Series arrays

To provision NetApp FAS/V 3xxx/6xxx series or IBM N6xxx/N7xxx Series LUNs for VPLEX:

                   

  1. [   ]    Zone the NetApp backend storage to the VPLEX back-end ports. Follow the recommendations in the Implementation and Planning Best Practices for EMC VPLEX Technical Notes.

  2. [   ]    Perform the following steps to create an aggregate using the NetApp System Manager GUI.

Note:  There are different types of NetApp GUIs. The NetApp System Manager, which is most commonly used, manages multiple NetApp clusters in one GUI.

      

a.   In the left panel of the NetApp System Manager GUI, click Aggregate and then click Create to open the Create Aggregate Wizard.

Figure 68   Create Aggregate Wizard

b.   Enter the appropriate information to create an aggregate, which is a collection of disks with RAID protection. Note that the default RAID type is dual parity (RAID-DP).

c.   Select the disks manually, or let the system choose them for you.

d.   Select the maximum size.

e.   Click Next and choose the aggregate size:

Figure 69   Create Aggregate wizard (Aggregate Size)

f.    Click Next, and then click Finish. The aggregate of the selected size is created.

  3. [   ]    Perform the following steps to create volumes:

        

a.   Click Volumes in the left panel, and then click Create to open the Create Volume dialog box.

Figure 70   Create Volume dialog box

Note:   You can create volumes (Flex Volumes or Traditional Volumes) on top of the aggregate. Select a space guarantee of Volume, None, or File. Volume reserves space equivalent to the full (fat) volume size; None means thin provisioning, which allocates space only on demand.

Figure 71   Create LUN Wizard (General Properties)

  4. [   ]    Create LUNs on top of the volume and map them to the Initiator Group. If you are creating a fat (fully provisioned) LUN, select the space reserved option:

        

a.   Click LUNs in the left panel, and then click Create to open the Create LUN Wizard.

b.   Enter the LUN name and set the Type as Linux (for VPLEX).

  5. [   ]    Click Next to continue and choose the aggregate and volume for the LUN:

Figure 72   Create LUN wizard (LUN Container)

  6. [   ]    Click Next to choose the aggregate. On the Aggregate Container screen, do not choose aggregate 0 or 1.

Figure 73   Create LUN wizard (Aggregate Container)

  7. [   ]    Click Next to select the volume.  The Volume Container screen appears.

Figure 74   Create LUN wizard (Volume Container)

  8. [   ]    Select Use the selected volume or qtree and verify the existing volume name in the list.

  9. [   ]     Click Next to select the initiator.

Figure 75   Create LUN wizard (Initiator Mapping)

10. [   ]    Connect your LUN to the initiator host and click Next to finish. The LUN will be created and mapped to the selected initiator host.

11. [   ]    Perform the following steps to create an initiator group:

        

a.   Click LUNs in the left panel, and then click the Initiator Groups tab.

b.   Click Add under Initiator Groups. The Add Initiator Groups dialog box appears. 

Figure 76   Add Initiator Group dialog box

c.   Choose FCP for Group Type and Linux for the operating system.

Note:   Do not choose Windows. If you do, you will not be able to claim LUNs under VPLEX. 

d.   Select the ALUA (Asymmetric Logical Unit Access) features enabled checkbox to enable the ALUA feature.

e.   Click Add.

12. [   ]    Once you create the Initiator Group, ensure that you add all the VPLEX initiators by clicking Add under the Initiator ID panel in the right pane. The Add Initiator ID dialog box appears.

13. [   ]    Select FCP for the Group Type, and then enter the Group Name you created earlier.

Figure 77   Add Initiator ID dialog box

14. [   ]    To use the VPLEX GUI:

        

a.   Open a browser and type the following:

https://mgmt_server_address

where mgmt_server_address is the IP address of the management server's public IP port.

b.    Log in with the username service, and the password Mi@Dim7T.

c.   To begin provisioning and exporting storage, click Provision Storage on the VPLEX Management Console’s navigation bar.

15. [   ]    To use the VPLEX CLI:

        

a.   On the VPLEX management server, log in to the VPlexcli and type the following commands:

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

 

array re-discover array_name

 

b.   At the VPLEXcli prompt, type the following command to display the new LUNs: 

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes/

Special notes for working with NetApp arrays

If a NetApp device was created using the Windows persona, it cannot be claimed by VPLEX because its capacity will not be divisible by 4K. NetApp adds extra space for the disk geometry used by the Windows OS, which alters the LUN's capacity so that it is no longer evenly divisible by 4K. New devices must be created using the Linux persona. Devices that were already created with the Windows persona cannot be encapsulated as-is; the workaround is to expand the capacity until the size is divisible by 4K.

Workaround for Windows volumes that need to be encapsulated:

                   

  1. [   ]    VPLEX can claim only storage whose capacity is divisible by 4K (a technical limitation). A quick check is sketched after this list.

  2. [   ]    Manually increase the size of the source LUN on the NetApp array in 1 MB increments until the LUN can be claimed by VPLEX.

  3. [   ]    For more information, see EMC Primus solution emc323481.
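The following is a minimal sketch of such a check, assuming the LUN is also visible as a block device on a Linux host; the device name /dev/sdX is a placeholder:

size=$(blockdev --getsize64 /dev/sdX)

echo $(( size % 4096 ))   # a remainder of 0 means the capacity is divisible by 4K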

Creating a name mapping (or hints) file for VPLEX for third-party arrays

                   

To create a mapping file for the VPLEX Claiming Wizard:

        

  1. [   ]    Log in to the VPlexcli on the VPLEX management server.

  2. [   ]    Type the following commands to change to the storage-volumes context:

cd /clusters/cluster-ID/storage-elements/storage-volumes/

 

  3. [   ]    From the storage-volumes context, type the ll command to list all storage volumes.

  4. [   ]    Cut and paste the output on the screen and save it to a file (for example, file1) in the /tmp directory on the VPLEX management server or any directory outside the management server on a different system.

  5. [   ]    From the management server or any UNIX system, type the following command to filter out all information except the VPD IDs. The following example runs on the management server with the full path to file1. If file1 is on a different system outside the management server, use Cygwin (on Windows) or any UNIX shell to run the same awk command.

cat /tmp/file1 |awk '{print $2,  "array_name_"NR" "}' > /var/log/VPlex/cli/array_name.txt

 

Output example:

VPD83T3:60060e801004f2b0052fabdb00000006 ARRAY_NAME_1

VPD83T3:60060e801004f2b0052fabdb00000007 ARRAY_NAME_2

VPD83T3:60060e801004f2b0052fabdb00000008 ARRAY_NAME_3

VPD83T3:60060e801004f2b0052fabdb00000009 ARRAY_NAME_4

 

  6. [   ]    Type the heading Generic storage-volumes at the beginning of the file, as shown in the following example (a command-line alternative is sketched after the example):

Example

Generic storage-volumes

VPD83T3:60060e801004f2b0052fabdb00000006 ARRAY_NAME_1

VPD83T3:60060e801004f2b0052fabdb00000007 ARRAY_NAME_2

VPD83T3:60060e801004f2b0052fabdb00000008 ARRAY_NAME_3

VPD83T3:60060e801004f2b0052fabdb00000009 ARRAY_NAME_4
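If you prefer to add the heading from the command line instead of editing the file by hand, the following sketch (assuming GNU sed on the management server) prepends it in place:

sed -i '1i Generic storage-volumes' /var/log/VPlex/cli/array_name.txt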

 

Use this file as a name mapping file for the VPLEX Claiming Wizard by using either the VPlexcli or the GUI. If using the VPlexcli, the name mapping file should reside on the SMS. If using the VPLEX GUI, the name mapping file should be on the same system as the GUI.

  7. [   ]    If using VPlexcli to import the mapping file, type the following commands to cd to the storage-volumes directory and then use the name mapping file to claim storage:

cd /clusters/cluster-ID/storage-elements/storage-volumes

claimingwizard -f /var/log/VPlex/cli/array_name.txt -c cluster-ID

 

Registering VPLEX initiators with CLARiiON VNX arrays

                   

  1. [   ]    Connect the VNX array to VPLEX by cabling and zoning the VPLEX backend ports to the VNX target ports.

  2. [   ]    Log in to the switch’s web interface or Connectrix Manager as you would for other hosts.

Note:  Most hosts have agents that register the server’s HBA with the array automatically at startup. However, for VPLEX you must perform the following steps to register the HBA.

  3. [   ]    Click the Connectivity Status link on the Host Management wizard, highlighted in the following screen:

Figure 78    

In the following screen, note the unregistered VPLEX initiators on the VNX array (these were connected in step 1).

Figure 79    

  4. [   ]    To register the initiator, select it and then click Edit to open the Edit Initiators screen:

Figure 80    

  5. [   ]    To register the initiators, you must provide the following information:

·     Unique Name

·     Unique IP Address (required but not used for communication)

·     Initiator Type: CLARiiON Open

·     Failover Mode: (4 for ALUA mode, 1 for non-ALUA). When all initiators are registered, set the failover mode for all initiators at the same time.

  6. [   ]    Click OK when all required ports are registered:

Figure 81    

  7. [   ]    Create a storage group for the VPLEX cluster as follows:

        

a.   Click Hosts on the navigation bar, and then select Storage Groups:

Figure 82    

b.   In the Storage Groups screen, click Create.

c.   In the Create Storage Group dialog box, enter a name for the new storage group, and then click OK.

d.   Click Yes to confirm creation of the storage group.

e.   Click Connect Hosts to display all unassigned hosts in the list on the left:

Figure 83    

f.    Move the desired hosts to the list on the right.

Figure 84    

The following screen shows the host assigned to the storage group:

Figure 85    

  8. [   ]    Click the LUNs tab to select LUNs to add to the storage group. If you did not already create LUNs, click Cancel to exit the following dialog box and go to step 11.

 

The following sample screen shows some free LUNs that can be added to the new storage group.  The service processors (A and B) are expanded so that the LUNs are visible. 

  9. [   ]    Select the LUNs and click Add on the bottom-right of the Available LUNs list.

Figure 86    

10. [   ]    The LUNs appear in the Selected LUNs list. Review the selected LUNs, and then click OK or Apply to confirm. (Clicking OK closes the dialog box.)

Figure 87    

11. [   ]    To create additional LUNs, click Storage in the navigation bar and then click LUNs.

Figure 88    

12. [   ]    Click Create at the bottom of the list to begin creating LUNs:

Figure 89    

13. [   ]    Choose the storage source for the LUN. If you are creating Thin LUNs, select Pool. Otherwise, click RAID Group.

Note:  Select the RAID Group or Pool that has enough space to meet your LUN requirements. Ideally, create LUNs across multiple RAID groups to minimize spindle contention.

Figure 90    

The following screens show two 25 GB LUNs successfully created. Note that if there is insufficient free space available, an error message appears to inform you.

Figure 91    

Figure 92    

14. [   ]    When you have successfully created all your LUNs, click Cancel to close the dialog box.

15. [   ]    Click Hosts > Storage Groups to assign the LUNs to the storage group you created earlier.

Figure 93    

16. [   ]    Choose Select LUNs from the context-menu:

Figure 94    

17. [   ]    Expand the lists within the service processors (SP A and B), select the LUNs to add, and then click Add. 

18. [   ]    Click OK when finished.

Figure 95    

19. [   ]    The LUNs are now successfully provisioned to VPLEX.

20. [   ]    Log in to the VPlexcli and issue the array re-discover command to rediscover the array and begin using the storage volumes.
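For example, following the same pattern used in the other procedures in this document (the cluster ID and array name are placeholders):

cd /clusters/cluster-Cluster_ID/storage-elements/storage-arrays

array re-discover array_name

ll /clusters/cluster-Cluster_ID/storage-elements/storage-volumes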

Creating a name mapping (or hints) file for VPLEX for third-party arrays

To create a mapping file for the VPLEX Claiming Wizard:

                   

  1. [   ]    Log in to the VPlexcli on the VPLEX management server.

  2. [   ]    Type the following commands to change to the storage-volumes context:

cd /clusters/cluster-ID/storage-elements/storage-volumes/

 

  3. [   ]    From the storage-volumes context, type the ll command to list all storage volumes.

  4. [   ]     Cut and paste the output on the screen and save it to a file (for example, file1) in the /tmp directory on the VPLEX management server or any directory outside the management server on a different system.

Note:  The array_name in the next step must begin with a letter or underscore (_); it cannot begin with a numeral. The remaining characters can be letters, numbers, hyphens (-), or underscores (_). The length of the array_name cannot exceed 58 characters (5 characters are reserved for numbering, including an underscore character). The array_name is used as the name of the hint file and as the partial name for all the storage volumes that are claimed using the hint file.
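For example (the names are illustrative): fujitsu_dx440 and _eternus_lab-1 are valid array_name values, while 3par-lab is not valid because it begins with a numeral.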

  5. [   ]    From the management server or any UNIX system, type the following command to filter out all information except the VPD IDs. The following example runs on the management server with the full path to file1. If file1 is on a different system outside the management server, use Cygwin (on Windows) or any UNIX shell to run the same awk command.

cat /tmp/file1 |awk '{print $2,  "array_name_"NR" "}' > /var/log/VPlex/cli/array_name.txt

 

VPD83T3:60060e801004f2b0052fabdb00000006 ARRAY_NAME_1

VPD83T3:60060e801004f2b0052fabdb00000007 ARRAY_NAME_2

VPD83T3:60060e801004f2b0052fabdb00000008 ARRAY_NAME_3

VPD83T3:60060e801004f2b0052fabdb00000009 ARRAY_NAME_4

 

  6. [   ]    Type the heading Generic storage-volumes at the beginning of the file as shown in the following example:

Generic storage-volumes

VPD83T3:60060e801004f2b0052fabdb00000006 ARRAY_NAME_1

VPD83T3:60060e801004f2b0052fabdb00000007 ARRAY_NAME_2

VPD83T3:60060e801004f2b0052fabdb00000008 ARRAY_NAME_3

VPD83T3:60060e801004f2b0052fabdb00000009 ARRAY_NAME_4

 

Use this file as a name mapping file for the VPLEX Claiming Wizard by using either the VPlexcli or the GUI. If using VPlexcli, the name mapping file should reside on the SMS. If using the VPLEX GUI, the name mapping file should be on the same system as the GUI.

  7. [   ]    If using VPlexcli to import the mapping file, type the following commands to cd to the storage-volumes directory and then use the name mapping file to claim storage:

cd /clusters/cluster-ID/storage-elements/storage-volumes

claimingwizard -f /var/log/VPlex/cli/array_name.txt -c cluster-ID