Encapsulate arrays on MSCS

 

Topic: Customer Procedures

Selections: Procedures: Provision > Provisioning procedures: Encapsulate arrays on MSCS

 

 

Contents

Encapsulating LUNs Presented to MSCS Clustered Hosts/Servers

Assumptions

Procedure

Expansion of Encapsulated Virtual-Volumes

 


 

Encapsulating LUNs Presented to MSCS Clustered Hosts/Servers

This procedure describes how to encapsulate LUNs through VPLEX for MSCS clustered hosts/servers in a previously non-virtualized environment.

Assumptions

         MSCS active-passive clustered hosts/servers are running with LUNs presented directly (or through a switch) from the storage-array.

         Clustered applications are running on the hosts/servers.

         VPLEX Local is commissioned.

         One new switch (or a pair of switches, if high availability is required) is available for use as a front-end switch.

Procedure

Follow these steps to encapsulate the LUNs presented to MSCS clustered hosts and servers.

On the host

  1. [   ]    If you are using Symmetrix as the storage array, log in to the SYMCLI Control Host as the root user.

  2. [   ]    If you are using CLARiiON as the storage array, log in to Navisphere or NaviCLI.

  3. [   ]    Make a note of the LUN IDs of the LUNs presented to server_N1/N2.
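
If PowerPath is installed on the cluster nodes (as this procedure assumes later when powermt is used), the array LUN IDs can also be correlated with the host disks from the command line; a minimal sketch:

powermt display dev=all

The output lists the Symmetrix or CLARiiON logical device ID for each PowerPath pseudo device, which can be recorded alongside the drive letters noted in the following steps.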

  4. [   ]    Optionally, run a full backup of host server_N1/N2.

  5. [   ]    Run the latest supported version of EMC Reports on both server_N1 and server_N2.

On server_N1

  6. [   ]    Log in as Administrator.

  7. [   ]    From Failover Cluster Manager, make a note of the drive letters used for all clustered disks.

  8. [   ]    Use Failover Cluster Manager to move all clustered resources to server_N2.
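
On Windows Server 2008 R2 and later, the same resource moves can also be performed with the FailoverClusters PowerShell module instead of the GUI; a minimal sketch, assuming the module is installed and server_N2 is the target node name:

Import-Module FailoverClusters
# List the clustered groups and their current owner nodes
Get-ClusterGroup
# Move every clustered group to server_N2
Get-ClusterGroup | Move-ClusterGroup -Node server_N2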

  9. [   ]    Perform Sanity Boot of server_N1.

On server_N2

10. [   ]    Log in as Administrator.

11. [   ]    Ensure the rebooted server comes up and joins the cluster.

12. [   ]    Use Failover Cluster Manager to move all clustered resources back over to server_N1.

13. [   ]    Perform Sanity Boot of server_N2.

14. [   ]    Ensure the rebooted server comes up and joins the cluster.

15. [   ]    If the report generated by EMC Reports indicates remediation is required on server_N2, follow these steps:

1.     Pause server_N2 for remediation.

2.     Apply all firmware, BIOS, and driver updates to host bus adapters (HBAs) on server_N2.

3.     Resume server_N2 back into the cluster.

4.     Use Failover Cluster Manager to move all clustered resources to server_N2.

16. [   ]    If the report generated by EMC Reports indicates that remediation is required on server_N1, follow these steps:

1.     Pause server_N1 for remediation.

2.     Apply all firmware, BIOS, and driver updates to host bus adapters (HBAs) on server_N1.

3.     Resume server_N1 back into the cluster.

4.     Use Failover Cluster Manager to move all clustered resources back over to server_N1.

17. [   ]    Shut down and power off server_N2.

18. [   ]    Gracefully shut down the clustered applications (disable their auto-start) and power off server_N1.

19. [   ]    Remove connections between server_N1/N2 and back-end storage arrays.

         If there is a switch between server_N1/N2 and the storage arrays, remove the zoning (see the sketch after this list).

         If there is a direct connection between server_N1/N2 and the storage arrays, remove cables.
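
For example, on a Brocade FOS switch the host-to-array zoning could be removed as sketched below; the configuration name (prod_cfg) and zone name (serverN1_array_zone) are hypothetical and must be replaced with the names in your fabric:

zoneshow
cfgremove "prod_cfg", "serverN1_array_zone"
cfgsave
cfgenable "prod_cfg"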

 

On the storage array

20. [   ]    On the storage array, remove the LUNs from the storage group, including the LUNs that were presented to host server_N1/N2.

21. [   ]    For Symmetrix arrays, use SYMCLI to verify that all SCSI locks have been released on the back-end storage arrays. Check for stuck locks on the devices:

symdev -sid xxxx list -lock -v

 

22. [   ]    Clear any stuck locks on the devices with the following command:

symdev -sid xxxx release -lock xxx -force
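
To confirm that the locks are gone, the lock listing from the previous step can be run again; no locks should remain on the devices that were presented to server_N1/N2:

symdev -sid xxxx list -lock -v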

 

On the VPLEX management server

WARNING:    When allocating LUNs to a VPLEX from a storage array that is already being actively used by the VPLEX, no more than 10 LUNs should be allocated at a time. After a set of no more than 10 LUNs have been allocated, the VPLEX should be checked to confirm that all 10 have been discovered before the next set is allocated. Attempting to allocate more than 10 LUNs at one time, or in rapid succession, can cause VPLEX to treat the array as if it were faulted. This precaution does not need to be followed when the array is initially introduced to the VPLEX, before it is an active target of I/O.
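
One way to confirm that a batch of newly allocated LUNs has been discovered before allocating the next batch is to list the cluster's storage volumes (the same command used later in this procedure) and check that the new LUN WWNs appear:

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/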

23. [   ]    Log in to the VPLEX management server as the service user by connecting to its IP address with an SSH client such as PuTTY. The default password is Mi@Dim7T.

login as: service

Using keyboard-interactive authentication.

Password:

service@vplexname:~>

 

24. [   ]    Log in to VPlexCLI by entering vplexcli and providing the VPlexCLI username and password.

service@vplexname:~> vplexcli

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

 

Enter User Name: service

 

Password:

creating logfile:/var/log/VPlex/cli/session.log_service_localhost_T01234_20110525063015

 

VPlexcli:/>

 

25. [   ]    Run the health-check and cluster status commands. The following is an example output of a healthy system running GeoSynchrony 5.0.

VPlexcli:/> health-check

Product Version: 5.0.0.00.00.28

 

Clusters:

---------

Cluster Cluster Oper Health Connected Expelled

Name ID State State

--------- ------- ----- ------ --------- --------

cluster-1 1 ok ok True False

 

Meta Data:

----------

Cluster Volume Volume Oper Health Active

Name Name Type State State

--------- ----------------------------- ----------- ----- ------ ------

cluster-1 meta1 meta-volume ok ok True

cluster-1 meta1_backup_2011Apr12_040403 meta-volume ok ok False

 

Front End:

----------

Cluster Total Unhealthy Total Total Total Total

Name Storage Storage Registered Ports Exported ITLs

Views Views Initiators Volumes

--------- ------- --------- ---------- ----- -------- -----

cluster-1 2 0 4 16 0 0

 

Storage:

--------

Cluster Total Unhealthy Total Unhealthy Total Unhealthy No Not visible

Name Storage Storage Virtual Virtual Dist Dist Dual from

Volumes Volumes Volumes Volumes Devs Devs Paths All Dirs

--------- ------- --------- ------- --------- ----- --------- ----- -----------

cluster-1 8669 0 0 0 0 0 0 0

 

Consistency Groups:

-------------------

Cluster Total Unhealthy Total Unhealthy

Name Synchronous Synchronous Asynchronous Asynchronous

Groups Groups Groups Groups

--------- ----------- ----------- ------------ ------------

cluster-1 0 0 0 0

 

WAN Connectivity:

-----------------

Cluster Local Remote MTU Connectivity

Name Cluster Ips Cluster Ips

------- ----------- ----------- --- ------------

 

WAN Connectivity information is not available

Cluster Witness:

----------------

Cluster Witness is not configured

 

VPlexcli:/> cluster status

Cluster cluster-1

operational-status: ok

transitioning-indications:

transitioning-progress:

health-state: ok

health-indications:

 

On the storage array

26. [   ]    Connect the back-end storage to the VPLEX back-end ports through a back-end switch, and activate zoning between the storage ports and the VPLEX back-end ports.
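
After the zoning is activated, back-end connectivity can be sanity-checked from VPlexCLI; a minimal sketch (the exact output depends on the configuration):

VPlexcli:/> connectivity validate-be

The command summarizes back-end path errors per director; any unreachable storage volumes or missing paths should be resolved before continuing.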

27. [   ]    On the back-end storage array, configure proper masking.

 

        1.   For CLARiiON, create a storage group for host server_N1/N2.

        2.   In Connectivity Status, register the VPLEX back-end ports.

On the VPLEX management server

28. [   ]    Use the ls command in the /engines/engine/directors/director/hardware/ports context to display the port WWNs and node WWNs.

VPlexcli:/> ls -al /engines/engine-1-1/directors/director-1-1-A/hardware/ports

 

/engines/engine-1-1/directors/director-1-1-A/hardware/ports:

Name Address Role Port Status

------- ------------------ --------- -----------

A0-FC00 0x5000144240013600 front-end up

A0-FC01 0x5000144240013601 front-end up

A0-FC02 0x5000144240013602 front-end no-link

A0-FC03 0x5000144240013603 front-end no-link

A1-FC00 0x5000144240013610 front-end no-link

A1-FC01 0x5000144240013611 front-end no-link

A1-FC02 0x5000144240013612 front-end no-link

A1-FC03 0x5000144240013613 front-end no-link

A2-FC00 0x5000144240013620 back-end up

A2-FC01 0x5000144240013621 back-end up

A2-FC02 0x5000144240013622 back-end no-link

A2-FC03 0x5000144240013623 back-end no-link

A3-FC00 0x5000144240013630 back-end no-link

A3-FC01 0x5000144240013631 back-end no-link

A3-FC02 0x5000144240013632 back-end no-link

A3-FC03 0x5000144240013633 back-end no-link

A4-FC00 0x5000144240013640 local-com up

A4-FC01 0x5000144240013641 local-com up

A4-FC02 0x5000144240013642 wan-com no-link

A4-FC03 0x5000144240013643 wan-com no-link

A5-GE00 0.0.0.0|- - no-link

A5-GE01 0.0.0.0|- - no-link

A5-GE02 0.0.0.0 - no-link

A5-GE03 0.0.0.0 - no-link

 

/engines/engine-1-1/directors/director-1-1-B/hardware/ports:

Name Address Role Port Status

------- ------------------ --------- -----------

B0-FC00 0x5000144250013600 front-end up

B0-FC01 0x5000144250013601 front-end up

B0-FC02 0x5000144250013602 front-end no-link

B0-FC03 0x5000144250013603 front-end no-link

B1-FC00 0x5000144250013610 front-end no-link

B1-FC01 0x5000144250013611 front-end no-link

B1-FC02 0x5000144250013612 front-end no-link

B1-FC03 0x5000144250013613 front-end no-link

B2-FC00 0x5000144250013620 back-end up

B2-FC01 0x5000144250013621 back-end up

B2-FC02 0x5000144250013622 back-end no-link

B2-FC03 0x5000144250013623 back-end no-link

B4-FC00 0x5000144250013640 local-com up

B4-FC01 0x5000144250013641 local-com up

B4-FC02 0x5000144250013642 wan-com no-link

B4-FC03 0x5000144250013643 wan-com no-link

B5-GE00 0.0.0.0|- - no-link

B5-GE01 0.0.0.0|- - no-link

B5-GE02 0.0.0.0 - no-link

B5-GE03 0.0.0.0 - no-link

 

In this example, 0x5000144240013620, 0x5000144240013621, 0x5000144250013620, and 0x5000144250013621 are the VPLEX back-end ports that you see, and register, on the CLARiiON.

 

29. [   ]    Add the same LUNs that were exposed to server_N1/N2 to the new storage group.

30. [   ]    Make sure VPLEX can see the LUNs. If the WWN of a CLARiiON LUN is 6006016031111000d4991c2f7d50e011, it is visible in the storage-volumes context as shown in this example:

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

 

/clusters/cluster-1/storage-elements/storage-volumes/:

Name VPD83 ID Capacity Use Vendor IO Status Type Thin Rebuild

---- -------- -------- --- ------ ---------- ----- -----------

 

VPD83T3:6006016031111000d4991c2f7d50e011 VPD83T3:6006016031111000d4991c2f7d50e011 5G unclaimed DGC alive traditional false

 

Note:  If the required storage volume is not visible, enter the array re-discover command in the storage-array context.

VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-FNM00094200051

 

VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-FNM00094200051/> array re-discover

WARNING: This command cannot detect LUN-swapping conditions on the array(s) being re-discovered. LUN swapping is the swapping of LUNs on the back-end. This command cannot detect LUN swapping conditions when the number of LUNs remains the same, but the underlying actual logical units change. I/O will not be disrupted on the LUNS that do not change. Continue? (Yes/No) y

 

31. [   ]    Claim all of these storage volumes from VPlexCLI using the --appc option, which marks them as application consistent. The following example claims one volume.

VPlexcli:/> storage-volume claim -d VPD83T3:60000970000192601422533035424235 -n lun_1 --appc

 

32. [   ]    Confirm this by checking the Type of the storage volume, which should now be data-protected.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

 

Name VPD83 ID Capacity Use Vendor IO Status Type Thin Rebuild

---- -------- -------- --- ------ --------- ----- -----------

 

lun_1 VPD83T3:60000970000192601422533035424235 5G used DGC alive data-protected false

 

33. [   ]    Create a single extent on the entire storage volume. Do not enter a size for the extent; using the maximum size is the default.

VPlexcli:/> extent create -d lun_1

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/extents/

 

/clusters/cluster-1/storage-elements/extents:

Name StorageVolume Capacity Use

------------------------ ------------------ -------- -------

extent_lun_1_1 lun_1 5G claimed

 

34. [   ]    Create a RAID-0 or RAID-C local device with a single extent, or a RAID-1 local device. For RAID-1, use the application-consistent extent as the source leg.

VPlexcli:/> local-device create -g raid-1 -e extent_lun_1_1,extent_lun_2_1 -n dev_lun_1 --source-leg extent_lun_1_1

VPlexcli:/> ls -al /clusters/cluster-1/devices/

 

/clusters/cluster-1/devices:

 

Name Operational Health Block Block Capacity Geometry Visibility Transfer Virtual Volume

--------------- Status State Count Size -------- -------- ---------- Size -------------------

--------------- ----------- ------ -------- ----- -------- -------- ---------- -------- -------------------

 

dev_lun_1 ok ok 20709376 4K 5G raid-1 local - -

 

Here, extent_lun_2_1 is another extent whose size is the same as or larger than that of extent_lun_1_1; it is used as the mirror leg. This extent cannot be application consistent.
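
When a RAID-1 device is created with --source-leg, VPLEX copies the data from the application-consistent source leg onto the mirror leg. The progress of that synchronization can be monitored before moving on; a minimal sketch:

VPlexcli:/> rebuild status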

35. [   ]    Create a virtual-volume on top of the local-device.

VPlexcli:/> virtual-volume create -r dev_lun_1

VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/

 

/clusters/cluster-1/virtual-volumes:

Name Operational Health Service Block Block Capacity Locality Supporting Cache Mode Expandable Consistency

------------------- Status State Status Count Size -------- -------- Device ----------- ---------- Group

------------------- ----------- ------ ---------- -------- ----- -------- -------- --------------- ----------- ---------- -----------

 

dev_lun_1_vol ok ok unexported 20709376 4K 5G local dev_lun_1 synchronous true -

 

36. [   ]    Repeat the procedure from claiming storage volumes through creating virtual volumes (Step 31. [   ] to Step 35. [   ]) for all the LUNs that were presented directly to server_N1/N2.

On server_N1

37. [   ]    Power-on server_N1.

38. [   ]    Zone the initiator ports on server_N1 to the VPLEX front-end ports through the front-end switches.
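
For example, on a Brocade FOS front-end switch the zoning could be created as sketched below; the alias names, zone name, and configuration name are hypothetical, and the WWNs shown are the server_N1 initiator port and a VPLEX front-end port from the earlier listing:

alicreate "serverN1_hba0", "10:00:00:00:c9:5c:61:c0"
alicreate "vplex_A0_FC00", "50:00:14:42:40:01:36:00"
zonecreate "serverN1_vplex_fe", "serverN1_hba0; vplex_A0_FC00"
cfgadd "prod_cfg", "serverN1_vplex_fe"
cfgsave
cfgenable "prod_cfg"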

On the VPLEX management server

39. [   ]    Make sure you are able to see unregistered initiator-ports in the initiator-ports context.

VPlexcli:/> cd /clusters/cluster-1/exports/initiator-ports/

 

VPlexcli:/clusters/cluster-1/exports/initiator-ports> ls -al

 

Name port-wwn node-wwn type Target Port Names

------------------------------- ------------------ ------------------ ------- --------------------------

 

UNREGISTERED-0x10000000c95c61c0 0x10000000c95c61c0 0x20000000c95c61c0 - -

 

UNREGISTERED-0x10000000c95c61c1 0x10000000c95c61c1 0x20000000c95c61c1 - -

40. [   ]    In the initiator-ports context, register the initiators of the host/server. The type can be set to default for the Windows operating system.

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i MscsInitiator_1 -p 0x10000000c95c61c0|0x20000000c95c61c0

 

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i MscsInitiator_2 -p 0x10000000c95c61c1|0x20000000c95c61c1

 

VPlexcli:/> ll /clusters/cluster-1/exports/initiator-ports/

 

/clusters/cluster-1/exports/initiator-ports:

Name port-wwn node-wwn type Target Port Names

----------- ------------------ ------------------ ------- --------------------------

MscsInitiator_1 0x10000000c95c07ec 0x20000000c95c07ec default -

MscsInitiator_2 0x10000000c95c07ed 0x20000000c95c07ed default -

41. [   ]    Create a new storage view on VPLEX.

VPlexcli:/> export storage-view create -n MscsStorageView -p P000000003CA00136-A0-FC00,P000000003CA00136-A0-FC01,P000000003CB00136-B0-FC00,P000000003CB00136-B0-FC01 -c cluster-1

42. [   ]    Add the initiator ports and all the virtual volumes created earlier to the storage view. This example shows the addition of one initiator port and one virtual volume.

VPlexcli:/> export storage-view addinitiatorport -v MscsStorageView -i MscsInitiator_1

 

VPlexcli:/> export storage-view addvirtualvolume -v MscsStorageView -o dev_lun_1_vol

 

VPlexcli:/> ls -al /clusters/cluster-1/exports/storage-views/MscsStorageView/

 

/clusters/cluster-1/exports/storage-views/MscsStorageView:

Name Value

------------------------ ----------------------------------------------------------------------

controller-tag -

initiators [MscsInitiator_1, MscsInitiator_2]

operational-status ok

port-name-enabled-status [P000000003CA00136-A0-FC00,true,ok, P000000003CA00136-A0-FC01,true,ok, P000000003CB00136-B0-FC00,true,ok, P000000003CB00136-B0-FC01,true,ok]

ports [P000000003CA00136-A0-FC00, P000000003CA00136-A0-FC01, P000000003CB00136-B0-FC00, P000000003CB00136-B0-FC01]

virtual-volumes [(0,dev_lun_1_vol,VPD83T3:6000144000000010a001362eb24178d2,5G)]

[(0,dev_lun_2_vol,VPD83T3:6000144000000010a001362eb24178d3,5G)]

..

43. [   ]    Repeat step 42. [   ] to add the other initiator ports and virtual volumes created earlier.

On server_N1

44. [   ]    On server_N1, go to Disk Management in the Computer Management menu and rescan disks.

45. [   ]    At the command prompt, verify that all paths are present and working. If any dead paths are found, clear them all.

powermt check dev=all

powermt display dev=all
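
If dead paths are reported, powermt can re-test and clean them up; a minimal sketch (exact behavior depends on the PowerPath version):

powermt restore dev=all

powermt display dev=all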

46. [   ]    If not all disks are present, reboot server_N1 and re-check the number of paths.

47. [   ]    Make sure the drive letters are correct. If not, change them back to the drive letters noted earlier.
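
Drive letters can be changed in Disk Management, or with diskpart; a minimal sketch in which the volume number and drive letter are examples only:

diskpart
DISKPART> list volume
DISKPART> select volume 3
DISKPART> assign letter=F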

48. [   ]    Manually start clustered applications.

49. [   ]    Reboot the host/server and ensure that everything mounts properly and all applications start as expected.

Note:  Remember to re-enable auto-start for your clustered apps at this time.

On server_N2

50. [   ]    Power on server_N2.

On the VPLEX management server

51. [   ]    On VPLEX, repeat steps 39. [   ] and 40. [   ] for server_N2. This registers the initiator ports for server_N2 on VPLEX.

52. [   ]    Check the status of the storage-view.

VPlexcli:/> ls -al /clusters/cluster-1/exports/storage-views/MscsStorageView/

 

/clusters/cluster-1/exports/storage-views/MscsStorageView:

Name Value

------------------------ ----------------------------------------------------------------------

controller-tag -

initiators [MscsInitiator_1, MscsInitiator_2, MscsInitiator_3, MscsInitiator_4]

operational-status ok

port-name-enabled-status [P000000003CA00136-A0-FC00,true,ok, P000000003CA00136-A0-FC01,true,ok, P000000003CB00136-B0-FC00,true,ok, P000000003CB00136-B0-FC01,true,ok]

ports [P000000003CA00136-A0-FC00, P000000003CA00136-A0-FC01, P000000003CB00136-B0-FC00, P000000003CB00136-B0-FC01]

virtual-volumes [(0,dev_lun_1_vol,VPD83T3:6000144000000010a001362eb24178d2,5G)]

[(0,dev_lun_2_vol,VPD83T3:6000144000000010a001362eb24178d3,5G)]

..

You should be able to see the initiator ports for server_N2.

On server_N2

53. [   ]    On server_N2, go to Disk Management in Computer Management menu and rescan disks.

54. [   ]    At the command prompt, verify that all paths are present and working. If any dead paths are found, clear them all.

powermt check dev=all

powermt display dev=all

 

55. [   ]    If not all disks are present, reboot server_N2 and re-check the number of paths.

56. [   ]    Go to Failover Cluster Manager and verify that server_N2 has joined the cluster properly.

57. [   ]    Use Failover Cluster Manager to move all clustered resources to server_N2.

On server_N1

58. [   ]    Perform Sanity Boot of server_N1.

59. [   ]    Ensure server_N1 comes up and joins the cluster.

60. [   ]    Use Failover Cluster Manager to move all clustered resources back over to server_N1.

On server_N2

61. [   ]    Perform Sanity Boot of server_N2.

62. [   ]    Ensure server_N2 comes up and joins the cluster.

63. [   ]    Go to Failover Cluster Manager and validate the overall health of the server cluster.

64. [   ]    Enable the clustered applications and validate their health.
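
As with the earlier resource moves, the node and resource states can also be confirmed from the FailoverClusters PowerShell module; a minimal sketch:

Import-Module FailoverClusters
# Both nodes should report State Up
Get-ClusterNode
# All groups and resources should report Online
Get-ClusterGroup
Get-ClusterResource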

On the VPLEX management server

65. [   ]    In VPlexCLI, run the health-check and cluster status commands again. The output should be similar to the following.

VPlexcli:/> health-check

Product Version: 5.0.0.00.00.28

 

Clusters:

---------

Cluster Cluster Oper Health Connected Expelled

Name ID State State

--------- ------- ----- ------ --------- --------

cluster-1 1 ok ok True False

 

Meta Data:

----------

Cluster Volume Volume Oper Health Active

Name Name Type State State

--------- ----------------------------- ----------- ----- ------ ------

cluster-1 meta1 meta-volume ok ok True

cluster-1 meta1_backup_2011Apr12_040403 meta-volume ok ok False

 

Front End:

----------

Cluster Total Unhealthy Total Total Total Total

Name Storage Storage Registered Ports Exported ITLs

Views Views Initiators Volumes

--------- ------- --------- ---------- ----- -------- -----

cluster-1 2 0 4 16 0 0

 

Storage:

--------

Cluster Total Unhealthy Total Unhealthy Total Unhealthy No Not visible

Name Storage Storage Virtual Virtual Dist Dist Dual from

Volumes Volumes Volumes Volumes Devs Devs Paths All Dirs

--------- ------- --------- ------- --------- ----- --------- ----- -----------

cluster-1 8669 0 0 0 0 0 0 0

 

Consistency Groups:

-------------------

Cluster Total Unhealthy Total Unhealthy

Name Synchronous Synchronous Asynchronous Asynchronous

Groups Groups Groups Groups

--------- ----------- ----------- ------------ ------------

cluster-1 0 0 0 0

 

WAN Connectivity:

-----------------

Cluster Local Remote MTU Connectivity

Name Cluster Ips Cluster Ips

------- ----------- ----------- --- ------------

 

WAN Connectivity information is not available

Cluster Witness:

----------------

Cluster Witness is not configured

 

VPlexcli:/> cluster status

Cluster cluster-1

operational-status: ok

transitioning-indications:

transitioning-progress:

health-state: ok

health-indications:

 

Expansion of Encapsulated Virtual-Volumes

If virtual-volume expansion is required for the encapsulated volume, follow this procedure.

1.     Go to the device context of the source virtual volume (the device that was created on the encapsulated disk) and set application-consistent to false.

VPlexcli:/> cd /clusters/cluster-1/devices/dev_lun_1/

 

VPlexcli:/clusters/cluster-1/devices/dev_lun_1> set application-consistent false

 

2.     Expand the source virtual volume with an extent or local device that has no data on it. If the target extent/local device contains data, that data will be lost after the expansion.

VPlexcli:/> virtual-volume expand dev_lun_1_vol/ -e extent_target_sv_1/

 

3.     You should now be able to see the expanded volume from the host, with the data on the source intact.
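
On the Windows host, the extra capacity typically has to be brought into use by rescanning the disks and extending the volume; a minimal sketch with diskpart, in which the volume number is an example only (for clustered disks, follow your normal change-control process):

diskpart
DISKPART> rescan
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend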