Encapsulate arrays on Solaris x86 or Solaris Sparc

 

 

Topic: Customer Procedures

Selections: Procedures: Provision > Provisioning procedures: Encapsulate arrays on Solaris x86 or Solaris Sparc

 

 

Contents

Encapsulating a LUN Presented to a SOLARIS Server

Configuration tested

Assumptions

Procedure

Expansion of Encapsulated Virtual-Volumes

 


 

Encapsulating a LUN Presented to a SOLARIS Server

This procedure describes how to encapsulate a LUN from a non-virtualized environment so that it is presented through VPLEX.

Note:  This procedure applies to both SPARC and x86 architectures.

Configuration tested

    Operating System: Solaris SPARC

    Operating System Version: 5.10 Update 9

    EMC PowerPath Version: 5.3 (build 473)

    VPLEX version: 5.0.1.00.00.07

    Operating System: Solaris x86

    Operating System Version: Solaris 5.10, Update 9

    EMC PowerPath Version: 5.3 SP1 (build 84)

    VPLEX version: 5.0.1.00.00.07

 

Note:  VPLEX cannot correctly encapsulate a back-end device whose capacity is not evenly divisible by the 4k block size. As a consequence, data at the end of such an encapsulated disk may be lost through virtualization, and migrations from such devices are incomplete. You must first expand a non-conformant volume on the back-end array to make it VPLEX-conformant.
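For example, you can verify conformance by checking that the LUN capacity in 512-byte blocks, as reported by the array management tool, is a whole multiple of 8 (8 x 512 bytes = 4k). The block count below is illustrative:

bash-3.00# expr 10485760 % 8

0

A remainder of 0 means the capacity divides evenly by 4k; any other value means the LUN must be expanded or migrated before encapsulation.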

Assumptions

    The SOLARIS server is running with LUNs presented directly (or through a switch) from the storage-array.

    The capacity of the LUN is divisible by the 4k block size. If it is not, first migrate the data to a LUN whose capacity is divisible by 4k.

    I/Os are running on the LUNs presented to the SOLARIS server.

    VPLEX is commissioned.

    One new switch (or a pair of switches if HA is required) is available for use as the front-end switch(es).

Procedure

  1. [   ]    Stop the running I/Os.

  2. [   ]    Note the WWN of the back-end LUN that needs to be encapsulated.
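With PowerPath installed (as in the tested configuration), one way to capture the WWN is from the Logical device ID field of the powermt output. The excerpt below is abbreviated, and the pseudo device name and ID are illustrative:

bash-3.00# powermt display dev=all

Pseudo name=emcpower35a

CLARiiON ID=FNM00094200051

Logical device ID=6006016031111000D4991C2F7D50E011

state=alive; policy=CLAROpt; ...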

  3. [   ]    Log in to the SOLARIS server (for example, using PuTTY) and unmount the LUNs that need to be encapsulated.

bash-3.00# umount /mnt2/

 

  4. [   ]    Change the configuration so that the storage array ports are no longer connected to the SOLARIS server, either directly or through a switch.

On the VPLEX management server

WARNING:    When allocating LUNs to a VPLEX from a storage array that is already being actively used by the VPLEX, no more than 10 LUNs should be allocated at a time. After a set of no more than 10 LUNs have been allocated, the VPLEX should be checked to confirm that all 10 have been discovered before the next set is allocated. Attempting to allocate more than 10 LUNs at one time, or in rapid succession, can cause VPLEX to treat the array as if it were faulted. This precaution does not need to be followed when the array is initially introduced to the VPLEX, before it is an active target of I/O.

  5. [   ]    Log in to the VPLEX management server by entering its IP address in PuTTY (or another SSH client). The default password for the service account is Mi@Dim7T.

login as: service

Using keyboard-interactive authentication.

Password:

service@vplexname:~>

 

  6. [   ]    Log in to VPlexCLI by entering vplexcli and providing the VPlexCLI username and password.

service@vplexname:~> vplexcli

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

 

Enter User Name: service

 

Password:

creating logfile:/var/log/VPlex/cli/session.log_service_localhost_T01234_20110525063015

 

VPlexcli:/>

 

  7. [   ]    Run the health-check and cluster status commands. The following is example output from a healthy system running GeoSynchrony 5.0.

VPlexcli:/> health-check

Product Version: 5.0.0.00.00.28

 

Clusters:
---------
Cluster    Cluster  Oper   Health  Connected  Expelled
Name       ID       State  State
---------  -------  -----  ------  ---------  --------
cluster-1  1        ok     ok      True       False

Meta Data:
----------
Cluster    Volume                         Volume       Oper   Health  Active
Name       Name                           Type         State  State
---------  -----------------------------  -----------  -----  ------  ------
cluster-1  meta1                          meta-volume  ok     ok      True
cluster-1  meta1_backup_2011Apr12_040403  meta-volume  ok     ok      False

Front End:
----------
Cluster    Total    Unhealthy  Total       Total  Total     Total
Name       Storage  Storage    Registered  Ports  Exported  ITLs
           Views    Views      Initiators         Volumes
---------  -------  ---------  ----------  -----  --------  -----
cluster-1  2        0          4           16     0         0

Storage:
--------
Cluster    Total    Unhealthy  Total    Unhealthy  Total  Unhealthy  No     Not visible
Name       Storage  Storage    Virtual  Virtual    Dist   Dist       Dual   from
           Volumes  Volumes    Volumes  Volumes    Devs   Devs       Paths  All Dirs
---------  -------  ---------  -------  ---------  -----  ---------  -----  -----------
cluster-1  8669     0          0        0          0      0          0      0

Consistency Groups:
-------------------
Cluster    Total        Unhealthy    Total         Unhealthy
Name       Synchronous  Synchronous  Asynchronous  Asynchronous
           Groups       Groups       Groups        Groups
---------  -----------  -----------  ------------  ------------
cluster-1  0            0            0             0

WAN Connectivity:
-----------------
Cluster  Local        Remote       MTU  Connectivity
Name     Cluster Ips  Cluster Ips
-------  -----------  -----------  ---  ------------

WAN Connectivity information is not available

Cluster Witness:
----------------
Cluster Witness is not configured

 

VPlexcli:/> cluster status

Cluster cluster-1

operational-status: ok

transitioning-indications:

transitioning-progress:

health-state: ok

health-indications:

 

On the back-end switches

  8. [   ]    Remove any old zones from direct zoning between the SOLARIS server and storage-array.

  9. [   ]    Zone the back-end ports on the VPLEX directors with the storage-array ports.
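The zoning syntax depends on the switch vendor. The following is a minimal sketch for a Brocade FOS switch; the zone and configuration names and the VPLEX WWPN are examples only, while the array WWPN matches the CLARiiON port shown later in this procedure:

switch:admin> zonecreate "vplex_be_clar", "50:00:14:42:60:7d:cd:10; 50:06:01:60:44:60:19:f5"

switch:admin> cfgadd "prod_cfg", "vplex_be_clar"

switch:admin> cfgenable "prod_cfg"

Any old direct zones from step 8. [   ] can be removed beforehand with cfgremove and zonedelete. On Cisco MDS switches, use the equivalent zone and zoneset commands.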

On the storage array

10. [   ]    Make the appropriate masking changes on the storage-array. For example, if you are using a CLARiiON as the storage array, create a storage group on the CLARiiON, connect VPLEX to it as an initiator, and add the same LUNs that were earlier exposed to the SOLARIS server.
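As a sketch of the CLARiiON case using naviseccli (the SP address, storage-group name, registered host name for the VPLEX back-end initiators, and LUN numbers are all examples):

naviseccli -h 10.0.0.1 storagegroup -create -gname VPLEX_SG

naviseccli -h 10.0.0.1 storagegroup -connecthost -host VPLEX_Cluster1 -gname VPLEX_SG

naviseccli -h 10.0.0.1 storagegroup -addhlu -gname VPLEX_SG -hlu 1 -alu 1

The -alu value is the array LUN that was previously presented to the SOLARIS server, and the -hlu value is the host LUN number at which it is exposed to VPLEX.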

On the VPLEX management server

11. [   ]    View the storage array in the storage-array context.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-arrays/

 

/clusters/cluster-1/storage-elements/storage-arrays:
Name                         Connectivity  Auto    Ports                Logical
---------------------------  Status        Switch  -------------------  Unit
---------------------------  ------------  ------  -------------------  Count
---------------------------  ------------  ------  -------------------  -------
EMC-CLARiiON-FNM00094200051  ok            true    0x50060160446019f5,  252
                                                   0x50060166446019f5,
                                                   0x50060168446419f5,
                                                   0x5006016f446019f5

 

12. [   ]    Make sure VPLEX can see the LUNs. If the WWN of a CLARiiON LUN is 6006016031111000d4991c2f7d50e011, it displays in the storage volume context as follows:

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

 

/clusters/cluster-1/storage-elements/storage-volumes/:

Name                                      VPD83 ID                                  Capacity  Use        Vendor  IO Status  Type         Thin Rebuild
----------------------------------------  ----------------------------------------  --------  ---------  ------  ---------  -----------  ------------
VPD83T3:6006016031111000d4991c2f7d50e011  VPD83T3:6006016031111000d4991c2f7d50e011  5G        unclaimed  DGC     alive      traditional  false

 

Note:  If the required storage volumes are not visible, run the array re-discover command from the storage-array context.

VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-FNM00094200051

 

VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-FNM00094200051/> array re-discover

WARNING: This command cannot detect LU-swapping conditions on the array(s) being re-discovered. LU swapping is the swapping of LUs on the back-end. This command cannot detect LU swapping conditions when the number of LUs remains the same, but the underlying actual logical units change. I/O will not be disrupted on the LUs that do not change. Continue? (Yes/No) y

 

13. [   ]    Claim the volume with the --appc flag, which marks it as application-consistent. Make sure that you are claiming the correct LUN by comparing the VPD ID of the storage volume with the WWN of the back-end LUN.

VPlexcli:/> storage-volume claim -d VPD83T3:6006016031111000d4991c2f7d50e011 -n lun_1 --appc

 

Confirm this by checking the Type of the storage volume, which should display as data-protected.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

 

Name   VPD83 ID                                  Capacity  Use   Vendor  IO Status  Type            Thin Rebuild
-----  ----------------------------------------  --------  ----  ------  ---------  --------------  ------------
lun_1  VPD83T3:6006016031111000d4991c2f7d50e011  5G        used  DGC     alive      data-protected  false

 

14. [   ]    Create a single extent spanning the entire storage volume. Do not supply the size parameter for the extent.

VPlexcli:/> extent create -d lun_1

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/extents/

 

/clusters/cluster-1/storage-elements/extents:
Name            StorageVolume  Capacity  Use
--------------  -------------  --------  -------
extent_lun_1_1  lun_1          5G        claimed

 

15. [   ]    Create a RAID-0 or RAID-C local-device with a single extent, or a RAID-1 local-device. For RAID-1, specify the application-consistent extent as the source leg.

VPlexcli:/> local-device create -g raid-1 -e extent_lun_1_1,extent_lun_2_1 -n dev_lun_1 --source-leg extent_lun_1_1

VPlexcli:/> ls -al /clusters/cluster-1/devices/

 

/clusters/cluster-1/devices:
Name       Operational  Health  Block     Block  Capacity  Geometry  Visibility  Transfer  Virtual
---------  Status       State   Count     Size   --------  --------  ----------  Size      Volume
---------  -----------  ------  --------  -----  --------  --------  ----------  --------  -------
dev_lun_1  ok           ok      20709376  4K     5G        raid-1    local       -         -

 

Here extent_lun_2_1 is another extent, of the same size as or larger than extent_lun_1_1, that is used as the mirror. This extent cannot be application-consistent.
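For the RAID-1 case, VPLEX rebuilds the mirror leg from the source leg after the local-device is created. You can monitor the synchronization from the VPlexCLI before proceeding, for example:

VPlexcli:/> rebuild status

Wait for the rebuild to finish before depending on the mirror leg for redundancy.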

16. [   ]    Create a virtual volume on top of the local device.

VPlexcli:/> virtual-volume create -r dev_lun_1

 

VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/

 

/clusters/cluster-1/virtual-volumes:
Name           Operational  Health  Service     Block     Block  Capacity  Locality  Supporting  Cache        Expandable  Consistency
-------------  Status       State   Status      Count     Size   --------  --------  Device      Mode         ----------  Group
-------------  -----------  ------  ----------  --------  -----  --------  --------  ----------  -----------  ----------  -----------
dev_lun_1_vol  ok           ok      unexported  20709376  4K     5G        local     dev_lun_1   synchronous  true        -

 

17. [   ]    Create a new storage view on VPLEX.

VPlexcli:/> export storage-view create -n SolarisStorageView -p P000000003CA00136-A0-FC00,P000000003CA00136-A0-FC01,P000000003CB00136-B0-FC00,P000000003CB00136-B0-FC01 -c cluster-1

On the front-end switch

18. [   ]    On the front-end switch, zone the front-end ports of VPLEX with the SOLARIS server ports.

On the VPLEX management server

19. [   ]    View unregistered initiator-ports in the initiator-ports context.

VPlexcli:/> cd /clusters/cluster-1/exports/initiator-ports/

 

VPlexcli:/clusters/cluster-1/exports/initiator-ports> ls -al

 

Name                             port-wwn            node-wwn            type  Target Port Names
-------------------------------  ------------------  ------------------  ----  -----------------
UNREGISTERED-0x10000000c95c61c0  0x10000000c95c61c0  0x20000000c95c61c0  -     -
UNREGISTERED-0x10000000c95c61c1  0x10000000c95c61c1  0x20000000c95c61c1  -     -
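To confirm which unregistered entries belong to this server, you can list the HBA port WWNs on the SOLARIS host and match them against the port-wwn column. The output below is abbreviated and assumes the host uses the native Solaris FC driver stack that fcinfo reports on:

bash-3.00# fcinfo hba-port | grep "HBA Port WWN"

HBA Port WWN: 10000000c95c61c0

HBA Port WWN: 10000000c95c61c1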

 

20. [   ]    In the initiator-port context, register the initiators of the SOLARIS server. Set the Type to one of the following:

    For SOLARIS servers, use sun-vcs.

    For HPUX and AIX servers, use hpux and aix, respectively.

    For all other servers, use default.

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i SolarisInitiator_1 -p 0x10000000c95c61c0|0x20000000c95c61c0 -t sun-vcs

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i SolarisInitiator_2 -p 0x10000000c95c61c1|0x20000000c95c61c1 -t sun-vcs

VPlexcli:/> ll /clusters/cluster-1/exports/initiator-ports/

 

/clusters/cluster-1/exports/initiator-ports:
Name                port-wwn            node-wwn            type     Target Port Names
------------------  ------------------  ------------------  -------  -----------------
SolarisInitiator_1  0x10000000c95c61c0  0x20000000c95c61c0  sun-vcs  -
SolarisInitiator_2  0x10000000c95c61c1  0x20000000c95c61c1  sun-vcs  -

 

21. [   ]    Add the initiator ports from the SOLARIS server to the storage view.

VPlexcli:/> export storage-view addinitiatorport -v SolarisStorageView -i SolarisInitiator_1

VPlexcli:/> export storage-view addinitiatorport -v SolarisStorageView -i SolarisInitiator_2

 

22. [   ]    Export the virtual volumes to the storage view, making sure the SOLARIS server can see only one path for the LUN. The following is an example of adding one virtual-volume.

VPlexcli:/> export storage-view addvirtualvolume -v SolarisStorageView -o dev_lun_1_vol

VPlexcli:/> ls -al /clusters/cluster-1/exports/storage-views/SolarisStorageView/

 

/clusters/cluster-1/exports/storage-views/SolarisStorageView:
Name                      Value
------------------------  ----------------------------------------------------------------------
controller-tag            -
initiators                [SolarisInitiator_1, SolarisInitiator_2]
operational-status        ok
port-name-enabled-status  [P000000003CA00136-A0-FC00,true,ok, P000000003CA00136-A0-FC01,true,ok, P000000003CB00136-B0-FC00,true,ok, P000000003CB00136-B0-FC01,true,ok]
ports                     [P000000003CA00136-A0-FC00, P000000003CA00136-A0-FC01, P000000003CB00136-B0-FC00, P000000003CB00136-B0-FC01]
virtual-volumes           [(0,dev_lun_1_vol,VPD83T3:6000144000000010a001362eb24178d2,5G)]

 

Note:  Write down the WWN of the VPLEX virtual volume; it is used to identify the same LUN on the server. In the above example, the WWN is 6000144000000010a001362eb24178d2.

On the SOLARIS server

23. [   ]    Scan for the devices using Solaris-specific commands.

bash-3.00# devfsadm -C

 

At times it may also be necessary to run the cfgadm -al and powercf -q commands.

24. [   ]    Mount the LUNs at the locations where they were earlier mounted. Note that the names of the emcpower devices may change.

25. [   ]    Compare the WWN from step 22. [   ] with the logical device ID in the output of the powermt display dev=all command to confirm the correct emcpower device to mount.
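For example, to locate the pseudo device that corresponds to the virtual volume exported in step 22. [   ] (WWN 6000144000000010a001362eb24178d2), you can filter the powermt output; the device name shown is illustrative and the listing is trimmed to the matching device:

bash-3.00# powermt display dev=all | egrep "Pseudo name|Logical device ID"

Pseudo name=emcpower35a

Logical device ID=6000144000000010A001362EB24178D2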

bash-3.00# mount /dev/dsk/emcpower35c /mnt2/

 

On the VPLEX management server

26. [   ]    In the VPlexCLI, run the health-check and cluster status commands again. The output should match the healthy-system example shown in step 7. [   ].


 

Expansion of Encapsulated Virtual-Volumes

If virtual-volume expansion is required for the encapsulated volume, follow the procedure given here.

1.     Go to the device context of the source virtual-volume, that is, the device created on the encapsulated disk, and set application-consistent to false.

VPlexcli:/> cd /clusters/cluster-1/devices/dev_lun_1/

 

VPlexcli:/clusters/cluster-1/devices/dev_lun_1> set application-consistent false

 

2.     Expand the source virtual-volume with an extent or local-device that has no data on it. If the target extent/local-device has data, that data is lost after the expansion.

VPlexcli:/> virtual-volume expand dev_lun_1_vol/ -e extent_target_sv_1/

 

3.     Now you should be able to see the expanded volume from the host, with the data on the source intact.
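On the SOLARIS host, once the expanded capacity is visible, the filesystem can be grown to use it. The following is a minimal sketch for a UFS filesystem, using the device and mount point from the earlier examples; depending on the label type, the slice or EFI partition may first need to be enlarged with format, and ZFS or other filesystems have their own expansion tools:

bash-3.00# devfsadm -C

bash-3.00# growfs -M /mnt2 /dev/rdsk/emcpower35c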