Encapsulate arrays on LINUX without boot from SAN

 

Topic: Customer Procedures

Selections: Procedures: Provision > Provisioning procedures: Encapsulate arrays on LINUX without boot from SAN

 

 

Contents

Encapsulating a LUN Presented to a LINUX Server

Assumptions

Procedure

Expansion of Encapsulated Virtual-Volumes

 


 

Encapsulating a LUN Presented to a LINUX Server

This procedure describes the task of encapsulating a LUN through VPLEX in a non-virtualized environment.

Assumptions

•          The LINUX server is running with LUNs presented directly (or through a switch) from the storage array.

•          I/Os are running on the LUNs presented to the LINUX server.

•          VPLEX has been commissioned.

•          One new switch (or a pair of switches if HA is required) is available for use as the front-end switch(es).

Procedure

  1. [   ]    Stop the running I/Os.

  2. [   ]    Shut down the LINUX server with the command shutdown -h now.

  3. [   ]    Change the configuration so that the storage-array ports are no longer connected to the LINUX server, either directly or through a switch.

On the VPLEX management server

WARNING:    When allocating LUNs to a VPLEX from a storage array that is already being actively used by the VPLEX, no more than 10 LUNs should be allocated at a time. After a set of no more than 10 LUNs have been allocated, the VPLEX should be checked to confirm that all 10 have been discovered before the next set is allocated. Attempting to allocate more than 10 LUNs at one time, or in rapid succession, can cause VPLEX to treat the array as if it were faulted. This precaution does not need to be followed when the array is initially introduced to the VPLEX, before it is an active target of I/O.

  4. [   ]    Log in to the VPLEX management server by opening a PuTTY session to its IP address. The default password is Mi@Dim7T.

login as: service

Using keyboard-interactive authentication.

Password:

service@vplexname:~>

 

  5. [   ]    Log in to VPlexCLI by entering vplexcli and providing the VPlexCLI username and password.

service@vplexname:~> vplexcli

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

 

Enter User Name: service

 

Password:

creating logfile:/var/log/VPlex/cli/session.log_service_localhost_T01234_20110525063015

 

VPlexcli:/>

 

  6. [   ]    Run the health-check and cluster status commands. The following is an example output of a healthy system running GeoSynchrony 5.0.

VPlexcli:/> health-check

Product Version: 5.0.0.00.00.28

 

Clusters:
---------
Cluster    Cluster  Oper   Health  Connected  Expelled
Name       ID       State  State
---------  -------  -----  ------  ---------  --------
cluster-1  1        ok     ok      True       False

Meta Data:
----------
Cluster    Volume                         Volume       Oper   Health  Active
Name       Name                           Type         State  State
---------  -----------------------------  -----------  -----  ------  ------
cluster-1  meta1                          meta-volume  ok     ok      True
cluster-1  meta1_backup_2011Apr12_040403  meta-volume  ok     ok      False

Front End:
----------
Cluster    Total    Unhealthy  Total       Total  Total     Total
Name       Storage  Storage    Registered  Ports  Exported  ITLs
           Views    Views      Initiators         Volumes
---------  -------  ---------  ----------  -----  --------  -----
cluster-1  2        0          4           16     0         0

Storage:
--------
Cluster    Total    Unhealthy  Total    Unhealthy  Total  Unhealthy  No     Not visible
Name       Storage  Storage    Virtual  Virtual    Dist   Dist       Dual   from
           Volumes  Volumes    Volumes  Volumes    Devs   Devs       Paths  All Dirs
---------  -------  ---------  -------  ---------  -----  ---------  -----  -----------
cluster-1  8669     0          0        0          0      0          0      0

Consistency Groups:
-------------------
Cluster    Total        Unhealthy    Total         Unhealthy
Name       Synchronous  Synchronous  Asynchronous  Asynchronous
           Groups       Groups       Groups        Groups
---------  -----------  -----------  ------------  ------------
cluster-1  0            0            0             0

WAN Connectivity:
-----------------
Cluster  Local        Remote       MTU  Connectivity
Name     Cluster Ips  Cluster Ips
-------  -----------  -----------  ---  ------------

WAN Connectivity information is not available

Cluster Witness:
----------------
Cluster Witness is not configured

 

VPlexcli:/> cluster status

Cluster cluster-1

operational-status: ok

transitioning-indications:

transitioning-progress:

health-state: ok

health-indications:

 

On the back-end switches

  7. [   ]    Remove any old zones used for direct zoning between the LINUX server and the storage array.

  8. [   ]    Zone the back-end ports on the VPLEX directors with the storage-array ports.
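
For example, on a Brocade back-end switch the zoning could be created as shown below. This is only a sketch: the alias names, zone name, configuration name, and the VPLEX back-end port WWN are illustrative and must be replaced with the values for your environment, and the commands differ on other switch vendors. The array port WWN shown matches the example CLARiiON port listed later in this procedure.

alicreate "vplex_A0_FC00", "50:00:14:42:a0:01:36:10"
alicreate "clar_spa_0", "50:06:01:60:44:60:19:f5"
zonecreate "z_vplex_A0_clar_spa_0", "vplex_A0_FC00; clar_spa_0"
cfgadd "backend_cfg", "z_vplex_A0_clar_spa_0"
cfgenable "backend_cfg"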

On the storage array

  9. [   ]    Make the appropriate masking changes on the storage array. For example, if the storage array is a CLARiiON, create a storage group on the CLARiiON, connect the VPLEX to it as an initiator, and add the same LUNs that were previously exposed to the LINUX server.
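
On a CLARiiON this can be done through Unisphere/Navisphere or with naviseccli, as in the following sketch. The storage-group name, host name, SP address, and HLU/ALU numbers are illustrative only, and the VPLEX back-end ports are assumed to already be registered on the array.

naviseccli -h <SP_IP> storagegroup -create -gname VPLEX_SG
naviseccli -h <SP_IP> storagegroup -connecthost -host VPLEX_Cluster1 -gname VPLEX_SG -o
naviseccli -h <SP_IP> storagegroup -addhlu -gname VPLEX_SG -hlu 0 -alu 21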

On the VPLEX management server

10. [   ]    View the storage array in the storage-array context.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-arrays/

 

/clusters/cluster-1/storage-elements/storage-arrays:

Name                         Connectivity  Auto    Ports                Logical
                             Status        Switch                       Unit
                                                                        Count
---------------------------  ------------  ------  -------------------  -------
EMC-CLARiiON-FNM00094200051  ok            true    0x50060160446019f5,  252
                                                   0x50060166446019f5,
                                                   0x50060168446419f5,
                                                   0x5006016f446019f5

 

11. [   ]    Make sure VPLEX can see the LUNs. If the WWN of a CLARiiON LUN is 6006016031111000d4991c2f7d50e011, it will be visible in the storage volume context as shown in this example:

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

 

/clusters/cluster-1/storage-elements/storage-volumes/:

Name                                       VPD83 ID                                   Capacity  Use        Vendor  IO Status  Type         Thin Rebuild
-----------------------------------------  -----------------------------------------  --------  ---------  ------  ---------  -----------  ------------
VPD83T3:6006016031111000d4991c2f7d50e011   VPD83T3:6006016031111000d4991c2f7d50e011   5G        unclaimed  DGC     alive      traditional  false

 

Note:  If the required storage volume is not visible, enter the array re-discover command in the storage-array context.

VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-FNM00094200051

 

VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-FNM00094200051/> array re-discover

WARNING: This command cannot detect LUN-swapping conditions on the array(s) being re-discovered. LUN swapping is the swapping of LUNs on the back-end. This command cannot detect LUN swapping conditions when the number of LUNs remains the same, but the underlying actual logical units change. I/O will not be disrupted on the LUNS that do not change. Continue? (Yes/No) y

 

12. [   ]    Claim the volume with the --appc option, which marks it as application-consistent.

VPlexcli:/> storage-volume claim -d VPD83T3:6006016031111000d4991c2f7d50e011 -n lun_1 --appc

This can be confirmed by checking the Type of the storage-volume, which should display as data-protected.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

 

Name   VPD83 ID                                   Capacity  Use   Vendor  IO Status  Type            Thin Rebuild
-----  -----------------------------------------  --------  ----  ------  ---------  --------------  ------------
lun_1  VPD83T3:6006016031111000d4991c2f7d50e011   5G        used  DGC     alive      data-protected  false

 

13. [   ]    Create a single extent on the entire storage volume. Do not supply the size parameter for the extent.

VPlexcli:/> extent create -d lun_1

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/extents/

 

/clusters/cluster-1/storage-elements/extents:

Name            StorageVolume  Capacity  Use
--------------  -------------  --------  -------
extent_lun_1_1  lun_1          5G        claimed

 

14. [   ]    Create a RAID-0 or RAID-C local-device on the single extent, or a RAID-1 local-device. In the case of RAID-1, use the application-consistent extent as the source leg.

VPlexcli:/> local-device create -g raid-1 -e extent_lun_1_1,extent_lun_2_1 -n dev_lun_1 --source-leg extent_lun_1_1

VPlexcli:/> ls -al /clusters/cluster-1/devices/

 

/clusters/cluster-1/devices:

 

Name       Operational  Health  Block     Block  Capacity  Geometry  Visibility  Transfer  Virtual Volume
           Status       State   Count     Size                                   Size
---------  -----------  ------  --------  -----  --------  --------  ----------  --------  --------------
dev_lun_1  ok           ok      20709376  4K     5G        raid-1    local       -         -

 

In this example, extent_lun_2_1 is another extent at least as large as extent_lun_1_1; it is used here as the mirror leg. This extent must not be application-consistent.

 

15. [   ]    Create a virtual volume on top of the local device.

VPlexcli:/> virtual-volume create -r dev_lun_1

 

VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/

 

/clusters/cluster-1/virtual-volumes:

Name           Operational  Health  Service     Block     Block  Capacity  Locality  Supporting  Cache Mode   Expandable  Consistency
               Status       State   Status      Count     Size                       Device                               Group
-------------  -----------  ------  ----------  --------  -----  --------  --------  ----------  -----------  ----------  -----------
dev_lun_1_vol  ok           ok      unexported  20709376  4K     5G        local     dev_lun_1   synchronous  true        -

 

16. [   ]    Create a new storage view on the VPLEX.

VPlexcli:/> export storage-view create -n LinuxStorageView -p P000000003CA00136-A0-FC00,P000000003CA00136-A0-FC01,P000000003CB00136-B0-FC00,P000000003CB00136-B0-FC01 -c cluster-1

 

On the LINUX server

17. [   ]    Power up the LINUX server so that the server's initiator ports can log in to the switch.

On the front-end switch

18. [   ]    Go to the front-end switch and zone the front-end ports of VPLEX with the LINUX server ports.
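
For example, on a Brocade front-end switch, a zone for one host HBA could be created as shown below. The zone name, configuration name, and VPLEX front-end port WWN are illustrative; the host HBA WWN is the example initiator WWN used later in this procedure.

zonecreate "z_linux_hba0_vplex_fe", "10:00:00:00:c9:5c:61:c0; 50:00:14:42:a0:01:36:00"
cfgadd "frontend_cfg", "z_linux_hba0_vplex_fe"
cfgenable "frontend_cfg"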

On the VPLEX management server

19. [   ]    View unregistered initiator-ports in the initiator-ports context.

VPlexcli:/> cd /clusters/cluster-1/exports/initiator-ports/

 

VPlexcli:/clusters/cluster-1/exports/initiator-ports> ls -al

 

Name                             port-wwn            node-wwn            type  Target Port Names
-------------------------------  ------------------  ------------------  ----  -----------------
UNREGISTERED-0x10000000c95c61c0  0x10000000c95c61c0  0x20000000c95c61c0  -     -
UNREGISTERED-0x10000000c95c61c1  0x10000000c95c61c1  0x20000000c95c61c1  -     -

 

20. [   ]    In the initiator-port context, register the initiators of the LINUX server. Set the Type to one of the following:

•          For LINUX servers, use default or do not specify a type.

•          For HPUX, Solaris, and AIX servers, use hpux, sun-vcs, and aix respectively.

•          For all other servers, use default.

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i LinuxInitiator_1 -p 0x10000000c95c61c0|0x20000000c95c61c0

 

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i LinuxInitiator_2 -p 0x10000000c95c61c1|0x20000000c95c61c1

VPlexcli:/> ll /clusters/cluster-1/exports/initiator-ports/

 

/clusters/cluster-1/exports/initiator-ports:

Name              port-wwn            node-wwn            type     Target Port Names
----------------  ------------------  ------------------  -------  -----------------
LinuxInitiator_1  0x10000000c95c61c0  0x20000000c95c61c0  default  -
LinuxInitiator_2  0x10000000c95c61c1  0x20000000c95c61c1  default  -

 

21. [   ]    Add the initiator ports from the LINUX server to the storage view.

VPlexcli:/> export storage-view addinitiatorport -v LinuxStorageView -i LinuxInitiator_1

 

VPlexcli:/> export storage-view addinitiatorport -v LinuxStorageView -i LinuxInitiator_2

 

22. [   ]    Export the virtual volumes to the storage view, making sure the LINUX server can see only one path for the LU. The following is an example of adding one virtual-volume.

VPlexcli:/> export storage-view addvirtualvolume -v LinuxStorageView -o dev_lun_1_vol

VPlexcli:/> ls -al /clusters/cluster-1/exports/storage-views/LinuxStorageView/

 

/clusters/cluster-1/exports/storage-views/LinuxStorageView:

Name                      Value
------------------------  --------------------------------------------------------------
controller-tag            -
initiators                [LinuxInitiator_1, LinuxInitiator_2]
operational-status        ok
port-name-enabled-status  [P000000003CA00136-A0-FC00,true,ok, P000000003CA00136-A0-FC01,true,ok,
                          P000000003CB00136-B0-FC00,true,ok, P000000003CB00136-B0-FC01,true,ok]
ports                     [P000000003CA00136-A0-FC00, P000000003CA00136-A0-FC01,
                          P000000003CB00136-B0-FC00, P000000003CB00136-B0-FC01]
virtual-volumes           [(0,dev_lun_1_vol,VPD83T3:6000144000000010a001362eb24178d2,5G)]

 

On the LINUX server

23. [   ]    Log in to the LINUX server using PuTTY. Scan for the new devices using LINUX-specific commands.
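
For example, on many LINUX distributions the new devices can be discovered by rescanning the SCSI bus, as in the sketch below. rescan-scsi-bus.sh is provided by the sg3_utils package; the host number in the echo command is illustrative and the command is repeated for each Fibre Channel host adapter; fdisk -l simply verifies that the new devices are visible.

rescan-scsi-bus.sh
echo "- - -" > /sys/class/scsi_host/host0/scan
fdisk -l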

24. [   ]    Create file systems on the LUNs exposed from the CLARiiON storage, and run I/O.
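
For example, assuming the encapsulated LUN appears as /dev/sdb and already contains an ext3 file system (the device name and file-system type are illustrative; only create a new file system on LUNs that do not already hold data):

mkdir -p /mnt/lun_1
mount /dev/sdb /mnt/lun_1
dd if=/dev/zero of=/mnt/lun_1/io_test bs=1M count=100 oflag=direct

The dd command is only a simple way to generate test I/O; replace it with the real application workload.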

On the VPLEX management server

25. [   ]    In VPlexCLI, run the health-check and cluster status commands again. The output for a healthy system is similar to the following.

VPlexcli:/> health-check

Product Version: 5.0.0.00.00.28

 

Clusters:
---------
Cluster    Cluster  Oper   Health  Connected  Expelled
Name       ID       State  State
---------  -------  -----  ------  ---------  --------
cluster-1  1        ok     ok      True       False

Meta Data:
----------
Cluster    Volume                         Volume       Oper   Health  Active
Name       Name                           Type         State  State
---------  -----------------------------  -----------  -----  ------  ------
cluster-1  meta1                          meta-volume  ok     ok      True
cluster-1  meta1_backup_2011Apr12_040403  meta-volume  ok     ok      False

Front End:
----------
Cluster    Total    Unhealthy  Total       Total  Total     Total
Name       Storage  Storage    Registered  Ports  Exported  ITLs
           Views    Views      Initiators         Volumes
---------  -------  ---------  ----------  -----  --------  -----
cluster-1  2        0          4           16     0         0

Storage:
--------
Cluster    Total    Unhealthy  Total    Unhealthy  Total  Unhealthy  No     Not visible
Name       Storage  Storage    Virtual  Virtual    Dist   Dist       Dual   from
           Volumes  Volumes    Volumes  Volumes    Devs   Devs       Paths  All Dirs
---------  -------  ---------  -------  ---------  -----  ---------  -----  -----------
cluster-1  8669     0          0        0          0      0          0      0

Consistency Groups:
-------------------
Cluster    Total        Unhealthy    Total         Unhealthy
Name       Synchronous  Synchronous  Asynchronous  Asynchronous
           Groups       Groups       Groups        Groups
---------  -----------  -----------  ------------  ------------
cluster-1  0            0            0             0

WAN Connectivity:
-----------------
Cluster  Local        Remote       MTU  Connectivity
Name     Cluster Ips  Cluster Ips
-------  -----------  -----------  ---  ------------

WAN Connectivity information is not available

Cluster Witness:
----------------
Cluster Witness is not configured

 

VPlexcli:/> cluster status

Cluster cluster-1

operational-status: ok

transitioning-indications:

transitioning-progress:

health-state: ok

health-indications:

 

Expansion of Encapsulated Virtual-Volumes

If virtual-volume expansion is required for the encapsulated volume, use the following procedure.

1.     Go to the device context of the source virtual-volume (the device created here on the encapsulated disk) and set application-consistent to false.

VPlexcli:/> cd /clusters/cluster-1/devices/dev_lun_1/

 

VPlexcli:/clusters/cluster-1/devices/dev_lun_1> set application-consistent false

 

2.     Expand the source virtual-volume with an extent or local-device that contains no data. Any data on the target extent/local-device is lost after the expansion.

VPlexcli:/> virtual-volume expand dev_lun_1_vol/ -e extent_target_sv_1/

 

3.     Now you should be able to see the expanded volume from the host, with the data on the source intact.
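
For example, on the LINUX host the new size can typically be picked up by rescanning the device and, where applicable, growing the file system. The device name and file-system type below are illustrative; resize2fs applies only to ext file systems created directly on the device, and other file systems or volume managers have their own grow procedures.

echo 1 > /sys/block/sdb/device/rescan
fdisk -l /dev/sdb
resize2fs /dev/sdb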