Encapsulate arrays on ESX with boot from SAN

 

Topic

Customer Procedures

Selections

Procedures: Provision

Provisioning procedures: Encapsulate arrays on ESX with boot from SAN

 

 

Contents

Encapsulating an ESX SAN Boot Disk

Assumptions

Procedure

Expose other LUNs to the ESX host

Expansion of Encapsulated Virtual-Volumes

 


 

Encapsulating an ESX SAN Boot Disk

This procedure describes the task of encapsulating a SAN boot disk through VPLEX in a non-virtualized environment.

Assumptions

    The ESX host is running with a SAN booted disk connected directly (or through a switch) to the storage-array.

    At least one virtual machine is running I/Os on the other disks exposed to the ESX host.

    VPLEX must be commissioned.

    One new switch (pair of switches if high availability is required) is available for use as a front-end switch.

Note:  This document can also be used for hosts other than ESX; the difference is that virtual machines do not apply to those hosts. Instead, create file systems on the disks exposed to them (a brief example follows).
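For example, on a Linux host the equivalent of running I/Os from a virtual machine could be a file system created and mounted on one of the disks. This is a minimal, hedged sketch; the device name and mount point are placeholders that depend on the host:

# Placeholder device, file system type, and mount point; adjust for the actual host.
mkfs.ext3 /dev/sdb
mkdir -p /mnt/encap_test
mount /dev/sdb /mnt/encap_test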

Procedure

  1. [   ]    Power down the virtual machines on the host by shutting down the operating systems on the virtual machines.
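If the virtual machines are shut down from the ESX service console rather than from the vCenter/vSphere client, a hedged example using the classic ESX vmware-cmd tool follows; the .vmx path is a placeholder:

# List the registered virtual machines, then request a guest shutdown for each one.
vmware-cmd -l
vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx stop trysoft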

  2. [   ]    Shut down the ESX host using the command shutdown -h now.

  3. [   ]    Remove the direct connection between the ESX server and the storage array. If there is a switch between the two, remove the zones and disconnect the ESX server from the switch.

  4. [   ]    Remove the LUNs from the storage group on the storage array.
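On a CLARiiON array, for example, the LUNs can be removed with naviseccli. This is a hedged sketch; the SP address, storage-group name, and HLU number are placeholders for the host's existing storage group:

# Remove a host LUN (HLU) from the ESX host's original storage group.
naviseccli -h <SP-A-IP> storagegroup -removehlu -gname ESX_SG -hlu 0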

 

On the VPLEX management console

WARNING:    When allocating LUNs to a VPLEX from a storage array that is already being actively used by the VPLEX, no more than 10 LUNs should be allocated at a time. After a set of no more than 10 LUNs have been allocated, the VPLEX should be checked to confirm that all 10 have been discovered before the next set is allocated. Attempting to allocate more than 10 LUNs at one time, or in rapid succession, can cause VPLEX to treat the array as if it were faulted. This precaution does not need to be followed when the array is initially introduced to the VPLEX, before it is an active target of I/O.

  5. [   ]    Log in to the VPLEX management server by typing its IP address in PuTTY. The default password is Mi@Dim7T.

login as: service

Using keyboard-interactive authentication.

Password:

service@vplexname:~>

 

  6. [   ]    Log in to VPlexCLI by entering vplexcli and providing the VPlexCLI username and password.

service@vplexname:~> vplexcli

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

 

Enter User Name: service

 

Password:

creating logfile:/var/log/VPlex/cli/session.log_service_localhost_T01234_20110525063015

 

VPlexcli:/>

 

  7. [   ]    Run the health-check and cluster status commands. The following is an example output of a healthy system running GeoSynchrony 5.0.

VPlexcli:/> health-check
Product Version: 5.0.0.00.00.28

Clusters:
---------
Cluster    Cluster  Oper   Health  Connected  Expelled
Name       ID       State  State
---------  -------  -----  ------  ---------  --------
cluster-1  1        ok     ok      True       False

Meta Data:
----------
Cluster    Volume                         Volume       Oper   Health  Active
Name       Name                           Type         State  State
---------  -----------------------------  -----------  -----  ------  ------
cluster-1  meta1                          meta-volume  ok     ok      True
cluster-1  meta1_backup_2011Apr12_040403  meta-volume  ok     ok      False

Front End:
----------
Cluster    Total    Unhealthy  Total       Total  Total     Total
Name       Storage  Storage    Registered  Ports  Exported  ITLs
           Views    Views      Initiators         Volumes
---------  -------  ---------  ----------  -----  --------  -----
cluster-1  2        0          4           16     0         0

Storage:
--------
Cluster    Total    Unhealthy  Total    Unhealthy  Total  Unhealthy  No     Not visible
Name       Storage  Storage    Virtual  Virtual    Dist   Dist       Dual   from
           Volumes  Volumes    Volumes  Volumes    Devs   Devs       Paths  All Dirs
---------  -------  ---------  -------  ---------  -----  ---------  -----  -----------
cluster-1  8669     0          0        0          0      0          0      0

Consistency Groups:
-------------------
Cluster    Total        Unhealthy    Total         Unhealthy
Name       Synchronous  Synchronous  Asynchronous  Asynchronous
           Groups       Groups       Groups        Groups
---------  -----------  -----------  ------------  ------------
cluster-1  0            0            0             0

WAN Connectivity:
-----------------
Cluster  Local        Remote       MTU  Connectivity
Name     Cluster Ips  Cluster Ips
-------  -----------  -----------  ---  ------------

WAN Connectivity information is not available

Cluster Witness:
----------------
Cluster Witness is not configured

VPlexcli:/> cluster status
Cluster cluster-1
  operational-status:        ok
  transitioning-indications:
  transitioning-progress:
  health-state:              ok
  health-indications:

 

On the back-end switches

  8. [   ]    Remove any old zones from direct zoning between the host and storage-array.

  9. [   ]    Zone the back-end ports on VPLEX directors with the storage-array ports.
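For example, on a Brocade Fibre Channel switch the zoning could look like the following hedged sketch, where the zone name, configuration name, and WWNs are placeholders; zonecreate defines the zone, cfgadd adds it to the configuration, and cfgsave/cfgenable commit and activate it. The same approach applies later when zoning the VPLEX front-end ports to the host ports:

zonecreate "vplex_be_dirA_spa", "50:00:14:42:60:xx:xx:xx; 50:06:01:60:44:60:19:f5"
cfgadd "prod_cfg", "vplex_be_dirA_spa"
cfgsave
cfgenable "prod_cfg"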

On the storage array

10. [   ]    Make appropriate masking changes on the storage array. For example, if you are using a CLARiiON storage array (a hedged CLI sketch follows this list):

        1.   Create a storage group on CLARiiON.

        2.   Connect VPLEX as an initiator to the storage group.

        3.   Add the LUNs that were previously exposed directly to the server.
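A hedged naviseccli sketch of these three sub-steps; the SP address, storage-group name, VPLEX back-end port WWN, SP port, and LUN numbers are all placeholders, and the VPLEX initiators can equally be registered through Unisphere/Navisphere Manager:

# Create the storage group for VPLEX.
naviseccli -h <SP-A-IP> storagegroup -create -gname VPLEX_SG
# Register/connect one VPLEX back-end port to the storage group (repeat per port).
naviseccli -h <SP-A-IP> storagegroup -setpath -gname VPLEX_SG -hbauid <vplex-be-port-wwn> -sp a -spport 0
# Add a LUN that was previously presented to the host (ALU = array LUN, HLU = host LUN).
naviseccli -h <SP-A-IP> storagegroup -addhlu -gname VPLEX_SG -alu 79 -hlu 0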

On the VPLEX management server

11. [   ]    In the VPlexCLI, in the storage-array context, use the ls command to see the storage array.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-arrays/

/clusters/cluster-1/storage-elements/storage-arrays:

Name                         Connectivity  Auto    Ports                Logical
---------------------------  Status        Switch  -------------------  Unit
---------------------------  ------------  ------  -------------------  Count
---------------------------  ------------  ------  -------------------  -------
EMC-CLARiiON-FNM00094200051  ok            true    0x50060160446019f5,  252
                                                   0x50060166446019f5,
                                                   0x50060168446419f5,
                                                   0x5006016f446019f5

 

12. [   ]    Make sure VPLEX can see the LUNs. If the WWN of a CLARiiON LUN is 6006016031111000d4991c2f7d50e011, it will be visible in the storage volume context as shown in this example:

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

/clusters/cluster-1/storage-elements/storage-volumes:

Name                                      VPD83 ID                                  Capacity  Use        Vendor  IO Status  Type         Thin Rebuild
----------------------------------------  ----------------------------------------  --------  ---------  ------  ---------  -----------  ------------
VPD83T3:6006016031111000d4991c2f7d50e011  VPD83T3:6006016031111000d4991c2f7d50e011  5G        unclaimed  DGC     alive      traditional  false

 

Note:  If the required storage volume or storage array is not visible, enter the array re-discover command in the storage-array context.

VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-FNM00094200051

 

VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-FNM00094200051/> array re-discover

WARNING: This command cannot detect LUN-swapping conditions on the array(s) being re-discovered. LUN swapping is the swapping of LUNs on the back-end. This command cannot detect LUN swapping conditions when the number of LUNs remains the same, but the underlying actual logical units change. I/O will not be disrupted on the LUNS that do not change. Continue? (Yes/No) y

 

13. [   ]    Claim the storage volume with the --appc option, which marks it as application-consistent.

VPlexcli:/> storage-volume claim -d VPD83T3:6006016031111000d4991c2f7d50e011 -n lun_79g_sanboot --appc

 

This can be confirmed by checking that the Type of the storage volume is data-protected.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

Name             VPD83 ID                                  Capacity  Use   Vendor  IO Status  Type            Thin Rebuild
---------------  ----------------------------------------  --------  ----  ------  ---------  --------------  ------------
lun_79g_sanboot  VPD83T3:6006016031111000d4991c2f7d50e011  79G       used  DGC     alive      data-protected  false

14. [   ]    Create a single extent on the entire storage volume. Do not enter a size parameter for the extent.

VPlexcli:/> extent create -d lun_79g_sanboot

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/extents/

/clusters/cluster-1/storage-elements/extents:

Name                      StorageVolume    Capacity  Use
------------------------  ---------------  --------  -------
extent_lun_79g_sanboot_1  lun_79g_sanboot  79G       claimed

15. [   ]    Create a local RAID-0, RAID-1, or RAID-c device on the extent. For RAID-1, specify the encapsulated extent as the source leg with --source-leg.

VPlexcli:/> local-device create -g raid-1 -e extent_lun_79g_sanboot_1, extent_ext_for_mirroring_1 -n sanboot_dev_79g --source-leg extent_lun_79g_sanboot_1

VPlexcli:/> ls -al /clusters/cluster-1/devices/

/clusters/cluster-1/devices:

Name             Operational  Health  Block     Block  Capacity  Geometry  Visibility  Transfer  Virtual Volume
---------------  Status       State   Count     Size   --------  --------  ----------  Size      --------------
---------------  -----------  ------  --------  -----  --------  --------  ----------  --------  --------------
sanboot_dev_79g  ok           ok      20709376  4K     79G       raid-1    local       -         -

 

Here, extent_ext_for_mirroring_1 is another extent, at least as large as the SAN boot LUN, that is used as the mirror leg. This extent must not be application-consistent.
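After the RAID-1 device is created, VPLEX starts copying the source leg onto the mirror leg. Progress can be checked with the rebuild status command (a hedged example; the exact output format depends on the GeoSynchrony release):

VPlexcli:/> rebuild status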

16. [   ]    Create a virtual volume on top of the local device.

VPlexcli:/> virtual-volume create -r sanboot_dev_79g

VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/

/clusters/cluster-1/virtual-volumes:

Name                 Operational  Health  Service     Block     Block  Capacity  Locality  Supporting       Cache Mode   Expandable  Consistency
-------------------  Status       State   Status      Count     Size   --------  --------  Device           -----------  ----------  Group
-------------------  -----------  ------  ----------  --------  -----  --------  --------  ---------------  -----------  ----------  -----------
sanboot_dev_79g_vol  ok           ok      unexported  20709376  4K     79G       local     sanboot_dev_79g  synchronous  true        -

 

17. [   ]    Create a new storage view on VPLEX.

VPlexcli:/> export storage-view create -n EsxStorageView -p P000000003CA00136-A0-FC00 -c cluster-1

 

On the host

18. [   ]    Power up the host, so that host ports can log in to the switch.

At the front-end switch

19. [   ]    Zone the front-end ports of VPLEX with the host ports.

20. [   ]    In VPlexCLI, in the initiator-ports context, use the ls command to see unregistered initiator ports.

VPlexcli:/> cd /clusters/cluster-1/exports/initiator-ports/

VPlexcli:/clusters/cluster-1/exports/initiator-ports> ls -al

Name                             port-wwn            node-wwn            type  Target Port Names
-------------------------------  ------------------  ------------------  ----  -----------------
UNREGISTERED-0x10000000c95c61c0  0x10000000c95c61c0  0x20000000c95c61c0  -     -
UNREGISTERED-0x10000000c95c61c1  0x10000000c95c61c1  0x20000000c95c61c1  -     -

On the VPLEX management console

21. [   ]    In the initiator-ports context, register the initiators of the host. Set the Type as follows:

    For ESX hosts, use default or omit the Type parameter.

    For HP-UX, Solaris, and AIX hosts, use hpux, sun-vcs, and aix respectively.

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i EsxInitiator_1 -p 0x10000000c95c61c0|0x20000000c95c61c0

 

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i EsxInitiator_2 -p 0x10000000c95c61c1|0x20000000c95c61c1

VPlexcli:/> ll /clusters/cluster-1/exports/initiator-ports/

/clusters/cluster-1/exports/initiator-ports:

Name            port-wwn            node-wwn            type     Target Port Names
--------------  ------------------  ------------------  -------  -----------------
EsxInitiator_1  0x10000000c95c61c0  0x20000000c95c61c0  default  -
EsxInitiator_2  0x10000000c95c61c1  0x20000000c95c61c1  default  -

22. [   ]    Add an initiator port from the host to the storage view.

VPlexcli:/> export storage-view addinitiatorport -v EsxStorageView -i EsxInitiator_1

23. [   ]    Export the virtual volume to the storage view, making sure the host can see only one path for the LUN.

VPlexcli:/> export storage-view addvirtualvolume -v EsxStorageView -o sanboot_dev_79g_vol

On the ESX host

24. [   ]    In the HBA BIOS settings, change the boot LUN WWN to that of the VPLEX front-end port. The host now sees the LUN as an Invista LUN instead of a storage-array LUN.

25. [   ]    Boot the host with these new settings. Because the LUN has the operating system on it, the machine boots from the VPLEX LUN.

Note:  If the boot process gets stuck at vmd-mount, follow the process given in VMware KB article 1012874 or 1012142 (try them in that order, as the output of the former may lead you to the latter).

On the VPLEX management server

26. [   ]    After the ESX host boots, add the other initiator port for the ESX host and the remaining VPLEX front-end ports to the storage view. This adds more paths from VPLEX to the ESX host as required for high availability.

VPlexcli:/> export storage-view addinitiatorport -v EsxStorageView -i EsxInitiator_2

 

VPlexcli:/> export storage-view addport -v EsxStorageView -p P000000003CA00136-A0-FC01, P000000003CB00136-B0-FC00, P000000003CB00136-B0-FC01

VPlexcli:/> ls -al /clusters/cluster-1/exports/storage-views/EsxStorageView/

 

/clusters/cluster-1/exports/storage-views/EsxStorageView:

Name                      Value
------------------------  ----------------------------------------------------------------------
controller-tag            -
initiators                [EsxInitiator_1, EsxInitiator_2]
operational-status        ok
port-name-enabled-status  [P000000003CA00136-A0-FC00,true,ok, P000000003CA00136-A0-FC01,true,ok, P000000003CB00136-B0-FC00,true,ok, P000000003CB00136-B0-FC01,true,ok]
ports                     [P000000003CA00136-A0-FC00, P000000003CA00136-A0-FC01, P000000003CB00136-B0-FC00, P000000003CB00136-B0-FC01]
virtual-volumes           [(0,sanboot_dev_79g_vol,VPD83T3:6000144000000010a001362eb24178d2,79G)]

On the host

27. [   ]    To enable high availability for the boot LUN, reboot the ESX host.

28. [   ]    While the host is rebooting, in the HBA BIOS settings, add the other paths to the same LUN as secondary boot LUNs.

Expose other LUNs to the ESX host

On the VPLEX management server

To expose any other LUNs to the ESX host, follow these steps:


  1. [   ]    For each LUN to be exposed, repeat the encapsulation steps shown in steps 13 through 16 of the procedure above: claim the storage volume (with --appc if it already contains data), create a single extent on the entire storage volume, create a local device on the extent, and create a virtual volume on top of the device.

  2. [   ]    Add each new virtual volume to the existing storage view, using the export storage-view addvirtualvolume command as shown in step 23 of the procedure above.

On the host

  3. [   ]    Log in to the vCenter/vSphere client used for managing the ESX host.

  4. [   ]    In the Configuration tab for the host, under Storage, the LUN should be visible as a datastore.

  5. [   ]    In the Configuration tab for the host, under Storage Adapters, check the status of the paths for the Fibre Channel host adapters. They should show as active.

  6. [   ]    If the virtual machine is not yet present in the inventory, perform the following steps:

        1.   Right-click the datastore.

        2.   Select Browse Datastore.

        3.   Right-click the required virtual machines.

        4.   Select Add to Inventory.

        5.   Complete the process by answering I moved it at the vCenter prompt.

  7. [   ]    In the left pane of the vCenter/vSphere client, right-click the required virtual machines and power them on, then run I/Os from the virtual machines (a command-line sketch of these checks follows).
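If the ESX service console is preferred over the vSphere client, a hedged equivalent using the classic ESX command-line tools follows; the datastore and .vmx path are examples only, and the exact tool set varies by ESX release:

# List VMFS datastores and the LUNs backing them, as seen by the host.
esxcfg-scsidevs -m
# List the paths for the Fibre Channel adapters.
esxcfg-mpath -l
# Power on a virtual machine by its .vmx path.
vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx start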

On the host

  8. [   ]    Reboot the ESX host and confirm that the SAN boot works and that I/Os run correctly on the other devices.

On the VPLEX management server

  9. [   ]    In the VPlexCLI, run the health-check and cluster status commands again. The output should match the healthy-system output shown in step 7 of the procedure above.


 

Expansion of Encapsulated Virtual-Volumes

If virtual-volume expansion is required for the encapsulated volume, follow this procedure.

1.     Go to the device context of the source virtual volume (the device created on the encapsulated disk) and set application-consistent to false.

VPlexcli:/> cd /clusters/cluster-1/devices/dev_lun_1/

 

VPlexcli:/clusters/cluster-1/devices/dev_lun_1> set application-consistent false

 

2.     Expand the source virtual volume with an extent or local device that has no data on it. If the target extent or local device contains data, that data is lost after expansion.

VPlexcli:/> virtual-volume expand dev_lun_1_vol/ -e extent_target_sv_1/

 

3.     The expanded volume should now be visible from the host, with the data on the source intact. If the new capacity does not appear immediately, rescan the storage adapters on the host (a hedged example follows).
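The rescan can be done from the vSphere client (Configuration > Storage Adapters > Rescan) or from the ESX service console. A minimal sketch, where vmhba1 is a placeholder adapter name:

# Rescan one HBA for new or resized LUNs.
esxcfg-rescan vmhba1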