Encapsulate arrays on AIX without boot from SAN

 

Topic

Customer Procedures

Selections

Procedures: Provision

Provisioning procedures: Encapsulate arrays on AIX without boot from SAN

 

 

Contents

Encapsulate LUN on AIX and expand encapsulated virtual volume

Configuration tested

Assumptions

Procedure

Expansion of Encapsulated Virtual-Volumes

 


 

Encapsulate LUN on AIX and expand encapsulated virtual volume

This procedure describes the task of encapsulating a data LUN (other than SAN boot disk) through VPLEX in a non-virtualized environment.

Configuration tested

    Operating system: AIX

    VIO Server version: 6.1.4.0

    vSCSI client version: 6.1

    EMC PowerPath version: 5.3 SP 1 (build 84)

    VPLEX version: 5.0.1.00.00.07

Note:  VPLEX requires that back-end devices designated for encapsulation have a capacity that is evenly divisible by the 4 KB block size. Data at the end of an encapsulated disk whose capacity is not evenly divisible by 4 KB may be lost through virtualization, and migrations from such devices are incomplete. Expand any non-conformant volume on the back-end array to make it VPLEX-conformant before encapsulating it.
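
For example, you can confirm conformance with simple shell arithmetic (a hedged sketch; the byte count shown is illustrative only, and you would substitute the exact LUN size reported by your array):

# A LUN is VPLEX-conformant when its size in bytes is an exact multiple of 4096.
CAPACITY_BYTES=53687091200    # example value only; use the real LUN size in bytes
if [ $((CAPACITY_BYTES % 4096)) -eq 0 ]; then
    echo "Capacity is 4 KB aligned - safe to encapsulate"
else
    echo "Capacity is NOT 4 KB aligned - expand the volume on the array first"
fi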

Assumptions

    The AIX vSCSI client has a data disk (that is, user data and not the operating system) served from a VIO server that is connected directly (or through a switch) to the storage array.

    VPLEX must be commissioned.

    One new switch (or a pair of switches, if high availability is required) is available for use as a front-end switch.

Procedure

  1. [   ]    Perform a graceful shutdown of the VIO client, and then the VIO server.

  2. [   ]    Remove the direct connection between the AIX VIO server and the storage array. If there is a switch between the two, remove zones and disconnect the AIX VIO server.

  3. [   ]    Remove the LUNs from the storage group on the storage array.
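
For example, on a CLARiiON array this could be done with naviseccli (a hedged sketch; the SP address, storage-group name, and host LUN number are placeholders for your environment):

# Remove host LUN (HLU) 5 from the storage group that presents it to the VIO server
naviseccli -h <SP_A_IP> storagegroup -removehlu -gname VIO_Server_SG -hlu 5 -o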

On the VPLEX management server

WARNING:    When allocating LUNs to a VPLEX from a storage array that is already being actively used by the VPLEX, no more than 10 LUNs should be allocated at a time. After a set of no more than 10 LUNs have been allocated, the VPLEX should be checked to confirm that all 10 have been discovered before the next set is allocated. Attempting to allocate more than 10 LUNs at one time, or in rapid succession, can cause VPLEX to treat the array as if it were faulted. This precaution does not need to be followed when the array is initially introduced to the VPLEX, before it is an active target of I/O.

  4. [   ]    Log in to the VPLEX management server by opening an SSH session (for example, with PuTTY) to its IP address. The default password is Mi@Dim7T.

login as: service

Using keyboard-interactive authentication.

Password:

service@vplexname:~>

 

  5. [   ]    Log in to VPlexCLI by entering vplexcli and providing the VPlexCLI username and password.

service@vplexname:~> vplexcli

Trying 127.0.0.1...

Connected to localhost.

Escape character is '^]'.

 

Enter User Name: service

 

Password:

creating logfile:/var/log/VPlex/cli/session.log_service_localhost_T01234_20110525063015

 

VPlexcli:/>

 

  6. [   ]    Run the health-check and cluster status commands. The following is an example output of a healthy system running GeoSynchrony 5.0.

VPlexcli:/> health-check

Product Version: 5.0.0.00.00.28

 

Clusters:

---------

Cluster    Cluster  Oper   Health  Connected  Expelled
Name       ID       State  State
---------  -------  -----  ------  ---------  --------
cluster-1  1        ok     ok      True       False

 

Meta Data:

----------

Cluster    Volume                         Volume       Oper   Health  Active
Name       Name                           Type         State  State
---------  -----------------------------  -----------  -----  ------  ------
cluster-1  meta1                          meta-volume  ok     ok      True
cluster-1  meta1_backup_2011Apr12_040403  meta-volume  ok     ok      False

 

Front End:

----------

Cluster    Total    Unhealthy  Total       Total  Total     Total
Name       Storage  Storage    Registered  Ports  Exported  ITLs
           Views    Views      Initiators         Volumes
---------  -------  ---------  ----------  -----  --------  -----
cluster-1  2        0          4           16     0         0

 

Storage:

--------

Cluster    Total    Unhealthy  Total    Unhealthy  Total  Unhealthy  No     Not visible
Name       Storage  Storage    Virtual  Virtual    Dist   Dist       Dual   from
           Volumes  Volumes    Volumes  Volumes    Devs   Devs       Paths  All Dirs
---------  -------  ---------  -------  ---------  -----  ---------  -----  -----------
cluster-1  8669     0          0        0          0      0          0      0

 

Consistency Groups:

-------------------

Cluster    Total        Unhealthy    Total         Unhealthy
Name       Synchronous  Synchronous  Asynchronous  Asynchronous
           Groups       Groups       Groups        Groups
---------  -----------  -----------  ------------  ------------
cluster-1  0            0            0             0

 

WAN Connectivity:

-----------------

Cluster  Local        Remote       MTU  Connectivity
Name     Cluster Ips  Cluster Ips
-------  -----------  -----------  ---  ------------

 

WAN Connectivity information is not available

Cluster Witness:

----------------

Cluster Witness is not configured

 

VPlexcli:/> cluster status

Cluster cluster-1

operational-status: ok

transitioning-indications:

transitioning-progress:

health-state: ok

health-indications:

 

On the back-end switches

  7. [   ]    Remove any old zones from direct zoning between the host and storage array.

  8. [   ]    Zone the storage-array ports and VPLEX back-end ports.
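
For example, on a Brocade switch the back-end zoning might look like the following (a hedged sketch; the zone and configuration names and the WWPNs are placeholders, and other switch vendors use different commands):

# Zone one VPLEX back-end port with one storage-array port, add the zone to the
# active configuration, and enable it
zonecreate "vplex_be_to_array_1", "50:00:14:42:xx:xx:xx:xx; 50:06:01:60:xx:xx:xx:xx"
cfgadd "prod_cfg", "vplex_be_to_array_1"
cfgsave
cfgenable "prod_cfg"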

On the storage array

  9. [   ]    Make the appropriate masking changes on the storage array. For example, if you are using a CLARiiON storage array (a command-line sketch follows these sub-steps):

        1.   Create a storage group on CLARiiON.

        2.   Connect VPLEX as an initiator to the storage group.

        3.   Add the data LUN that was exposed to the VIO Server directly.

        4.   Write down the LUN unique ID.
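
The CLARiiON sub-steps above could be performed with naviseccli roughly as follows (a hedged sketch; the SP address, storage-group name, and LUN numbers are placeholders, and it is assumed that the VPLEX back-end initiators are already registered on the array):

# 1. Create the storage group
naviseccli -h <SP_A_IP> storagegroup -create -gname VPLEX_SG

# 2. Connect the registered VPLEX back-end initiators to the storage group
#    (initiator registration itself is done per WWN in Unisphere/Navisphere or the CLI)

# 3. Add the data LUN (array LUN 25 presented as host LUN 0 in this example)
naviseccli -h <SP_A_IP> storagegroup -addhlu -gname VPLEX_SG -hlu 0 -alu 25

# 4. Record the LUN unique ID (UID)
naviseccli -h <SP_A_IP> getlun 25 -uid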

On the VPLEX management server

10. [   ]    In the storage-arrays context, view the storage arrays.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-arrays/

 

/clusters/cluster-1/storage-elements/storage-arrays:

Name                         Connectivity  Auto    Ports                Logical
                             Status        Switch                       Unit
                                                                        Count
---------------------------  ------------  ------  -------------------  -------
EMC-CLARiiON-HK190807370012  ok            true    0x5006016041e05545,  20
                                                    0x5006016141e05545,
                                                    0x5006016841e05545,
                                                    0x5006016941e05545
EMC-SYMMETRIX-192601422      ok            -       0x50000972081639dc,  774
                                                    0x50000972081639dd
EMC-SYMMETRIX-192602021      ok            -       0x50000972081f9518,  202
                                                    0x50000972081f9519,
                                                    0x50000972081f95d8,
                                                    0x50000972081f95d9

 

11. [   ]    To ensure that VPLEX can see the data volume, use the ls command in the storage-volumes context. You should see a storage volume with the same LUN unique ID that you wrote down in step 9, sub-step 4. In this example, the LUN unique ID is 60000970000192601422533032413931.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

/clusters/cluster-1/storage-elements/storage-volumes/:

Name                                       VPD83 ID                                   Capacity  Use        Vendor  IO Status  Type         Thin Rebuild
-----------------------------------------  -----------------------------------------  --------  ---------  ------  ---------  -----------  ------------
VPD83T3:60000970000192601422533032413931   VPD83T3:60000970000192601422533032413931   25        unclaimed  DGC     alive      traditional  false

 

12. [   ]    If the storage volume is not visible, in the storage-array context, enter the command array re-discover.

VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-HK190807370012/

VPlexcli:/clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-HK190807370012/> array re-discover

WARNING: This command cannot detect LUN-swapping conditions on the

array(s) being re-discovered. LUN swapping is the swapping of LUNs on the

back-end. This command cannot detect LUN swapping conditions when the

number of LUNs remains the same, but the underlying actual logical units

change. I/O will not be disrupted on the LUNS that do not change.

Continue? (Yes/No) y

 

13. [   ]    Claim the data volume with the --appc flag, which marks it as application-consistent.

VPlexcli:/> storage-volume claim -d VPD83T3:60000970000192601422533032413931 -n Data_Volume --appc

 

14. [   ]    This can be confirmed by checking the Type of the storage volume, which should display as data-protected.

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/storage-volumes/

Name         VPD83 ID                                  Capacity  Use   Vendor  IO Status  Type            Thin Rebuild
-----------  ----------------------------------------  --------  ----  ------  ---------  --------------  ------------
Data_Volume  VPD83T3:60000970000192601422533032413931  25G       used  DGC     alive      data-protected  false

 

15. [   ]    Create a single extent on the entire storage volume. Do not include the size parameter for the extent.

VPlexcli:/> extent create -d Data_Volume

VPlexcli:/> ls -al /clusters/cluster-1/storage-elements/extents/

/clusters/cluster-1/storage-elements/extents:

Name                  StorageVolume  Capacity  Use
--------------------  -------------  --------  -------
extent_Data_Volume_1  Data_Volume    25G       claimed

 

16. [   ]    Create a RAID-0 or RAID-C local device on the single extent, or create a RAID-1 local device.

VPlexcli:/clusters/cluster-1/storage-elements/extents> local-device create -g raid-0 -d 1 -e extent_Data_Volume_1 -n Data_Volume_Device

 

VPlexcli:/> ls -al /clusters/cluster-1/devices/

/clusters/cluster-1/devices:

Name                Operational  Health  Block     Block  Capacity  Geometry  Visibility  Transfer  Virtual Volume
                    Status       State   Count     Size                                   Size
------------------  -----------  ------  --------  -----  --------  --------  ----------  --------  --------------
Data_Volume_Device  ok           ok      20709376  4K     25G       raid-0    local       -         -

 

Example of creating a RAID-1 local device: In this case, put the application-consistent extent as the source leg.

VPlexcli:/> local-device create -g raid-1 -e extent_Data_Volume_1,extent_ext_for_mirroring_1 -n Data_Volume_Device --source-leg extent_Data_Volume_1

VPlexcli:/> ls -al /clusters/cluster-1/devices/

/clusters/cluster-1/devices:

Name                Operational  Health  Block     Block  Capacity  Geometry  Visibility  Transfer  Virtual Volume
                    Status       State   Count     Size                                   Size
------------------  -----------  ------  --------  -----  --------  --------  ----------  --------  --------------
Data_Volume_Device  ok           ok      20709376  4K     25G       raid-1    local

 

In this example, extent_ext_for_mirroring_1 is another extent that is at least the same size as extent_Data_Volume_1; it is used as the mirror leg. This extent cannot be application-consistent.

17. [   ]    Create a virtual volume on the local device.

VPlexcli:/> virtual-volume create -r Data_Volume_Device

VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/

/clusters/cluster-1/virtual-volumes:

Name                    Operational  Health  Service     Block     Block  Capacity  Locality  Supporting          Cache Mode   Expandable  Consistency
                        Status       State   Status      Count     Size                       Device                                       Group
----------------------  -----------  ------  ----------  --------  -----  --------  --------  ------------------  -----------  ----------  -----------
Data_Volume_Device_vol  ok           ok      unexported  20709376  4K     25G       local     Data_Volume_Device  synchronous  true

 

18. [   ]    Create a new storage view on VPLEX. For high availability purposes, add all front-end ports of VPLEX.

VPlexcli:/> export storage-view create -n AIX_Test -p P000000003CA00136-A0-FC00, P000000003CA00136-A0-FC01, P000000003CB00136-B0-FC00, P000000003CB00136-B0-FC01 -c cluster-1

 

On the VIO Server

19. [   ]    Power up the AIX VIO Server, so that server ports can log in to the switch.

At the front-end switch

20. [   ]    Zone the front-end ports of VPLEX with the VIO Server ports. (You can use two switches for high availability.)

On the VPLEX management server

21. [   ]    In VPlexCLI, in the initiator-ports context, use the ls command to see unregistered initiator ports.

VPlexcli:/> cd /clusters/cluster-1/exports/initiator-ports/

VPlexcli:/clusters/cluster-1/exports/initiator-ports> ls -al

Name                             port-wwn            node-wwn            type  Target Port Names
-------------------------------  ------------------  ------------------  ----  -----------------
UNREGISTERED-0x10000000c95c61c0  0x10000000c95c61c0  0x20000000c95c61c0  -     -
UNREGISTERED-0x10000000c95c61c1  0x10000000c95c61c1  0x20000000c95c61c1  -     -

22. [   ]    In the initiator-ports context, register the initiators of the host. Set the type as follows:

For ESX hosts, use default, or do not supply the type parameter.

For HP-UX, Solaris, and AIX hosts, use hpux, sun-vcs, and aix, respectively.

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i AIX_Initiator_1 -p 0x10000000c95c61c0|0x20000000c95c61c0 -t aix

VPlexcli:/clusters/cluster-1/exports/initiator-ports> register -i AIX_Initiator_2 -p 0x10000000c95c61c1|0x20000000c95c61c1 -t aix

 

VPlexcli:/> ll /clusters/cluster-1/exports/initiator-ports/

/clusters/cluster-1/exports/initiator-ports:

Name             port-wwn            node-wwn            type  Target Port Names
---------------  ------------------  ------------------  ----  -------------------------
AIX_Initiator_1  0x10000000c95c61c0  0x20000000c95c61c0  aix   P000000003CA00136-A0-FC00
AIX_Initiator_2  0x10000000c95c61c1  0x20000000c95c61c1  aix

 

23. [   ]    For high availability purposes, add both the initiator ports from the AIX VIO server to the storage view.

VPlexcli:/> export storage-view addinitiatorport -v AIX_Test -i AIX_Initiator_1

VPlexcli:/> export storage-view addinitiatorport -v AIX_Test -i AIX_Initiator_2

 

24. [   ]    Export the virtual volume to the storage view, making sure that the host can see all of the paths for the LUN.

VPlexcli:/> export storage-view addvirtualvolume -v AIX_Test -o Data_Volume_Device_vol

 

VPlexcli:/> ls -al /clusters/cluster-1/exports/storage-views/AIX_Test

/clusters/cluster-1/exports/storage-views/AIX_Test:

Name                      Value
------------------------  ------------------------------------------------------------
controller-tag            -
initiators                [AIX_Initiator_1, AIX_Initiator_2]
operational-status        ok
port-name-enabled-status  [P000000003CA00136-A0-FC00,true,ok,
                           P000000003CA00136-A0-FC01,true,ok,
                           P000000003CB00136-B0-FC00,true,ok,
                           P000000003CB00136-B0-FC01,true,ok]
ports                     [P000000003CA00136-A0-FC00, P000000003CA00136-A0-FC01,
                           P000000003CB00136-B0-FC00, P000000003CB00136-B0-FC01]
virtual-volumes           (0, Data_Volume_Device_vol,
                           VPD83T3:60000970000192601422533032413931, 25G)
                          (1 total)

 

Note:  Write down the VPD ID for the virtual volume here.

On the VIO Server

25. [   ]    Log in to the VIO Server and run either the AIX native cfgmgr command or the EMC-provided emc_cfgmgr command.

The emc_cfgmgr script is usually located under the /usr/lpp/EMC directory. This command detects new devices assigned to the system.
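
After the scan, a quick verification might look like the following (a hedged sketch; the emc_cfgmgr path and the device names will differ in your environment):

# Scan for the newly exported VPLEX volume (either command)
cfgmgr
/usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr    # example path only; check your installation

# Confirm that the new hdisk and its PowerPath pseudo-device are present
lsdev -Cc disk
powermt display dev=all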

On vSCSI client

26. [   ]    Run the cfgmgr command. You should now be able to see the disk. If not, verify the physical-to-virtual device mappings on the VIO server by running lsmap -all as the padmin user on the VIO server.

27. [   ]    To check the disk size, run the bootinfo -s hdiskn command (where n is the number of the disk). Check the data on the disk; it should be intact. If the bootinfo command returns 0, contact EMC support.
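
The checks in the last two steps might look like the following (a hedged sketch; the hdisk number is a placeholder):

# On the vSCSI client: scan for the new virtual disk and check its size
cfgmgr
lspv                   # the encapsulated disk should appear as a new hdisk
bootinfo -s hdisk2     # size in MB; a return of 0 indicates a problem

# On the VIO server (as padmin): verify the physical-to-virtual mapping if the disk is missing
lsmap -all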

Expansion of Encapsulated Virtual-Volumes

If virtual-volume expansion is required for the encapsulated volume, follow this procedure.

1.     Go to the device context of the source virtual volume (the device that was created on the encapsulated disk) and set application-consistent to false.

VPlexcli:/> cd /clusters/cluster-1/devices/dev_lun_1/

 

VPlexcli:/clusters/cluster-1/devices/dev_lun_1> set application-consistent false

 

2.     Expand the source virtual volume with an extent or local device that has no data on it. If the target extent or local device contains data, that data will be lost after the expansion.

VPlexcli:/> virtual-volume expand dev_lun_1_vol/ -e extent_target_sv_1/

 

3.     You should now be able to see the expanded volume from the host, with the data on the source intact.
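
From the AIX host, a post-expansion check might look like this (a hedged sketch; the hdisk number and volume group name are placeholders, and chvg -g is only needed if the disk already belongs to a volume group):

# Rescan and confirm that AIX sees the new capacity
cfgmgr
bootinfo -s hdisk2     # size in MB should now reflect the expanded virtual volume

# If the disk is part of a volume group, grow the VG to use the new space
chvg -g datavg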