Introduction
Traditionally, AIX storage devices were made available for use by assigning disk devices to Volume
Groups (VGs) and then defining Logical Volumes (LVs) in the VGs. When a disk is assigned to a VG,
the Logical Volume Manager (LVM) writes information to the disk and to the AIX Object Data Manager
(ODM). Information on the disk identifies that the disk was assigned to the LVM; other information in
the ODM identifies which VG the disk belongs to. System components other than the LVM can use the
same convention. For example, IBM’s General Parallel File System (GPFS) uses the same area of the
disk to identify disks assigned to it. AIX commands can use the identifying information on the disk or in
the ODM to help prevent a disk already in use from being reassigned to another use. The information can
also be used to display useful information to help identify disk usage. For example, the lspv command
will display the VG name of disks that are assigned to a VG.
When using Oracle ASM, which does its own disk management, the disk devices are typically assigned
directly to the Oracle application and are not managed by the LVM. The Oracle OCR and VOTING disks
are also commonly assigned directly to storage devices and are not managed by the LVM. In these cases,
the identifying information associated with disks that are managed by the LVM is not present. AIX has
special functionality to help manage these disks and to help prevent an AIX administrator from
inadvertently reassigning a disk already in use by Oracle and inadvertently corrupting the Oracle data.
Where possible, AIX commands that write to the LVM information block include checks to determine whether the disk is already in use by Oracle, preventing such disks from being assigned to the LVM, which would result in the Oracle data becoming corrupted. These commands, whether checking is done, and the AIX levels where checking was added are listed in Table 1.
Table 1 – AIX commands which write control information on the disk and if and when checking for
Oracle disk signatures was added
AIX 6.1 and AIX 7.1 LVM commands contain new functionality that can be used to better manage AIX
devices used by Oracle. This new functionality includes commands to better identify shared disks across
multiple nodes, the ability to assign a meaningful name to a device, and a locking mechanism that the
system administrator can use when the disk is assigned to Oracle to help prevent the accidental reuse of a
disk at a later time. This new functionality is listed in Table 2, along with the minimum AIX level
providing that functionality.
Table 2 – New AIX Commands Useful for Managing AIX Devices Used by Oracle ASM
The use of each of the commands in Table 2 is described in the sections below.
lkdev
The AIX lkdev command should be used by the system administrator when a disk is assigned to Oracle, to lock the disk device and prevent it from inadvertently being altered by a system administrator at a later time. The lkdev command locks the specified device so that any attempt to modify the device attributes (chdev, chpath) or to remove the device or one of its paths (rmdev, rmpath) will be denied. This is intended to get the attention of the administrator and warn that the device is already in use. The “-d”
option of the lkdev command can be used to remove the lock if the disk is no longer being used by Oracle.
The lspv command with the “-u” option indicates if the disk device is locked. The example section of this
paper shows how to use lkdev and the related lspv output.
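As a minimal sketch, locking a device, attempting a change, and later removing the lock might look like the following (the device name and attribute here are hypothetical):

# lkdev -l hdisk5 -a
hdisk5 locked
# chdev -l hdisk5 -a queue_depth=16     <-- denied while the device is locked
# lkdev -l hdisk5 -d                    <-- removes the lock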
rendev
The AIX rendev command can be used to assign meaningful names to disks used by the Oracle Database, Cluster Ready Services (CRS), and ASM. This is useful because the output of AIX disk commands gives no indication that a disk is being used by Oracle; for example, the lspv command does not indicate that a disk is used by Oracle. The rendev command can be used to assign a meaningful name to the Oracle CRS OCR and VOTING disks, whether they are accessed as raw devices (prior to 11gR2) or through ASM (11gR2 and later).
For non-RAC installations, a system administrator should identify the disks that will be managed by ASM and assign meaningful names. Any name that is 15 characters or less and not already used on the system can be used, but it is recommended to keep the "hdisk" prefix on the device name, as this allows the default ASM discovery string to find the disks and makes it obvious that it is an hdisk device. The ASM disk discovery process will find the disks even though the names have changed, as long as the new names match the ASM discovery string.
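As a sketch, renaming a disk before handing it to ASM on a non-RAC system might look like this (the old and new names are hypothetical):

# rendev -l hdisk5 -n hdiskASM001

The new name is under 15 characters and keeps the "hdisk" prefix, so the default ASM discovery string will still find the renamed device.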
For RAC installations the disks are shared across nodes, but the names of the shared disk devices are not necessarily the same on all of the nodes in the cluster. ASM can identify storage devices even if the device names do not match across the nodes in the cluster; however, it is useful to make the name of each shared disk device consistent across the cluster. The new “-u” option of the lspv command is useful for identifying disks across the nodes of a cluster.
When rendev is used to rename a disk both the block and character mode devices are renamed. If the
device being renamed is in the Available state, the rendev command must unconfigure the device before
renaming it. If the unconfigure operation fails, the renaming will also fail. If the unconfigure succeeds, the
rendev command will configure the device, after renaming it, to restore it to the Available state. In the process of unconfiguring and reconfiguring the device, the ownership and permissions are reset to the default values. So after renaming a disk device, the ownership and permissions should be checked and, if necessary, changed back to the values required by your Oracle RAC installation. Device settings stored in the AIX ODM, for example reserve_policy, are not changed by the renaming process.
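A minimal sketch of a rename followed by the ownership and permission check described above (the names, owner, group, and mode are illustrative; use the values your Oracle installation requires):

# rendev -l hdisk5 -n hdiskASMd001
# ls -l /dev/hdiskASMd001 /dev/rhdiskASMd001
# chown oracle:dba /dev/rhdiskASMd001
# chmod 660 /dev/rhdiskASMd001
# lsattr -El hdiskASMd001 -a reserve_policy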
Some disk multipathing solutions may have problems with device renaming. At least some versions of EMC PowerPath and some IBM SDDPCM tools (IBM storage MPIO tools) have dependencies on disk names, which can lead to problems when disks are renamed. For this reason, device renaming should only be used with the AIX native MPIO device driver, unless you confirm with your storage vendor that your storage solution is compatible with renaming the disks.
lspv -u
The “-u” option of the AIX lspv command provides additional device identification information, the UDID and UUID. The lspv command will also indicate whether the device is locked. These IDs are unique to a disk and can be used to identify a shared disk across the nodes of a cluster, for example when you want to rename a device so that it has the same meaningful name for that disk on all nodes. Which identification information is available depends on the storage and device driver being used. In addition, newer device driver and system versions may add identification information that was not previously available. When present, either of these IDs can be used to identify a disk across nodes.
The Unique device identifier (UDID) is the unique_id attribute from the ODM CuAt ObjectClass. The UDID is present for PowerPath, AIX MPIO, and some other devices. The Universally Unique Identifier (UUID) is present for AIX MPIO devices.
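For instance, the same physical disk can be matched across two nodes by comparing the UUID (or UDID) columns of the lspv output; the node and device names below are the ones used in the examples later in this paper:

root@rac222 # lspv -u | grep hdiskASMd001
root@rac223 # lspv -u | grep <UUID shown on rac222>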
Because ASM uses the same area of the disk where AIX stores the PVID, a PVID should never be written to a disk after the disk has been assigned to ASM, as this would corrupt the ASM disk header. If a PVID was written to the disk before the disk was assigned to ASM, and all the nodes in the cluster discovered the disk while the PVID was still on it, then the PVID will remain in the AIX ODM and will show up in the output of the lspv command. However, because this method cannot be used when adding nodes after the disks are assigned to ASM, and because of the risk of overwriting the ASM disk header if someone inadvertently adds a PVID after the disk is assigned to ASM, the UDID or UUID should be used to identify shared disks.
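As a precaution, a disk can be checked for a PVID before it is given to ASM, and the PVID can be cleared with chdev while the disk is still unused (a sketch with a hypothetical device name; never run this, or chdev with pv=yes, against a disk that ASM is already using, as that writes to the same disk area as the ASM header):

# lspv | grep hdisk5
# chdev -l hdisk5 -a pv=clear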
Examples:
The following examples show how to use the AIX rendev and lkdev commands with Oracle RAC and
ASM.
Example 1:
This example uses rendev and lkdev commands to rename and lock ASM disk
devices on an existing Oracle RAC cluster. In this example the disks are already
owned by ASM, so we use an ASM view to match up the disk names across the
cluster instead of ‘lspv –u’.
Example 2:
This example shows how to add a new node to an existing RAC cluster, using ‘lspv –u’ to identify the disks on the new node.
Example 1:
In this example rendev and lkdev are used to rename and lock the ASM disks on an existing Oracle RAC
cluster. In this example the cluster has four nodes, and Oracle data, OCR and Voting files are all in the
ASM. The ASM disks can be locked while the cluster is active, but the instances and clusterware need to be shut down to rename the ASM disks. ASM uses the discovery string and control information on the ASM
disks to identify the disks, so the names of the ASM disk devices can change without confusing ASM; no
ASM commands need to be issued. In the example meaningful disk device names that start with “hdisk”
are used, so there is no need to change the ASM discovery string. Because the ASM disk names do not
need to match across the cluster the changes can be made one node at a time, while the Oracle database
and clusterware are active on the other nodes. The following commands show the renaming and locking
on the first node. The other nodes can be changed in a similar way.
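On the node being changed, the clusterware stack, and with it the local ASM and database instances, might be stopped as root with, for example:

root@rac222 # crsctl stop crs

The exact shutdown commands depend on your Oracle version and configuration.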
SQL> select name, path, mode_status, state from v$asm_disk order by name;
The following commands and output show that the Oracle RAC cluster is active on all nodes. So the
cluster and CRS are stopped on node rac222 where we will make the changes.
With Oracle RAC and the CRS stopped, all of the ASM disk devices can be renamed. For the new meaningful name we keep “hdisk” at the start of the name, so it is clear these are disk devices and so the default ASM discovery string will still search the correct disks. We include “ASM” in the name to indicate the disks are owned by ASM, followed by “d” for datafiles or “c” for clusterware files, followed by a number for uniqueness. The name is arbitrary, so it can be adapted to what is meaningful in the context of your configuration. Then the ownership of the devices is set back to the original values required by Oracle, oracle:dba in this example.
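Assuming the ASM disks were originally hdisk4 through hdisk21 (an assumption for this sketch; the original names are not shown in the lspv output below), the renaming and ownership reset might look like:

root@rac222 # rendev -l hdisk4 -n hdiskASMd001
...
root@rac222 # rendev -l hdisk21 -n hdiskASMc018
root@rac222 # chown oracle:dba /dev/rhdiskASM*
root@rac222 # chmod 660 /dev/rhdiskASM*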
We check that the CRS is ONLINE and verify the OCR and voting files.
root@rac222 # ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 4220
Available space (kbytes) : 257900
ID : 133061953
Device/File Name : +OCR
Device/File integrity check succeeded
We can now verify the names and status of the disks in ASM.
18 rows selected.
Now we lock the ASM disk devices. This can be done while Oracle RAC is active on the cluster. For each of the ASM disk devices, lock the device as shown below for the first device. Then use the lspv command to check the status of the disks.
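The lock command and its output for the first device look like this; the same command is repeated for each ASM disk device:

root@rac222 # lkdev -l hdiskASMd001 -a
hdiskASMd001 locked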
root@rac222 # lspv
hdisk0 00c1894ca63631c2 None
hdisk1 00c1892cab6286eb None
hdisk2 00c1892cab72bef3 None
hdisk3 00c1892cac105522 None
hdiskASMd001 00c1892c11ce6578 None locked
hdiskASMd002 00c1892c11ce6871 None locked
hdiskASMd003 00c1892c11ce6b66 None locked
hdiskASMd004 00c1892c11ce6e39 None locked
hdiskASMd005 00c1892c11ce7129 None locked
hdiskASMd006 00c1892c11ce73fb None locked
hdiskASMd007 00c1892c11ce76d4 None locked
hdiskASMd008 00c1892c11ce79a3 None locked
hdiskASMd009 00c1892c11ce7d63 None locked
hdiskASMd010 00c1892c11ce8044 None locked
hdiskASMc011 00c1894c211cc685 None locked
hdiskASMc012 00c1894c211cc7f2 None locked
hdiskASMc013 00c1894c211cc94c None locked
hdiskASMc014 00c1894c211ccaa8 None locked
hdiskASMc015 00c1894c211ccbfb None locked
hdiskASMc016 none None locked
hdiskASMc017 none None locked
hdiskASMc018 00c1894c211cd008 None locked
hdisk22 00c1894c992baec7 None
hdisk23 00c1894c992bb220 None
hdisk24 00c1894c992bb5a3 None
hdisk25 00c1894c992bb8c7 None
hdisk26 00c1894c992bbc0f None
hdisk27 00c1894c992bbfdc None
hdisk28 00c1894cbfa5952b oravg active
hdisk29 00c1894c05f04253 rootvg active
hdisk30 00c1894c4a3c24bc oravg active
hdisk31 00c1894cce049074 oravg active
hdisk32 00c1894c4a3bf8b0 None
hdisk33 00c1894c5567ae8d None
hdisk34 00c1894c4a3c1347 None
Note the indication of which disks are locked in the last column of the lspv output listed above. Also note that on this system lspv shows a PVID (second column) for some of the ASM disks but not for two of the disks (hdiskASMc016/017). The ASM disks do not have a PVID on the physical disk, because that area is reused by ASM. The values shown by lspv are the values in the AIX ODM. When AIX first identifies a new disk device, it saves information about that device, including the PVID, in the ODM. So the state of the disk when AIX first configured it determines whether a PVID is in the AIX ODM or not.
Example 2:
This example shows how to add a new node to an existing RAC cluster, using ‘lspv –u’ to identify the disks on the new node, and then rename and lock them. When a new node is added to an existing cluster using ASM, the PVIDs will not be present on the ASM disks. Since the disks are already in use by ASM on the other nodes, there is no PVID on the disk when AIX on the new node configures the disk devices. In this case ‘lspv –u’ is used to identify the disks owned by ASM. The example shows only the steps performed on the new node.
In this example the shared disks (LUNs) have been mapped to the new node (rac223) and the AIX
command cfgmgr was run on rac223 to configure the new disks. Following is the resulting lspv output.
root@rac223 # cfgmgr
In the above lspv output note that hdisk7 through hdisk24 were configured by running cfgmgr. These are the ASM shared disks that were mapped to the new Oracle RAC node rac223. Also note that the PVID field is “none” for these disks, because there was no PVID on the disk when cfgmgr was run.
Show the disk name and UUID on existing cluster node rac222
Now the UUID is used to join the rac223 disk name with the rac222 name for the same disk.
Based on the above pairing, the disks on rac223 are renamed to match the rac222 names.
Now we set the disk permissions, and any disk device attributes that may be required, for example the reserve_policy. Then the ASM devices can be locked.
Set any required disk attributes on all ASM disks, for example the
reserve_policy:
lkdev -l hdiskASMd001 -a
hdiskASMd001 locked
(repeat for other disks)
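The repetition can be scripted; for example, a sketch that locks every disk device whose name starts with hdiskASM:

root@rac223 # for d in $(lsdev -Cc disk -F name | grep '^hdiskASM')
> do
> lkdev -l $d -a
> done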
Check lspv:
Author
The author of this Technical Note is Dennis Massanari, an IBM Software Engineer with the IBM System
and Technology Group, working with IBM Power/AIX and Oracle Database.
Additional Information
For more information on this Technical Note, please send your questions to the IBM Oracle International
Competency Center at ibmoracle@us.ibm.com.