Oracle Multitenant plugin
The file names listed in the multitenant manifest, or specified in the full transportable database import command, must match the actual files exactly for the operation to complete.
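As a quick illustration (the manifest path and PDB name below are assumptions, not from the original post), the XML manifest generated by DBMS_PDB.DESCRIBE on the source can be validated against the target CDB before attempting the plug-in:

sqlplus / as sysdba <<'EOF'
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  -- pdb_descr_file points to the manifest copied over from the source
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/tmp/ncdb.xml',
                  pdb_name       => 'NCDB');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'Compatible'
                            ELSE 'See PDB_PLUG_IN_VIOLATIONS' END);
END;
/
EOF

If the check returns false, the PDB_PLUG_IN_VIOLATIONS view in the target CDB lists the reasons, including any file mismatches.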

There are some difficulties when OMF/ASM is used, files are copied, and a physical standby database is in place. Improvements have been made to the multitenant plugin operation on both the primary and standby environments; however, at this time additional work must still be done on the standby database when a full transportable database import operation is performed.

RMAN has been enhanced so that, when copying files between databases, it recognizes the GUID and acts accordingly when writing the files.

* If the clone/auxiliary instance being connected to for clone operations is a CDB root, the GUID of the RMAN target database is used to determine the directory structure for writing the datafiles. Connect to the CDB root as the RMAN clone/auxiliary instance when the source database is a 12c non-CDB or PDB that will be migrated and plugged into a remote CDB as a brand-new PDB. This ensures that the files copied by RMAN are written to the GUID directory of the source database for the migration.

* If the clone/auxiliary instance being connected to for clone operations is a PDB, the GUID of the auxiliary PDB is used to determine the directory structure for writing the datafiles. Connect to the destination PDB as the RMAN clone/auxiliary instance when the source database is a 12c non-CDB or PDB that requires a cross-platform full transportable database import and the data and files will be imported into an existing PDB. This ensures that the files copied by RMAN are written to the GUID directory of the target PDB for the migration.
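As a minimal sketch of the two connection styles (the connect strings and service names here are assumptions for illustration), the difference is simply where the RMAN auxiliary connection lands:

# Migrating a non-CDB/PDB into a remote CDB as a brand-new PDB:
# connect the auxiliary to the CDB root
rman target sys@srcdb auxiliary sys@destcdb

# Importing into an existing PDB via full transportable import:
# connect the auxiliary directly to the destination PDB
rman target sys@srcdb auxiliary sys@destpdb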



I am aggressively preparing demos for my two-hour deep-dive session at IOUG Collaborate 2014: Session 974, 04/08/14, 01:45 PM to 04:00 PM (Level 3, Lido 3101B), Extreme Oracle DB-Infrastructure-As-A-Service. Co-presenting with me will be Nitin Vengurlekar.

We will cover topics ranging from Linux as a Service to RAC as a Service to ASM as a Service, finishing with Database as a Service.

From a RAC perspective, here’s a sample screen of what we will discuss. We have similar screenshots for ASM, Data Guard, RMAN, and Linux:

[oracle@rac01 rac]$ ./rac
# ------------------------------------------------------------------------- #
#                RAC Menu System - rac-clust                             
# ------------------------------------------------------------------------- #
#   First Node:	rac01	VIP:  rac01-vip.viscosity.com       
#  Second Node:	rac02	VIP:  rac02-vip.viscosity.com       
# ------------------------------------------------------------------------- #
#  00.  Sync DBI Scripts Across All RAC Nodes                               #
# ------------------------------------------------------------------------- #
#  01.  Prepare Source RAC Cluster for Cloning (sudo)                       #
#       Will shutdown the Cluster and Unlock the /u01/app/12.1.0/grid Home 
#  02.  Lock the Grid Home:  /u01/app/12.1.0/grid (sudo) 
# ------------------------------------------------------------------------- #
#  03.  Prepare Oracle Directories (sudo)                                   #
# ------------------------------------------------------------------------- #
#  04.  Extract GI Home from Tarball (sudo)                                 #
#  05.  Extract DB Home from Tarball (sudo)                                 #
# ------------------------------------------------------------------------- #
#  20.  Setup SSH User Equivalence for All RAC Nodes                        #
# ------------------------------------------------------------------------- #
#  30.  Cleanup and Deconfig Submenu (sudo)                                 #
# ------------------------------------------------------------------------- #
#  40.  Clone Grid Infrastructure - /u01/app/12.1.0/grid                                 
#  41.  Run orainstRoot.sh and root.sh serially on all RAC nodes            #
# ------------------------------------------------------------------------- #
#  50.  Execute config.sh in silent mode                                    #
# ------------------------------------------------------------------------- #
#  60.  Create DATA and FRA diskgroups
# ------------------------------------------------------------------------- #
#  70.  Clone Database Home - /u01/app/oracle/product/12.1.0/dbhome_1                               
# ------------------------------------------------------------------------- #
#  80.  Create RAC Database - VPROD                                
# ------------------------------------------------------------------------- #
# 100.  Post Database Tasks                                                 #
# ------------------------------------------------------------------------- #
#   x.  Exit                                                                #
# ------------------------------------------------------------------------- #
#   Enter Task Number:


We are assuming that you have already installed kmod-oracleasm and oracleasm-support RPMs with yum:

# yum install kmod-oracleasm -y
# yum install oracleasm-support -y

For Red Hat Enterprise Linux, you can download kmod-oracleasm from Red Hat’s support site. Check out my previous blog post on where to download kmod-oracleasm for Red Hat 6.4 and above.

oracleasmlib is not available from the default yum repository. You can pull the oracleasmlib RPM from Oracle’s ASMLIB page for Oracle Linux 6:

[root@rac01 software]# rpm -ihv oracleasmlib-2.0.4-1.el6.x86_64.rpm 
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]

After we install the RPMs, we need to configure ASMLIB so that it loads on boot, scans for disks on boot, and assigns the oracle user as the owner of the driver interface.

[root@rac01 software]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: 
Writing Oracle ASM library driver configuration: done

As the final step in the process, we need to initialize ASMLIB and confirm that it was successfully started:

[root@rac01 software]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size 
Mounting ASMlib driver filesystem: /dev/oracleasm

[root@rac01 software]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
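With ASMLIB initialized, disks can now be labeled for ASM and verified. A quick sketch; the disk name and device path are assumptions for your environment:

# Label a partition as an ASM disk (run as root)
oracleasm createdisk DATA1 /dev/sdb1

# Re-scan and list the disks ASMLIB presents
oracleasm scandisks
oracleasm listdisks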

Posted by Charles Kim, Oracle ACE Director


As an Oracle RAC environment increases in size, complexity, and importance, it becomes ever more important to achieve high levels of automation and standardization. Increased levels of automation and standardization create high reliability and allow production DBAs to focus on infrastructure improvements and proactive performance tuning. In this blog post, I am going to share with you how I deal with rotating the various log files that tend to grow and grow in an Oracle environment. I do not use “cp /dev/null” commands on the log files.

The following script will generate the scripts necessary to rotate all the database alert logs, the ASM alert log, the listener log, and the SCAN listener log(s) for the RAC node.

The script depends on the SH environment variable being set. The SH environment variable is simply the location where you store all your shell scripts. The script will create a sub-directory called logrotate. In the logrotate directory, it will create two files each for the database alert logs, the ASM alert log, the listener log, and the SCAN listener log(s). The first file is the logrotate state file. The second file holds the actual logrotate directives. For the database instances and SCAN listeners, the script performs a “ps -ef” command and looks for actively running occurrences of each.

At the very end of the script, we generate the logrotate commands for you to put into your weekly master cleanup script. We promote two sets of cleanup scripts: a daily cleanup script to handle things like audit log purges, and a weekly cleanup script to address all the growing log files required by Oracle. We no longer have to deal with the Oracle cluster services log files, as Oracle rotates those logs for us automatically as of Oracle Database 11g Release 2.

function log_rotate {
# Generate logrotate config and state files for the database alert logs,
# the ASM alert log, the listener log, and the SCAN listener log(s).
export SQLP="sqlplus -s / as sysdba"
export SET="set pages 0 trims on lines 2000 echo off ver off head off feed off"


# -- Identify the running ASM instance (e.g. +ASM1); strip the trailing
# -- instance number to derive the base name (e.g. +ASM)
export ASM_RUNNING=$(ps -ef |grep -i asm_pmon |awk {'print $8'} |sed "s/asm_pmon_//g" |egrep -v "sed|grep")
[ "$ASM_RUNNING" != "" ] && ASM_INSTANCE=$(echo $ASM_RUNNING |sed '$s/.$//')

LISTENER_LOG=$ORACLE_BASE/diag/tnslsnr/$(hostname -s)/listener/trace/listener.log

# -- Look up the diagnostic trace directory for the current ORACLE_SID
function diag {
export DIAG_DEST=$(
echo "
$SET
select value from v\$diag_info where name='Diag Trace';" |$SQLP )
}

# -- Determine ASM Log
export ORACLE_SID=$ASM_RUNNING
export ORAENV_ASK=NO
. oraenv -s
export GRID_HOME=$ORACLE_HOME
diag;

ASM_LOG=$DIAG_DEST/alert_${ORACLE_SID}.log

ls -l  $ASM_LOG
ls -l  $LISTENER_LOG

# -- Write a logrotate config for the given log file and echo the
# -- logrotate command that will rotate it
function rotate {
export LOGFILE=$1
export CONFIG_FILE=$2
export PATH=$PATH:/usr/sbin
export CONF_DIR=$SH/logrotate
[ ! -d "$CONF_DIR" ] && ( echo $CONF_DIR does not exist .. issuing mkdir; mkdir -p $CONF_DIR )

export CONF=$CONF_DIR/$CONFIG_FILE

cat <<!! >$CONF
$LOGFILE {
weekly
copytruncate
rotate 2
compress
}
!!

echo logrotate -s $CONF_DIR/log_rotate_status.$CONFIG_FILE -f $CONF
}

# -- Generate a logrotate config for each running database instance
for DATABASES in $(ps -ef |grep -i pmon |grep -v ASM |awk {'print $8'} |sed "s/ora_pmon_//g" |egrep -v "sed|grep")
do
  export DB=$(echo $DATABASES |sed '$s/.$//')
  export ORACLE_SID=$DATABASES
  export ORAENV_ASK=NO
  . oraenv -s
  diag;
  export DB_LOG=$DIAG_DEST/alert_${ORACLE_SID}.log
  ls -l $DB_LOG

  rotate $DB_LOG $DATABASES
done

# -- Generate a logrotate config for each running SCAN listener; rotate
# -- inside the loop so that every SCAN listener log is covered, not just
# -- the last one found
for SCAN in $(ps -ef |grep -i tns |grep SCAN |awk {'print $9'})
do
export LOWER_SCAN_LISTENER=$(echo $SCAN |tr '[A-Z]' '[a-z]')
SCAN_LISTENER_LOG=$GRID_HOME/log/diag/tnslsnr/$(hostname -s)/$LOWER_SCAN_LISTENER/trace/$LOWER_SCAN_LISTENER.log
ls -l  $SCAN_LISTENER_LOG
rotate $SCAN_LISTENER_LOG $LOWER_SCAN_LISTENER
done

rotate $LISTENER_LOG listener
rotate $ASM_LOG $ASM_RUNNING
}

Here’s a sample output of the log rotation script:

logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.test1 -f /u01/app/oracle/general/sh/scripts/logrotate/test1
logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.erpqa1 -f /u01/app/oracle/general/sh/scripts/logrotate/erpqa1
logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.listener -f /u01/app/oracle/general/sh/scripts/logrotate/listener
logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.listener_scan1 -f /u01/app/oracle/general/sh/scripts/logrotate/listener_scan1
logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.+ASM1 -f /u01/app/oracle/general/sh/scripts/logrotate/+ASM1

Note that the -s option specifies an alternate state file. Since we are executing logrotate as the oracle or grid user rather than as root, we must specify the -s option; the default state file is /var/lib/logrotate/status.

As you can see, it creates a logrotate script for two of our databases, the local ASM instance, the database listener, and the SCAN listener. If you drill down into an actual logrotate script, you will notice that it is designed to rotate on a weekly basis, copy the file, truncate the original, keep two copies, and compress the copies. Here’s a sample logrotate script:

cat logrotate/listener_scan1
/u01/app/grid/11203/log/diag/tnslsnr/dallinux01/listener_scan1/trace/listener_scan1.log {
weekly
copytruncate
rotate 2
compress
}
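To run the generated commands on schedule, collect them into your weekly master cleanup script and drive it from cron. Here is a minimal sketch; the script path and schedule are assumptions, not from the original post:

# Run the weekly cleanup (which invokes the generated logrotate commands)
# every Sunday at 02:00
0 2 * * 0 /u01/app/oracle/general/sh/weekly_cleanup.sh > /tmp/weekly_cleanup.log 2>&1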

Book Title:
Successfully Virtualize Business Critical Oracle Databases


Here’s the book description:
Written by VMware vExperts Charles Kim (VCP) of Viscosity North America and George Trujillo (VCI) of Hortonworks, together with VMware VCI and Learning Specialist Steve Jones and Darryl Smith, Chief Database Architect in EMC’s Global IT organization, this book provides critical instructions for deploying Oracle standalone and Real Application Clusters (RAC) databases on VMware enterprise virtualized platforms. You will learn how to set up an Oracle ecosystem within a virtualized infrastructure for enterprise deployments. We share industry best practices for installing and configuring Linux, and for deploying Oracle Grid Infrastructure and databases in a matter of hours. Whether you are deploying a four-node RAC cluster or a single standalone database, we lead you to the foundation that will allow you to rapidly provision database-as-a-service. We disseminate key details on creating golden image templates, from the virtual machine to the Oracle binaries and databases. You will learn from industry experts how to troubleshoot production Oracle database servers running in VMware virtual infrastructures.

Audience:
Database Admins/Architects, VMware Admins, System Admins/Architects, Project Architects
This book is designed for Oracle DBAs and VMware administrators who need to learn the art of virtualizing Oracle.


Recent events enabled Red Hat and Oracle to work together, and Oracle now officially supports ASMLib on Red Hat Enterprise Linux 6.4 and newer.

The knowledge base article, which was updated in early May, indicates that three packages now need to be installed on the system: kmod-oracleasm, oracleasmlib, and oracleasm-support:
https://access.redhat.com/site/solutions/315643

kmod-oracleasm is available from the Red Hat Network (RHN) and can be installed from the RHEL Server Supplementary (v. 6 64-bit x86_64) channel. The oracleasmlib and oracleasm-support packages are available for download from Oracle’s ASMLib page mentioned earlier.

Here’s what the installation steps look like in RHEL 6.4. They do not look any different than before, except that the kmod-oracleasm package now comes from Red Hat instead of Oracle:

[root@rh64a ~]# ls -l *oracleasm*
-rw-r--r-- 1 root root 35044 Aug 22 20:41 kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
-rw-r--r-- 1 root root 13300 Aug 22 20:45 oracleasmlib-2.0.4-1.el6.x86_64.rpm
-rw-r--r-- 1 root root 74984 Aug 22 20:56 oracleasm-support-2.1.8-1.el6.x86_64.rpm

Installation is done with a simple rpm -ihv command on each of the RPMs that we downloaded. There do not appear to be any dependencies between the RPMs. In this example, we will install the kmod-oracleasm RPM, followed by oracleasmlib, followed by oracleasm-support:

[root@rh64a ~]# rpm -ihv kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
warning: kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:kmod-oracleasm         ########################################### [100%]

[root@rh64a ~]# rpm -ihv oracleasmlib-2.0.4-1.el6.x86_64.rpm
warning: oracleasmlib-2.0.4-1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]

[root@rh64a ~]# rpm -ihv oracleasm-support-2.1.8-1.el6.x86_64.rpm
warning: oracleasm-support-2.1.8-1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [100%]
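If you want to verify the dependency claim yourself, rpm can list what each downloaded package file requires before you install it:

# List the declared requirements of a package file
rpm -qpR kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
rpm -qpR oracleasmlib-2.0.4-1.el6.x86_64.rpm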

Next we will configure oracleasm. To configure oracleasm interactively, we invoke it through the service command:

[root@rh64a ~]# service oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle    
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]

[root@rh64a ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
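Beyond the configuration dump, it is worth confirming that the kernel module is loaded and that the service is registered to start at boot. Standard RHEL 6 commands suffice:

# Confirm the oracleasm kernel module is loaded
lsmod | grep oracleasm

# Confirm the oracleasm service will start on boot
chkconfig --list oracleasm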

Posted by Charles Kim, Oracle ACE Director and VMware vExpert


ASM 12c Book (5 stars)

We desperately needed an update to Nitin Vengurlekar’s ASM book from Oracle Press. This book essentially covers all aspects of Oracle Database 11gR2 ASM, 12c ASM, and ACFS, and pulls it all together in the context of the Private Database Cloud and consolidation. I particularly like the Appendix, which covers best practices for building a Private Database Cloud. I also like Chapter 14, ASM 12c: The New Frontier, which covers 12c ASM at a high level. The ASM instance chapter is worth the price of the book itself!!

This book is a MUST read for anyone managing RAC/ASM environments.

This book is broken down into 14 chapters and an appendix, and is targeted at DBAs, technical managers, storage architects, Linux administrators, and consultants who are involved with RAC and ASM implementations. Here’s the breakdown of the chapters:
1. Automatic Storage Management in a Cloud World
2. ASM and Grid Infrastructure Stack
3. ASM Instances
4. ASM Disks and Disk Groups
5. Managing Databases in ASM
6. ASMLIB Concepts and Overview
7. ASM Files, Aliases, and Security
8. ASM Space Allocation and Rebalance
9. ASM Operations
10. ACFS Design and Deployment
11. ACFS Data Services
12. ASM Optimizations in Oracle Engineered Systems
13. ASM Tools and Utilities
14. ASM 12c: The New Frontier
Appendix: Best Practices for Database Consolidation in Private Clouds


At Viscosity, I heavily promote automating our database builds for our clients. It allows our consultants to create databases that are consistent and reliable every time we build them.

In this example, I will walk you through the command-line options of the Database Configuration Assistant (dbca) for Oracle Database 12c.

$ cat dbca.sh
cd /u01/app/oracle/product/12.1.0/dbhome_1/bin
./dbca -silent \
 -createDatabase \
 -templateName General_Purpose.dbc \
 -gdbName TEST \
 -sid TEST     \
 -SysPassword oracle123 \
 -createAsContainerDatabase true \
    -numberofPDBs 2 \
    -pdbName VNA \
 -SystemPassword oracle123 \
 -emConfiguration DBEXPRESS  \
 -redoLogFileSize 100   \
 -recoveryAreaDestination FRA \
 -storageType ASM             \
   -asmsnmpPassword oracle123 \
   -asmSysPassword oracle123  \
   -diskGroupName DATA \
 -listeners LISTENER   \
 -registerWithDirService false \
 -characterSet AL32UTF8 \
 -nationalCharacterSet AL16UTF16 \
 -databaseType MULTIPURPOSE \
 -nodelist ol59a,ol59b \
 -initparams audit_file_dest='/u01/app/oracle/admin/TEST/adump' \
     -initparams compatible='12.1.0.0' \
     -initparams db_create_file_dest='+DATA' \
     -initparams db_create_online_log_dest_1='+DATA' \
     -initparams db_create_online_log_dest_2='+FRA' \
     -initparams db_recovery_file_dest='+FRA' \
     -initparams pga_aggregate_target=100M \
     -initparams diagnostic_dest='/u01/app/oracle' \
     -initparams parallel_max_servers=8 \
     -initparams processes=400 \
     -initparams sga_target=524288000 \
     -initparams db_recovery_file_dest_size=4322230272

Unfortunately, a parameter for the PDBAdmin password does not exist among the dbca -silent options, which makes automating this part a little more difficult. Parameters such as createAsContainerDatabase, numberofPDBs, pdbName, and registerWithDirService are new to Oracle Database 12c. The DBEXPRESS option for emConfiguration is also new to Oracle Database 12c.

In this database creation script, I also embedded relevant initialization parameters. You can opt to include additional parameters that are enterprise standards for your corporation.
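Since dbca prompts for the PDBADMIN password interactively (as shown below), and the script above hardcodes the other passwords, one workaround is to prompt for all of them at run time. A minimal sketch, with a variable name of my own choosing:

# Prompt for the privileged password at run time instead of hardcoding it
read -s -p "Enter SYS/SYSTEM password: " DBPASS
echo
# Then pass it to dbca in place of the hardcoded values, e.g.:
#   -SysPassword "$DBPASS" -SystemPassword "$DBPASS"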

$ ./dbca.sh
Enter PDBADMIN User Password:

Copying database files
1% complete
2% complete
23% complete
Creating and starting Oracle instance
24% complete
27% complete
28% complete
29% complete
32% complete
35% complete
36% complete
38% complete
Creating cluster database views
40% complete
54% complete
Completing Database Creation
56% complete
58% complete
65% complete
67% complete
74% complete
77% complete
Creating Pluggable Databases
81% complete
86% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/TEST/TEST1.log" for further details.

We should always review the log file generated by dbca to confirm there are no relevant warning or error messages. As you can see from the example output below, the cluster verification utility was executed against the node list defined in our dbca database creation script.
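A quick way to surface problems in a long dbca log is to filter for failures. A simple sketch using the log path from above; the pattern list is just a starting point:

# Show only failed checks and error codes (e.g. the PRVF swap size failure below)
grep -iE 'fail|error|PRVF' /u01/app/oracle/cfgtoollogs/dbca/TEST/TEST1.log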

$ cat /u01/app/oracle/cfgtoollogs/dbca/TEST/TEST1.log

Cluster Verification check "Node Connectivity" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether connectivity exists amongst all the nodes. The connectivity is being tested for the subnets "192.168.1.0,10.0.0.0"
Cluster Verification check "Multicast check" succeeded on nodes: ol59a,ol59b.
This task checks that network interfaces in subnet are able to communicate over multicast IP address
Cluster Verification Check "Physical Memory" succeeded on node "ol59a", expected value: 1GB (1048576.0KB) actual value: 2.9461GB (3089208.0KB).
Cluster Verification Check "Physical Memory" succeeded on node "ol59b", expected value: 1GB (1048576.0KB) actual value: 2.9461GB (3089208.0KB).
Cluster Verification check "Physical Memory" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the system has at least 1GB (1048576.0KB) of total physical memory.
Cluster Verification Check "Available Physical Memory" succeeded on node "ol59a", expected value: 50MB (51200.0KB) actual value: 2.3107GB (2422936.0KB).
Cluster Verification Check "Available Physical Memory" succeeded on node "ol59b", expected value: 50MB (51200.0KB) actual value: 2.3824GB (2498168.0KB).
Cluster Verification check "Available Physical Memory" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the system has at least 50MB (51200.0KB) of available physical memory.
Cluster Verification Check "Swap Size" succeeded on node "ol59b", expected value: 2.9461GB (3089208.0KB) actual value: 3.9062GB (4095996.0KB).
Cluster Verification Check "Swap Size" failed on node "ol59a", expected value: 2.9461GB (3089208.0KB) actual value: 1.9687GB (2064380.0KB).
PRVF-7573 : Sufficient swap size is not available on node "ol59a" [Required = 2.9461GB (3089208.0KB) ; Found = 1.9687GB (2064380.0KB)]
Cluster Verification check failed on nodes: ol59a.
This is a prerequisite condition to test whether sufficient total swap space is available on the system.
Cluster Verification Check "Free Space: ol59b:/tmp" succeeded on node "ol59b", expected value: 1GB  actual value: 2.4834GB .
Cluster Verification check "Free Space: ol59b:/tmp" succeeded on nodes: ol59b.
This is a prerequisite condition to test whether sufficient free space is available in the file system.
Cluster Verification Check "Free Space: ol59a:/tmp" succeeded on node "ol59a", expected value: 1GB  actual value: 3.9453GB .
Cluster Verification check "Free Space: ol59a:/tmp" succeeded on nodes: ol59a.
This is a prerequisite condition to test whether sufficient free space is available in the file system.
Cluster Verification check "User Existence: oracle" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether user "oracle" exists on the system.
Cluster Verification Check "Run Level" succeeded on node "ol59a", expected value: 3,5 actual value: 5.
Cluster Verification Check "Run Level" succeeded on node "ol59b", expected value: 3,5 actual value: 5.
Cluster Verification check "Run Level" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the system is running with proper run level.
Cluster Verification Check "Hard Limit: maximum open file descriptors" succeeded on node "ol59a", expected value: 65536 actual value: 131072.
Cluster Verification Check "Hard Limit: maximum open file descriptors" succeeded on node "ol59b", expected value: 65536 actual value: 65536.
Cluster Verification check "Hard Limit: maximum open file descriptors" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the hard limit for "maximum open file descriptors" is set correctly.
Cluster Verification Check "Soft Limit: maximum open file descriptors" succeeded on node "ol59a", expected value: 1024 actual value: 131072.
Cluster Verification Check "Soft Limit: maximum open file descriptors" succeeded on node "ol59b", expected value: 1024 actual value: 1024.
Cluster Verification check "Soft Limit: maximum open file descriptors" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the soft limit for "maximum open file descriptors" is set correctly.
Cluster Verification Check "Hard Limit: maximum user processes" succeeded on node "ol59a", expected value: 16384 actual value: 131072.
Cluster Verification Check "Hard Limit: maximum user processes" succeeded on node "ol59b", expected value: 16384 actual value: 16384.
Cluster Verification check "Hard Limit: maximum user processes" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the hard limit for "maximum user processes" is set correctly.
Cluster Verification Check "Soft Limit: maximum user processes" succeeded on node "ol59a", expected value: 2047 actual value: 131072.
Cluster Verification Check "Soft Limit: maximum user processes" succeeded on node "ol59b", expected value: 2047 actual value: 16384.
Cluster Verification check "Soft Limit: maximum user processes" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the soft limit for "maximum user processes" is set correctly.
Cluster Verification Check "Architecture" succeeded on node "ol59a", expected value: x86_64 actual value: x86_64.
Cluster Verification Check "Architecture" succeeded on node "ol59b", expected value: x86_64 actual value: x86_64.
Cluster Verification check "Architecture" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the system has a certified architecture.
Cluster Verification Check "OS Kernel Version" succeeded on node "ol59a", expected value: 2.6.18 actual value: 2.6.39-400.109.4.el5uek.
Cluster Verification Check "OS Kernel Version" succeeded on node "ol59b", expected value: 2.6.18 actual value: 2.6.39-400.109.4.el5uek.
Cluster Verification check "OS Kernel Version" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the system kernel version is at least "2.6.18".
Cluster Verification Check "OS Kernel Parameter: semmsl" succeeded on node "ol59a", expected value: 250 actual value: Current=250; Configured=250.
Cluster Verification Check "OS Kernel Parameter: semmsl" succeeded on node "ol59b", expected value: 250 actual value: Current=250; Configured=250.
Cluster Verification check "OS Kernel Parameter: semmsl" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "semmsl" is properly set.
Cluster Verification Check "OS Kernel Parameter: semmns" succeeded on node "ol59a", expected value: 32000 actual value: Current=32000; Configured=32000.
Cluster Verification Check "OS Kernel Parameter: semmns" succeeded on node "ol59b", expected value: 32000 actual value: Current=32000; Configured=32000.
Cluster Verification check "OS Kernel Parameter: semmns" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "semmns" is properly set.
Cluster Verification Check "OS Kernel Parameter: semopm" succeeded on node "ol59a", expected value: 100 actual value: Current=100; Configured=100.
Cluster Verification Check "OS Kernel Parameter: semopm" succeeded on node "ol59b", expected value: 100 actual value: Current=100; Configured=100.
Cluster Verification check "OS Kernel Parameter: semopm" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "semopm" is properly set.
Cluster Verification Check "OS Kernel Parameter: semmni" succeeded on node "ol59a", expected value: 128 actual value: Current=142; Configured=142.
Cluster Verification Check "OS Kernel Parameter: semmni" succeeded on node "ol59b", expected value: 128 actual value: Current=128; Configured=128.
Cluster Verification check "OS Kernel Parameter: semmni" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "semmni" is properly set.
Cluster Verification Check "OS Kernel Parameter: shmmax" succeeded on node "ol59a", expected value: 1581674496 actual value: Current=4398046511104; Configured=4398046511104.
Cluster Verification Check "OS Kernel Parameter: shmmax" succeeded on node "ol59b", expected value: 1581674496 actual value: Current=68719476736; Configured=68719476736.
Cluster Verification check "OS Kernel Parameter: shmmax" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "shmmax" is properly set.
Cluster Verification Check "OS Kernel Parameter: shmmni" succeeded on node "ol59a", expected value: 4096 actual value: Current=4096; Configured=4096.
Cluster Verification Check "OS Kernel Parameter: shmmni" succeeded on node "ol59b", expected value: 4096 actual value: Current=4096; Configured=4096.
Cluster Verification check "OS Kernel Parameter: shmmni" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "shmmni" is properly set.
Cluster Verification Check "OS Kernel Parameter: shmall" succeeded on node "ol59a", expected value: 308920 actual value: Current=1073741824; Configured=1073741824.
Cluster Verification Check "OS Kernel Parameter: shmall" succeeded on node "ol59b", expected value: 308920 actual value: Current=4294967296; Configured=4294967296.
Cluster Verification check "OS Kernel Parameter: shmall" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "shmall" is properly set.
Cluster Verification Check "OS Kernel Parameter: file-max" succeeded on node "ol59a", expected value: 6815744 actual value: Current=6815744; Configured=6815744.
Cluster Verification Check "OS Kernel Parameter: file-max" succeeded on node "ol59b", expected value: 6815744 actual value: Current=6815744; Configured=6815744.
Cluster Verification check "OS Kernel Parameter: file-max" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "file-max" is properly set.
Cluster Verification Check "OS Kernel Parameter: ip_local_port_range" succeeded on node "ol59a", expected value: between 9000 & 65535 actual value: Current=between 9000 & 65500; Configured=between 9000 & 65500.
Cluster Verification Check "OS Kernel Parameter: ip_local_port_range" succeeded on node "ol59b", expected value: between 9000 & 65535 actual value: Current=between 9000 & 65535; Configured=between 9000 & 65535.
Cluster Verification check "OS Kernel Parameter: ip_local_port_range" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "ip_local_port_range" is properly set.
Cluster Verification Check "OS Kernel Parameter: rmem_default" succeeded on node "ol59a", expected value: 262144 actual value: Current=262144; Configured=262144.
Cluster Verification Check "OS Kernel Parameter: rmem_default" succeeded on node "ol59b", expected value: 262144 actual value: Current=262144; Configured=262144.
Cluster Verification check "OS Kernel Parameter: rmem_default" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "rmem_default" is properly set.
Cluster Verification Check "OS Kernel Parameter: rmem_max" succeeded on node "ol59a", expected value: 4194304 actual value: Current=4194304; Configured=4194304.
Cluster Verification Check "OS Kernel Parameter: rmem_max" succeeded on node "ol59b", expected value: 4194304 actual value: Current=4194304; Configured=4194304.
Cluster Verification check "OS Kernel Parameter: rmem_max" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "rmem_max" is properly set.
Cluster Verification Check "OS Kernel Parameter: wmem_default" succeeded on node "ol59a", expected value: 262144 actual value: Current=262144; Configured=262144.
Cluster Verification Check "OS Kernel Parameter: wmem_default" succeeded on node "ol59b", expected value: 262144 actual value: Current=262144; Configured=262144.
Cluster Verification check "OS Kernel Parameter: wmem_default" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "wmem_default" is properly set.
Cluster Verification Check "OS Kernel Parameter: wmem_max" succeeded on node "ol59a", expected value: 1048576 actual value: Current=1048576; Configured=1048576.
Cluster Verification Check "OS Kernel Parameter: wmem_max" succeeded on node "ol59b", expected value: 1048576 actual value: Current=1048576; Configured=1048576.
Cluster Verification check "OS Kernel Parameter: wmem_max" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "wmem_max" is properly set.
Cluster Verification Check "OS Kernel Parameter: aio-max-nr" succeeded on node "ol59a", expected value: 1048576 actual value: Current=3145728; Configured=3145728.
Cluster Verification Check "OS Kernel Parameter: aio-max-nr" succeeded on node "ol59b", expected value: 1048576 actual value: Current=1048576; Configured=1048576.
Cluster Verification check "OS Kernel Parameter: aio-max-nr" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the OS kernel parameter "aio-max-nr" is properly set.
Cluster Verification Check "Package: make-3.81" succeeded on node "ol59a", expected value: make-3.81 actual value: make-3.81-3.el5.
Cluster Verification Check "Package: make-3.81" succeeded on node "ol59b", expected value: make-3.81 actual value: make-3.81-3.el5.
Cluster Verification check "Package: make-3.81" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "make-3.81" is available on the system.
Cluster Verification Check "Package: binutils-2.17.50.0.6" succeeded on node "ol59a", expected value: binutils-2.17.50.0.6 actual value: binutils-2.17.50.0.6-20.el5_8.3.
Cluster Verification Check "Package: binutils-2.17.50.0.6" succeeded on node "ol59b", expected value: binutils-2.17.50.0.6 actual value: binutils-2.17.50.0.6-20.el5_8.3.
Cluster Verification check "Package: binutils-2.17.50.0.6" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "binutils-2.17.50.0.6" is available on the system.
Cluster Verification Check "Package: gcc-4.1.2 (x86_64)" succeeded on node "ol59a", expected value: gcc(x86_64)-4.1.2 actual value: gcc(x86_64)-4.1.2-54.el5.
Cluster Verification Check "Package: gcc-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: gcc(x86_64)-4.1.2 actual value: gcc(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: gcc-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "gcc-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: libaio-0.3.106 (x86_64)" succeeded on node "ol59a", expected value: libaio(x86_64)-0.3.106 actual value: libaio(x86_64)-0.3.106-5.
Cluster Verification Check "Package: libaio-0.3.106 (x86_64)" succeeded on node "ol59b", expected value: libaio(x86_64)-0.3.106 actual value: libaio(x86_64)-0.3.106-5.
Cluster Verification check "Package: libaio-0.3.106 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libaio-0.3.106 (x86_64)" is available on the system.
Cluster Verification Check "Package: glibc-2.5-58 (x86_64)" succeeded on node "ol59a", expected value: glibc(x86_64)-2.5-58 actual value: glibc(x86_64)-2.5-107.
Cluster Verification Check "Package: glibc-2.5-58 (x86_64)" succeeded on node "ol59b", expected value: glibc(x86_64)-2.5-58 actual value: glibc(x86_64)-2.5-107.
Cluster Verification check "Package: glibc-2.5-58 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "glibc-2.5-58 (x86_64)" is available on the system.
Cluster Verification Check "Package: compat-libstdc++-33-3.2.3 (x86_64)" succeeded on node "ol59a", expected value: compat-libstdc++-33(x86_64)-3.2.3 actual value: compat-libstdc++-33(x86_64)-3.2.3-61.
Cluster Verification Check "Package: compat-libstdc++-33-3.2.3 (x86_64)" succeeded on node "ol59b", expected value: compat-libstdc++-33(x86_64)-3.2.3 actual value: compat-libstdc++-33(x86_64)-3.2.3-61.
Cluster Verification check "Package: compat-libstdc++-33-3.2.3 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "compat-libstdc++-33-3.2.3 (x86_64)" is available on the system.
Cluster Verification Check "Package: glibc-devel-2.5 (x86_64)" succeeded on node "ol59a", expected value: glibc-devel(x86_64)-2.5 actual value: glibc-devel(x86_64)-2.5-107.
Cluster Verification Check "Package: glibc-devel-2.5 (x86_64)" succeeded on node "ol59b", expected value: glibc-devel(x86_64)-2.5 actual value: glibc-devel(x86_64)-2.5-107.
Cluster Verification check "Package: glibc-devel-2.5 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "glibc-devel-2.5 (x86_64)" is available on the system.
Cluster Verification Check "Package: gcc-c++-4.1.2 (x86_64)" succeeded on node "ol59a", expected value: gcc-c++(x86_64)-4.1.2 actual value: gcc-c++(x86_64)-4.1.2-54.el5.
Cluster Verification Check "Package: gcc-c++-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: gcc-c++(x86_64)-4.1.2 actual value: gcc-c++(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: gcc-c++-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "gcc-c++-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: libaio-devel-0.3.106 (x86_64)" succeeded on node "ol59a", expected value: libaio-devel(x86_64)-0.3.106 actual value: libaio-devel(x86_64)-0.3.106-5.
Cluster Verification Check "Package: libaio-devel-0.3.106 (x86_64)" succeeded on node "ol59b", expected value: libaio-devel(x86_64)-0.3.106 actual value: libaio-devel(x86_64)-0.3.106-5.
Cluster Verification check "Package: libaio-devel-0.3.106 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libaio-devel-0.3.106 (x86_64)" is available on the system.
Cluster Verification Check "Package: libgcc-4.1.2 (x86_64)" succeeded on node "ol59a", expected value: libgcc(x86_64)-4.1.2 actual value: libgcc(x86_64)-4.1.2-54.el5.
Cluster Verification Check "Package: libgcc-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: libgcc(x86_64)-4.1.2 actual value: libgcc(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: libgcc-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libgcc-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: libstdc++-4.1.2 (x86_64)" succeeded on node "ol59a", expected value: libstdc++(x86_64)-4.1.2 actual value: libstdc++(x86_64)-4.1.2-54.el5.
Cluster Verification Check "Package: libstdc++-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: libstdc++(x86_64)-4.1.2 actual value: libstdc++(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: libstdc++-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libstdc++-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: libstdc++-devel-4.1.2 (x86_64)" succeeded on node "ol59a", expected value: libstdc++-devel(x86_64)-4.1.2 actual value: libstdc++-devel(x86_64)-4.1.2-54.el5.
Cluster Verification Check "Package: libstdc++-devel-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: libstdc++-devel(x86_64)-4.1.2 actual value: libstdc++-devel(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: libstdc++-devel-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libstdc++-devel-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: sysstat-7.0.2" succeeded on node "ol59a", expected value: sysstat-7.0.2 actual value: sysstat-7.0.2-12.0.1.el5.
Cluster Verification Check "Package: sysstat-7.0.2" succeeded on node "ol59b", expected value: sysstat-7.0.2 actual value: sysstat-7.0.2-12.0.1.el5.
Cluster Verification check "Package: sysstat-7.0.2" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "sysstat-7.0.2" is available on the system.
Cluster Verification Check "Package: ksh-..." succeeded on node "ol59a", expected value: ksh-... actual value: ksh-20100621-12.el5.
Cluster Verification Check "Package: ksh-..." succeeded on node "ol59b", expected value: ksh-... actual value: ksh-20100621-12.el5.
Cluster Verification check "Package: ksh-..." succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "ksh-..." is available on the system.
Cluster Verification check "Users With Same UID" succeeded on nodes: ol59a,ol59b.
This test checks that multiple users do not exist with user id as "0".
Cluster Verification check "Current Group ID" succeeded on nodes: ol59a,ol59b.
This test verifies that the user is currently logged in to the user's primary group.
Cluster Verification check "Root user consistency" succeeded on nodes: ol59a,ol59b.
This test checks the consistency of the primary group of the root user across the cluster nodes
Cluster Verification check "CRS Integrity" succeeded on nodes: ol59a,ol59b.
This test checks the integrity of Oracle Clusterware stack across the cluster nodes.
Cluster Verification check "Node Application Existence" succeeded on nodes: ol59a,ol59b.
This test checks the existence of Node Applications on the system.
Cluster Verification check "Time zone consistency" succeeded on nodes: ol59a,ol59b.
This task checks for the consistency of time zones across systems.

Cluster Verification Check "Package: gcc-c++-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: gcc-c++(x86_64)-4.1.2 actual value: gcc-c++(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: gcc-c++-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "gcc-c++-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: libaio-devel-0.3.106 (x86_64)" succeeded on node "ol59a", expected value: libaio-devel(x86_64)-0.3.106 actual value: libaio-devel(x86_64)-0.3.106-5.
Cluster Verification Check "Package: libaio-devel-0.3.106 (x86_64)" succeeded on node "ol59b", expected value: libaio-devel(x86_64)-0.3.106 actual value: libaio-devel(x86_64)-0.3.106-5.
Cluster Verification check "Package: libaio-devel-0.3.106 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libaio-devel-0.3.106 (x86_64)" is available on the system.
Cluster Verification Check "Package: libgcc-4.1.2 (x86_64)" succeeded on node "ol59a", expected value: libgcc(x86_64)-4.1.2 actual value: libgcc(x86_64)-4.1.2-54.el5.
Cluster Verification Check "Package: libgcc-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: libgcc(x86_64)-4.1.2 actual value: libgcc(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: libgcc-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libgcc-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: libstdc++-4.1.2 (x86_64)" succeeded on node "ol59a", expected value: libstdc++(x86_64)-4.1.2 actual value: libstdc++(x86_64)-4.1.2-54.el5.
Cluster Verification Check "Package: libstdc++-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: libstdc++(x86_64)-4.1.2 actual value: libstdc++(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: libstdc++-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libstdc++-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: libstdc++-devel-4.1.2 (x86_64)" succeeded on node "ol59a", expected value: libstdc++-devel(x86_64)-4.1.2 actual value: libstdc++-devel(x86_64)-4.1.2-54.el5.
Cluster Verification Check "Package: libstdc++-devel-4.1.2 (x86_64)" succeeded on node "ol59b", expected value: libstdc++-devel(x86_64)-4.1.2 actual value: libstdc++-devel(x86_64)-4.1.2-54.el5.
Cluster Verification check "Package: libstdc++-devel-4.1.2 (x86_64)" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "libstdc++-devel-4.1.2 (x86_64)" is available on the system.
Cluster Verification Check "Package: sysstat-7.0.2" succeeded on node "ol59a", expected value: sysstat-7.0.2 actual value: sysstat-7.0.2-12.0.1.el5.
Cluster Verification Check "Package: sysstat-7.0.2" succeeded on node "ol59b", expected value: sysstat-7.0.2 actual value: sysstat-7.0.2-12.0.1.el5.
Cluster Verification check "Package: sysstat-7.0.2" succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "sysstat-7.0.2" is available on the system.
Cluster Verification Check "Package: ksh-..." succeeded on node "ol59a", expected value: ksh-... actual value: ksh-20100621-12.el5.
Cluster Verification Check "Package: ksh-..." succeeded on node "ol59b", expected value: ksh-... actual value: ksh-20100621-12.el5.
Cluster Verification check "Package: ksh-..." succeeded on nodes: ol59a,ol59b.
This is a prerequisite condition to test whether the package "ksh-..." is available on the system.
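
The same package checks can be reproduced by hand with rpm. A quick spot-check of a few of the packages verified above (list abbreviated) might look like this:

[oracle@ol59a ~]$ rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' \
    make binutils gcc libaio glibc sysstat ksh
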
Cluster Verification check "Users With Same UID" succeeded on nodes: ol59a,ol59b.
This test checks that multiple users do not exist with user id as "0".
Cluster Verification check "Current Group ID" succeeded on nodes: ol59a,ol59b.
This test verifies that the user is currently logged in to the user's primary group.
Cluster Verification check "Root user consistency" succeeded on nodes: ol59a,ol59b.
This test checks the consistency of the primary group of the root user across the cluster nodes
Cluster Verification check "CRS Integrity" succeeded on nodes: ol59a,ol59b.
This test checks the integrity of Oracle Clusterware stack across the cluster nodes.
Cluster Verification check "Node Application Existence" succeeded on nodes: ol59a,ol59b.
This test checks the existence of Node Applications on the system.
Cluster Verification check "Time zone consistency" succeeded on nodes: ol59a,ol59b.
This task checks for the consistency of time zones across systems.
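
Checks like the ones above are driven by the Cluster Verification Utility, and they can be rerun on demand outside of the tool that invoked them. A representative invocation for this two-node cluster, assuming a pre-database-install stage check, would be:

[oracle@ol59a ~]$ cluvfy stage -pre dbinst -n ol59a,ol59b -verbose
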

Unique database identifier check passed.
Validation of server pool succeeded.
Default listener validation succeeded.
Scan listener validation succeeded.

+DATA is  shared across the cluster nodes.
+FRA is  shared across the cluster nodes.
+FRA has enough space. Required space is 5625 MB , available space is 7628 MB.
+DATA has enough space. Required space is 4645 MB , available space is 27912 MB.
File Validations Successful.
Copying database files
DBCA_PROGRESS : 1%
DBCA_PROGRESS : 2%
DBCA_PROGRESS : 23%
Creating and starting Oracle instance
DBCA_PROGRESS : 24%
DBCA_PROGRESS : 27%
DBCA_PROGRESS : 28%
DBCA_PROGRESS : 29%
DBCA_PROGRESS : 32%
DBCA_PROGRESS : 35%
DBCA_PROGRESS : 36%
DBCA_PROGRESS : 38%
Creating cluster database views
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 54%
Completing Database Creation
DBCA_PROGRESS : 56%
DBCA_PROGRESS : 58%
DBCA_PROGRESS : 65%
DBCA_PROGRESS : 67%
DBCA_PROGRESS : 74%
DBCA_PROGRESS : 77%
Creating Pluggable Databases
DBCA_PROGRESS : 81%
DBCA_PROGRESS : 86%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/TEST.
Database Information:
Global Database Name:TEST
System Identifier(SID) Prefix:TEST
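
The creation log above comes from a silent-mode dbca run. For reference only, a representative 12c invocation that builds a RAC container database with one PDB on ASM would look roughly like the following; the template name, passwords, and PDB name are placeholder assumptions, not the exact values used here:

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname TEST -sid TEST \
  -nodelist ol59a,ol59b \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 -pdbName TESTPDB1 \
  -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
  -sysPassword oracle123 -systemPassword oracle123 \
  -pdbAdminPassword oracle123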

I am a big-time command line guy. I will always choose the command line over GUI or web-based applications. I like command-line options because they allow me to automate builds and configurations and deliver consistent results that are reliable and fast. In this post, I will show examples of how to create disk groups with asmca, driven entirely from the command line in silent mode.

Here are all the disks available to us. The OCRVOTE1 disk was consumed when we installed Oracle 12c Release 1 Grid Infrastructure.

[oracle@ol59a ~]$ /usr/sbin/oracleasm listdisks
DATA1
DATA2
DATA3
DATA4
FRA1
OCRVOTE1

In the simple shell script below, we create two additional disk groups: DATA and FRA. We create the DATA disk group first and then proceed with the FRA disk group. For the FRA disk group, we set one additional attribute, compatible.advm, so that we can later create an ACFS file system on it.

[oracle@ol59a ~]$ cat asmca_cr_dg.sh 
asmca -silent -createDiskGroup \
  -diskGroupName DATA \
  -diskList 'ORCL:DATA1,ORCL:DATA2,ORCL:DATA3,ORCL:DATA4' \
  -redundancy external \
  -au_size 4 -compatible.asm 12.1 \
  -compatible.rdbms 12.1 \
  -sysAsmPassword oracle123

asmca -silent -createDiskGroup \
  -diskGroupName FRA \
  -diskList 'ORCL:FRA1' \
  -redundancy external \
  -au_size 4 -compatible.asm 12.1 \
  -compatible.rdbms 12.1 \
  -compatible.advm 12.1 \
  -sysAsmPassword oracle123


[oracle@ol59a ~]$ ./asmca_cr_dg.sh 

Disk Group DATA created successfully.

Disk Group FRA created successfully.
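
To confirm that the attributes took effect, in particular compatible.advm on FRA, the disk group attributes can be listed with asmcmd from the Grid Infrastructure environment (the -l flag includes the attribute values):

[oracle@ol59a ~]$ asmcmd lsattr -l -G FRA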

Here’s a simple command to check the status of the ASM disk groups. The -A 2 option tells grep to print two additional lines after each matching line:

[oracle@ol59a ~]$ crsctl stat res -t |grep -A 2 ".dg"
ora.DATA.dg
               ONLINE  ONLINE       ol59a                    STABLE
               ONLINE  ONLINE       ol59b                    STABLE
ora.FRA.dg
               ONLINE  ONLINE       ol59a                    STABLE
               ONLINE  ONLINE       ol59b                    STABLE
--
ora.OCRVOTE.dg
               ONLINE  ONLINE       ol59a                    STABLE
               ONLINE  ONLINE       ol59b                    STABLE
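
An equivalent check from the ASM side is asmcmd lsdg, which reports the mount state, redundancy type, and free space of each disk group visible on the node:

[oracle@ol59a ~]$ asmcmd lsdg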