Abstract

Learn the secrets of the trade for rapidly provisioning Oracle DB Infrastructure-As-A-Service. This extreme session will cover Linux-As-A-Service, RAC-As-A-Service, ASM-As-A-Service, Database-As-A-Service, Backup-As-A-Service, and even Data-Guard-As-A-Service. Advanced techniques to deploy enterprise RAC and non-RAC databases in an automated fashion will be shared. Save days and even weeks of deployment time by attending this session. There is no reason why you, as a DBA or Architect, should not be able to deploy a fully patched RAC environment from bare metal Linux and create a RAC database in less than one hour. Anyone deploying RAC, or even non-RAC, will learn the secret sauce and the knowledge needed to deploy mission-critical systems in a way that is repeatable and consistent. Learn to deploy a fully patched (11.2.0.3 or 11.2.0.4 with PSU x, or 12.1 with PSU x) two node RAC in less than one hour.

Learn how to automate database builds and to leverage golden image database templates.

We can’t forget about multi-tenant deployment of Oracle 12c Pluggable Databases. Learn how to deploy pluggable databases (PDBs), migrate PDBs, and significantly increase your database consolidation density.

The details of adding nodes to an existing cluster and removing nodes from the cluster will also be disseminated.

Collaborate 2014 – Extreme Oracle DB Infrastructure As A Service.pdf


As of Oracle 12.1.0.2, with the Oracle Grid Infrastructure Typical Installation, the OUI automatically creates the Oracle Grid Infrastructure Management Repository (GIMR). The GIMR is housed in the same location where the first ASM disk group is created; in my case, that is where the OCR and voting disks reside. The OUI even creates the container database for it.
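If you want to verify where the GIMR landed after the installation, srvctl gained mgmtdb verbs in 12c; a quick sketch of the checks I would run (output will vary by environment):

$ srvctl status mgmtdb      # shows whether the -MGMTDB instance is running and on which node
$ srvctl config mgmtdb      # shows the configuration, including the spfile location, which reveals the disk group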



A lot of my customers migrate databases from Solaris or AIX to Red Hat or Oracle Linux. I see more AIX databases being migrated to Linux than Solaris databases, but that is probably just a reflection of the customers I work with. Here’s a simple diagram I created for a customer in the financial sector (with all confidential information removed, of course) who migrated from AIX to Red Hat Linux.

Shareplex Zero Downtime Database Migration Strategy

This same strategy can be leveraged to migrate customers from AIX/Solaris to Linux on a virtualized infrastructure, or even from AIX/Solaris to Exadata, depending on the target platform. We do see more VMware customers than Oracle VM customers who want to migrate from a big-endian platform to a little-endian platform. I’ve got this entire transportable tablespace (TTS) migration almost automated: it is scripted all the way through, and I have thoroughly tested the scripts at several customers. I guess I need to “put that lipstick on the pig,” build a GUI around it, and productize the scripts to provide additional value to my customers.

In this blog, everything starts with Shareplex. We need to plan for the Shareplex installation on the production database servers (both source and target) a couple of weeks prior to the final production cut-over date. We ask for a couple of weeks because we are likely to encounter firewall ports that need to be opened between the AIX/Solaris database server and the new Linux servers. We will install Shareplex on both AIX and Linux and start it in both environments. On the Linux side, the skeleton database should also be pre-created and all the Oracle software installed and patched. Also on the Linux side, we will need to stop the post process (we will define what the post process is later).

On the source system (in our example, the AIX database), we define the Shareplex configuration, which identifies all the schemas or schema.tables that need to be replicated from the source database to the target database (in our example, the Linux database). I have a script that I can share which will generate the configuration file depending on which approach you choose. Once we define and activate the configuration, the capture process starts reading the redo logs or archive logs on the source system for changes to the objects listed in the configuration. The export process runs on the source system, reads data from the export queue, and sends it across the network to the target system. The import process works directly with the export process: it runs on the target system to receive the data and build a post queue. We may have more than one export and import process; they are always paired, so if we have two export processes, we will have two import processes. By default, we have one of each. The post process also runs on the target system; it reads the post queue, constructs SQL statements, and applies them to the replicated objects. We may have one or more post processes depending on performance design and considerations.
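To make that a little more concrete, here is a minimal sketch of what a Shareplex configuration file can look like; the SIDs, host name, and schema below are made up for illustration, and the exact syntax can vary by Shareplex version:

datasource:o.AIXPROD

#source object          target object           routing map
HR.EMPLOYEES            HR.EMPLOYEES            lnxdb01@o.LNXPROD
HR.DEPARTMENTS          HR.DEPARTMENTS          lnxdb01@o.LNXPROD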

Depending on the size of the database and the approach that we take (RMAN image copy, Data Pump, export/import, CTAS over the network, etc.), the database cloning process can take one hour, half a day, a day, a week, or longer. We need to architect our zero downtime migration so that, with any of these cloning options, the business perceives a zero downtime or near zero downtime database migration. So how do we do that? We defined all the processes involved with Shareplex at a high level; let’s see how we can leverage that knowledge to start the zero downtime migration effort. Earlier we discussed that we have a configuration file which defines the objects that need to be replicated. We need to activate our configuration so that the capture process starts reading the redo logs/archive logs and generating Shareplex queues. Once we activate our configuration, changes on the source system will be captured, exported, and imported to the target system. Remember that earlier, as part of our high-level installation overview, we stopped our post process. All the changes from the source system will be sent to the target system (since the post process is stopped) and will accumulate for the entire duration of the migration window until we start the post process. We will need to size the target Shareplex file system with proper design considerations so that it can house all the Shareplex transaction queue files.
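The sp_ctrl commands behind that workflow look roughly like this (the configuration name is hypothetical, and you should confirm the command names against your Shareplex version):

sp_ctrl> stop post                      # on the Linux target, before activation
sp_ctrl> activate config aix_to_linux   # on the AIX source; capture and export start here
sp_ctrl> show                           # verify capture/export (and import on the target) are running
sp_ctrl> qstatus                        # watch the post queue accumulate on the target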

If you look at the top left corner of the diagram, we start with an RMAN image copy of the database to a file system. If you are on AIX, this can be a Veritas file system. If you cannot afford Veritas, you can perform an RMAN backup to an NFS file system. For VLDB databases, you will notice the performance difference between a locally mounted file system and an NFS file system. If you happen to have 10GigE available, you may not notice much of a difference.

The RMAN image copy strategy involves performing incremental updates. We perform an initial level 0 image copy backup of the database and then take incremental level 1 backups numerous times, with the intention of updating the image copy with the incremental changes (aka Forever Incremental, or Incrementally Updated Backups). Make sure block change tracking is enabled before you start this process.
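Enabling block change tracking is a single statement, and the forever-incremental loop is the standard RMAN recipe; here is a sketch with a hypothetical tag and tracking file location (the image copy itself lands wherever your disk channels are configured to write, which for this migration would be the NFS or Veritas file system):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
       USING FILE '/oradata/PROD/bct_prod.f';

RMAN> RUN {
        # the first run creates the level 0 image copy; each later run rolls it forward
        RECOVER COPY OF DATABASE WITH TAG 'aix_mig';
        BACKUP INCREMENTAL LEVEL 1
          FOR RECOVER OF COPY WITH TAG 'aix_mig' DATABASE;
      }

Repeat the RUN block as often as you like before the cut-over; each pass only moves the blocks changed since the previous pass.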

In this diagram, we also introduce an AIX staging server next to the production database server. If we look at the transportable tablespace architecture, we must put the tablespaces in read-only mode to perform the TTS metadata Data Pump export. If you introduce the staging server, you simplify your approach and can eliminate any of the migration activity (such as putting the database in read-only mode) on the production database.

We need to go through the steps to synchronize the production database and the image copy database on the staging server. We can perform the final incremental level 1 backup update and/or apply archive logs to the database on the staging server, as needed, depending on your approach.

  • This is where we need to decide if we want to work with SCNs and perform a zero downtime migration, or take a small outage and gain some flexibility. Some of our customers can afford the small downtime; some have told us it must be zero downtime.
  • The staging server is needed so that you do not have to put the production database in read-only mode for the duration of the TTS export.

Next, we open the copied database with the resetlogs option. Once the database is open, we issue the commands to put the entire database in read-only mode and copy the database files (in the absence of NFS or Veritas) to the Linux server. If we have Veritas in the equation, we can simply swing the file system over to the Linux server and mount the volume. If we are using NFS, we simply present the NFS share to the Linux OS and mount it. For Solaris folks, a Solaris file system can be mounted on Linux in read-only mode, so Veritas is not needed.
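A minimal sketch of that open-and-freeze step on the staging copy; spool the generated statements to a file and run them, and adjust the exclusion list for your database:

SQL> ALTER DATABASE OPEN RESETLOGS;

SQL> SELECT 'ALTER TABLESPACE ' || tablespace_name || ' READ ONLY;'
       FROM dba_tablespaces
      WHERE contents = 'PERMANENT'
        AND tablespace_name NOT IN ('SYSTEM', 'SYSAUX');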

For the next step, this is where your Data Pump expertise starts to pay off. We need to perform a TTS export of the tablespaces that we are migrating from AIX to Linux. The TTS Data Pump export is relatively simple for those who have done it before, but it can be a little intimidating to those who are new to the process. Once the TTS metadata export is complete, we need to SFTP the metadata dump file and log to the Linux server. After this step, we no longer need the staging server and it can be shut down. We want to keep the TTS export log so that we can parse it to generate our RMAN endian conversion script. In our example, we are going to ASM, so the RMAN endianness conversion will place the datafiles inside ASM. The amount of time it takes to migrate the database from the file system to ASM will vary with the source and target storage arrays and whether we are talking 10GigE, bonded 1GigE, 4Gb HBAs, 8Gb HBAs, or InfiniBand. Even with the slower HBAs on older storage arrays, we can effectively drive 1 TB of endianness conversion per hour.
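Here is roughly what the two key commands look like; the directory object, credentials, tablespace list, file names, and disk group are all illustrative:

$ expdp system/******* directory=TTS_DIR dumpfile=tts_meta.dmp logfile=tts_meta.log \
      transport_tablespaces=APPDATA,APPIDX transport_full_check=y

RMAN> CONVERT DATAFILE '/nfs_stage/appdata01.dbf'
        FROM PLATFORM 'AIX-Based Systems (64-bit)'
        FORMAT '+DATA';

On the Linux side, the metadata is then plugged in with an impdp whose transport_datafiles parameter points at the converted copies inside ASM.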


I am aggressively preparing demos for my 2-hour deep dive session at IOUG Collaborate 2014: Session 974: 04/08/14 – 01:45 PM – 04:00 PM (Level 3, Lido 3101B), Extreme Oracle DB-Infrastructure-As-A-Service. Co-presenting with me will be Nitin Vengurlekar.

We will cover all the topics from Linux as a Service to RAC as a Service to ASM as a Service, and finish with Database as a Service.

From a RAC perspective, here’s a sample screen of what we will discuss. We have similar screenshots for ASM, Data Guard, RMAN, and Linux:

[oracle@rac01 rac]$ ./rac
# ------------------------------------------------------------------------- #
#                RAC Menu System - rac-clust                             
# ------------------------------------------------------------------------- #
#   First Node:	rac01	VIP:  rac01-vip.viscosity.com       
#  Second Node:	rac02	VIP:  rac02-vip.viscosity.com       
# ------------------------------------------------------------------------- #
#  00.  Sync DBI Scripts Across All RAC Nodes                               #
# ------------------------------------------------------------------------- #
#  01.  Prepare Source RAC Cluster for Cloning (sudo)                       #
#       Will shutdown the Cluster and Unlock the /u01/app/12.1.0/grid Home 
#  02.  Lock the Grid Home:  /u01/app/12.1.0/grid (sudo) 
# ------------------------------------------------------------------------- #
#  03.  Prepare Oracle Directories (sudo)                                   #
# ------------------------------------------------------------------------- #
#  04.  Extract GI Home from Tarball (sudo)                                 #
#  05.  Extract DB Home from Tarball (sudo)                                 #
# ------------------------------------------------------------------------- #
#  20.  Setup SSH User Equivalence for All RAC Nodes                        #
# ------------------------------------------------------------------------- #
#  30.  Cleanup and Deconfig Submenu (sudo)                                 #
# ------------------------------------------------------------------------- #
#  40.  Clone Grid Infrastructure - /u01/app/12.1.0/grid                                 
#  41.  Run orainstRoot.sh and root.sh serially on all RAC nodes            #
# ------------------------------------------------------------------------- #
#  50.  Execute config.sh in silent mode                                    #
# ------------------------------------------------------------------------- #
#  60.  Create DATA and FRA diskgroups
# ------------------------------------------------------------------------- #
#  70.  Clone Database Home - /u01/app/oracle/product/12.1.0/dbhome_1                               
# ------------------------------------------------------------------------- #
#  80.  Create RAC Database - VPROD                                
# ------------------------------------------------------------------------- #
# 100.  Post Database Tasks                                                 #
# ------------------------------------------------------------------------- #
#   x.  Exit                                                                #
# ------------------------------------------------------------------------- #
#   Enter Task Number:


We are assuming that you have already installed kmod-oracleasm and oracleasm-support RPMs with yum:

 
# yum install kmod-oracleasm -y
# yum install oracleasm-support -y

For Red Hat Linux, you can download kmod-oracleasm from the Red Hat support site. Check out my previous blog post on where to download kmod-oracleasm for Red Hat 6.4 and above.

oracleasmlib is not available from the default yum repository. You can pull the oracleasmlib RPM from Oracle’s ASMLIB page for Oracle Linux 6:

[root@rac01 software]# rpm -ihv oracleasmlib-2.0.4-1.el6.x86_64.rpm 
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]

After we install the RPMs, we need to configure ASMLib so that the driver is owned by the oracle user, starts on boot, and scans for ASM disks on boot.

[root@rac01 software]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: 
Writing Oracle ASM library driver configuration: done

As the final step in the process, we need to initialize ASMLIB and confirm that it was successfully started:

 
[root@rac01 software]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size 
Mounting ASMlib driver filesystem: /dev/oracleasm

[root@rac01 software]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

Posted by Charles Kim, Oracle ACE Director


Iptables

The culprit behind this lovely message during the “Validating public and private interfaces across cluster nodes” phase was IPTABLES, and the fix was to turn it off. On Red Hat 6 or Oracle Linux 6, you should have IPTABLES turned off during the installation. Here’s how you can turn off IPTABLES and disable it from restarting after a reboot:

[root@rac02 ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@rac02 ~]# chkconfig iptables off



It’s been a while since I stood up a RAC environment on VirtualBox. Here are the steps to create the VDI or VMDK disks to provision for the RAC VMs:

$ VBoxManage createhd --filename asm1.vmdk --size 4096 --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 1e2ab785-7687-42a5-bfd9-28962820f0eb
$ VBoxManage createhd --filename asm2.vmdk --size 4096 --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 50f8d391-9d0a-44ce-ad3a-0d247685d07a
$ VBoxManage createhd --filename asm3.vmdk --size 4096 --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: da189364-7f5e-4012-9c33-5696b32b27be
$ VBoxManage createhd --filename asm4.vmdk --size 4096 --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 05aa7755-8e4e-4023-90c2-c82a911ac94f
$ VBoxManage createhd --filename asm5.vmdk --size 4096 --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: baaba3c2-a589-4f2d-acc9-1430f6cbce06
$ VBoxManage createhd --filename asm6.vmdk --size 4096 --format VMDK --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 3e647f23-9ca3-4bf4-bb40-85a61629c1ae

I am creating VMDK-format disks so that I can port them to VMware later. To create a VDI disk, change --format to VDI. Create a VDI disk if you do not plan to migrate to another virtualization platform.

Once I’ve created the ASM LUNs, I designate them as shareable with the VBoxManage executable:
$ VBoxManage modifyhd asm1.vmdk -type shareable
$ VBoxManage modifyhd asm2.vmdk -type shareable
$ VBoxManage modifyhd asm3.vmdk -type shareable
$ VBoxManage modifyhd asm4.vmdk -type shareable
$ VBoxManage modifyhd asm5.vmdk -type shareable
$ VBoxManage modifyhd asm6.vmdk -type shareable
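Once the disks are shareable, they still have to be attached to each RAC VM. Something along these lines should work, assuming a SATA controller named "SATA" and VM names rac01/rac02; repeat per disk and per VM, bumping the port number for each disk:

$ VBoxManage storageattach rac01 --storagectl "SATA" --port 1 --device 0 \
      --type hdd --medium asm1.vmdk --mtype shareable
$ VBoxManage storageattach rac02 --storagectl "SATA" --port 1 --device 0 \
      --type hdd --medium asm1.vmdk --mtype shareable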

Posted in RAC

The more an Oracle RAC environment grows in size, complexity, and importance, the more important it is to achieve high levels of automation and standardization. Increased levels of automation and standardization create high reliability and allow production DBAs to focus on infrastructure improvements and proactive performance tuning. In this blog post, I am going to share with you how I deal with rotating the various log files that have a tendency to grow and grow in an Oracle environment. I do not use “cp /dev/null” commands against the log files.

The following script will generate the scripts necessary to rotate all the database alert logs, the ASM alert log, the listener log, and the SCAN listener log(s) for a RAC node.

The script depends on the SH environment variable being set. The SH environment variable is simply the location where you store all your shell scripts. The script creates a sub-directory called logrotate. In the logrotate directory, it creates two files for each database alert log, the ASM alert log, the listener log, and the SCAN listener log(s). The first file is the logrotate state file; the second file contains the actual logrotate directives. For the database instances and the SCAN listener, the script performs a “ps -ef” and looks for actively running occurrences of the SCAN listener and database instances.

At the very end of the script, we generate the logrotate commands for you to put into your weekly master cleanup script. We promote two sets of cleanup scripts: a daily cleanup script to handle things like audit log purges, and a weekly cleanup script to address all the growing log files required by Oracle. We no longer have to deal with the Oracle Clusterware log files, as Oracle started rotating those logs for us automatically in Oracle Database 11g Release 2.

function log_rotate {
# Generates logrotate config and state files for every database alert log,
# the ASM alert log, the listener log, and the SCAN listener log(s) on this node
export SQLP="sqlplus -s / as sysdba"
export SET="set pages 0 trims on lines 2000 echo off ver off head off feed off"

# Detect the running ASM instance (e.g. +ASM1) from its pmon process
export ASM_RUNNING=$(ps -ef |grep -i asm_pmon |awk '{print $8}' |sed "s/asm_pmon_//g" |egrep -v "sed|grep")
[ "$ASM_RUNNING" != "" ] && ASM_INSTANCE=$(echo $ASM_RUNNING |sed '$s/.$//')

LISTENER_LOG=$ORACLE_BASE/diag/tnslsnr/$(hostname -s)/listener/trace/listener.log

# Look up the diag trace directory (alert log location) for the current ORACLE_SID
function diag {
export DIAG_DEST=$(
echo "
$SET
select value from v\$diag_info where name='Diag Trace';" |$SQLP )
}

# -- Determine ASM Log
export ORACLE_SID=$ASM_RUNNING
export ORAENV_ASK=NO
. oraenv -s
export GRID_HOME=$ORACLE_HOME
diag;

ASM_LOG=$DIAG_DEST/alert_${ORACLE_SID}.log

ls -l  $ASM_LOG
ls -l  $LISTENER_LOG

# Write a logrotate config file for log file $1 under the name $2,
# then echo the logrotate command that will rotate it
function rotate {
export LOGFILE=$1
export CONFIG_FILE=$2
export PATH=$PATH:/usr/sbin
export CONF_DIR=$SH/logrotate
[ ! -d "$CONF_DIR" ] && ( echo $CONF_DIR does not exist .. issuing mkdir; mkdir -p $CONF_DIR )

export CONF=$CONF_DIR/$CONFIG_FILE

cat <<!! >$CONF
$LOGFILE {
weekly
copytruncate
rotate 2
compress
}
!!

echo logrotate -s $CONF_DIR/log_rotate_status.$CONFIG_FILE -f $CONF
}

# Generate a logrotate config for every running (non-ASM) database instance
for DATABASES in $(ps -ef |grep -i pmon |grep -v ASM |awk '{print $8}' |sed "s/ora_pmon_//g" |egrep -v "sed|grep")
do
  export DB=$(echo $DATABASES |sed '$s/.$//')
  export ORACLE_SID=$DATABASES
  export ORAENV_ASK=NO
  . oraenv -s
  diag;
  export DB_LOG=$DIAG_DEST/alert_${ORACLE_SID}.log
  ls -l $DB_LOG

  rotate $DB_LOG $DATABASES
done

# Generate a logrotate config for each SCAN listener running locally on this node
for SCAN in $(ps -ef |grep -i tns |grep SCAN |awk '{print $9}')
do
export LOWER_SCAN_LISTENER=$(echo $SCAN |tr '[A-Z]' '[a-z]')
SCAN_LISTENER_LOG=$GRID_HOME/log/diag/tnslsnr/$(hostname -s)/$LOWER_SCAN_LISTENER/trace/$LOWER_SCAN_LISTENER.log
ls -l  $SCAN_LISTENER_LOG
rotate $SCAN_LISTENER_LOG $LOWER_SCAN_LISTENER
done

rotate $LISTENER_LOG listener
rotate $ASM_LOG $ASM_RUNNING
}
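To use it, I set SH to my script directory, source the file that holds the log_rotate function, and run it once per RAC node (the script file name here is hypothetical):

$ export SH=/u01/app/oracle/general/sh/scripts
$ . $SH/log_rotate.sh
$ log_rotate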

Here’s a sample output of the log rotation script:

logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.test1 -f /u01/app/oracle/general/sh/scripts/logrotate/test1
logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.erpqa1 -f /u01/app/oracle/general/sh/scripts/logrotate/erpqa1
logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.listener -f /u01/app/oracle/general/sh/scripts/logrotate/listener
logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.listener_scan1 -f /u01/app/oracle/general/sh/scripts/logrotate/listener_scan1
logrotate -s /u01/app/oracle/general/sh/scripts/logrotate/log_rotate_status.+ASM1 -f /u01/app/oracle/general/sh/scripts/logrotate/+ASM1

Note that the -s option specifies an alternate state file. Since we are executing logrotate as the oracle or grid user rather than root, we must specify the -s option; the default state file is /var/lib/logrotate/status, which a non-root user typically cannot write to.

As you can see, it creates a logrotate script for two of our databases, the local ASM instance, the database listener, and the SCAN listener. If you drill down into the actual logrotate config, you will notice that it is designed to rotate on a weekly basis, copy the file and truncate the original (copytruncate), keep two copies, and compress them. Here’s a sample logrotate config:

cat logrotate/listener_scan1
/u01/app/grid/11203/log/diag/tnslsnr/dallinux01/listener_scan1/trace/listener_scan1.log {
weekly
copytruncate
rotate 2
compress
}
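The generated logrotate commands then drop straight into the weekly master cleanup script mentioned earlier; a hypothetical crontab entry for the oracle user:

# run the weekly cleanup (which calls the generated logrotate commands) every Sunday at 02:00
00 02 * * 0 /u01/app/oracle/general/sh/scripts/weekly_cleanup.ksh > /tmp/weekly_cleanup.log 2>&1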

Book Title:
Successfully Virtualize Business Critical Oracle Databases

VMware iBook Cover

Here’s the book description:
Written by VMware vExperts Charles Kim (VCP) of Viscosity North America and George Trujillo (VCI) of HortonWorks, along with VMware VCI and Learning Specialist Steve Jones and Darryl Smith, Chief Database Architect in EMC’s Global IT organization, this book provides critical instructions for deploying Oracle standalone and Real Application Clusters (RAC) databases on VMware enterprise virtualized platforms. You will learn how to set up an Oracle ecosystem within a virtualized infrastructure for enterprise deployments. We share industry best practices for installing and configuring Linux and for deploying Oracle Grid Infrastructure and databases in a matter of hours. Whether you are deploying a 4-node RAC cluster or a single standalone database, we will lead you to the foundation that will allow you to rapidly provision database-as-a-service. We disseminate key details on creating golden image templates, from the virtual machine to the Oracle binaries and databases. You will learn from industry experts how to troubleshoot production Oracle database servers running in VMware virtual infrastructures.

Audience:
Database Admins/Architects, VMware Admins, System Admins/Architects, Project Architects
This book is designed for Oracle DBAs and VMware administrators who need to learn the art of virtualizing Oracle.