Oracle Database 12c Release 2 packs a multitude of new Data Guard features for high availability, data protection, and disaster recovery. Through the new functionality shared in this paper, DBAs can better protect mission-critical production databases from human errors, data corruptions, failures, and disasters. With the new features in Oracle 12.2, DBAs can deliver a robust reporting environment while addressing corporate disaster recovery goals.

Download the Data Guard 12.2 white paper here.

Posted by Charles Kim
Oracle ACE Director

I am happy to announce that I will be presenting on Data Guard at the next Arizona Oracle User Group meeting in Phoenix on January 26, 2017. At January’s AZORA meeting we’ll have a total of four presentations, with two sessions running at the same time in different rooms.

Bring a partner to make sure your organization does not miss out on any of the content. We’ll have pizza for lunch and cookies later in the day thanks to the generosity of OneNeck IT Solutions and Viscosity North America.

Once again, Republic Services will host (thank you Republic) in two of their brand-new training rooms on the first floor.

When: January 26, 2017 (Thursday) 12:30 pm – 4:00 pm

Where: Republic Services
1st Floor Training Rooms
14400 N 87th St (AZ101 & Raintree)
Scottsdale, AZ


12:30 – 1:00 Registration and Pizza
1:00-1:10 Welcome
1:10-2:10 Presentations

Room 1 Biju Thomas – OneNeck IT Solutions (Oracle ACE Director)
“Oracle Database 12c New Features for 11gR2 DBA”

Room 2 Charles Kim – Viscosity North America (Oracle ACE Director)
“Bullet Proof Your Data Guard Environment”

Here’s a summary of what I will be presenting:

Compliance with industry best practices can easily be achieved. This session will disseminate the fundamental Data Guard best practices and reference architectures that DBAs need to know to protect their Oracle ecosystem. The author of the Oracle Data Guard Handbook will demonstrate how DBAs should set up, configure, and monitor mission-critical Data Guard environments (including Active Data Guard).

Come see Data Guard best practices in action. The session concentrates on:
o Building the physical standby
o Monitoring and maintaining the physical standby
o Configuring Data Guard Broker
o Performing backup and recovery with RMAN
o Setting archive retention
o Performing switchovers and failovers
o Integrating Data Guard with OEM 13c
o What’s new in Oracle Data Guard 12.2

2:10-2:25 Break – Coffee & Cookies

2:25-3:25 Presentations

Room 1 Biju Thomas – OneNeck IT Solutions (Oracle ACE Director)
“Introduction to Oracle Databases in the Cloud”

Room 2 Jerry Ward – Viscosity North America
“Building Faceted Search Navigation in APEX with Oracle JET and PL/SQL Pipelines”

3:25-3:30 Wrap Up and Closing

Here’s a simple script that anyone can use to check for lag in Data Guard. Basically, there are two kinds of lag in Data Guard that we want to monitor. The first is the apply lag, which is the amount of time the standby database is lagging behind the primary database in applying redo data. The second is the transport lag, which tells us how far behind (in terms of time) the standby is because redo data is not yet available on the standby database. The transport lag can be used to diagnose bandwidth issues between the primary and standby database sites.
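
In Oracle, both lags are exposed through v$dataguard_stats as interval strings such as '+00 00:03:45' (days, then hh:mm:ss). Here is a minimal ksh/awk sketch of how a script like dg_check_lag.ksh might convert that string into seconds before comparing it against a threshold. The function name is illustrative, not the actual toolkit code:

```shell
# Hypothetical helper -- not the actual dg_check_lag.ksh code.
# Converts the interval string returned by, e.g.:
#   SELECT name, value FROM v$dataguard_stats
#   WHERE  name IN ('apply lag', 'transport lag');
# with values like '+00 00:03:45' (+days hh:mm:ss),
# into a plain number of seconds.
lag_to_seconds() {
  echo "$1" | awk -F'[+ :]+' '{ print ($2 * 86400) + ($3 * 3600) + ($4 * 60) + $5 }'
}

lag_to_seconds '+00 00:03:45'   # prints 225
lag_to_seconds '+01 02:00:00'   # prints 93600
```

Once the lag is a plain number of seconds, comparing it against any threshold is a one-line arithmetic test.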

You can specify the MIN_THRESHOLD parameter, which is the second parameter passed into the dg_check_lag.ksh script (below). For many of our customers on Active Data Guard, we change this script to measure, in seconds, how far behind the transport and apply lags are. It is common for us to send alerts when the transport or apply lag exceeds 30 seconds for Active Data Guard customers.

Stay tuned as I will reveal scripts to monitor gaps in archive log sequences. I use the term gap loosely here, as the scripts do not determine the gap from v$archive_gap; instead, they look at the number of applied archive logs on the standby database and compare it to the maximum sequence number for each thread.
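
The comparison itself can be sketched as follows. Assume a query (illustrative, not the actual script) returns one line per thread containing the highest generated sequence number and the highest applied sequence number; a small awk step then reports the difference:

```shell
# Illustrative sketch, not the actual monitoring script.  Assume a
# query against v$archived_log emits one line per thread of:
#   <thread#> <max sequence#> <max applied sequence#>
# e.g.  SELECT thread#, MAX(sequence#),
#              MAX(CASE WHEN applied = 'YES' THEN sequence# END)
#       FROM   v$archived_log GROUP BY thread#;
# The awk step below turns that into a per-thread gap count.
report_gaps() {
  awk '{ printf "thread %s: %d log(s) behind\n", $1, $2 - $3 }'
}

# Sample data standing in for real sqlplus output:
printf '1 120 118\n2 121 121\n' | report_gaps
```

With sample data for two threads, the first thread shows a gap of two archive logs and the second shows none.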

Come visit Viscosity North America for the latest updates on local events, white papers, and case studies.

Session Title: Data Guard Attack!!
Session Number: 1580
Speakers: Charles Kim, Oracle ACE Director and Nitin Vengurlekar, Oracle ACE Director
Track: Database
Session Type: Hands-on Lab
Sub-Categorization: High Availability & Data Protection

There is no reason why Data Guard setup, configuration, maintenance, and monitoring cannot be offered as a service. Automation is the crux of any rapid deployment and cloud model. Setting up Data Guard should be a service catalog item in any DBaaS deployment.

You will also learn about the new Data Guard features available in Oracle Database 12.2.

Download the latest and greatest Data Guard automation toolkit from prior to attending this session. We have incorporated industry best practices into the DG Toolkit. This session will disseminate fundamental Data Guard best practices and demonstrate how DBAs can automate the setup, configuration, and monitoring of Data Guard environments with assistance from the Data Guard Toolkit (DG Toolkit).

In this hands-on deep-dive session, we will go through the step-by-step details of setting up Data Guard with the automation toolkit:
o Building the Physical Standby
o Monitoring and Maintaining the Physical Standby
o Configuring Data Guard Broker
o Performing Backup and Recovery with RMAN
o Setting Archive Retention
o Performing Switchovers and Failovers

Learning Objectives:

1. Most importantly, learn how to build the physical standby with ease and automation using the DG Toolkit
2. Learn how to monitor the physical standby database with DG Toolkit

Outline / Content Structure:
Perform preliminary checks prior to starting the Data Guard build
1. Perform assessments on the source database
2. Perform assessments on the physical standby

Perform detailed steps to build the physical standby database
1. Look at building the physical standby with easy menu steps
2. Look at duplicating the physical standby database

Configure the Data Guard Broker

Perform monitoring of the Data Guard Environment
1. Monitor the physical standby for performance
2. See how far behind we are

Configure RMAN backup-to-disk options

Here’s a simple script to look at the v$recovery_process view:

The following dg_check_lag.ksh script can be leveraged to monitor a Data Guard environment and to send alerts if the apply lag or transport lag exceeds the threshold specified in the dg_check_lag.conf file. In our example, the dg_check_lag.conf file specifies a threshold of three hours. If we encounter a redo transport or apply lag that exceeds three hours, we send an alert to our Data Guard DBAs.
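
The alerting decision itself reduces to a numeric comparison. Here is a minimal sketch, assuming the conf file simply sets the threshold in seconds; the variable and function names are illustrative, not the actual toolkit code:

```shell
# Sketch only -- names are illustrative, not the real dg_check_lag.ksh.
# Assume dg_check_lag.conf contains something like:
#   MIN_THRESHOLD=10800        # 3 hours, expressed in seconds
MIN_THRESHOLD=10800

check_lag() {   # $1 = lag in seconds, $2 = label (apply/transport)
  if [ "$1" -gt "$MIN_THRESHOLD" ]; then
    echo "ALERT: $2 lag of $1 seconds exceeds $MIN_THRESHOLD"
    # A real script would mail the DG DBAs here, e.g. with mailx.
  fi
}

check_lag 14400 apply        # 4 hours: prints an ALERT line
check_lag 600 transport      # 10 minutes: prints nothing
```

Keeping the threshold in the conf file means the same script can run with a 30-second threshold for Active Data Guard customers and a 3-hour threshold elsewhere.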

Contents of the dg_check_lag.ksh script:


Contents of dg_check_lag.conf:

Contents of .ORACLE_BASE

This file must exist in the Oracle user's $HOME directory. We source the .ORACLE_BASE file because companies have widely varying standards for where they place ORACLE_BASE and where they keep their shell scripts. We keep it simple by leveraging our own configuration file, which points to where ORACLE_BASE is located and where all the shell scripts reside.
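
As a sketch, a minimal .ORACLE_BASE might look like the following. The paths are purely illustrative, since the whole point of the file is that each site sets its own:

```shell
# Hypothetical contents of $HOME/.ORACLE_BASE -- adjust to your site.
# Every script sources this file instead of hard-coding locations.
export ORACLE_BASE=/u01/app/oracle
export SBIN=$ORACLE_BASE/general/sh      # where the shell scripts live
```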

I am aggressively preparing demos for my 2-hour deep-dive session at IOUG Collaborate 2014: Session 974: 04/08/14 – 01:45 PM – 04:00 PM (Level 3, Lido 3101B) Extreme Oracle DB-Infrastructure-As-A-Service. Co-presenting with me will be Nitin Vengurlekar.

We will cover all the topics from Linux as a Service to RAC as a Service to ASM as a Service, and finish with Database as a Service.

From a RAC perspective, here’s a sample screen of what we will discuss. We have similar screen shots for ASM, Data Guard, RMAN and Linux:

[oracle@rac01 rac]$ ./rac
# ------------------------------------------------------------------------- #
#                RAC Menu System - rac-clust                             
# ------------------------------------------------------------------------- #
#   First Node:	rac01	VIP:       
#  Second Node:	rac02	VIP:       
# ------------------------------------------------------------------------- #
#  00.  Sync DBI Scripts Across All RAC Nodes                               #
# ------------------------------------------------------------------------- #
#  01.  Prepare Source RAC Cluster for Cloning (sudo)                       #
#       Will shutdown the Cluster and Unlock the /u01/app/12.1.0/grid Home 
#  02.  Lock the Grid Home:  /u01/app/12.1.0/grid (sudo) 
# ------------------------------------------------------------------------- #
#  03.  Prepare Oracle Directories (sudo)                                   #
# ------------------------------------------------------------------------- #
#  04.  Extract GI Home from Tarball (sudo)                                 #
#  05.  Extract DB Home from Tarball (sudo)                                 #
# ------------------------------------------------------------------------- #
#  20.  Setup SSH User Equivalence for All RAC Nodes                        #
# ------------------------------------------------------------------------- #
#  30.  Cleanup and Deconfig Submenu (sudo)                                 #
# ------------------------------------------------------------------------- #
#  40.  Clone Grid Infrastructure - /u01/app/12.1.0/grid                                 
#  41.  Run and serially on all RAC nodes            #
# ------------------------------------------------------------------------- #
#  50.  Execute in silent mode                                    #
# ------------------------------------------------------------------------- #
#  60.  Create DATA and FRA diskgroups
# ------------------------------------------------------------------------- #
#  70.  Clone Database Home - /u01/app/oracle/product/12.1.0/dbhome_1                               
# ------------------------------------------------------------------------- #
#  80.  Create RAC Database - VPROD                                
# ------------------------------------------------------------------------- #
# 100.  Post Database Tasks                                                 #
# ------------------------------------------------------------------------- #
#   x.  Exit                                                                #
# ------------------------------------------------------------------------- #
#   Enter Task Number:

You should synchronize the system time between your RAC nodes, and also between your primary and standby database servers, by enabling the NTP daemon. Enable NTP with the -x option to allow for gradual time changes, also referred to as slewing. Slewing is mandatory for Real Application Clusters (RAC) and is also recommended for Data Guard configurations. To set up NTP with the -x option, modify the /etc/sysconfig/ntpd file, add the flag to the OPTIONS variable, and restart the service with the "service ntpd restart" command.

# Drop root to id 'ntp:ntp' by default.
#OPTIONS="-u ntp:ntp -p /var/run/ -g"
OPTIONS="-x -u ntp:ntp -p /var/run/"

You can check your current NTP configuration by examining the process status and filtering on the ntp daemon. In the example below, we start the ntpd service and confirm with the ps command that the settings are correct:

[root@rac1 sysconfig]# service ntpd start
Starting ntpd:                                             [  OK  ]

[root@rac1 sysconfig]# ps -ef |grep -i ntp
ntp       3496     1  0 10:38 ?        00:00:00 ntpd -x -u ntp:ntp -p /var/run/
root      3500  2420  0 10:39 pts/1    00:00:00 grep -i ntp

Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups [ID 1389592.1]

What does this mean? It means we can perform migrations from AIX/Solaris/HP-UX to Linux with significantly reduced downtime, because incremental backups can now be taken across endian formats.

This was originally tested for Exadata, and now it is available to everyone.

Posted by Charles Kim, Oracle ACE Director