A lot of our customers seem to be confused about the licensing of Oracle management packs. As we install and configure OEM for our customers, we tell them to be careful to leverage only the features that they are licensed for. To clarify for our customers and others, I decided to detail when you need to pay for certain licenses and what you are allowed to use for free if you do not want to pay (or cannot afford to pay) for licenses of the Oracle management packs. To start, here’s what you are allowed to monitor without paying for licenses:

• Histograms
• Metrics
• OS statistics
• Optimizer statistics
• SQL Traces
• SQL statistics
• Service statistics
• Statspack
• System statistics
• Time model
• Wait model

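Note that Statspack, which appears in the list above, is still usable without any pack license. As a minimal sketch, it can be installed and reported on with the scripts that ship in $ORACLE_HOME/rdbms/admin (the PERFSTAT password and tablespace choices are prompted for and are site-specific):

-- install Statspack (prompts for the PERFSTAT password and tablespaces)
@?/rdbms/admin/spcreate.sql

-- take a snapshot, let the workload run, then take another snapshot
exec statspack.snap;

-- generate a report between two snapshot IDs
@?/rdbms/admin/spreport.sql
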
Starting in Oracle Database 11g, you can cautiously leverage the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter to control access to the packs or disable them altogether. This parameter accepts the following values:

• NONE
• DIAGNOSTIC
• DIAGNOSTIC+TUNING

In Oracle 12c Release 1 (12.1), the default value of the CONTROL_MANAGEMENT_PACK_ACCESS parameter for Oracle Enterprise Edition is “DIAGNOSTIC+TUNING”. For Standard Edition, the default value is NONE.
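
If you want to verify or change the setting, here is a quick sketch from SQL*Plus (set the value to match what you are actually licensed for):

show parameter control_management_pack_access

-- disable the packs if you are not licensed for them
alter system set control_management_pack_access = 'NONE' scope=both;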

Nowadays, most Oracle sales folks are trained to sell the Diagnostics and Tuning Packs as part of every database, so expect that pitch from experienced Oracle sales people. You may even get a response from a sales person asking how you could possibly support or maintain a business-critical database without leveraging these features.

Let’s continue our discussion with the Database Diagnostics Pack. Usage of any of the following features requires licensing the Database Diagnostics Pack:

• Active Session History (ASH)
• Automatic Database Diagnostic Monitor (ADDM)
• Automatic Workload Repository
• Blackouts
• Dynamic metric baselines
• Event history and metric history (database and host)
• Event notifications: notification methods, rules, and schedules
• Monitoring templates
• Performance monitoring (database and host)

You will also have to pay for the Database Diagnostics Pack if you leverage any of the following database stored procedures and packages:

• DBMS_ADDM
• DBMS_ADVISOR
• DBMS_WORKLOAD_REPOSITORY

If you leverage any of the following views, you will also end up paying for the Diagnostics Pack:

• V$ACTIVE_SESSION_HISTORY view
• Views with DBA_ADVISOR_ prefix
• Views with DBA_ADDM_ prefix
• Views with DBA_HIST_ prefix

If you leverage any of the following SQL scripts from the $ORACLE_HOME/rdbms/admin directory, you will likewise be liable for the Diagnostics Pack:

• addmrpt.sql
• addmrpti.sql
• ashrpt.sql
• ashrpti.sql
• awrddrpi.sql
• awrddrpt.sql
• awrextr.sql
• awrinfo.sql
• awrload.sql
• awrrpt.sql
• awrrpti.sql
• awrsqrpi.sql
• awrsqrpt.sql
• spawrrac.sql

Usage of any of the following features requires the Database Tuning Pack:

• Automatic Plan Evolution of SQL Plan Management
• Automatic SQL Tuning
• Reorganize objects
• SQL Access Advisor
• SQL Monitoring
• SQL Profiles
• SQL Tuning Advisor
• SQL Tuning Sets

The Tuning Pack has a direct dependency on the Diagnostics Pack. Stated another way, if you license the Tuning Pack, you will also have to license the Diagnostics Pack.

Similarly, usage of any of the following stored procedures and packages, views, or SQL scripts requires a license for the Database Tuning Pack:

• DBMS_ADVISOR package
• DBMS_SQLTUNE package
• V$SQL_MONITOR view
• V$SQL_PLAN_MONITOR view
• sqltrpt.sql report found in the $ORACLE_HOME/rdbms/admin/ directory

Usage of any of the following features requires the Database Configuration Management Pack:

• Clone Database
• Clone Oracle Home
• Compare configuration
• Database and Host Configuration
• Deployments
• Patch Database and View Patch Cache
• Patch staging
• Policies
• Search configuration

If you are concerned about a potential Oracle license violation, or wondering if you may be using a feature or option that you should not be, you can execute the following SQL code against your database to confirm whether you have used any licensable features or options:


col name for a45
col description for a40 word_wrapped
col feature_info for a35
set lines 266
set pages 66
select name, description, version, detected_usages,
       currently_used, first_usage_date, last_usage_date,
       feature_info
from DBA_FEATURE_USAGE_STATISTICS
where detected_usages > 0
/


A lot of my customers migrate databases from Solaris or AIX to Red Hat or Oracle Linux. I see more AIX databases being migrated to Linux than Solaris databases, but this is probably just a reflection of the customers that I am involved with. Here’s a simple diagram that I created for a customer in the financial sector (of course, all confidential information is removed) who migrated from AIX to Red Hat Linux.

Shareplex Zero Downtime Database Migration Strategy

This same strategy can be leveraged to migrate customers from AIX/Solaris to Linux on a virtualized infrastructure, or even from AIX/Solaris to Exadata, depending on the target platform. We do see more VMware customers than Oracle VM customers who want to migrate from a big-endian platform to a little-endian platform. I’ve got this entire transportable tablespace (TTS) migration almost automated. It is definitely scripted all the way through, and I have thoroughly tested the scripts at several customers. I guess I need to “put that lipstick on the pig,” GUI-ize it, and productize the scripts to provide additional value to my customers.

In this blog, everything starts with Shareplex. We need to plan for the Shareplex installation on the production database servers (both source and target) a couple of weeks prior to the final production cut-over date. We ask for a couple of weeks because we are likely to encounter firewall ports that need to be opened between the AIX/Solaris database server and the new Linux servers. We will install Shareplex on both AIX and Linux and start Shareplex in both environments. On the Linux side, the skeleton database should also be pre-created and all the Oracle software installed and patched. Also on the Linux side, we will need to stop the post process (we will define what the post process is later); the sp_ctrl sketch below shows the general idea.
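
As a rough sketch only, stopping and later restarting the post process is done from Shareplex’s sp_ctrl command-line utility on the target (Linux) server; confirm the exact commands against the Shareplex documentation for your release:

$ sp_ctrl

sp_ctrl> stop post
sp_ctrl> status

Then later, at cut-over time, when we are ready to drain the accumulated queue:

sp_ctrl> start post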

On the source system (in our example, the AIX database), we will define the Shareplex configuration, which identifies all the schemas or schema.tables that need to be replicated from the source database to the target database (in our example, the Linux database). I have a script that I can share which will generate the configuration file depending on which approach you choose; a sample configuration is sketched after this paragraph. Once we define and activate the configuration, the capture process will start reading the redo logs or archive logs on the source system for changes to objects listed in the configuration. The export process runs on the source system, reads data from the export queue, and sends it across the network to the target system. The import process works directly with the export process: it runs on the target system to receive data and build a post queue. We may have more than one export and import process; they are always paired, so if we have 2 export processes, we will have 2 import processes. By default, we have one of each. The post process also runs on the target system and reads the post queue, constructs SQL statements, and applies the SQL statements to the replicated objects. We may have one or more post processes depending on performance design and considerations.
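
For illustration only, a Shareplex configuration file follows a simple three-column layout of source object, target object, and routing map; the datasource, host, and SID names below are made-up placeholders, so check the Quest documentation for the exact syntax of your release:

datasource:o.PRODAIX

#source object          target object           routing map
scott.%                 scott.%                 linuxdb01@o.PRODLNX
hr.employees            hr.employees            linuxdb01@o.PRODLNX

Once the file is in place, it is activated from sp_ctrl (activate config <filename>), which is what kicks off the capture process described above.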

Depending on the size of the database and the approach that we take (RMAN image copy, Data Pump, export/import, CTAS over the network, etc.), the database cloning process can take 1 hour, half a day, 1 day, 1 week, or longer. We need to architect our zero downtime migration so that, with any of these cloning options, the business perceives a zero downtime or near zero downtime database migration. So how do we do that? We defined all the processes involved with Shareplex at a high level. Let’s see how we can leverage that knowledge to start the zero downtime migration effort. Earlier we discussed that we have a configuration file which defines the objects that need to be replicated. We need to activate our configuration so that the capture process will start reading redo logs/archivelogs and generating Shareplex queues. Once we activate our configuration, changes on the source system will be captured, exported, and imported to the target system. Remember, earlier we stopped our post process as part of our high-level installation overview. All the changes from the source system will be sent to the target system (since we stopped the post process) and will accumulate for the entire duration of the migration window until we start the post process. We will need to size the target Shareplex file system with proper design considerations so that the file system can house all the Shareplex transaction queue files; a quick sizing sanity check is sketched below.
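
To get a rough feel for how much change volume will pile up while the post process is stopped, I like to look at the daily archived redo generation on the source database. The queue files are not a one-for-one match with redo volume, but it is a reasonable upper-bound starting point for sizing the file system. A simple sketch against V$ARCHIVED_LOG (daily totals in GB over the last two weeks):

select trunc(completion_time) day,
       round(sum(blocks*block_size)/1024/1024/1024,1) redo_gb
from   v$archived_log
where  completion_time > sysdate - 14
group by trunc(completion_time)
order by 1
/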

If you look at the top left corner of the diagram, we start with an RMAN image copy of the database to a file system. If you are on AIX, this can be a Veritas file system. If you cannot afford Veritas, you can perform the RMAN backup to an NFS file system. For VLDBs, you will perceive the performance difference between a locally mounted file system and an NFS file system. If you happen to have 10GigE available, you may not notice much of a difference.

The RMAN image copy strategy involves performing incremental updates. We will perform an initial level 0 image copy backup of the database and then take incremental level 1 backups numerous times, with the intention of updating the image copy with the incremental changes (aka Forever Incremental or Incrementally Updated Backups). Make sure to have block change tracking enabled before you start this process; a sketch of the RMAN loop follows.
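
A minimal sketch of the incrementally updated backup loop; the tag name and block change tracking file location are arbitrary examples, and the image copy destination comes from your configured channels or fast recovery area. From SQL*Plus on the source database (one time):

alter database enable block change tracking
  using file '/u01/app/oracle/bct/prod_bct.f';

Then from RMAN, run the following block on a schedule; the first pass creates the level 0 image copy and each subsequent pass rolls it forward:

run {
  recover copy of database with tag 'aix2lnx_img';
  backup incremental level 1 for recover of copy
    with tag 'aix2lnx_img' database;
}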

In this diagram, we also introduce an AIX staging server near the production database server. If we look at the transportable tablespace architecture, we must put the tablespaces in read-only mode to perform the TTS metadata Data Pump export. By introducing the staging server, you simplify your approach and can eliminate that migration activity (such as putting tablespaces in read-only mode) on the production database.

We need to go through the steps to synchronize the production database and the image copy database on the staging server. We can apply the final incremental level 1 backup and/or even apply archivelogs to the database copy on the staging server as necessary, depending on your approach.

  • This is where we need to decide if we want to work with SCNs and perform a zero downtime migration, or take a small outage and gain some flexibility. Some of our customers can afford the small amount of downtime, and some of our customers have told us that it must be zero downtime.
  • The staging server is needed so that you do not have to put the production database in read-only mode for the duration that the TTS export is running.

Next, we open the copied database with the resetlogs option. Once the database is open, we issue the commands to put the tablespaces being transported in read-only mode and copy the database files (in the absence of NFS or Veritas) to the Linux server; the SQL sketch below shows the general idea. If we have Veritas in the equation, we can simply swing the file system over to the Linux server and mount the volume. If we are using NFS, we simply present the NFS share to the Linux OS and mount it. For Solaris folks, we can mount a Solaris file system on Linux in read-only mode, so Veritas is not needed.
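
On the staging copy, a sketch of opening the clone and generating the read-only statements; the exclusion list is illustrative and will differ per environment:

alter database open resetlogs;

-- spool and run the generated statements for the tablespaces being transported
select 'alter tablespace '||tablespace_name||' read only;'
from   dba_tablespaces
where  contents = 'PERMANENT'
and    tablespace_name not in ('SYSTEM','SYSAUX')
/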

For the next step, this is where your Data Pump expertise starts to pay off. We need to perform a TTS export of the tablespaces that we are migrating from AIX to Linux. The TTS Data Pump export is relatively simple for those who have done this before, but it can be a little intimidating to those who are new to the process. Once the TTS metadata export is complete, we need to SFTP the metadata dump file and log to the Linux server. After this step, we no longer need the staging server and it can be shut down. We want to keep the TTS export log so that we can parse it to generate our RMAN endian conversion script. In our example, we are going to ASM, so the RMAN endian conversion will place the datafiles inside of ASM. The amount of time to migrate the database from file system to ASM will vary depending on the source and target storage arrays and whether we are talking 10GigE, bonded 1GigE, 4Gb HBAs, 8Gb HBAs, or InfiniBand. Even with the slower HBAs on older storage arrays, we can effectively drive 1 TB of endian conversion per hour. A sketch of these last steps follows.
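
A sketch of the last mile, with made-up tablespace names, directory object, and file paths; the platform name must match what V$TRANSPORTABLE_PLATFORM reports for your source. From SQL*Plus on the staging copy, verify the tablespace set is self-contained:

exec dbms_tts.transport_set_check('APP_DATA,APP_INDEX', true);
select * from transport_set_violations;

From the OS prompt, run the TTS metadata export against the read-only staging copy:

expdp system directory=DP_DIR dumpfile=tts_meta.dmp logfile=tts_meta.log \
      transport_tablespaces=APP_DATA,APP_INDEX

Then from RMAN on the Linux server, convert the copied datafiles into ASM:

convert datafile '/nfs/stage/app_data01.dbf', '/nfs/stage/app_index01.dbf'
  from platform 'AIX-Based Systems (64-bit)'
  format '+DATA';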