A lot of my customers migrate databases from Solaris or AIX to Red Hat or Oracle Linux. I see more AIX databases being migrated to Linux than Solaris databases, but this is probably just a reflection of the customers that I am involved with. Here's a simple diagram that I created for a customer in the financial sector (of course, all confidential information is removed) who migrated from AIX to Red Hat Linux.

Shareplex Zero Downtime Database Migration Strategy

This same strategy can be leveraged to migrate customers from AIX/Solaris to Linux on a virtualized infrastructure, or even from AIX/Solaris to Exadata, depending on the target platform. We do see more VMware customers than Oracle VM customers who want to migrate from a big-endian platform to a little-endian platform. I've got this entire transportable tablespace (TTS) migration almost automated. It is scripted all the way through, and I have thoroughly tested the scripts at several customer sites. I guess I need to "put that lipstick on the pig," GUI-ize it, and productize the scripts to provide additional value to my customers.

In this blog, everything starts with Shareplex. We need to plan for the Shareplex installation on the production database servers (both source and target) a couple of weeks prior to the final production cut-over date. We ask for a couple of weeks because we are likely to encounter firewall ports that need to be opened between the AIX/Solaris database server and the new Linux servers. We will install Shareplex on both AIX and Linux and start Shareplex in both environments. On the Linux side, the skeleton database should also be pre-created and all the Oracle software installed and patched. Also on the Linux side, we will need to stop the post process (we will define what the post process is later).
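As a rough sketch, the startup and the post stop look something like the following; the installation path is just a placeholder, and the exact prompts vary by Shareplex version and install location:

    # On both the AIX source and the Linux target, as the Shareplex OS user,
    # start the Shareplex daemon (sp_cop) in the background
    cd /u01/app/splex/bin        # hypothetical install path
    nohup ./sp_cop &

    # On the Linux target only, stop the post process so replicated changes
    # queue up instead of being applied to the skeleton database
    ./sp_ctrl
    sp_ctrl> stop post
    sp_ctrl> status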

On the source system (in our example, the AIX database), we will define the Shareplex configuration, which identifies all the schemas or schema.tables that need to be replicated from the source database to the target database (in our example, the Linux database). I have a script that I can share which will generate the configuration file depending on which approach you choose. Once we define and activate the configuration, the capture process will start reading the redo logs or archive logs on the source system for changes to objects listed in the configuration. The export process runs on the source system, reads data from the export queue, and sends it across the network to the target system. The import process works directly with the export process: it runs on the target system to receive data and build a post queue. We may have more than one export and import process; they are always paired, so if we have 2 export processes, we will have 2 import processes. By default, we have one of each. The post process also runs on the target system and reads the post queue, constructs SQL statements, and applies the SQL statements to the replicated objects. We may have one or more post processes depending on performance design and considerations.
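To make this concrete, here is a minimal sketch of what a configuration file might look like. The SIDs (PRODAIX, PRODLNX), host name (lnxdb01), and schema are hypothetical, and the wildcard/expansion syntax differs between Shareplex versions, so treat this as illustrative only:

    datasource:o.PRODAIX

    # source object        target object         routing map (target host @ target SID)
    hr.employees           hr.employees          lnxdb01@o.PRODLNX
    hr.departments         hr.departments        lnxdb01@o.PRODLNX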

Depending on the size of the database and the approach that we take (RMAN image copy, datapump, export/import, CTAS over the network, etc.), the database cloning process can take one hour, half a day, a day, a week, or longer. We need to architect our zero downtime migration so that with any of these cloning options, the business perceives a zero downtime or near zero downtime database migration. So how do we do that? We defined all the processes involved with Shareplex at a high level. Let's see how we can leverage our knowledge to start the zero downtime migration efforts. Earlier we discussed that we have a configuration file which defines the objects that need to be replicated. We need to activate our configuration so that the capture process will start reading redo logs/archivelogs and generating Shareplex queues. Once we activate our configuration, changes on the source system will be captured, exported, and imported to the target system. Remember, earlier we stopped our post process as part of our high-level installation overview. All the changes from the source system will be sent to the target system (since we stopped the post process) and will accumulate for the entire duration of the migration window until we start the post process. We will need to size the target Shareplex file system with proper design considerations so that the file system can house all the Shareplex transaction queue files.
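In sp_ctrl terms, the sequence is roughly the following (the configuration file name is a placeholder); keep an eye on queue growth so the Shareplex variable-data file system on the target does not fill up:

    # On the AIX source, activate the configuration; capture begins reading
    # redo/archive logs for the objects listed in the file
    sp_ctrl> activate config aix_to_linux.cfg

    # On either side, watch the processes and the queue backlog while the
    # post process remains stopped on the Linux target
    sp_ctrl> status
    sp_ctrl> qstatus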

If you look at the top left corner of the diagram, we start with the RMAN image copy of the database to a file system. If you are on AIX, this can be a Veritas file system. If you cannot afford Veritas, you can perform an RMAN backup to an NFS file system. For VLDBs, you will perceive the performance difference between a locally mounted file system and an NFS file system. If you happen to have 10GigE available, you may not notice much of a performance difference.

The RMAN image copy strategy involves performing incremental updates. We will perform an initial level 0 image copy backup of the database and then take an incremental level 1 backup numerous times, with the intention of updating the image copy with the incremental changes (aka Forever Incremental or Incrementally Updated Backups). Make sure block change tracking is enabled before you start this process so the level 1 backups only have to scan the changed blocks.
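A minimal sketch of the incrementally updated backup loop, assuming the image copy lives under a hypothetical /stage/imgcopy file system: the first run creates the level 0 image copy (there is nothing to recover yet), and every subsequent run takes a level 1 and merges it into the copy.

    SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
         USING FILE '/oradata/PROD/bct.chg';

    RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/stage/imgcopy/%U';
    RMAN> RUN {
            RECOVER COPY OF DATABASE WITH TAG 'aix2lnx';
            BACKUP INCREMENTAL LEVEL 1
              FOR RECOVER OF COPY WITH TAG 'aix2lnx'
              DATABASE;
          }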

In this diagram, we also introduce an AIX staging server near the production database server. If we look at the transportable tablespace architecture, we must put the tablespaces in read-only mode to perform the TTS metadata datapump export. If you introduce the staging server, you simplify your approach and can eliminate any of the migration activity (such as putting the database in read-only mode) on the production database.

We need to go through the steps to synchronize the production database and the image copy database on the staging server. We can perform the final incremental level 1 backup update and/or apply archivelogs to the database on the staging server as necessary, depending on your approach (a rough sketch of this final roll-forward follows the list below).

  • This is where we need to decide if we want to work with SCNs and perform a zero downtime migration, or take a small outage and have some flexibility. Some of our customers can afford the little downtime, and some have told us that it must be zero downtime.
  • The staging server is needed so that you do not have to put the production database in read-only mode for the duration that the TTS export is running.
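Here is one hedged sketch of that final roll-forward on the staging server, assuming a controlfile has already been restored or recreated for the staging instance and the archived logs shipped from production are visible to it:

    SQL> STARTUP MOUNT;
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
         -- supply the archived logs copied from production, then type CANCEL;
         -- for the SCN-based zero downtime variation, use UNTIL CHANGE <scn> instead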

Next, we open the copied database with the RESETLOGS option. Once the database is open, we issue the commands to put the tablespaces that we are transporting in read-only mode and copy the database files (in the absence of NFS or Veritas) to the Linux server. If we have Veritas in the equation, we can simply swing the file system over to the Linux server and mount the volume. If we are using NFS, we simply present the NFS share to the Linux OS and mount it. For Solaris folks, we can mount a Solaris file system on Linux in read-only mode, so Veritas is not needed.
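As a sketch, the open and read-only steps on the staging server look like this (the tablespace names are placeholders), and on the Linux side the NFS share can be mounted read-only with a standard mount command (the server name and export path are hypothetical):

    SQL> ALTER DATABASE OPEN RESETLOGS;
    SQL> ALTER TABLESPACE app_data READ ONLY;
    SQL> ALTER TABLESPACE app_indx READ ONLY;
         -- repeat for every tablespace being transported

    # On the Linux server, mount the NFS share read-only
    mount -t nfs -o ro stagehost:/stage /mnt/stage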

For the next step, this is where your datapump expertise starts to pay off. We need to perform a TTS export of the tablespaces that we are migrating from AIX to Linux. The TTS datapump export is relatively simple for those who have done this before, but it can be a little intimidating to those who are new to the process. Once we are done with the TTS metadata export, we need to SFTP the metadata dump file and export log to the Linux server. After this step, we no longer need the staging server, and it can be shut down. We want to keep the TTS export log so that we can parse it to generate our RMAN endian conversion script. In our example, we are going to ASM, so the RMAN endian conversion will place the datafiles inside of ASM. The amount of time to migrate the database from file system to ASM will vary depending on the source and target storage arrays and whether we are talking 10GigE, bonded 1GigE, 4Gb HBAs, 8Gb HBAs, or InfiniBand. Even with slower HBAs on older storage arrays, we can effectively drive 1 TB of endian conversion per hour.
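A hedged sketch of the two pieces, with placeholder tablespace names, directory object, and file paths; verify the source platform string against V$TRANSPORTABLE_PLATFORM before running the conversion:

    # On the staging server: TTS metadata export (tablespaces must already be read-only)
    expdp system DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp LOGFILE=tts_meta.log \
          TRANSPORT_TABLESPACES=app_data,app_indx TRANSPORT_FULL_CHECK=y

    # On the Linux server: endian conversion of each copied datafile into ASM
    RMAN> CONVERT DATAFILE '/mnt/stage/app_data01.dbf'
            FROM PLATFORM 'AIX-Based Systems (64-bit)'
            FORMAT '+DATA';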

