Many of my customers migrate databases from Solaris or AIX to Red Hat or Oracle Linux. I see more AIX databases being migrated to Linux than Solaris databases, but this is probably just a reflection of the customers that I am involved with. Here’s a simple diagram that I created for a customer in the financial sector (with all confidential information removed, of course) who migrated from AIX to Red Hat Linux.

Shareplex Zero Downtime Database Migration Strategy

This same strategy can be leveraged to migrate customers from AIX/Solaris to Linux on a virtualized infrastructure, or even from AIX/Solaris to Exadata, depending on the target platform. We do see more VMware customers than Oracle VM customers who want to migrate from a big-endian platform to a little-endian platform. I’ve got this entire transportable tablespace (TTS) migration almost automated. It is scripted all the way through, and I have thoroughly tested the scripts at several customers. I guess I need to “put that lipstick on the pig,” GUI-ize it, and productize the scripts to provide additional value to my customers.

In this blog, everything starts with Shareplex. We need to plan for the Shareplex installation on the production database servers (both source and target) a couple of weeks prior to the final production cut-over date. We ask for a couple of weeks because we are likely to encounter firewall ports that need to be opened between the AIX/Solaris database server and the new Linux servers. We will install Shareplex on both AIX and Linux and start Shareplex in both environments. On the Linux side, the skeleton database should also be pre-created and all the Oracle software installed and patched. Also on the Linux side, we will need to stop the post process (we will define what the post process is later).
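As part of that planning, a quick connectivity check from the source server to the target saves surprises on cut-over day. The sketch below assumes the default Shareplex TCP port of 2100 (confirm the port for your installation) and an illustrative target host name:

# run from the AIX/Solaris source against the illustrative target host lnxdb01
# (use "telnet lnxdb01 2100" if nc is not installed on the source)
nc -zv lnxdb01 2100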

On the source system (in our example, the AIX database), we will define the Shareplex configuration, which identifies all the schemas or schema.tables that need to be replicated from the source database to the target database (in our example, the Linux database). I have a script that I can share which will generate the configuration file depending on which approach you choose. Once we define and activate the configuration, the capture process will start reading the redo logs or archive logs on the source system for changes to the objects listed in the configuration. The export process runs on the source system, reads data from the export queue, and sends it across the network to the target system. The import process works directly with the export process; it runs on the target system to receive the data and build a post queue. We may have more than one export and import process; they are always paired, so if we have 2 export processes, we will have 2 import processes. By default, we have one of each. The post process also runs on the target system; it reads the post queue, constructs SQL statements, and applies those SQL statements to the replicated objects. We may have one or more post processes depending on performance design and considerations.
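For reference, here is a minimal sketch of what such a configuration file can look like. The SIDs, host name, and schema objects are illustrative, and the exact syntax (including wildcard support) varies by Shareplex version:

datasource:o.AIXPROD
#source object        target object         routing map
hr.employees          hr.employees          lnxdb01@o.LNXPROD
hr.departments        hr.departments        lnxdb01@o.LNXPROD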

Depending on the size of the database and the approach that we take (RMAN image copy, Data Pump, export/import, CTAS over the network, etc.), the database cloning process can take an hour, half a day, a day, a week, or longer. We need to architect our zero downtime migration so that with any of these cloning options, the business perceives a zero downtime or near zero downtime database migration. So how do we do that? We defined all the processes involved with Shareplex at a high level. Let’s see how we can leverage that knowledge to start the zero downtime migration effort. Earlier we discussed that we have a configuration file which defines the objects that need to be replicated. We need to activate our configuration so that the capture process will start reading redo logs/archive logs and generating Shareplex queues. Once we activate our configuration, changes on the source system will be captured, exported, and imported to the target system. Remember that earlier, as part of our high-level installation overview, we stopped our post process. All the changes from the source system will be sent to the target system (since the post process is stopped) and will accumulate for the entire duration of the migration window until we start the post process. We will need to size the target Shareplex file system with proper design considerations so that it can house all the Shareplex transaction queue files.
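At a high level, the sp_ctrl commands for this phase look something like the following; the configuration file name is illustrative, and the comments simply indicate which side each command runs on:

# on the target (Linux) side, before activation
sp_ctrl> stop post

# on the source (AIX) side
sp_ctrl> activate config aix_to_linux.cfg
sp_ctrl> status

# back on the target, watch the post queue accumulate
sp_ctrl> qstatus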

If you look at the top left corner of the diagram, we start with the RMAN image copy of the database to a file system. If you are on AIX, this can be a Veritas file system. If you cannot afford Veritas, you can perform an RMAN backup to an NFS file system. For VLDBs, you will notice the performance difference between a locally mounted file system and an NFS file system. If you happen to have 10GbE available, you may not notice much of a difference.

The RMAN image copy strategy involves performing incremental updates. We will perform an initial level 0 image copy backup of the database and take incremental level 1 backups numerous times, with the intention of updating the image copy with the incremental changes (aka Forever Incremental or Incrementally Updated Backups). Make sure to have block change tracking enabled before you start this process.
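Here is a minimal sketch of that loop; the block change tracking file location, backup destination, and tag are illustrative:

# enable block change tracking once on the source database
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/oradata/PROD/bct.f';

# repeat this RMAN run as often as needed; the first pass creates the level 0
# image copies on the NFS/Veritas file system, subsequent passes roll them forward
RMAN> RUN {
        ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/nfs/rman/PROD/%U';
        RECOVER COPY OF DATABASE WITH TAG 'mig_img';
        BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'mig_img' DATABASE;
      }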

In this diagram, we also introduce an AIX staging server near the production database server. If we look at the transportable tablespace architecture, we must put the tablespaces in read-only mode to perform the TTS metadata Data Pump export. If you introduce the staging server, you simplify your approach and can eliminate that migration activity (such as putting tablespaces in read-only mode) on the production database.

We need to go through the steps to synchronize the production database and the image copy database on the staging server. We can perform the final incremental level 1 backup update and/or even apply archive logs to the database on the staging server, as necessary, depending on your approach.

  • This is where we need to decide if we want to work with SCNs and perform a zero downtime migration, or take a small outage and gain some flexibility. Some of our customers can afford the small downtime, and some of our customers have told us that it must be zero downtime.
  • The staging server is needed so that you do not have to put the production database in read-only mode for the duration that the TTS export is running.

Next, we open the copied database with the resetlogs option. Once the database is open, we issue the commands to put the entire database in read-only mode and copy the database files (in the absence of NFS or Veritas) to the Linux server. If we have Veritas in the equation, we can simply swing the file system over to the Linux server and mount the volume. If we are using NFS, we simply present the NFS share to the Linux OS and mount it. For Solaris folks, we can mount a Solaris file system on Linux in read-only mode, and Veritas is not needed.
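On the staging copy, that sequence looks roughly like this; the excluded tablespace names (particularly the undo tablespace) are assumptions that need to match your database:

SQL> ALTER DATABASE OPEN RESETLOGS;

-- generate the read-only statements for every application tablespace
SQL> SELECT 'ALTER TABLESPACE ' || tablespace_name || ' READ ONLY;'
       FROM dba_tablespaces
      WHERE tablespace_name NOT IN ('SYSTEM','SYSAUX','TEMP','UNDOTBS1');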

For the next step, this is where your Data Pump expertise starts to pay off. We need to perform a TTS export of the tablespaces that we are migrating from AIX to Linux. The TTS Data Pump export is relatively simple for those who have done it before, but it can be a little intimidating to those who are new to the process. Once the TTS metadata export is complete, we need to SFTP the metadata export dump file and log to the Linux server. After this step, we no longer need the staging server, and it can be shut down. We want to keep the TTS export log so that we can parse it to generate our RMAN endian conversion script. In our example, we are going to ASM, so the RMAN endianness conversion will place the datafiles inside ASM. The amount of time to migrate the database from the file system to ASM will vary depending on the source and target storage arrays and whether we are talking 10GbE, bonded 1GbE, 4Gb HBAs, 8Gb HBAs, or InfiniBand. Even with slower HBAs on older storage arrays, we can effectively drive 1 TB of endianness conversion per hour.
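Here is a hedged sketch of the two key commands; the directory object, tablespace names, dump file names, datafile paths, and disk group are all illustrative:

# TTS metadata export on the staging server
expdp system directory=MIG_DIR dumpfile=tts_meta.dmp logfile=tts_meta.log \
      transport_tablespaces=APP_DATA,APP_INDEX transport_full_check=y

# endianness conversion on the Linux side, writing the datafiles directly into ASM
RMAN> CONVERT DATAFILE '/nfs/PROD/app_data01.dbf', '/nfs/PROD/app_index01.dbf'
      FROM PLATFORM 'AIX-Based Systems (64-bit)'
      FORMAT '+DATA';

The converted datafiles are then plugged into the target database with an impdp run that references them via the transport_datafiles parameter.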


VExpert2014

Honored to make vExpert for the second consecutive year. Congratulations to all of the vExperts for 2014. I am proud to be part of this group.

We have 754 vExperts this year, which is impressive! Each of these vExperts has demonstrated significant contributions to the community and a willingness to share their expertise with others.


It’s time for the annual IOUG Collaborate Conference again, April 7-11 in Las Vegas at the Venetian and Sands Expo Center.

We have a lineup of great tracks and speakers focused on Cloud Computing, and this is a mini-compilation of the sessions in the Cloud tracks.

Enjoy, and I look forward to meeting everyone at Collaborate (#C14LV).

Best Wishes,

Charles Kim and the Cloud SIG Team (George, Bert, Kai, Ron, Steve).


Let’s look at the step-by-step procedures to create a VMware template from an existing golden image VM. The following URL to the PDF demonstrates the steps to templatize a VM and to provision a new VM from the newly created template:

VMware Clone to Template and Deploy Virtual Machine from this Template

Our concept of templatization does not stop at the VM. We also have to create a template of the Grid Infrastructure and Database Home binaries. After we install Oracle Database 11.2.0.3 or 11.2.0.4, we will apply the latest (N-1) PSU to both the Grid Infrastructure and Database Homes, plus any required one-off patches. For example, customers who have implemented GoldenGate may have to apply patches for ACFS and/or for Integrated Extract. Once we establish what we consider to be the golden image for the Grid Infrastructure and Database software stack, we will create tar archives of both homes.
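For example, something along these lines (the home paths and archive names are illustrative):

# run as root with the Grid Infrastructure and database stack shut down
cd /u01/app/11.2.0.4/grid
tar -cpzf /stage/golden_gi_11204.tar.gz .

cd /u01/app/oracle/product/11.2.0.4/dbhome_1
tar -cpzf /stage/golden_db_11204.tar.gz .

On a freshly provisioned VM, the archives are extracted into the same paths and the homes are then cloned (for example, with clone.pl) as part of the provisioning scripts.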


VMware vCenter Server is the centralized management tool that allows multiple ESXi servers and virtual machines (VMs) to be managed through a single console application. vCenter is a critical component of a VMware deployment; much of the feature set is not available without it. All of the well-known features in vSphere, such as vMotion, Storage vMotion, Distributed Resource Scheduler (DRS), VMware High Availability (HA), and Fault Tolerance (FT), require vCenter Server.

In simple terms, VMware vCenter can be compared to Oracle’s Enterprise Manager 12c Cloud Control managing a physical server environment without Oracle VM. For DBAs who happen to be working with Oracle VM, vCenter is the equivalent of Oracle VM Manager. At the time of writing this blog, the latest available release was vCenter 5.1. According to news from the recent VMworld conference, the next release, vCenter 5.5, will be available to the general public sometime in September 2013.

vCenter Server 5.1 provides a vCenter Server Simple Install option that installs vCenter Single Sign-On, Inventory Service, and vCenter Server on the same virtual machine for small VMware deployments. If you prefer to customize the location and setup of each component, you can install the components separately by selecting the individual installation options in the following order:

  • vCenter Single Sign-On
  • Inventory Service
  • vCenter Server

Each of the components listed above can be installed in a different virtual machine.

Vcenter 1
Click on the option to perform the VMware vCenter Simple Install, which will install vCenter Server, Single Sign-On, and Inventory Service on the Windows server.

Vcenter 2

Click on OK because the vCenter server is not connected to an Active Directory domain

Then click on Next from the Single Sign On screen

Vcenter 3

When the Welcome screen appears, click Next to continue.

Vcenter 4

Select Next to accept the End-User Patent Agreement.

Vcenter 5

Please read the License Agreement; if you agree to the terms, select “I accept the terms in the license agreement” and click on Next

Vcenter 6

Enter the password for the vCenter Single-Sign-On Admin user account

Click on Next

Vcenter 7

For the test environment, you can accept the option to install a local Microsoft SQL Server 2008 R2 Express instance
For a production environment, you should use an Oracle Database (Standard Edition or Enterprise Edition)

Click on Next

Vcenter 8

Enter the passwords for the RSA_DBA and RSA_USER database accounts

Click on Next

Vcenter 9

Click on OK

Click on Next

Vcenter 10

Accept the default directory path to install VMware vCenter

You can change the location if your corporate standards dictate an alternate location

Click on Next

Vcenter 11

Accept the default HTTPS Port

Click on Next

Vcenter 12

Click on the Install button to start the installation of vCenter

Vcenter 13

If you are just performing an evaluation, click on the Next button

Otherwise, enter the license key for vCenter and click on Next

Vcenter 14

For the ODBC data source for vCenter Server, we will accept the default of 5 hosts and 50 virtual machines in our test environment

Click on Next

Vcenter 15

Either enter the fully qualified host.domain name or enter the IP address

Click on Next

Vcenter 16

Accept the acknowledgement for using an IP address instead of a fully qualified host name.

Click on OK

Vcenter 17

Either accept the default port assignments or modify the ports as needed and defined in the firewall rules

Click on Next

Vcenter 18

Since this is a test vCenter deployment, we will accept the default JVM Memory allocation of 1GB and click on Next

Vcenter 19

Click on the Install button to continue

Vcenter 20

Vcenter 21

Click on the Finish button

Vcenter 22

Click on OK

Log in as the Windows administrator (administrator / password) to access vCenter from the vSphere client


Book Title:
Successfully Virtualize Business Critical Oracle Databases

VMware iBook Cover

Here’s the book Description:
Written by VMware vExperts (Charles Kim (VCP) of Viscosity North America and George Trujillo (VCI) of Hortonworks), a leading VMware VCI and Learning Specialist (Steve Jones), and the Chief Database Architect in EMC’s Global IT organization (Darryl Smith), this book provides critical instructions for deploying Oracle standalone and Real Application Clusters (RAC) databases on VMware enterprise virtualization platforms. You will learn how to set up an Oracle ecosystem within a virtualized infrastructure for enterprise deployments. We share industry best practices for installing and configuring Linux and for deploying Oracle Grid Infrastructure and databases in a matter of hours. Whether you are deploying a four-node RAC cluster or a single standalone database, we will lead you to the foundation that will allow you to rapidly provision database-as-a-service. We disseminate key details on creating golden image templates, from the virtual machine to the Oracle binaries and databases. You will learn from industry experts how to troubleshoot production Oracle database servers running in VMware virtual infrastructures.

Audience:
Database Admins/Architects, VMware Admins, System Admins/Architects, Project Architects
This book is designed for Oracle DBAs and VMware administrators who need to learn the art of virtualizing Oracle.


There are several options when it comes to configuring parameters for shared storage and access. Today, let’s take some time to cover how to leverage the vSphere 5.x Client to add the required parameters.

From the Options tab, click on the General line, and then click on the Configuration Parameters button
General configuration

Click on the Add Rows button at the bottom right of the screen:

Multi writer Flag
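The rows we add here are the multi-writer sharing flags, one per shared disk. As a sketch, assuming the shared VMDKs hang off a dedicated second SCSI controller (the controller and device numbers depend on your VM's disk layout):

scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"
scsi1:2.sharing = "multi-writer"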

As part of our RAC configuration for shared storage, we can set up udev rules for device persistence and permissions. You would choose this option if you are opting out of Oracle ASMLIB.
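A minimal sketch of such a rule on Red Hat 6 is shown below; the WWID returned by scsi_id, the device name, and the owner/group are placeholders that must match your environment and storage layout:

# /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative)
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29xxxxxxxxxxxxxxxxxxxxxxxxx", NAME="asm-data01", OWNER="grid", GROUP="asmadmin", MODE="0660"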

Note:
Many customers went forward with setting up udev rules on Red Hat 6 because Oracle and Red Hat announced the de-support of ASMLIB when Red Hat 6 went GA. When Red Hat 6.4 came out, Red Hat announced support for ASMLIB again. This section assumes that the customer has opted out of ASMLIB and chosen to stick with udev rules.

Doing this with the GUI and repeating the steps for every VM is extremely painful. We passionately promote automating mundane tasks like this, so look for an upcoming post on how to do all of this with PowerCLI.

Also, by default, the UUID (universally unique identifier) of the disks will not be visible to the Linux VM when you probe with the scsi_id command. To allow scsi_id to retrieve the unique SCSI identifier, you must set the following parameter to TRUE on each VM:

disk.EnableUUID = "TRUE"
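With that flag set (and the VM powered off and back on so the change takes effect), you can verify from inside the guest that a UUID is now returned; the device name and the returned WWID below are illustrative:

/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
36000c29xxxxxxxxxxxxxxxxxxxxxxxxx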


Here’s a super easy and fast way of setting up a Yum repository for all of your Linux-based infrastructure. You can leverage NFS and effectively have a Yum repository up in a matter of minutes.

First, mount the Red Hat 6 media from an ISO or from the DVD. Next, copy the entire contents of the DVD to a location on the NAS or on a local file system that you wish to export to other servers.
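For example, something like this; the ISO path and mount point are illustrative, and the target directory matches the /nfs/rhel64.dvd location used later in this post:

# mount the RHEL 6.4 ISO and copy its contents to the directory we will export
mount -o loop /stage/rhel-server-6.4-x86_64-dvd.iso /mnt/rhel64
mkdir -p /nfs/rhel64.dvd
cp -av /mnt/rhel64/. /nfs/rhel64.dvd/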

On the Red Hat media, look for a file called .discinfo in the root directory of the media. In my example, here’s what the contents of .discinfo look like:

[root@rh64a yum.repos.d]# cat "/media/RHEL_6.4 x86_64 Disc 1"/.discinfo
1359576196.686790
Red Hat Enterprise Linux 6.4
x86_64
1

The line that you are interested in is the first line with all the numbers: 1359576196.686790

We will need that number to build our repository definition file: /etc/yum.repos.d/viscosity.repo
In this file, we will add the full numeric value on the mediaid line. The other tidbit of information that you need to provide is the root file system location where you copied the Red Hat media. In our example, we copied the entire media to the /nfs/rhel64.dvd directory, so we fill out the baseurl value with that location.

# cat /etc/yum.repos.d/viscosity.repo
[viscosity]
mediaid=1359576196.686790
name=Local Viscosity Repo
baseurl=file:///nfs/rhel64.dvd/
enabled=1
gpgcheck=no

From the server that will act as our Yum Repository, add the entries to /etc/exports:

[root@rh64b yum.repos.d]# cat /etc/exports
/nfs *(rw,sync) 

You can selectively qualify the list of servers that you want to present the share to for added security. You may need to start the NFS service with the “service nfs start” command or restart it with the “service nfs restart” command.

On the target server where you wish to mount the /nfs share, you will issue the mount command with the -t nfs option. In this example, we will mount the share from the rh64b server on the rh64a server. rh64b will serve as our utility server that houses the Yum repository, DNS server, etc.

mount -t nfs rh64b:/nfs /nfs

The yum package manager does not discriminate between a network and a local file system. From the rh64a server, let’s take a test drive and install the screen RPM over NFS.

[root@rh64a Server]# yum install screen
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
viscosity                                                                                  | 3.9 kB     00:00 ... 
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package screen.x86_64 0:4.0.3-16.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================================================
 Package                 Arch                    Version                         Repository                  Size
==================================================================================================================
Installing:
 screen                  x86_64                  4.0.3-16.el6                    viscosity                  494 k

Transaction Summary
==================================================================================================================
Install       1 Package(s)

Total download size: 494 k
Installed size: 795 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : screen-4.0.3-16.el6.x86_64                                                                     1/1 
  Verifying  : screen-4.0.3-16.el6.x86_64                                                                     1/1 

Installed:
  screen.x86_64 0:4.0.3-16.el6                                                                                    

Complete!

Automation is our end goal, and we want to automate RPM package installations. To automatically answer “y” to the “Is this ok [y/N]:” prompt, we can pass the -y flag to our yum install command:

[root@rh64d ~]# yum -y install ksh
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
viscosity                                                                                               | 3.9 kB     00:00 ... 
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package ksh.x86_64 0:20100621-19.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================================
 Package                 Arch                       Version                                Repository                     Size
===============================================================================================================================
Installing:
 ksh                     x86_64                     20100621-19.el6                        viscosity                     686 k

Transaction Summary
===============================================================================================================================
Install       1 Package(s)

Total download size: 686 k
Installed size: 1.5 M
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : ksh-20100621-19.el6.x86_64                                                                                  1/1 
  Verifying  : ksh-20100621-19.el6.x86_64                                                                                  1/1 

Installed:
  ksh.x86_64 0:20100621-19.el6                                                                                                 

Complete!