This year will be my 10th consecutive year as a speaker at Oracle OpenWorld. I will focus on Oracle Cloud, Oracle Cloud, and more Oracle Cloud. I will also be speaking at the half-day Cloud Experience Technical Workshop (formerly known as Cloud Attack at the IOUG – Independent Oracle Users Group – conference) with several other ACEs and ACE Directors from the Oracle ACE Program on behalf of the Cloud SIG. We have a lot of preparation to do to update our content for the Cloud Experience Technical Workshop.


For the Backup to the Cloud and Beyond session, we are thinking outside the box and delivering a solution that will handle both Oracle-related files and non-Oracle files. Here’s the abstract for the session:

Let us challenge your thoughts about backing up files to Oracle Cloud. Every customer should be considering RMAN backups of their databases to Oracle Cloud (if not doing so already). Backing up the Oracle database to the cloud is just the tip of the iceberg. You have to think outside the box, consider the entire solution, and include the non-database files such as middleware backups, documents, reports, logs, etc.

In this session, we will reveal how Viscosity provides end-to-end backup solutions to Oracle Cloud. We will present a deep dive on backing up the database to Oracle Cloud and go into extensive detail on backing up non-Oracle files. We will demonstrate various methods of uploading and downloading various types of files (including RMAN backups) to the cloud, and how to drive performance.

Stay tuned for additional details on my deep-dive session, Back Up to the Cloud and Beyond, and on the Cloud Experience Technical Workshop.

Remember the number 33. I always seem to forget the $33 per TB per month figure as I talk with potential customers. For a mere $33.00 / TB / month, you can back up your Oracle databases to the Oracle Public Cloud. For this low amount you get:

  • Unlimited Oracle Database backups
  • Automatic three-way data mirroring
  • Regional data isolation
  • Transparent access via Oracle Database Cloud Backup Module and Recovery Manager (RMAN)
  • RMAN encryption and compression

Performing backups to the cloud is a no-brainer. Everyone should be considering backing up their databases to the cloud.

Posted on July 27, 2016

Here’s an example of creating a user account for Charles:

/usr/sbin/groupadd -K GID_MIN=500 -K GID_MAX=10000 ckim
/usr/sbin/useradd -g ckim -K UID_MIN=500 -K UID_MAX=10000 -c "Charles Kim" -d /home/ckim -m -s /bin/bash ckim

Then here is another example of creating a user for David:

# set -o vi
# for i in dknight; do /usr/sbin/groupadd -K GID_MIN=500 -K GID_MAX=10000 $i; /usr/sbin/useradd -g $i -K UID_MIN=500 -K UID_MAX=10000 -d /home/$i -m -s /bin/bash $i; chage -d 0 $i; echo "newpassword" | passwd --stdin $i; done
Changing password for user dknight

Here are some of my favorite Oracle Database 12.2 Multitenant new features that I can talk about at this time. Oracle Database 12c Release 2 is slated to go live sometime this year.

Pluggable databases

  • The limit on the number of PDBs per CDB increases from 252 to 4,096 in 12.2. Even though we do not have customers who have over 252 PDBs, or who even venture near this high a number of PDBs, Oracle raised the limit in 12.2.
  • Resource Manager gains the capability to limit memory and govern CPU and I/O. For RAC (or RAC One Node) services associated with a PDB, Oracle will not incur interconnect overhead.
  • In 12.2, we will no longer need to put the PDB in read-only mode to perform hot clones.
  • We can refresh PDBs online.  
  • We can relocate PDBs without any downtime and move PDBs from one CDB to another. At OOW 2015, Larry Ellison did a live demo and moved a PDB from on-premises to the Oracle Public Cloud. 12.2 eliminates the need to put the PDB in read-only mode.
  • Proxy PDBs are introduced. We can have a new kind of PDB that points to a remote PDB. The remote PDB is presented as a local PDB and, for all practical purposes, looks like a local PDB, with all functionality available through the Proxy PDB.
  • All-new Application Containers are introduced. Oracle revolutionizes the concept of having a single master application definition for all the tenant containers. We can make changes in just one location, and the changes will sync to all the tenant containers.
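Several of these capabilities boil down to a couple of SQL statements. Here is a hypothetical sketch (pdb1 and cdb_link are made-up names, and the exact clauses should be checked against the 12.2 documentation); the statements are written to a file for review, since running them requires a 12.2 CDB:

```shell
# Write out example 12.2 statements for online relocation and refresh
# of a PDB. Names pdb1 and cdb_link are illustrative assumptions.
cat > pdb_relocate.sql <<'EOF'
-- Relocate a PDB from another CDB without downtime (12.2)
CREATE PLUGGABLE DATABASE pdb1 FROM pdb1@cdb_link RELOCATE;
-- Refresh a previously cloned PDB while the source stays online
ALTER PLUGGABLE DATABASE pdb1 REFRESH;
EOF
cat pdb_relocate.sql
```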

Are you up on Oracle's new features? Are you considering pluggable databases? Let's take a look at the new features that Oracle introduced. Even though this is not a major release of Oracle, it is still packed with new features, especially in the world of multi-tenancy. The enhancements made to pluggable databases are really attractive. Here's a brief overview of the new features in pluggable databases:

  • FDA Support for CDBs
  • PDB Containers Clause
  • PDB File Placement in OMF
  • PDB Logging Clause
  • PDB Metadata Clone
  • PDB Remote Clone
  • PDB Snapshot Cloning Additional Platform Support
  • PDB State Management Across CDB Restart
  • PDB Subset Cloning

Every DBA should know how to leverage these commands and should add them to their arsenal.

Additional commands for your toolkit:
free -g (memory free, in gigabytes)
free -g -s 1 (display free in gigabytes, updated every second)

sar -u 2 10 (report CPU utilization every 2 seconds; 10 samples are displayed)
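On Linux, free reads its figures from /proc/meminfo, so the same number can be captured directly in a script. A small Linux-specific sketch:

```shell
# Total physical memory in GB, read straight from /proc/meminfo (Linux).
# MemTotal is reported in KB, so divide by 1024 twice.
awk '/^MemTotal/ {printf "%.1f GB\n", $2/1024/1024}' /proc/meminfo
```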

Oracle Storage Cloud Service is an Infrastructure as a Service (IaaS) offering that provides an enterprise-scale object storage solution for files of all sizes and unstructured data. You can leverage Oracle Storage Cloud Service to back up content, and even Oracle database(s), to an offsite location. We can programmatically store and retrieve content, and share content with others.

Oracle Storage Cloud Service stores data as objects within a flat hierarchy of containers. An object is most commonly created by uploading a file. It can also be created from ephemeral unstructured data. Objects are created within a container. A single object can hold up to 5 GB of data, but multiple objects can be linked together to hold more than 5 GB of contiguous data.
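That 5 GB per-object limit is why large uploads get split into linked segments. A quick sketch of the arithmetic (the 50 GB file size is just an example value):

```shell
# How many 5 GB segments a large upload needs (ceiling division)
FILESIZE_GB=50                        # example file size
SEG_LIMIT_GB=5                        # per-object limit noted above
SEGMENTS=$(( (FILESIZE_GB + SEG_LIMIT_GB - 1) / SEG_LIMIT_GB ))
echo $SEGMENTS                        # → 10
```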

A container is a user-created resource that can hold an unlimited number of objects, unless you specify a quota for the container. At a very high level, you can think of a container like a directory in a file system, except that containers cannot be nested.

Object storage provides an optimal blend of performance, scalability, and manageability when storing large amounts of unstructured data. Multiple storage nodes form a single, shared, horizontally scalable pool in which data is stored as objects (blobs of data) in a flat hierarchy of containers. Each object stores data, the associated metadata, and a unique ID. You can assign custom metadata to containers and objects, making it easier to find, analyze, and manage data. Applications use the unique object IDs to access data directly via REST API calls. Object storage is simple to use, performs well, and scales to a virtually unlimited capacity.

Oracle Storage Cloud Service provides a low cost, reliable, secure, and scalable object-storage solution for storing unstructured data and accessing it anytime from anywhere. It is ideal for data backup, archival, file sharing, and for storing large amounts of unstructured data like logs, sensor-generated data, and VM images.

In this post, I am providing a comprehensive shell script called curl.ksh that can be used with Oracle Storage Cloud Service to easily upload, download, maintain, and query the contents of the storage cloud service.

In a nutshell, the Korn shell script accepts as its first argument one of the following commands:
1. ls (to list the contents of a container)
2. get (to download a file)
3. mkdir (to create a container directory)
4. put (to upload a file with the curl command)
5. jput (to upload a file with the Java uploadcli JAR file)
6. pjput (to upload a file in parallel, in segments, for large files over 5 GB, leveraging the Java uploadcli JAR file)
7. del (to delete a file)

The second argument is the container name (or the name of the directory). The last argument is the name of the file that we are uploading, downloading, listing, or deleting.
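Internally, the command word is uppercased before dispatch, so the commands are effectively case-insensitive. The conversion is just a tr call:

```shell
# The first argument is uppercased before dispatch, so ls/LS/Ls all work
METHOD=$(echo "ls" | tr '[a-z]' '[A-Z]')
echo $METHOD    # → LS
```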

Executed without any arguments, the script will display the authentication token to be used.

$ ./curl.ksh
*   Trying
* Connected to ( port 443 (#0)
* TLS 1.2 connection using TLS_RSA_WITH_AES_128_CBC_SHA
* Server certificate: *
* Server certificate: VeriSign Class 3 Secure Server CA - G3
* Server certificate: VeriSign Class 3 Public Primary Certification Authority - G5
> GET /auth/v1.0 HTTP/1.1
> Host:
> User-Agent: curl/7.43.0
> Accept: */*
> X-Storage-User:
> X-Storage-Pass: xxxxxyyxxzzzz
< HTTP/1.1 200 OK
< date: 1461554947509
< X-Auth-Token: AUTH_tk41223c8f481b0e5e42c47f1617560f54
< X-Storage-Token: AUTH_tk41223c8f481b0e5e42c47f1617560f54
< X-Storage-Url:
< Content-Length: 0
< Server: Oracle-Storage-Cloud-Service
* Connection #0 to host left intact

Behind the scenes, curl logs in with the userid and password stored in the curl.conf configuration file. Because we provided the userid and password, Oracle Storage Cloud Service returns a temporary authentication token. Future requests to the service must be made with the supplied authentication token. The lines containing the keywords "X-Auth-Token" and "X-Storage-Token" hold the authentication token that we need. This authentication token expires after 30 minutes.
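Rather than copying the token by hand, it can be pulled out of a saved capture of the verbose output. A sketch (auth.out is an assumed capture file name, seeded here with a sample line so the snippet is self-contained):

```shell
# Seed a sample capture line in the format curl -v prints:
#   < X-Auth-Token: AUTH_tk...
printf '< X-Auth-Token: AUTH_tk41223c8f481b0e5e42c47f1617560f54\n' > auth.out

# Pull the third field (the token value) from the X-Auth-Token line
AUTH=$(awk '/X-Auth-Token:/ {print $3}' auth.out)
export AUTH
echo $AUTH    # → AUTH_tk41223c8f481b0e5e42c47f1617560f54
```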

The curl.conf configuration file has three parameters: DOMAIN, USER, and PW.


The DOMAIN parameter is the identity domain. Account creation results in the creation of an identity domain (IDM slice) for that customer, which is leveraged across all Oracle Public Cloud services (and across all data centers). During the Oracle Public Cloud services account setup process, the customer needs to create an account and specify an account name. From the specified account name, the identity domain is implicitly created.
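For reference, a minimal curl.conf might look like the following; the variable names match those consumed by curl.ksh, and all of the values are placeholders:

```shell
# Hypothetical curl.conf — every value below is a placeholder
export DOMAIN=myidentitydomain        # identity domain name
export USER=cloud.user@example.com    # Oracle Public Cloud user
export PW=MySecretPassword            # corresponding password
```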

There is one manual step that is required: the parameter named AUTH must be set from the OS. Consideration was made to place the AUTH parameter in the curl.conf configuration file, but because the authentication token expires in 30 minutes, I chose to make setting the AUTH parameter from the OS a manual process. Earlier I mentioned that executing ./curl.ksh without any arguments will display the "X-Auth-Token" and "X-Storage-Token" value. We need to manually export the AUTH parameter with the authentication token as shown below:

export AUTH=AUTH_tk41223c8f481b0e5e42c47f1617560f54

The curl.ksh script leverages the AUTH environment variable for all curl invocations. For calls that are made with Java, the userid and password are required; you will be prompted for your account password when invoking Java to upload files with uploadcli.jar. Here's what the curl.ksh script looks like:

$ cat curl.ksh
export INMETHOD=$1
export NEWCONTAINER=$2
export FILE=$3

. curl.conf

export METHOD=$(echo $INMETHOD | tr '[a-z]' '[A-Z]')

if [ "$METHOD" = "" ]; then
  echo "Executing: curl -v -s -X GET -H \"X-Storage-User: Storage-${DOMAIN}:$USER\" -H \"X-Storage-Pass: $PW\" https://${DOMAIN}"
  curl -v -s -X GET -H "X-Storage-User: Storage-${DOMAIN}:$USER" -H "X-Storage-Pass: $PW" https://${DOMAIN}
elif [ "$METHOD" = "LS" ]; then
  echo "Executing: curl -s -X GET -H \"X-Auth-Token: $AUTH\"${DOMAIN}/$NEWCONTAINER?limit=64"
  curl -s -X GET -H "X-Auth-Token: $AUTH"${DOMAIN}/$NEWCONTAINER?limit=64
elif [ "$METHOD" = "GET" ]; then
  echo "curl -v -s -X GET -H \"X-Auth-Token: $AUTH\" -o $FILE${DOMAIN}/$NEWCONTAINER/$FILE"
  curl -v -s -X GET -H "X-Auth-Token: $AUTH" -o $FILE${DOMAIN}/$NEWCONTAINER/$FILE
elif [ "$METHOD" = "MKDIR" ]; then
  [ "$NEWCONTAINER" = "" ] && { echo "Directory name must be provided: $NEWCONTAINER"; exit 1; }
  curl -v -s -X PUT -H "X-Auth-Token: $AUTH"$DOMAIN/$NEWCONTAINER
elif [ "$METHOD" = "DEL" ]; then
  [ "$NEWCONTAINER" = "" ] && { echo "Directory name must be provided: $NEWCONTAINER"; exit 1; }
  echo "curl -v -s -X DELETE -H \"X-Auth-Token: $AUTH\"$DOMAIN/$NEWCONTAINER/$FILE"
  curl -v -s -X DELETE -H "X-Auth-Token: $AUTH"$DOMAIN/$NEWCONTAINER/$FILE
elif [ "$METHOD" = "PUT" ]; then
  [ "$NEWCONTAINER" = "" ] && { echo "Directory name must be provided: $NEWCONTAINER"; exit 1; }
  [ "$FILE" = "" ] && { echo "File name must be provided: $FILE"; exit 1; }
  # upload the file with curl
  curl -v -s -X PUT -T $FILE -H "X-Auth-Token: $AUTH"$DOMAIN/$NEWCONTAINER/$FILE
elif [ "$METHOD" = "JAVAPUT" -o "$METHOD" = "JPUT" ]; then
  [ "$NEWCONTAINER" = "" ] && { echo "Directory name must be provided: $NEWCONTAINER"; exit 1; }
  [ "$FILE" = "" ] && { echo "File name must be provided: $FILE"; exit 1; }
  java -jar uploadcli.jar -url${DOMAIN} -user $USER -container $NEWCONTAINER $FILE
elif [ "$METHOD" = "PJAVAPUT" -o "$METHOD" = "PJPUT" ]; then
  [ "$NEWCONTAINER" = "" ] && { echo "Directory name must be provided: $NEWCONTAINER"; exit 1; }
  [ "$FILE" = "" ] && { echo "File name must be provided: $FILE"; exit 1; }
  java -jar uploadcli.jar -url${DOMAIN} -user $USER -container $NEWCONTAINER -segment-size 1000 -max-threads 16 $FILE
fi

The Upload CLI is a Java-based CLI tool that simplifies uploading to Oracle Storage Cloud Service. To download the Java uploadcli.jar file, visit the page. Look for the Oracle Storage Cloud Service Upload CLI section and download the tool. Here are some of the key features provided by uploadcli.jar:

  • Optimized uploads through segmentation and parallelization to maximize network efficiency and reduce overall upload time
  • Support for both object storage and archive storage
  • Automatic checksum verification on upload
  • Upload individual files, groups of files, and entire directories
  • Automatic retry on failures
  • Resume interrupted uploads of large files

To leverage the uploadcli.jar file, the minimum requirement is JRE 7 or later.

Please note that this script is evolving and will be updated periodically. The main idea of this script was to parameterize the userid, password, domain, container, and file inputs, leveraging a configuration file and command-line arguments, so that the process is relatively painless.

In Part 2 of this series, we will go through examples of uploading, deleting, and downloading files from Oracle Storage Cloud Service. We will fully exploit the curl.ksh shell script and provide screenshot example output for each of the scenarios.

In Part 3 of this series, we will go through the step-by-step process of downloading, installing, and configuring the OPC install JAR file. We will configure RMAN for backups to the Oracle Public Cloud (OPC) and run through each of the scenarios for backup to OPC and restore/recovery from OPC.