Everyone knows me as an Oracle, Linux, and VMware expert. Few know me as a certified Microsoft SQL Server expert from days of old. I am venturing back into the SQL Server world and plan on leveraging my expertise from the Oracle database world. I love the fact that Microsoft has ported SQL Server to Linux. Stay tuned as I write future articles on how to deploy SQL Server on Linux and share best practices for scaling a SQL Server database on Linux.

For the first part of a series on virtualizing Microsoft SQL Server on VMware, let’s focus on the storage aspect of the virtualized infrastructure.

Improper storage configuration is often the culprit behind performance issues; the majority of SQL Server performance problems can be correlated back to storage configuration. Relational databases, especially under production workloads, typically produce heavy I/O. When storage is misconfigured, performance degradation and additional latency are introduced, especially during periods of heavy I/O.

Storage is always about understanding throughput (IOPS) and disk latency. Understand your workload’s I/O usage patterns, thresholds, and times of high activity; benchmark and confirm you are achieving the true throughput of your hardware. Bad settings and incorrect configurations will keep the system from reaching that throughput. It is important to understand the total IOPS your disk system can handle, using the following formulas:

  • Total Raw IOPS = disk IOPS x number of disks
  • Functional IOPS = (Raw IOPS x Write%) / (RAID write penalty) + (Raw IOPS x Read%)
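As a back-of-the-envelope illustration, here is a quick shell calculation of the formulas above. The array is hypothetical: eight 15K RPM disks at roughly 180 IOPS each, RAID 10 (write penalty of 2), and a 70/30 read/write workload.

```shell
# Hypothetical array: 8 disks x ~180 IOPS, RAID 10 (write penalty 2), 30% writes.
disk_iops=180
disks=8
raid_penalty=2

raw_iops=$(( disk_iops * disks ))                       # Total Raw IOPS = 1440

# Functional IOPS = (Raw IOPS x Write%) / penalty + (Raw IOPS x Read%)
write_iops=$(( raw_iops * 30 / 100 / raid_penalty ))    # 216
read_iops=$(( raw_iops * 70 / 100 ))                    # 1008
functional_iops=$(( write_iops + read_iops ))           # 1224

echo "Raw IOPS: $raw_iops  Functional IOPS: $functional_iops"
```

Note how the RAID write penalty cuts the usable write IOPS roughly in half for this mirrored configuration, which is why write-heavy SQL Server workloads need more spindles than the raw numbers suggest.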

You need to find a balance between performance and capacity. Larger drives typically correlate with lower performance, and the more spindles you have, the more IOPS you can generate. Keep in mind that ESXi host demand is the aggregate demand of all VMs residing on that host at that time. Low-latency, high-I/O SQL Server databases are very sensitive to the latency of individual I/O operations, so storage configuration is critical to achieving an optimal database configuration.

For the best performance, the recommendation is always eager-zeroed thick VMDKs created in independent persistent mode. Lazy-zeroed thick or thin-provisioned VMDKs can be used as long as the storage array is VAAI capable, which improves first-time-write performance for those two types. Regarding independent persistent mode: persistent means changes are written persistently to disk, and independent means the VMDK is excluded from VM-based snapshots.

vAdmins can thinly provision a virtual disk; thinly provisioned disks equate to storage on demand. Thin provisioning at the storage level and at the virtualization layer is common practice in many companies, as it is a technique used to save space and to over-commit space on the storage array. Make sure you know how your storage is laid out for your SQL Server environments.

Development databases can be provisioned on thinly provisioned disks and can grow on demand; however, for production workloads, make sure that you are always leveraging eager-zeroed thick VMDKs.
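As a sketch of what this looks like from an ESXi shell, the vmkfstools utility can both create an eager-zeroed thick VMDK and inflate an existing thin disk. The datastore path, VM folder, and size below are hypothetical; adjust them to your environment.

```shell
# Hypothetical datastore path and size; run from an ESXi shell.
# Create a 40 GB eager-zeroed thick VMDK for a SQL Server data drive:
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/sqlvm/sql_data.vmdk

# Inflate an existing thin-provisioned disk to eager-zeroed thick:
vmkfstools -j /vmfs/volumes/datastore1/sqlvm/sql_data_thin.vmdk
```

The independent persistent mode itself is set on the virtual disk in the VM's settings (or in the VMX file), not by vmkfstools.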

This blog post touches on one of the key elements of virtualization to successfully deploy a highly performant SQL Server environment. For more details, sign up for one of my upcoming webinars on “Ten Surprising Performance Killers on Microsoft SQL Server” on Oct 12 at 1:00 PM CST.


Join Oracle’s Andy Rivenes along with three Oracle ACE Directors next week for the IOUG Master Class: Oracle Database 12c Release 2!

Register today for this on-site, jam-packed technology day of learning where we will focus on two of the most compelling features, Multitenancy and Database In-Memory (DBIM), with plenty of discussion on all the new features. Come see:

Jim Czuprynski, Oracle Enterprise Architect, ViON Corporation
Rich Niemiec, Chief Innovation Officer, Viscosity North America
Charles Kim, Founder and President, Viscosity North America

You’ll also have the opportunity to win some great prizes from our event sponsors, ViON Corporation and Viscosity North America. And the best part – there is no cost to you! We have our sponsors to thank for covering all attendee costs for the day. All attendees need to do is register and attend! After registering, you will receive a confirmation of your registration. Location details and the full agenda can be found here.

Here’s the detailed agenda:
8:00 a.m – 9:00 a.m.: Breakfast, registration, networking
9:00 a.m. – 9:15 a.m.: Welcome
9:15 a.m. – 10:00 a.m.: Keynote on Oracle Future: 12cR2, Multitenant, Database In-Memory, Cloud
10:00 a.m. – 12:30 p.m.: Track 1 – PDB Me, ASAP! Oracle 12cR2 Multitenant HOL, Pt 1

Track 2 – Oracle Cloud On the Horizon and Oracle Database In-Memory Deep Dive
12:30 p.m. – 1:30 p.m.: Lunch and networking
1:30 p.m. – 4:00 p.m.: Track 1 – PDB Me, ASAP! Oracle 12cR2 Multitenant HOL, Pt 2

Track 2 – Oracle Database In-Memory By Example and Analytic Views
4:00 p.m. – 4:45 p.m.: 12cR2 Experts Panel Discussion
4:45 p.m. – 5:00 p.m.: Closing Remarks and Prize Giveaways

Click here for details and to register for this incredible event

For the past four years, I have been proud to be the only person in the world to hold the highest designations from both Oracle and VMware, Oracle ACE Director and VMware vExpert, at the same time. This week, I am proud to announce that Nitin Vengurlekar (@dbcloudshifu) joins me in this distinction. We are known in both industries for our technical aptitude, authoring books, blogging, and speaking at national and international conferences.

I am also excited to announce that I was accepted into the VMware vExpert program for the 5th consecutive year for the 2017 calendar year. This will be Nitin’s first year as a VMware vExpert. We at Viscosity North America are very excited and honored to be the only company in the world to house two individuals who hold both the Oracle ACE Director and VMware vExpert titles.

Posted by Charles Kim, Oracle ACE Director and VMware vExpert

Twitter: @racdba



I am happy to announce that I will be presenting Data Guard Best Practices and Oracle Database 12c Release 2 New Features at the next Rocky Mountain Oracle User Group Technology Day in Denver on February 9, 2017. Come learn how to bulletproof your Data Guard configurations and what’s new in Oracle 12.2. My session agenda will be:

Session 8 Thursday 11:15 am to 12:15 pm
What’s New in 12.2: Oracle Database 12.2 New Features

Session 10 Thursday 2:45 pm to 3:45 pm
Bulletproof Your Data Guard Environment

The Oracle Database 12.2 New Features session will be a continuation of Viscosity’s 12 Days of Oracle 12.2, but with more in-depth content and code examples.

Charles Kim, Oracle ACE Director
President, Viscosity North America

The Oracle Cloud Experience Technology Hands-on-Lab Workshop is sponsored by the Cloud Computing SIG of IOUG. We are proud to announce our participation at the Georgia Oracle User Group Tech Days on March 15-16 in Atlanta, Georgia. Every attendee will be given a Kindle version of the Oracle Cloud Pocket Solutions Guide:


The Cloud Experience Technology Hands-on-Lab Workshop will start with creating a database in Oracle Cloud. We will also focus on backing up an on-premise database to the cloud and the various solutions available from Oracle to back up both Oracle and non-Oracle databases. We will discuss complete solution options that you can implement as you make your journey to Oracle Cloud.

Attendees will:
• Learn how to set up OS secure authentication, generating private and public keys
• Learn about the various kinds of containers in Oracle Cloud and create a storage container to leverage for the hands-on lab
• Learn when and how to use the cloud for business projects
• Learn methods to back up databases to the cloud, configuring Oracle Recovery Manager to back up to and restore from the Oracle Cloud
• Learn methods to migrate databases to the cloud
• Create a database in Oracle Cloud
• Communicate from on-premise to a database that resides in Oracle Cloud
• Participate in a general Q&A with community experts

We will also discuss real-life solutions addressing security concerns, opening ports, creating database links/communicating with databases in Oracle Cloud, and innovations that Oracle has made to Oracle Cloud in the past 2 years.

Installing Docker for Mac is super easy. It is as simple as downloading the program and copying it to the Applications folder.

To download Docker for Mac, visit the Docker site and download the stable release.

Simply drag and drop the Docker icon to the Applications folder. It is that easy to install Docker on the Mac.

Launch Docker from the Applications folder. Since this is the first time you are invoking Docker, you will get the standard message about Docker being an “… application downloaded from the Internet. Are you sure that you want to open it?” Click the Open button.


At the Welcome screen, click the Next button.

Click the OK button to allow privileged access.

You will be asked to provide your password. Enter your password to finish the installation process. You will then see the Docker icon in the menu bar at the top of the screen.


Optionally, deselect the “Send diagnostics & usage data” check box; then click the “Got it” button to complete the installation.

Now from any terminal, type the following commands:

Dobbys-MacBook-Pro:~ Dobby$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
Dobbys-MacBook-Pro:~ Dobby$ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 0
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
 Volume: local
 Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.27-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.951 GiB
Name: moby
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 15
 Goroutines: 27
 System Time: 2016-11-19T15:45:11.894167372Z
 EventsListeners: 1
No Proxy: *.local, 169.254/16
Registry: https://index.docker.io/v1/
WARNING: No kernel memory limit support
Insecure Registries:

Next, click the Docker logo in the top status bar and select Preferences. Move the memory slider to 4GB or more and click the restart button at the bottom of the screen.
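After the restart, a quick smoke test from any terminal confirms the daemon came back up and picked up the new memory setting:

```shell
# Pull and run the official hello-world image, removing the container afterwards:
docker run --rm hello-world

# The Total Memory line should now reflect roughly the 4 GiB you configured:
docker info | grep "Total Memory"
```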

Remember the number 33. I always seem to forget the $33 per TB per month figure when I talk with potential customers. For a mere $33 / TB / month, you can back up your Oracle databases to the Oracle Public Cloud. For this low amount you get:

  • Unlimited Oracle Database backups
  • Automatic three-way data mirroring
  • Regional data isolation
  • Transparent access via Oracle Database Cloud Backup Module and Recovery Manager (RMAN)
  • RMAN encryption and compression

Performing backups to the cloud is a no-brainer. Everyone should consider backing up their database to the cloud.
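To put the $33 figure in perspective, here is a quick shell calculation for a hypothetical 10 TB backup footprint:

```shell
# List price quoted in this post: $33 per TB per month.
rate_per_tb=33
tb=10

monthly=$(( rate_per_tb * tb ))   # $330/month for 10 TB of backups
yearly=$(( monthly * 12 ))        # $3960/year

echo "10 TB protected: \$${monthly}/month, \$${yearly}/year"
```

Even at double-digit terabytes, the annual cost is a fraction of what comparable on-premise tape or disk backup infrastructure typically runs.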

Posted on July 27, 2016

In this post, I will walk through the steps to apply the January 2016 PSU, Patch 22191349 (Oracle Grid Infrastructure Patch Set Update, Jan 2016), to both the Grid Infrastructure home and the Oracle Database home.

1. From CPUJan2016 onwards, the 5th digit of the version number will be changed to reflect the release date in the format YYMMDD. See My Oracle Support Document 2061926.1 for more information.
2. The GI System patch includes updates for both the Clusterware home and Database home that can be applied in a rolling fashion.

This patch is Data Guard Standby First Installable – See Section 2.5, “Installing Database PSU in Standby-First Mode” for more information.

In our example, we did not create any databases prior to applying the PSU. This is a typical installation method for Viscosity consultants, where we create a custom database afterwards leveraging DBCA.

In general, when we invoke opatchauto, it will patch both the GI software stack and the database software stack. Since we do not have a database running, opatchauto will skip the database software stack and apply the PSU only to the GI home. Before we invoke the opatchauto command, let’s create the ocm.rsp response file by executing the OCM Installation Response Generator (emocmrsp).
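A minimal sketch of generating the response file, assuming the GI home path used throughout this walkthrough (emocmrsp ships under OPatch/ocm/bin in the home):

```shell
# Generate the OCM response file non-interactively into /tmp/ocm.rsp.
export ORACLE_HOME=/u01/app/12.1.0/grid
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp
```

The resulting /tmp/ocm.rsp is what we pass to opatchauto with the -ocmrf flag below.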

[root@vnarac01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@vnarac01 ~]# 
[root@vnarac01 ~]# 
[root@vnarac01 ~]# export PATH=$PATH:/u01/app/12.1.0/grid/OPatch
[root@vnarac01 ~]# opatchauto apply /u01/app/oracle/soft/22191349 -ocmrf /tmp/ocm.rsp
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version :
OUI Version        :
Running from       : /u01/app/12.1.0/grid

opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/22191349/opatch_gi_2016-03-25_15-13-49_deploy.log

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u01/app/oracle/soft/22191349
Grid Infrastructure Patch(es): 21436941 21948341 21948344 21948354 
DB Patch(es): 21948344 21948354 

Patch Validation: Successful
Grid Infrastructure home:

Performing prepatch operations on CRS Home... Successful

Applying patch(es) to "/u01/app/12.1.0/grid" ...
Patch "/u01/app/oracle/soft/22191349/21436941" successfully applied to "/u01/app/12.1.0/grid".
Patch "/u01/app/oracle/soft/22191349/21948341" successfully applied to "/u01/app/12.1.0/grid".
Patch "/u01/app/oracle/soft/22191349/21948344" successfully applied to "/u01/app/12.1.0/grid".
Patch "/u01/app/oracle/soft/22191349/21948354" successfully applied to "/u01/app/12.1.0/grid".

Performing postpatch operations on CRS Home...  Successful

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid: 21436941,21948341,21948344,21948354

opatchauto succeeded.

Next, we attempt to apply the PSU to the database software home. You will encounter an error message indicating that the opatchauto you invoke must come from the same $ORACLE_HOME as the target home being patched.

[root@vnarac01 ~]# opatchauto apply /u01/app/oracle/soft/22191349 -oh /u01/app/oracle/product/12.1.0/dbhome_1 -ocmrf /tmp/ocm.rsp 
opatchauto must run from one of the homes specified
opatchauto returns with error code 2
[root@vnarac01 ~]# exit

Next, we will apply the PSU to the database home. Again as root, first add the database home’s $ORACLE_HOME/OPatch directory to the PATH. This time when we invoke the opatchauto command, we pass another parameter, -oh, to specifically patch the database Oracle home.

+ASM1 > su - root
[root@vnarac01 ~]# cd /u01/app/oracle/product/12.1.0/dbhome_1/OPatch
[root@vnarac01 OPatch]# export PATH=$PATH:/u01/app/oracle/product/12.1.0/dbhome_1/OPatch
[root@vnarac01 OPatch]# opatchauto apply /u01/app/oracle/soft/22191349 -oh /u01/app/oracle/product/12.1.0/dbhome_1 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version :
OUI Version        :
Running from       : /u01/app/oracle/product/12.1.0/dbhome_1

opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/22191349/opatch_gi_2016-03-25_15-32-22_deploy.log

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u01/app/oracle/soft/22191349
Grid Infrastructure Patch(es): 21436941 21948341 21948344 21948354 
DB Patch(es): 21948344 21948354 

Patch Validation: Successful
User specified the following DB home(s) for this session:

Performing prepatch operations on RAC Home (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful

Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/u01/app/oracle/soft/22191349/21948344" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Patch "/u01/app/oracle/soft/22191349/21948354" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".

Performing postpatch operations on RAC Home (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful

[WARNING] The local database(s) on "/u01/app/oracle/product/12.1.0/dbhome_1" is not running. SQL changes, if any, cannot be applied.

Apply Summary:
Following patch(es) are successfully installed:
DB Home: /u01/app/oracle/product/12.1.0/dbhome_1: 21948344,21948354

opatchauto succeeded.
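As a final sanity check, you can confirm the PSU landed in each home with opatch lspatches, using the same paths as this walkthrough:

```shell
# List the patches registered in each home's inventory:
export PATH=$PATH:/u01/app/12.1.0/grid/OPatch
opatch lspatches -oh /u01/app/12.1.0/grid
opatch lspatches -oh /u01/app/oracle/product/12.1.0/dbhome_1
```

The GI home should list all four patch numbers from the apply summary, and the database home should list the two DB patches.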

First, let’s install the RPM with yum. The RPM that we want to install is called btrfs-progs.

[root@dal66a yum.repos.d]# yum install btrfs*
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package btrfs-progs.x86_64 0:0.20-1.8.git7854c8b.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package                    Arch                  Version                                Repository                Size
 btrfs-progs                x86_64                0.20-1.8.git7854c8b.el6                viscosity                396 k

Transaction Summary
Install       1 Package(s)

Total download size: 396 k
Installed size: 2.8 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : btrfs-progs-0.20-1.8.git7854c8b.el6.x86_64                                                           1/1 
  Verifying  : btrfs-progs-0.20-1.8.git7854c8b.el6.x86_64                                                           1/1 

  btrfs-progs.x86_64 0:0.20-1.8.git7854c8b.el6                                                                          

[root@dal66a ~]# mkfs -t btrfs -d raid10 -m raid10 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 -f
failed to open /dev/fd0: No such device or address

WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/sdc1 id 2
adding device /dev/sdd1 id 3
adding device /dev/sde1 id 4
fs created label (null) on /dev/sdb1
	nodesize 4096 leafsize 4096 sectorsize 4096 size 127.99GB
Btrfs v0.20-rc1

The -f option is not normally needed. The only reason I had to specify -f here was that I had earlier tried to create a Btrfs file system on these drives with just two of them in a mirrored and striped configuration.

Pass the -V option to determine the version of mkfs.btrfs.

[root@dal66a ~]# mkfs.btrfs -V
mkfs.btrfs, part of Btrfs v0.20-rc1
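With the file system created, a sketch of mounting and inspecting the new RAID 10 array follows; the mount point is hypothetical, and mounting any single member device brings in the whole multi-device array.

```shell
# Mount via any member device; Btrfs discovers the rest of the array.
mkdir -p /mnt/btrfs
mount /dev/sdb1 /mnt/btrfs

btrfs filesystem show /mnt/btrfs   # lists all four member devices
btrfs filesystem df /mnt/btrfs     # Data and Metadata should report RAID10
```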