In this example, we set up automatic snapshot maintenance for the backup01 share. We instruct the ZFS Storage Appliance to take a snapshot of the backup01 share every night at 1:00 AM and to keep 21 days of rolling snapshots.

zfs1:> shares
zfs1:shares (zpool1)> select default
zfs1:shares (zpool1) default> select backup01
zfs1:shares (zpool1) default/backup01> snapshots
zfs1:shares (zpool1) default/backup01 snapshots> automatic

zfs1:shares (zpool1) default/backup01 snapshots automatic> create
zfs1:shares (zpool1) default/backup01 snapshots automatic (uncommitted)> show
Properties:
                     frequency = (unset)
                           day = (unset)
                          hour = (unset)
                        minute = (unset)
                          keep = 0

zfs1:shares (zpool1) default/backup01 snapshots automatic (uncommitted)> set frequency=day
                     frequency = day (uncommitted)
zfs1:shares (zpool1) default/backup01 snapshots automatic (uncommitted)> set hour=01
                          hour = 01 (uncommitted)
zfs1:shares (zpool1) default/backup01 snapshots automatic (uncommitted)> set minute=00
                        minute = 00 (uncommitted)
zfs1:shares (zpool1) default/backup01 snapshots automatic (uncommitted)> set keep=21
                          keep = 21 (uncommitted)
zfs1:shares (zpool1) default/backup01 snapshots automatic (uncommitted)> get
                     frequency = day (uncommitted)
                           day = (unset)
                          hour = 01 (uncommitted)
                        minute = 00 (uncommitted)
                          keep = 21 (uncommitted)
zfs1:shares (zpool1) default/backup01 snapshots automatic (uncommitted)> commit
zfs1:shares (zpool1) default/backup01 snapshots automatic> show
Properties:
                       convert = true

Automatics:

NAME                 FREQUENCY            DAY                  HH:MM KEEP
automatic-000        day                  -                    01:00   21

zfs1:shares (zpool1) default/backup01 snapshots automatic> done
zfs1:shares (zpool1) default/backup01 snapshots> show
Snapshots:
                       oct2014
                      sept2014

Children:
                        automatic => Configure automatic snapshots

I had the pleasure of being involved in a couple of Exadata implementations where Oracle delivered the wrong ASM redundancy type to a customer. The customer expected single mirroring (normal redundancy) and a lot more terabytes of usable storage than what was delivered. I have a general rule: if you have to do it more than once, you had better script it and automate it. Check out the script I used to migrate an Exadata customer from high redundancy to normal redundancy.

define DG='&1'
set pages 0
set lines 200 trims on feed off echo off ver off
spool cr_&DG..sql
prompt CREATE DISKGROUP &DG NORMAL REDUNDANCY

set serveroutput on size unlimited

declare
v_failgroup v$asm_disk.failgroup%TYPE;

-- c1: quoted disk paths for the target disk group within the current failure group
cursor c1 is
select chr(39)||path||chr(39) path, name
from v$asm_disk
where group_number = (select group_number from v$asm_diskgroup
                      where name=upper('&DG'))
and failgroup=v_failgroup
order by path;

-- c2: distinct failure groups (on Exadata these are the storage cell names)
cursor c2 is
select distinct failgroup
from v$asm_disk
order by failgroup;

-- c3: AU size and compatibility attributes of the target disk group
cursor c3 is
select allocation_unit_size, compatibility, database_compatibility
from v$asm_diskgroup
where name = upper('&DG');
r3 c3%ROWTYPE;

begin
for r2 in c2 loop
v_failgroup := r2.failgroup;
dbms_output.put_line('FAILGROUP '||r2.failgroup||' DISK');

for r1 in c1 loop
if c1%rowcount = 1 then
   dbms_output.put_line(r1.path);
else
   dbms_output.put_line(','||r1.path);
end if;

end loop;

end loop;

open c3; fetch c3 into r3;
dbms_output.put_line('ATTRIBUTE');
dbms_output.put_line(chr(39)||'compatible.asm'||chr(39)||'='||chr(39)||r3.compatibility||chr(39)||',');
dbms_output.put_line(chr(39)||'compatible.rdbms'||chr(39)||'='||chr(39)||r3.database_compatibility||chr(39)||',');
dbms_output.put_line(chr(39)||'au_size'||chr(39)||'='||chr(39)||r3.allocation_unit_size||chr(39)||',');
dbms_output.put_line(chr(39)||'cell.smart_scan_capable'||chr(39)||'='||chr(39)||'TRUE'||chr(39)||';');
close c3;

end;
/
spool off

Here’s a sample of the generated script for the DATA_EXAD disk group.

CREATE DISKGROUP DATA_EXAD NORMAL REDUNDANCY  
FAILGROUP EXADCEL01 DISK 
'o/10.0.0.3/DATA_EXAD_CD_00_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_01_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_02_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_03_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_04_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_05_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_06_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_07_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_08_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_09_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_10_exadcel01',
'o/10.0.0.3/DATA_EXAD_CD_11_exadcel01'
FAILGROUP EXADCEL02 DISK 
'o/10.0.0.4/DATA_EXAD_CD_00_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_01_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_02_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_03_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_04_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_05_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_06_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_07_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_08_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_09_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_10_exadcel02',
'o/10.0.0.4/DATA_EXAD_CD_11_exadcel02'
FAILGROUP EXADCEL03 DISK 
'o/10.0.0.5/DATA_EXAD_CD_00_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_01_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_02_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_03_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_04_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_05_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_06_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_07_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_08_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_09_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_10_exadcel03',
'o/10.0.0.5/DATA_EXAD_CD_11_exadcel03' 
ATTRIBUTE 
  'compatible.asm'='11.2.0.4',
  'compatible.rdbms'='11.2.0.4',
  'au_size'='4M',
  'cell.smart_scan_capable'='TRUE';
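
To put the generator to use end to end, the overall flow looks roughly like the following sketch. It assumes the contents of the disk group have already been backed up or relocated, since dropping and re-creating the disk group is destructive, and that the commands are run from the ASM instance:

SQL> @gen_dg.sql DATA_EXAD                         -- spools cr_DATA_EXAD.sql
SQL> -- review cr_DATA_EXAD.sql before proceeding
SQL> DROP DISKGROUP DATA_EXAD INCLUDING CONTENTS;  -- destroys any remaining data
SQL> @cr_DATA_EXAD.sql                             -- re-creates the disk group with NORMAL redundancy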



To make this work on VMware Fusion, set the following in the .vmx file; without this entry, the scsi_id command does not return any values.
disk.EnableUUID = "TRUE"

Retrieve and generate a unique SCSI identifier with the scsi_id command:

[root@rhel59dra ~]# /sbin/scsi_id -g -u -s /block/sdc
36000c29b80c12910ca4e6a95a1949d8b
[root@rhel59dra ~]# /sbin/scsi_id -g -u -s /block/sdd
36000c29344da4eab5b78409de3706424
[root@rhel59dra ~]# /sbin/scsi_id -g -u -s /block/sde
36000c291cd542d388fdee223fa90ca69
[root@rhel59dra ~]# /sbin/scsi_id -g -u -s /block/sdf
36000c296666187fd5223c0a34ca52f71

Add entries to a custom udev rules file:

[root@rhel59dra ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36000c29b80c12910ca4e6a95a1949d8b", NAME="ASMOCR01", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36000c29344da4eab5b78409de3706424", NAME="ASMOCR02", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/$parent", RESULT=="36000c291cd542d388fdee223fa90ca69", NAME="ASMOCR03", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf[0-9]", BUS=="scsi", PROGRAM=="/usr/bin/udevinfo -q name -p %p", RESULT=="%k", PROGRAM=="scsi_id -g -u -d /dev/$parent", RESULT=="36000c296666187fd5223c0a34ca52f71", NAME="ASMDATA0%n", OWNER="oracle", GROUP="dba", MODE="0660"

Note: For disks with multiple partitions, the syntax in the udev rules is different.

KERNEL=="sd[c-z]1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -s /block/%P", RESULT=="3*", NAME="ASM%c", OWNER="oracle", GROUP="dba", MODE="0660"

To make sure that udev rules work:

[root@rhel59dra ~]# udevtest /block/sdc/sdc1
[root@rhel59dra ~]# udevtest /block/sdd/sdd1
[root@rhel59dra ~]# udevtest /block/sde/sde1
[root@rhel59dra ~]# udevtest /block/sdf/sdf1    
[root@rhel59dra ~]# udevtest /block/sdf/sdf2    
[root@rhel59dra ~]# udevtest /block/sdf/sdf3    
[root@rhel59dra ~]# udevtest /block/sdf/sdf4    

Reload the udev rules and restart udev:
RHEL 5: /sbin/udevcontrol reload_rules
RHEL 6: /sbin/udevadm control --reload-rules
/sbin/start_udev

Verify that the proper devices are created:

[root@rhel59dra ~]# ls -l /dev/ASM*

brw-rw---- 1 oracle dba 8, 81 May 15 23:45 /dev/ASMDATA01
brw-rw---- 1 oracle dba 8, 82 May 15 23:45 /dev/ASMDATA02
brw-rw---- 1 oracle dba 8, 83 May 15 23:45 /dev/ASMDATA03
brw-rw---- 1 oracle dba 8, 84 May 15 23:45 /dev/ASMDATA04
brw-rw---- 1 oracle dba 8, 33 May 15 23:45 /dev/ASMOCR01
brw-rw---- 1 oracle dba 8, 49 May 15 23:45 /dev/ASMOCR02
brw-rw---- 1 oracle dba 8, 65 May 15 23:45 /dev/ASMOCR03 
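
Once the devices show up with the correct ownership and permissions, ASM can discover them. As an illustration (the discovery string below is an assumption based on the naming scheme used here), set the ASM discovery string in the ASM instance to match the new device names:

SQL> ALTER SYSTEM SET asm_diskstring='/dev/ASM*' SCOPE=BOTH;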

Written by Charles Kim, Oracle ACE Director



Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups [ID 1389592.1]

What does this mean?  It means we can perform migrations from AIX, Solaris, or HP-UX to Linux with significantly reduced downtime, because incremental backups can now be taken across endianness.

This was originally tested for Exadata, and now it is available to everyone.


Posted by Charles Kim, Oracle ACE Director


We will post the 10th session shortly …

Session 604: Rolling your own Database Operations Center (DOC) using Oracle Technology you already own
Mile High Ballroom 2A => Mon, Apr 08, 2013 (09:45 AM – 10:45 AM)

Session 344: Performance Tuning your DB Cloud in OEM 12c Cloud Control – 360 Degrees
Mile High Ballroom 4A => Mon, Apr 08, 2013 (09:45 AM – 10:45 AM)

Session 614: Automate Data Guard Best Practices
Mile High Ballroom 2B => Mon, Apr 08, 2013 (05:00 PM – 06:00 PM)

Session 477: Why Every Database Needs to be Virtualized
Mile High Ballroom 4A => Tue, Apr 09, 2013 (12:00 PM – 12:30 PM)

Session 343: Oracle VM, OEM 12c and Cloud Computing: Panel of Experts
C#13 vSIG Meeting => Tue, Apr 09, 2013 (4:15 PM – 5:15 PM)

Session 441: ASM New Features – The New ASM Frontier
Mile High Ballroom 2C => Wed, Apr 10, 2013 (08:15 AM – 09:15 AM)

Session 414: Engineered Systems Curriculum: The Perfect Marriage: ZFS Storage Appliance with Exadata
Mile High Ballroom 1C => Wed, Apr 10, 2013 (11:00 AM – 12:00 PM)

Session 783: Virtualized Oracle Stretched RAC Cluster using VMware vSphere and EMC VPLEX
Mile High Ballroom 2A => Wed, Apr 10, 2013 (04:15 PM – 05:15 PM)

Session 757: From Big Data to Exadata: The Best of Both Worlds for Business Analytics
Mile High Ballroom 1B => Thu, Apr 11, 2013 (09:45 AM – 10:45 AM)


For RMAN backups to disk (D2D) with the ZFS Storage Appliance, you need to review MOS note 1072545.1: RMAN Performance Tuning Using Buffer Memory Parameters.
 
You can effectively optimize high-bandwidth, low-latency backups and restores using Oracle RMAN and the Sun ZFS Storage Appliance by adjusting the init.ora parameters that control I/O buffering.

For Oracle Exadata, you can tune the following four parameters:
• _backup_disk_bufcnt – Number of buffers used to process backup sets
• _backup_disk_bufsz – Size of the buffers used to process backup sets
• _backup_file_bufcnt – Number of buffers used to process image copies 
• _backup_file_bufsz – Size of the buffers used to process image copies

For backup operations, set the buffer size to 1 MB for backup sets and 4 MB for image copies.
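One way to apply those values is with ALTER SYSTEM. The snippet below is a sketch only; these are hidden (underscore) parameters, so confirm the recommended values against MOS note 1072545.1 before changing them:

SQL> ALTER SYSTEM SET "_backup_disk_bufsz"=1048576 SCOPE=BOTH;   -- 1 MB buffers for backup sets
SQL> ALTER SYSTEM SET "_backup_file_bufsz"=4194304 SCOPE=BOTH;   -- 4 MB buffers for image copies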

For database restores, the buffer size should be set to 128 KB, as shown below:
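Again a sketch only, shown here for both the backup set and image copy buffers:

SQL> ALTER SYSTEM SET "_backup_disk_bufsz"=131072 SCOPE=BOTH;    -- 128 KB
SQL> ALTER SYSTEM SET "_backup_file_bufsz"=131072 SCOPE=BOTH;    -- 128 KB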

For RMAN backup sets and image copies, set the number of buffers to 64:
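As with the buffer sizes, these are illustrative settings for the hidden buffer-count parameters:

SQL> ALTER SYSTEM SET "_backup_disk_bufcnt"=64 SCOPE=BOTH;       -- buffers for backup sets
SQL> ALTER SYSTEM SET "_backup_file_bufcnt"=64 SCOPE=BOTH;       -- buffers for image copies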



If you are going to be at Oracle OpenWorld this year, please stop by and say hi.

Here are all the sessions where I will be a presenter or a panelist:

UGF4410
The Perfect Marriage: Oracle Exadata with Sun ZFS Storage Appliance
Sunday, Sep 30, 2012, 9:00 AM
Moscone West – 2018

UGF7700
Oracle on Oracle VM: Expert Panel
Sunday, Sep 30, 2012, 12:30 PM – 2:00 PM
Moscone West – 2012

UGF6511
Database Performance Tuning: Get the Best out of Oracle Enterprise Manager 12c Cloud Control
Sunday, Sep 30, 2012, 2:15 PM – 3:15 PM
Moscone West – 2011

CON8435
Expert Customer Panel: Exadata Data Protection Best Practices
Monday, Oct 1, 2012, 12:15 PM
Moscone South – 252

 

Charles Kim Presentation at OOW 2012

 



Often, the Exadata arrives at the customer site with high redundancy disk groups when the customer wants to exploit as much of the available disk space as possible with normal redundancy. Here's a simple script to convert a high redundancy disk group to a normal redundancy disk group:
 
cat gen_dg.sql
define DG='&1'
set pages 0
set lines 200 trims on feed off echo off ver off
spool cr_&DG..sql
prompt CREATE DISKGROUP &DG NORMAL REDUNDANCY

set serveroutput on size unlimited

declare
v_failgroup v$asm_disk.failgroup%TYPE;

cursor c1 is
select chr(39)||path||chr(39) path, name
from v$asm_disk
where group_number = (select group_number from v$asm_diskgroup
                      where name=upper('&DG'))
and failgroup=v_failgroup
order by path;

cursor c2 is
select distinct failgroup
from v$asm_disk
order by failgroup;

cursor c3 is
select allocation_unit_size, compatibility, database_compatibility
from v$asm_diskgroup
where name = upper('&DG');
r3 c3%ROWTYPE;

begin
for r2 in c2 loop
v_failgroup := r2.failgroup;
dbms_output.put_line('FAILGROUP '||r2.failgroup||' DISK');

for r1 in c1 loop
if c1%rowcount = 1 then
   dbms_output.put_line(r1.path);
else
   dbms_output.put_line(','||r1.path);
end if;

end loop;

end loop;

open c3; fetch c3 into r3;
dbms_output.put_line('ATTRIBUTE');
dbms_output.put_line(chr(39)||'compatible.asm'||chr(39)||'='||chr(39)||r3.compatibility||chr(39)||',');
dbms_output.put_line(chr(39)||'compatible.rdbms'||chr(39)||'='||chr(39)||r3.database_compatibility||chr(39)||',');
dbms_output.put_line(chr(39)||'au_size'||chr(39)||'='||chr(39)||r3.allocation_unit_size||chr(39)||',');
dbms_output.put_line(chr(39)||'cell.smart_scan_capable'||chr(39)||'='||chr(39)||'TRUE'||chr(39)||';');
close c3;

end;
/
spool off