Oracle spends $4.5 billion on R&D. The ODA is a 4U box marketed as "HA in a box."

You can license from 2 cores up to 24. Each socket is an Intel Xeon X5675: 3.06GHz with 12MB of L3 cache.
The bottom node is server node SN#0; you have to run the install on the bottom node.
The top node is server node SN#1.

Once you go up to 8 cores, you cannot go back down.
You have to generate a key and copy and paste it.

The ODA only runs Enterprise Edition.
Triple mirroring (high redundancy) only; it does not support normal redundancy.
It will support the TDE hardware accelerator (the CPU has the capability).

The public network (bond0) uses the gigabit ports (bottom right corner).
The ILOM port is above the public GigE ports.
Just behind them are the 2 x 500GB boot drives.

Bottom left corner:
*  Has 2 SAS ports that are intentionally dead
*  Right above them are 2 x 10GigE ports
*  Then there are 4 x 1GigE ports

96GB of RAM (12 x 8GB DIMMs) on each node
4 x 73GB SAS2 SSDs for redo
20 x 600GB 15K RPM SAS2 disks
2 x 500GB SATA boot disks

Dual Intel 82571 GigE NICs as the cluster interconnect
2 x onboard GigE ports per node
1 x Intel quad-port GigE (Northstar) per node
1 x Intel dual-port 10GigE (Niantic) per node

Oracle Appliance Manager software utility:
Simple UI to manage the appliance
Also used to patch the software and run diagnostics
Can configure ASR (Auto Service Request) / call home
Can patch firmware
The command-line interface is called oakcli

Great news: ACFS is supported on the ODA.

Licensing tidbit:
You have to license 2 cores at a time, incrementing by 2 cores.
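As a rough illustration only (the exact oakcli subcommands and file locations vary by ODA software version, so treat these as assumptions rather than the documented procedure), applying a new core count from the command line looks something like this:

oakcli show core_config_key                       # display the currently applied core configuration key
oakcli apply core_config_key /tmp/core_key.txt    # apply the key file downloaded from My Oracle Support (hypothetical path)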


The VMworld 2012 call for papers is now open until May 18.

VMworld will be in San Francisco this year at the Moscone Center, August 27-30.

Education: Choose from more than 200 breakout sessions and hands-on labs covering topics such as the hybrid cloud, enabling IT as a service, and delivering end-user freedom while maintaining IT control.
Collaboration: Attend group discussions or meet one-on-one with Knowledge Experts to learn and share experiences about deploying virtualization and enabling your cloud.
Networking: Engage with more than 200 technology partners to discuss new and innovative solutions to benefit your business.


ASM Disk Group Configuration

Everyone should be leveraging ASMLIB instead of raw block devices to create our ASM disk groups.

Proper ASM configuration, standardization, and adherence to best practices are just as important in a virtualized environment as they are in a bare-metal environment.

First, create the ASMLIB disks with oracleasm:

  • sudo to root
  • cd /etc/init.d
  • ./oracleasm createdisk DATA101_DISK000 /dev/mapper/DATA101_disk000p1
    • Repeat for each disk
  • On the other RAC nodes (a verification sketch follows this list):
    • ./oracleasm scandisks
    • ./oracleasm listdisks
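A minimal verification sketch for the remaining RAC nodes, assuming the disks above were already created on the first node (listdisks should report the labels from the createdisk commands; querydisk is optional):

cd /etc/init.d
./oracleasm scandisks
./oracleasm listdisks                   # expect DATA101_DISK000-004 and DATA501_DISK000-009
./oracleasm querydisk DATA101_DISK000   # confirm a single label maps to a valid ASM disk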


List of available disks as of April 29, 2012:
cd /dev/oracle
ls -l
lrwxrwxrwx 1 root root 8 Apr 28 16:22 DATA501_disk009p1 -> ../dm-85

lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk003p1 -> ../dm-105
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA101_disk003p1 -> ../dm-100
lrwxrwxrwx 1 root root 8 Apr 28 16:22 DATA101_disk001p1 -> ../dm-99
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA101_disk002p1 -> ../dm-102
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk005p1 -> ../dm-110
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA101_disk004p1 -> ../dm-101
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA101_disk000p1 -> ../dm-104
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk006p1 -> ../dm-111
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk008p1 -> ../dm-107
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk004p1 -> ../dm-112
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk000p1 -> ../dm-108
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk002p1 -> ../dm-109
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk001p1 -> ../dm-103
lrwxrwxrwx 1 root root 9 Apr 28 16:22 DATA501_disk007p1 -> ../dm-106

Naming Convention Legend for Disk Groups

  • Disk group names will be DATA101 or PF101 for RAID 10 disk groups
  • Disk group names will be DATA501 or PF501 for RAID 5 disk groups

Naming Convention Legend for Disks
  • pd = production data
  • pf = production fast recovery area (FRA)
  • dd = development data
  • df = development FRA
  • 101 = RAID 10, first disk group
  • 501 = RAID 5, first disk group
  • The _diskxxx suffix can range from disk000 to disk999


Modify /etc/sysconfig/oracleasm (on each node)

As root:  Make changes to the following lines:
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning

ORACLEASM_SCANORDER="dm-"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
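After saving the changes, have oracleasm pick up the new scan settings (a minimal sketch using the same init script as above; depending on version a simple scandisks may be enough):

/etc/init.d/oracleasm restart     # reload the driver with the new scan order/exclude patterns
/etc/init.d/oracleasm scandisks   # re-scan so ASMLIB discovers disks through the dm- devices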
Important Notes:

  • Only use the partitioned disk when creating ASMLIB disks
  • The partitioned disk will have p1, p2, etc. at the end of the device name
  • After you scan the disk, you should see an entry in /proc/partitions (see the quick check after this list)
  • Do NOT use /dev/oracle devices
  • Instead use /dev/mapper devices
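A quick sanity check, using device names from the listing above (the paths and dm- numbers are examples and will differ per system):

ls -l /dev/mapper/DATA101_disk000p1     # the partitioned multipath device handed to oracleasm
grep dm- /proc/partitions | head        # scanned partitions should show up here
ls -l /dev/oracleasm/disks/             # ASMLIB-labeled disks appear here after createdisk/scandisks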

RAID 10
——-
[root@dllprdorl01 tmp]# cat ora_asm.txt

/etc/init.d/oracleasm createdisk DATA101_DISK000 /dev/mapper/DATA101_disk000p1
/etc/init.d/oracleasm createdisk DATA101_DISK001 /dev/mapper/DATA101_disk001p1
/etc/init.d/oracleasm createdisk DATA101_DISK002 /dev/mapper/DATA101_disk002p1
/etc/init.d/oracleasm createdisk DATA101_DISK003 /dev/mapper/DATA101_disk003p1
/etc/init.d/oracleasm createdisk DATA101_DISK004 /dev/mapper/DATA101_disk004p1

RAID 5
——
/etc/init.d/oracleasm createdisk DATA501_DISK000 /dev/mapper/DATA501_disk000p1
/etc/init.d/oracleasm createdisk DATA501_DISK001 /dev/mapper/DATA501_disk001p1
/etc/init.d/oracleasm createdisk DATA501_DISK002 /dev/mapper/DATA501_disk002p1
/etc/init.d/oracleasm createdisk DATA501_DISK003 /dev/mapper/DATA501_disk003p1
/etc/init.d/oracleasm createdisk DATA501_DISK004 /dev/mapper/DATA501_disk004p1
/etc/init.d/oracleasm createdisk DATA501_DISK005 /dev/mapper/DATA501_disk005p1
/etc/init.d/oracleasm createdisk DATA501_DISK006 /dev/mapper/DATA501_disk006p1
/etc/init.d/oracleasm createdisk DATA501_DISK007 /dev/mapper/DATA501_disk007p1
/etc/init.d/oracleasm createdisk DATA501_DISK008 /dev/mapper/DATA501_disk008p1
/etc/init.d/oracleasm createdisk DATA501_DISK009 /dev/mapper/DATA501_disk009p1



ASM Disk Group Information

  • First, we will set our Allocation Unit (AU) size to 4MB
  • Second, we will use 'ORCL:*' disks instead of block devices when creating our new disk groups

SQL> alter system set asm_diskstring='/dev/oracle','ORCL:PD*';

 
System altered.
 
Add the following to each node's ASM init.ora (init+ASM1.ora, init+ASM2.ora) so the disk groups mount automatically:

asm_diskgroups='DATA03','DATA60','FRA03','FRA60','DATA101','DATA501'

#asm_diskstring='/dev/oracle'
asm_diskstring='/dev/oracle','ORCL:PD*'
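If the ASM instances run off an spfile instead of a pfile, the equivalent change (standard syntax; adjust the disk group list to your environment) would be:

SQL> alter system set asm_diskgroups='DATA03','DATA60','FRA03','FRA60','DATA101','DATA501' scope=spfile sid='*';
SQL> alter system set asm_diskstring='/dev/oracle','ORCL:PD*' scope=spfile sid='*';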
 
For the time being, manually mount the diskgroups on each node:
SQL> alter system set asm_diskstring='/dev/oracle','ORCL:PD*';

System altered.

SQL> alter diskgroup DATA101 mount;
Diskgroup altered.

SQL> alter diskgroup DATA501 mount;
Diskgroup altered.
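To verify, the standard V$ASM views should show the newly mounted groups and the ORCL:-discovered disks (this is a sketch of the checks, not captured output):

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;
SQL> select name, path, header_status, mount_status from v$asm_disk where path like 'ORCL:%';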
 
 
Creating ASM Disk Groups

RAID 10 DATA Disk Group

+ASM1 > cat cr_DATA101.sql

create diskgroup DATA101 external redundancy disk 'ORCL:DATA101_DISK000',
'ORCL:DATA101_DISK001',
'ORCL:DATA101_DISK002',
'ORCL:DATA101_DISK003',
'ORCL:DATA101_DISK004'
ATTRIBUTE 'au_size' = '4M',
'compatible.rdbms' = '11.1',
'compatible.asm' = '11.1';

RAID 5 DATA Disk Group
+ASM1 > cat cr_DATA501.sql

create diskgroup DATA501 external redundancy disk 'ORCL:DATA501_DISK000',
'ORCL:DATA501_DISK001',
'ORCL:DATA501_DISK002',
'ORCL:DATA501_DISK003',
'ORCL:DATA501_DISK004',
'ORCL:DATA501_DISK005',
'ORCL:DATA501_DISK006',
'ORCL:DATA501_DISK007',
'ORCL:DATA501_DISK008',
'ORCL:DATA501_DISK009'
ATTRIBUTE 'au_size' = '4M',
'compatible.rdbms' = '11.1',
'compatible.asm' = '11.1';
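To confirm the 4MB allocation unit and the compatibility attributes took effect, a quick check against v$asm_diskgroup (standard columns; output omitted) could be:

SQL> select name, allocation_unit_size, compatibility, database_compatibility
     from v$asm_diskgroup where name in ('DATA101','DATA501');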

On March 14, 2012, Oracle opened the call for papers for this year's Oracle OpenWorld conference. The call for papers closed on April 9, 2012. Here's the abstract that I submitted for Exadata with the ZFS Storage Appliance.

Come learn how the industry’s leading supply contracting company, delivers unmatched savings to their customers leveraging Oracle engineered systems.  This session will demonstrate how the ZFS Storage Appliance (ZFSSA) was coupled with the Exadata 1/2 Rack to achieve ultra high-availability and reduced capital and operational expenses for their enterprise backups, external tables, archiving, and data staging.  We will reveal additional industry use cases for the ZFSSA.  For Exadata customers, we will share tips and tricks to increase your space utilization.  Lastly, we will share our lessons learned, RMAN backup strategies, direct NFS settings, and scripts used to drive throughput on the ZFSSA across all the compute nodes.