Oracle RAC 12c Cheat Sheet

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Utilities

crsctl - Cluster control
srvctl - Server control
oifcfg - Network interface configuration tool
ocrconfig - Administer the cluster registry (OCR) and local registry (OLR)
ocrcheck - Display the health of the cluster or local registry
ocrdump - Dump the contents of the cluster or local registry
cluvfy - Cluster verification utility
olsnodes - Print information about cluster nodes
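
A few typical invocations as a sketch (the database name ORCL is a placeholder):
crsctl check cluster -all        # clusterware health on all nodes
srvctl status database -d ORCL   # instance status for one database
olsnodes -n -i                   # node names, numbers and VIPs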

Startup Sequence

Clusterware starts from /etc/inittab (Linux): the High Availability Services stack comes up first, then it brings up ASM and the Cluster Ready Services stack (see Clusterware Architecture below).

Log files

Log file
Location??

Oracle Support Notes

Acronyms / Terminology

GCS - Global Cache Services; manages data block sharing between RAC instances
GES - Global Enqueue Services; manages enqueue resources such as locks
GDS - Global Data Services
DRM - Dynamic Resource Mastering
TAF - Transparent Application Failover
ONS - Oracle Notification Services
FAN - Fast Application Notification
FCF - Fast Connection Failover
AC - Application Continuity
SCAN - Single Client Access Name
CRS - Cluster Ready Services
HAS - High Availability Services

Transparent Application Failover (TAF)

When an instance fails, connections are restored to a surviving instance. TAF is configured in the client-side connect string.

With a FAILOVER_MODE TYPE of "session", a new connection is made to a surviving node but no other action is taken.

With a FAILOVER_MODE TYPE of "select", the query is re-executed on the new connection and fetching resumes from the existing open cursor.

In both cases, in-flight DML statements are rolled back. It is the responsibility of the application to detect and replay DML operations.
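
A minimal tnsnames.ora sketch (alias, host and service names are made up; RETRIES and DELAY control how long the client keeps retrying):
RACDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = oltp_svc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))))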

Fast Application Notification / Fast Connection Failover (FAN and FCF)

A framework that publishes up/down events back to the client application when a cluster reconfiguration occurs. This allows the client to quickly re-establish connections to a surviving node.

Need to add to this.
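
One related knob, shown as a sketch (database and service names are placeholders): the -notification flag of srvctl enables FAN events for OCI-based clients on a service.
srvctl modify service -db orcl -service oltp_svc -notification TRUE
srvctl config service -db orcl -service oltp_svc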

Things to research and add

Application Continuity

RAC One Node - Active/passive clustered database. Runs on one node but will fail over, or can be relocated, to another node.

Load Balancing - Client-side load balancing uses the LOAD_BALANCE entry in tnsnames.ora to spread connections across the address list. Server-side load balancing uses the load information that PMON (LREG in 12c) registers with the listeners. From 11gR2 onward this is best done through the SCAN listener (see the example after this list).

SCAN listener - The remote_listener parameter is set to the SCAN name:port.

Flex Cluster - For large clusters; reduces the number of interconnect connections. Has hub nodes and leaf nodes.
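
A sketch of the two load-balancing pieces above (SCAN name, VIP names and service name are placeholders):
-- server side: register all instances with the SCAN listener (run as SYSDBA)
ALTER SYSTEM SET remote_listener = 'rac-scan.example.com:1521' SCOPE=BOTH SID='*';

# client side: tnsnames.ora entry with client-side load balancing
RACDB =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = oltp_svc)))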
 

Wait events

gc current block 2-way / 3-way
gc cr block 2-way / 3-way
gc current grant 2-way
gc cr grant 2-way
gc current block busy
gc cr block busy
gc current block congested
gc cr block congested
gc cr request (placeholder event)
gc current request (placeholder event)
gc lost block
gcs log flush sync

Difference between 2-way and 3-way waits: in a 2-way wait the resource master is also the instance holding the block, so only two instances exchange messages; in a 3-way wait the master forwards the request to a third instance that holds the block.

Can some be combined like in book?
Maybe current and CR waits can be put in two separate columns???
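
Illustrative query to see which global cache waits dominate, per instance (gv$system_event is a standard view; the filter is just a sketch):
SELECT inst_id, event, total_waits,
       ROUND(time_waited_micro / 1e6, 1) AS seconds_waited
FROM   gv$system_event
WHERE  event LIKE 'gc%'
ORDER  BY time_waited_micro DESC;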

Dynamic Views

V$xxx - xxx
V$yyy - yyy
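
Note: most V$ views have a GV$ counterpart with an extra INST_ID column, so one query can cover all instances, e.g.:
SELECT inst_id, instance_name, host_name, status FROM gv$instance ORDER BY inst_id;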

Background Processes

LCK(n) - Lock process n (LCK0, LCK1, etc.)
LMD(n) - Global enqueue service daemon (lock manager) n; manages lock requests from other instances
LMHB - Lock manager heartbeat monitor
LMON - Global enqueue service monitor
LMS(n) - Global cache service process n

Running background processes can be listed with:
SELECT name, description FROM v$bgprocess WHERE paddr != '00';

Clusterware Files

OCR (Oracle Cluster Registry) - Location is defined in /etc/oracle/ocr.loc; stored in an ASM disk group or on a cluster file system
OLR (Oracle Local Registry) - Location is defined in /etc/oracle/olr.loc; the default is $GRID_HOME/cdata/<hostname>.olr
Voting Disk (VD) - "crsctl query css votedisk" returns the disk(s) containing the voting disk; the kfed command can be used to read the voting disk location from the ASM disk header
GPnP profile (Grid Plug and Play) - Default location is $GRID_HOME/gpnp/<hostname>/profiles/peer/profile.xml
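
Quick checks for where these files live (ocrcheck typically needs root for the full logical check):
cat /etc/oracle/ocr.loc      # OCR location
cat /etc/oracle/olr.loc      # OLR location
ocrcheck                     # OCR integrity and usage
crsctl query css votedisk    # voting disk location(s)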

Clusterware Architecture

Oracle Clusterware is split into two stacks: High Availability Services (HAS) and Cluster Ready Services (CRS). Startup is done with the "crsctl start crs" command executed as the root user, or automatically after a reboot. The startup process is initiated from the /etc/inittab file (Linux).

The High Availability Services stack is the lower-level stack and starts first. It uses the OLR and the GPnP profile, since ASM and the OCR are not yet available. To find the voting disk, it reads the location from the ASM disk header of the disk group containing the VD; this does not require the ASM disk group to be mounted.

The Cluster Ready Services stack is the higher-level stack and is started by HAS.
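
Minimal status checks for the two stacks (run from the Grid home as root or the Grid owner):
crsctl check has           # lower stack (High Availability Services)
crsctl check crs           # full stack on the local node
crsctl stat res -t -init   # status of the lower-stack (init) resources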

Clusterware Troubleshooting

Run "crsctl check cluster" to get error messages.
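
For example:
crsctl check cluster -all    # stack status on every node
crsctl stat res -t           # state of all cluster resources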