Oracle RAC 12c Cheat Sheet
This is a draft cheat sheet. It is a work in progress and is not finished yet.
Utilities
Name | Description
crsctl | Cluster control
srvctl | Server control
oifcfg | Network interface configuration tool
ocrconfig | Administer the cluster registry (OCR) and local registry (OLR)
ocrcheck | Display health of the cluster or local registry
ocrdump | Dump contents of the cluster or local registry
cluvfy | Cluster verification utility
olsnodes | Print information about cluster nodes
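Typical invocations, as a quick sketch - database names, node names, and file paths are placeholders:
crsctl check cluster -all          # clusterware health on all nodes
srvctl status database -d RACDB    # status of every instance of database RACDB
oifcfg getif                       # list configured network interfaces
ocrconfig -showbackup              # list automatic OCR backups
ocrcheck                           # verify OCR integrity
ocrdump /tmp/ocr.txt               # dump OCR contents to a text file
cluvfy stage -post crsinst -n all  # verify the clusterware install on all nodes
olsnodes -n -s                     # list nodes with node numbers and status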
Oracle Support Notes
MOS Note | Description
1268927.1 | RACCheck Audit Tool
Acronyms / Terminology
Term | Description
GCS | Global Cache Services - Manages data block sharing between RAC instances
GES | Global Enqueue Services - Manages enqueue resources such as locks
GDS | Global Data Services
DRM | Dynamic Resource Mastering
TAF | Transparent Application Failover
ONS | Oracle Notification Services
FAN | Fast Application Notification
FCF | Fast Connection Failover
AC | Application Continuity
SCAN | Single Client Access Name
CRS | Cluster Ready Services
HAS | High Availability Services
Transparent Application Failover (TAF)
When an instance fails, connections are restored to a surviving instance. TAF is configured in the client-side connect string.
With a FAILOVER_MODE TYPE of "session", a new connection is made to a surviving node but no other action is taken.
With a FAILOVER_MODE TYPE of "select", in addition, a query in progress is re-executed and the existing open cursor repositioned.
In both cases, in-flight DML statements are rolled back. It is the responsibility of the application to detect and replay DML operations.
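A minimal tnsnames.ora entry with TAF enabled - a sketch; host, port, and service name are placeholders, and the RETRIES/DELAY values are only illustrative:
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = racdb.example.com)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )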
Fast Application Notification / Fast Connection Failover (FAN / FCF)
A framework that publishes up/down events to the client application when a cluster reconfiguration occurs. This allows the client to quickly re-establish connections to a surviving node.
Need to add to this.
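FAN events are published through ONS, which runs as part of the node applications; its status can be checked with srvctl (a sketch):
srvctl status nodeapps    # VIP, network, and ONS status per node
srvctl config nodeapps    # ONS port configuration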
Things to research and add
Application Continuity
RAC One Node - Active/passive clustered database. Runs on one node but will fail over to, or can be relocated to, another node.
Load Balancing - Client-side balancing uses the LOAD_BALANCE entry in tnsnames.ora to pick an address; server-side balancing relies on PMON registering instance load with the listeners. That is the old way; since 11gR2 this is best done through the SCAN listener.
SCAN listener - The REMOTE_LISTENER parameter is set to the SCAN name:port (see the example after this list).
Flex Cluster - For large clusters, to reduce the number of interconnects. Has hub nodes and leaf nodes.
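Pointing the database at the SCAN listener - a sketch; the SCAN name and port are placeholders:
ALTER SYSTEM SET remote_listener = 'rac-scan.example.com:1521' SCOPE=BOTH SID='*';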
Wait Events
RAC Wait Event | Description
GC Current Block 2-Way / 3-Way | A current copy of a block was shipped from another instance's cache over the interconnect
GC CR Block 2-Way / 3-Way | A consistent-read (CR) copy of a block was shipped from another instance's cache
GC Current Grant 2-Way / GC CR Grant 2-Way | No instance had the block cached; the requester was granted permission to read it from disk
GC Current Block Busy / GC CR Block Busy | The block transfer was delayed, typically because the holder first had to flush redo for the block to disk
GC Current Block Congested / GC CR Block Congested | The request was queued because the LMS process was too busy to service it immediately
GC CR Request / GC Current Request | Placeholder events seen while a block request is in flight; replaced by the actual wait event on completion
GC Lost Block | A block was lost in transit over the interconnect; usually indicates network problems
GCS Log Flush Sync | Waiting for LGWR to flush redo before a block can be shipped to another instance
In a 2-way wait, two instances are involved: the requester contacts the block's master instance, which also holds the block and ships it directly. In a 3-way wait, the master forwards the request to a third instance that holds the block, and that instance ships it to the requester. 3-way waits can therefore only occur in clusters of three or more nodes.
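System-wide totals for these events can be pulled from v$system_event (a sketch; the actual event names are lower case):
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'gc%'
ORDER  BY time_waited DESC;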
Background Processes
Process | Description
LCK(n) | Lock process n (LCK0, LCK1, etc)
LMD(n) | Global Enqueue Service daemon (lock manager) n - Manages lock requests from other instances
LMHB | Lock manager heartbeat monitor
LMON | Global Enqueue Service monitor
LMS(n) | Global Cache Service process n
Running background processes can be listed with:
SELECT name, description FROM v$bgprocess WHERE paddr != '00';
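To narrow the output to the RAC processes listed above - a sketch; the LIKE patterns simply match the names in the table:
SELECT name, description
FROM   v$bgprocess
WHERE  paddr != '00'
AND    (name LIKE 'LM%' OR name LIKE 'LCK%');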
Clusterware Files
File Type | Description | Location
OCR | Oracle Cluster Registry | Location defined in /etc/oracle/ocr.loc; stored on a cluster file system
OLR | Oracle Local Registry | Location defined in /etc/oracle/olr.loc. Default is $GRID_HOME/cdata/<hostname>.olr
VD | Voting Disk | "crsctl query css votedisk" returns the disk containing the voting disk. The kfed command can be used to read the location of the voting disk file from the ASM disk header
GPnP | Grid Plug and Play profile | Default location is $GRID_HOME/gpnp/<hostname>/profiles/peer/profile.xml
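Commands for inspecting these files - a sketch; the ASM device path is a placeholder:
ocrcheck                     # OCR integrity and location
ocrcheck -local              # same check for the OLR
crsctl query css votedisk    # list voting disk locations
kfed read /dev/asmdisk1 | grep -E 'vfstart|vfend'    # voting file extents in the ASM disk header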
Clusterware Architecture
Oracle Clusterware is split into two stacks: High Availability Services (HAS) and Cluster Ready Services (CRS). Startup is done with the "crsctl start crs" command executed as the root user, or automatically after a reboot. The startup process is initiated from the /etc/inittab file (Linux).
The High Availability Services stack is the lower-level stack and starts first. It uses the OLR and the GPnP profile, since ASM and the OCR are not yet available. To find the voting disk, it reads the location from the ASM disk header of the disk group containing it; this does not require the disk group to be mounted.
The Cluster Ready Services stack is the higher-level stack and is started by HAS.
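Starting and inspecting the two stacks, as a sketch (run as root):
crsctl start crs            # start the full stack (HAS first, then CRS)
crsctl stat res -t -init    # resources of the lower-level (HAS) stack
crsctl stat res -t          # resources managed by the CRS stack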
Clusterware Troubleshooting
Run "crsctl check cluster" to get error messages |