Patches CPU/PSU/SPU
CPU, PSU, SPU - Oracle Critical Patch Update Terminology Update
Critical Patch Update (CPU) now refers to the overall release of security fixes each quarter, rather than the cumulative database security patch for the quarter. Think of the CPU as the overarching quarterly release, not as a single patch.
Patch Set Updates (PSUs) are cumulative patches that include both the quarterly security fixes and recommended priority fixes. The key point with PSUs is that they are minor version upgrades (e.g., 11.2.0.1.1 to 11.2.0.1.2). Once a PSU is applied, only PSUs can be applied in future quarters until the database is upgraded to a new base version.
Security Patch Update (SPU) is the terminology introduced with the October 2012 Critical Patch Update for the quarterly security patch. SPU patches are the same as the previous CPU patches, just under a new name. For the database, SPUs cannot be applied once PSUs have been applied, until the database is upgraded to a new base version.
Bundle Patches are the quarterly patches for Windows and Exadata, which include both the quarterly security fixes and recommended fixes.
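A quick way to see which CPU/SPU or PSU has been applied to a database is to query the patch history recorded in the data dictionary. This is a minimal sketch (the columns shown exist in dba_registry_history on 11g and later; the exact entries vary by version and patching tool):
SELECT action_time, action, version, id, comments
  FROM dba_registry_history
 ORDER BY action_time;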
Background Processes - 01
SMON - The System Monitor process performs recovery after instance failure and monitors temporary segments and extents. In RAC, SMON in a surviving instance can also perform instance recovery for another, failed RAC instance.
PMON - The Process Monitor process recovers the resources of failed processes. If MTS (also called Shared Server architecture) is being used, PMON monitors and restarts any failed dispatcher or server processes. In RAC, PMON's role as the service registration agent is particularly important.
DBWR - The Database Writer (or Dirty Buffer Writer) process is responsible for writing dirty buffers from the database buffer cache to the database data files. DBWR does not write dirty blocks at commit time (committed changes are protected by LGWR writing the redo); it generally writes when a checkpoint occurs or when free buffer space is needed in the cache. The possibly multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to ensure efficient processing.
LGWR - The Log Writer process is responsible for writing the log buffer out to the redo logs. In RAC, each instance has its own LGWR process that maintains that instance's thread of redo logs.
ARCH - (Optional) The Archiver process writes filled redo logs to the archive log location(s). In RAC, the various ARCH processes can be used to ensure that copies of each instance's archived redo logs are available to the other instances in the RAC setup should they be needed for recovery.
CKPT - The Checkpoint process writes checkpoint information to the control files and data file headers.
Pnnn - (Optional) Parallel Query slaves are started and stopped as needed to participate in parallel query operations.
CQJ0 - The job queue controller process wakes up periodically and checks the job queue. If a job is due, it spawns Jnnn processes to handle the jobs.
Jnnn - (Optional) Job processes used by the Oracle9i job queues to process internal jobs; they are spawned and controlled automatically by the CQJ0 process.
QMN - (Optional) The Advanced Queuing process is used to control Advanced Queuing jobs.
Snnn - (Optional) Pre-spawned shared server processes are used by the multi-threaded server (MTS) architecture to handle connection requests from users and act as connection pools for user processes. These server processes also handle disk reads from the database data files into the database block buffers.
Dnnn - (Optional) Dispatcher processes for shared server (MTS); they accept connection requests and hand them out to the pre-spawned shared server processes.
MMON - This process performs various manageability-related background tasks, such as taking AWR snapshots and issuing alerts when metrics exceed their thresholds.
MMNL - This process performs frequent, lightweight manageability-related tasks, such as session history capture and metrics computation.
MMAN - Used for internal database tasks that manage Automatic Shared Memory; MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components.
RBAL - This process coordinates rebalance activity for disk groups in an Automatic Storage Management instance.
ORBn - Performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these at a time, named ORB0, ORB1, and so forth.
OSMB - Present in a database instance that uses an Automatic Storage Management disk group; it communicates with the Automatic Storage Management instance.
FMON - The database communicates with the mapping libraries provided by storage vendors through an external non-Oracle Database process that is spawned by the background process FMON. FMON is responsible for managing the mapping information. When you specify the FILE_MAPPING initialization parameter for mapping data files to physical devices on a storage subsystem, the FMON process is spawned.
LMON - The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage global enqueues and resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as Cluster Group Services (CGS).
LMDx - The Global Enqueue Service Daemon (LMD) is the lock agent process that manages enqueue manager service requests for Global Cache Service enqueues, controlling access to global enqueues and resources. The LMD process also handles deadlock detection and remote enqueue requests (requests originating from another instance).
RAC Background Processes
LMSx - The Global Cache Service processes (LMSx) handle remote Global Cache Service (GCS) messages. Real Application Clusters software provides for up to 10 Global Cache Service processes; the number of LMSx processes varies with the amount of messaging traffic among the nodes in the cluster. The LMSx handles acquisition interrupt and blocking interrupt requests from remote instances for Global Cache Service resources. For cross-instance consistent-read requests, the LMSx creates a consistent-read version of the block and sends it to the requesting instance. The LMSx also controls the flow of messages to remote instances.
LMSn - The LMSn processes handle blocking interrupts from remote instances for Global Cache Service resources by:
· Managing the resource requests and cross-instance call operations for the shared resources.
· Building a list of invalid lock elements and validating the lock elements during recovery.
· Handling global lock deadlock detection and monitoring for lock conversion timeouts.
LCKx - This process manages global enqueue requests and cross-instance broadcasts. Workload is automatically shared and balanced when there are multiple Global Cache Service processes (LMSx).
DIAG - The Diagnosability Daemon monitors the health of the instance and captures data for instance process failures.
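To see which of the background processes described above are actually running in an instance, you can query v$bgprocess. A minimal sketch (processes that are running have a process address other than 00):
SELECT name, description
  FROM v$bgprocess
 WHERE paddr <> HEXTORAW('00')
 ORDER BY name;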
Reclaim DataFile Size
We can reclaim unused space in a data file with the SQL query below. First check whether there really is free space above the high-water mark in the data file and how much can be reclaimed.
SELECT file_id, tablespace_name, file_name, high_water_mark, current_size_in_mb,
       'ALTER DATABASE DATAFILE '''||file_name||''' RESIZE '||high_water_mark||'M;' script_reclaim
FROM
(
  WITH v_file_info
       AS (SELECT file_name, file_id, block_size
             FROM dba_tablespaces tbs, dba_data_files df
            WHERE tbs.tablespace_name = df.tablespace_name)
  SELECT a.file_id,
         a.file_name,
         a.tablespace_name,
         CEIL ((NVL (hwm, 1) * v_file_info.block_size) / 1024 / 1024) high_water_mark,   -- highest used block, in MB
         CEIL (a.blocks * v_file_info.block_size / 1024 / 1024) current_size_in_mb       -- allocated size, in MB
    FROM dba_data_files a,
         v_file_info,
         (  SELECT file_id, MAX (block_id + blocks - 1) hwm
              FROM dba_extents
          GROUP BY file_id) b
   WHERE a.file_id = b.file_id(+)
     AND a.file_id = v_file_info.file_id
     AND a.tablespace_name = 'CLAIM_INDEX'   -- << change the tablespace name to reclaim the datafile size
)
WHERE high_water_mark <> current_size_in_mb;
The query generates the resize statement for each candidate data file; run it once you have confirmed the figures, for example:
ALTER DATABASE DATAFILE '+ASMTXNDATA/txnd/datafile/all_temp_tables.284.882569903' RESIZE 6M;
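To confirm the new size after the resize, you can re-check the data file sizes for the tablespace. A minimal sketch (it reuses the CLAIM_INDEX tablespace name from the query above; substitute your own):
SELECT file_name, bytes/1024/1024 AS size_mb
  FROM dba_data_files
 WHERE tablespace_name = 'CLAIM_INDEX';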
Common RAC Commands
* Most Common RAC Commands Used
* Shutdown and Start sequence of Oracle RAC
* STOP ORACLE RAC (11g)
1. emctl stop dbconsole
2. srvctl stop listener -n racnode1
3. srvctl stop database -d RACDB
4. srvctl stop asm -n racnode1 -f
5. srvctl stop asm -n racnode2 -f
6. srvctl stop nodeapps -n racnode1 -f
7. crsctl stop crs
* START ORACLE RAC (11g)
1. crsctl start crs
2. crsctl start res ora.crsd -init
3. srvctl start nodeapps -n racnode1
4. srvctl start nodeapps -n racnode2
5. srvctl start asm -n racnode1
6. srvctl start asm -n racnode2
7. srvctl start database -d RACDB
8. srvctl start listener -n racnode1
9. emctl start dbconsole
* To relocate a SCAN listener to another node :
srvctl relocate scan_listener -scannumber 3
* To start and stop oracle clusterware (run as the superuser) :
* on local node
[root@node1 ~]# crsctl stop crs
[root@node1 ~]# crsctl start crs
* To start and stop oracle cluster resources running on all nodes :
[root@node1 ~]# crsctl stop cluster -all
[root@node1 ~]# crsctl start cluster -all
OR
* To start the Oracle Clusterware stack on two named servers, run the following command as root :
[root@node1 ~]# crsctl start cluster -n node1 node2
* To check the current status of a cluster :
[oracle@node1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
* To check the current status of CRS :
[oracle@node1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
* To display the status of cluster resources :
[oracle@node1 ~]$ crsctl stat res -t
* To check version of Oracle Clusterware :
[oracle@node1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [11.2.0.4.0]
[oracle@node1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]
[oracle@node1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
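* The database software version (as opposed to the Clusterware version above) can be checked from SQL*Plus; a minimal sketch :
SELECT banner FROM v$version;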
* To check current status of OHASD (Oracle High Availability Services) daemon :
[oracle@node1 ~]$ crsctl check has
CRS-4638: Oracle High Availability Services is online
* Forcefully deleting a resource :
[oracle@node1 ~]$ crsctl delete resource testresource -f
* Enabling and disabling CRS daemons (run as the superuser) :
[root@node1 ~]# crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@node1 ~]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
* To list the nodes in the cluster :
[oracle@node1 ~]$ olsnodes
node1
node2
* To print node name with node number :
[oracle@node1 ~]$ olsnodes -n
node1 1
node2 2
* To print private interconnect address for the local node :
[oracle@node1 ~]$ olsnodes -l -p
node1 192.168.1.101
* To print virtual IP address with node name :
[oracle@node1 ~]$ olsnodes -i
node1 node1-vip
node2 node2-vip
[oracle@node1 ~]$ olsnodes -i node1
node1 node1-vip
* To print information for the local node :
[oracle@node1 ~]$ olsnodes -l
node1
* To print node status (active or inactive) :
[oracle@node1 ~]$ olsnodes -s
node1 Active
node2 Active
[oracle@node1 ~]$ olsnodes -l -s
node1 Active
* To print node type (pinned or unpinned) :
[oracle@node1 ~]$ olsnodes -t
node1 Unpinned
node2 Unpinned
[oracle@node1 ~]$ olsnodes -l -t
node1 Unpinned
* To print the cluster name :
[oracle@node1 ~]$ olsnodes -c
rac-scan
* To display the global public and global cluster_interconnect interfaces :
[oracle@node1 ~]$ oifcfg getif
eth0 192.168.100.0 global public
eth1 192.168.1.0 global cluster_interconnect
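* The interconnect actually in use by a database instance can also be cross-checked from SQL*Plus via v$cluster_interconnects; a minimal sketch :
SELECT name, ip_address, is_public, source FROM v$cluster_interconnects;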
* To display the databases registered in the repository :
[oracle@gpp4 ~]$ srvctl config database
TESTRACDB
* To display the configuration details of the database :
[oracle@TEST4 ~]$ srvctl config database -d TESTRACDB
Database unique name: TESTRACDB
Database name: TESTRACDB
Oracle home: /home/oracle/product/11.2.0/db_home1
Oracle user: oracle
Spfile: +DATA/TESTRACDB/spfileTESTRACDB.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TESTRACDB
Database instances: TESTRACDB1,TESTRACDB2
Disk Groups: DATA,ARCH
Mount point paths:
Services: SRV_TESTRACDB
Type: RAC
Database is administrator managed
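* Some of the same core settings can be cross-checked from inside the database; a minimal sketch using v$parameter :
SELECT name, value
  FROM v$parameter
 WHERE name IN ('db_unique_name', 'spfile', 'cluster_database');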
* To change the management policy of the database from automatic to manual :
[oracle@TEST4 ~]$ srvctl modify database -d TESTRACDB -y MANUAL
* To change the startup option of the database from open to mount :
[oracle@TEST4 ~]$ srvctl modify database -d TESTRACDB -s mount
* To start RAC listener :
[oracle@TEST4 ~]$ srvctl start listener
* To display the status of the database :
[oracle@TEST4 ~]$ srvctl status database -d TESTRACDB
Instance TESTRACDB1 is running on node TEST4
Instance TESTRACDB2 is running on node TEST5
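* As a SQL-level cross-check of which instances are up, you can query gv$instance; a minimal sketch :
SELECT inst_id, instance_name, host_name, status
  FROM gv$instance
 ORDER BY inst_id;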
* To display the status of services running in the database :
[oracle@TEST4 ~]$ srvctl status service -d TESTRACDB
Service SRV_TESTRACDB is running on instance(s) TESTRACDB1,TESTRACDB2
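* The services currently running on each instance can also be listed from SQL; a minimal sketch using gv$active_services (system-internal services such as SYS$USERS appear as well) :
SELECT inst_id, name, network_name
  FROM gv$active_services
 ORDER BY inst_id, name;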
* To check nodeapps running on a node :
[oracle@TEST4 ~]$ srvctl status nodeapps
VIP TEST4-vip is enabled
VIP TEST4-vip is running on node: TEST4
VIP TEST5-vip is enabled
VIP TEST5-vip is running on node: TEST5
Network is enabled
Network is running on node: TEST4
Network is running on node: TEST5
GSD is enabled
GSD is not running on node: TEST4
GSD is not running on node: TEST5
ONS is enabled
ONS daemon is running on node: TEST4
ONS daemon is running on node: TEST5
[oracle@TEST4 ~]$ srvctl status nodeapps -n TEST4
VIP TEST4-vip is enabled
VIP TEST4-vip is running on node: TEST4
Network is enabled
Network is running on node: TEST4
GSD is enabled
GSD is not running on node: TEST4
ONS is enabled
ONS daemon is running on node: TEST4
* To start or stop all instances associated with a database. This command also starts services and listeners on each node :
[oracle@TEST4 ~]$ srvctl start database -d TESTRACDB
* To shut down instances and services (listeners not stopped) :
[oracle@TEST4 ~]$ srvctl stop database -d TESTRACDB
* You can use the -o option to specify startup/shutdown options.
* To shut down the database with the immediate option – srvctl stop database -d TESTRACDB -o immediate
* To start up all instances with force – srvctl start database -d TESTRACDB -o force
* To shut down a single instance (normal shutdown) – srvctl stop instance -d TESTRACDB -i TESTRACDB1
* To start or stop the ASM instance on the racnode1 cluster node :
[oracle@TEST4 ~]$ srvctl start asm -n racnode1
[oracle@TEST4 ~]$ srvctl stop asm -n racnode1
* To display the current configuration of the SCAN VIPs :
[oracle@test4 ~]$ srvctl config scan
SCAN name: vmtestdb.exo.local, Network: 1/192.168.5.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /vmtestdb.exo.local/192.168.5.100
SCAN VIP name: scan2, IP: /vmtestdb.exo.local/192.168.5.101
SCAN VIP name: scan3, IP: /vmtestdb.exo.local/192.168.5.102
* Refreshing SCAN VIPs with new IP addresses from DNS :
[oracle@test4 ~]$ srvctl modify scan -n your-scan-name.example.com
* To stop or start the SCAN listener and the SCAN VIP resources :
[oracle@test4 ~]$ srvctl stop scan_listener
[oracle@test4 ~]$ srvctl start scan_listener
[oracle@test4 ~]$ srvctl stop scan
[oracle@test4 ~]$ srvctl start scan
* To display the status of SCAN VIPs and SCAN listeners :
[oracle@test4 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node test4
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node test5
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node test5
[oracle@test4 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node test4
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node test5
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node test5
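* To verify that the instances register with the SCAN listeners, you can check the listener parameters from SQL; a minimal sketch (remote_listener normally points at the SCAN, local_listener at the node VIP) :
SELECT inst_id, name, value
  FROM gv$parameter
 WHERE name IN ('local_listener', 'remote_listener')
 ORDER BY inst_id, name;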
* To add/remove/modify SCAN :
[oracle@test4 ~]$ srvctl add scan -n your-scan
[oracle@test4 ~]$ srvctl remove scan
[oracle@test4 ~]$ srvctl modify scan -n new-scan
* To add/remove SCAN listener :
[oracle@test4 ~]$ srvctl add scan_listener
[oracle@test4 ~]$ srvctl remove scan_listener
* To modify the SCAN listener port :
srvctl modify scan_listener -p <port_number>
(reflects the change to the current SCAN listener configuration only)
* To start the ASM instance in mount state :
ASMCMD> startup --mount
* To shut down the ASM instance immediately (database instances must be shut down before the ASM instance is shut down) :
ASMCMD> shutdown --immediate
* Use the lsop command in ASMCMD to list ASM operations :
ASMCMD> lsop
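* Disk group state and free space can also be checked from SQL, from either the ASM instance or a database instance that uses ASM; a minimal sketch :
SELECT name, state, type, total_mb, free_mb
  FROM v$asm_diskgroup;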
* To perform quick health check of OCR :
[oracle@test4 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3304
Available space (kbytes) : 258816
ID : 1555543155
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File Name : +OCR
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
* To dump the content of the OCR into an XML file :
[oracle@test4 ~]$ ocrdump testdump.xml -xml
* To add or relocate the OCR mirror file to the specified location :
[oracle@test4 ~]$ ocrconfig -replace ocrmirror '+TESTDG'
[oracle@test4 ~]$ ocrconfig -replace +CURRENTOCRDG -replacement +NEWOCRDG
* To relocate an existing OCR file :
[oracle@test4 ~]$ ocrconfig -replace ocr '+TESTDG'
* To add a mirror disk group for OCR :
[oracle@test4 ~]$ ocrconfig -add +TESTDG
* To remove OCR mirror :
ocrconfig -delete +TESTDG
* To remove the OCR or the OCR mirror :
[oracle@test4 ~]$ ocrconfig -replace ocr
[oracle@test4 ~]$ ocrconfig -replace ocrmirror
* To list OCR backups :
[oracle@test4 ~]$ ocrconfig -showbackup
test5 2016/04/16 17:30:29 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup00.ocr
test5 2016/04/16 13:30:29 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup01.ocr
test5 2016/04/16 09:30:28 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/backup02.ocr
test5 2016/04/15 13:30:26 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/day.ocr
test5 2016/04/08 09:30:03 /home/oracle/app/11.2.0/grid/cdata/vmtestdb/week.ocr
* To perform an OCR backup manually :
[root@testdb1 ~]# ocrconfig -manualbackup
testdb1 2016/04/16 17:31:42 /votedisk/backup_20160416_173142.ocr 0
* To change the OCR autobackup directory :
[root@testdb1 ~]# ocrconfig -backuploc /backups/ocr
* To verify the OCR integrity across all cluster nodes :
[oracle@node1]$ cluvfy comp ocr -n all -verbose
Verifying OCR integrity
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
Checking daemon liveness...
Check: Liveness for "CRS daemon"
Node Name    Running?
------------ ------------------------
node2        yes
node1        yes
Result: Liveness check passed for "CRS daemon"
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA/testdb-scan/OCRFILE/registry.255.903592771" is available on all the nodes
Disk group for ocr location "+CRS/testdb-scan/OCRFILE/registry.255.903735431" is available on all the nodes
Disk group for ocr location "+MULTIPLEX/testdb-scan/OCRFILE/registry.255.903735561" is available on all the nodes
Checking OCR backup location "/bkpdisk"
OCR backup location "/bkpdisk" check passed
Checking OCR dump functionality
OCR dump check passed
NOTE: This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Verification of OCR integrity was successful.
