Proxy instance 12c

 !! Oracle 12c Cluster: ACFS Leverages Flex ASM !!




NOTE: As with any code, be sure to test these commands in a development environment before attempting to run them in production.


 

In Oracle Database releases earlier than 12c, an Automatic Storage Management (ASM) instance runs on every node in the cluster, and the ASM Cluster File System (ACFS) service on a node connects to the local ASM instance running on the same host. If the ASM instance on a node fails, the shared disk groups, and hence the ACFS file systems, can no longer be accessed on that node.

With the introduction of Flex ASM in Oracle 12c, the hard dependency between ASM and its clients has been relaxed, so that a smaller number of ASM instances can run on just a subset of the servers in a cluster. Because there can now be nodes without an ASM instance, Flex ASM introduces a new instance type – the ASM proxy instance – which obtains metadata from an ASM instance on behalf of ACFS. If an ASM instance is not available locally, the ASM proxy instance connects to another ASM instance over the network to fetch the metadata. Moreover, if the local ASM instance fails, the ASM proxy instance can fail over to another surviving ASM instance on a different server, resulting in uninterrupted availability of shared storage and ACFS file systems. Note that I/O to the underlying storage does not pass through the ASM instance itself; the ASM proxy instance supplies the metadata that ADVM / ACFS need, and the I/O goes directly to the disks. An ASM proxy instance must be running on every node that needs to provide ACFS services.


*. A new instance type has been introduced by Flex ASM – the ASM proxy instance – which obtains metadata from an ASM instance on behalf of ACFS.
*. If an ASM instance is not available locally, the ASM proxy instance connects to another ASM instance over the network, resulting in uninterrupted availability of shared storage and ACFS file systems.
*. This provides a much higher degree of flexibility, scalability and availability for file services to clients (a quick check of the Flex ASM configuration is sketched below).
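As a quick sanity check (a sketch – run as the Grid Infrastructure owner; adjust paths and counts to your environment), you can confirm that the cluster is running Flex ASM and see how many ASM instances are configured:

# confirm that ASM is running in Flex mode (the output should report "Flex mode enabled")
asmcmd showclustermode

# show the configured ASM cardinality ("ASM instance count") and where ASM is currently running
srvctl config asm
srvctl status asm -detail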







Figure 1 shows that, as an ASM instance is not running on Hub Node 2, ADVM / ACFS services on Hub Node 2 utilize the ASM proxy instance (+APX2) to access metadata from the remote ASM instance (+ASM1) running on Hub Node 1.




Figure 1.
Flex ASM can be configured on either a standard cluster or a Flex Cluster. When Flex ASM runs on a standard cluster, ASM services run on a subset of the cluster nodes and serve clients across the whole cluster. When Flex ASM runs on a Flex Cluster, ASM services run on a subset of the Hub Nodes and serve clients across all of the Hub Nodes in the Flex Cluster. In addition, in a Flex Cluster only Hub Nodes can host ACFS services, because only Hub Nodes have direct access to storage.
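To check how your own cluster is configured, the cluster mode and the role of each node can be displayed with crsctl (a quick sketch; run from the Grid Infrastructure home):

# reports whether the cluster is running in standard or flex mode
crsctl get cluster mode status

# reports the configured role (hub or leaf) of the local node; repeat on each node
crsctl get node role config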
A Demonstration

To explore how ADVM / ACFS can leverage Flex ASM features, we will use a Flex Cluster running Flex ASM that is configured with two Hub Nodes (rac1 and rac2). Our first set of tasks is to create an ACFS file system resource.
Check Prerequisites

First, let’s verify that all kernel modules needed for ACFS and ADVM are loaded on both the nodes.

 [root@rac1 ~]# lsmod |grep oracle
oracleacfs           2837904  1
oracleadvm            342512  1
oracleoks             409560  2 oracleacfs,oracleadvm
oracleasm              84136  1

[root@rac2 ~]# lsmod |grep oracle
oracleacfs           2837904  1
oracleadvm            342512  1
oracleoks             409560  2 oracleacfs,oracleadvm
oracleasm              84136  1
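If these modules were not listed, they can normally be loaded with the acfsload utility shipped in the Grid Infrastructure home (a sketch; the GI home path below is only an example – substitute your own):

# load the ACFS / ADVM / OKS kernel modules as root
/u01/app/12.2.0/grid/bin/acfsload start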

The ASM Dynamic Volume Manager (ADVM) proxy instance is a special Oracle instance which enables ADVM to connect to Flex ASM and is required to run on the same node as ADVM and ACFS. For a volume device to be visible on a node, an ASM proxy instance must be running on that node. Let’s verify that an ASM proxy instance is running on both the nodes.

 [root@rac2 ~]# crsctl stat res ora.proxy_advm -t
----------------------------------------------------------------
Name           Target  State        Server                   State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.proxy_advm
               ONLINE  ONLINE       rac1                   STABLE
               ONLINE  ONLINE       rac2                   STABLE

Create the ADVM Volume

We’ll next set the compatible.advm attribute of the DATA ASM disk group to 12.2 to enable the newer ASM Dynamic Volume Manager (ADVM) features, and then create a new volume – VOL1 – within the DATA disk group with a requested size of 300 MB. (ADVM rounds the size up to a multiple of the volume's resize unit, which is why volinfo reports 320 MB below.)

 [grid@rac1 root]$ asmcmd setattr -G DATA compatible.advm 12.2.0.0.0

[grid@rac1 root]$ asmcmd volcreate -G DATA -s 300m VOL1

[grid@rac1 root]$ asmcmd volinfo -G DATA VOL1

Diskgroup Name: DATA
         Volume Name: VOL1
         Volume Device: /dev/asm/vol1-114
         State: ENABLED
         Size (MB): 320
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath:
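To double-check that the attribute change took effect, the disk group attribute can be listed with asmcmd (a small sketch, run as the Grid Infrastructure owner):

# list the compatible.advm attribute of the DATA disk group
asmcmd lsattr -l -G DATA compatible.advm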

Create ACFS File System and Corresponding Mount Point

Now let’s construct an ACFS file system on the newly-created volume VOL1 and also create a mount point on both the nodes to mount the ACFS file system.

[root@rac1 ~]# mkfs -t acfs /dev/asm/vol1-114
[root@rac1 ~]# mkdir -p /acfs1
[root@rac2 ~]# mkdir -p /acfs1

Configure Cloud File System Resource For ACFS File System

Now using the srvctl commands, we will create an Oracle Cloud File System resource on the volume device VOL1 with the mount point /acfs1:

[root@rac1 ~]# srvctl add filesystem -m /acfs1 -d /dev/asm/vol1-114

[root@rac1 ~]# srvctl status filesystem -d /dev/asm/vol1-114

ACFS file system /acfs1 is not mounted

[root@rac1 ~]# mount | grep vol1

[root@rac2 ~]# mount | grep vol1

We need to start the file system resource to mount the ACFS file system.

[root@rac1 ~]# srvctl start  filesystem -d /dev/asm/vol1-114

[root@rac1 ~]# srvctl status  filesystem -d /dev/asm/vol1-114

ACFS file system /acfs1 is mounted on nodes rac1,rac2

[root@rac1 ~]# mount | grep vol1

/dev/asm/vol1-114 on /acfs1 type acfs (rw)

[root@rac2 ~]# mount | grep vol1

/dev/asm/vol1-114 on /acfs1 type acfs (rw)

Verify The Cloud File System Resource

We can verify that the new Cloud File System is indeed working properly: a small text file created on it from rac1 can be read successfully from rac2.

[root@rac1 ~]# echo "Test File on ACFS" > /acfs1/testfile.txt

[root@rac2 asm]#  cat /acfs1/testfile.txt

Test File on ACFS

Leveraging Flex ASM

To demonstrate how ACFS / ADVM leverages these Flex ASM capabilities, let us first have a look at the current state of various cluster resources in our two node cluster.
View Baseline Status of Various Cluster Resources

It can be seen that currently:

    An ASM instance is running on both the nodes
    DATA disk group, volume VOL1 on DATA diskgroup and ACFS file system on VOL1 are all online on both the nodes.

 [root@rac1 ~]# crsctl stat res ora.asm -t
----------------------------------------------------------------
Name           Target  State        Server         State details      
----------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1          STABLE
      1        ONLINE  ONLINE       rac2          STABLE
----------------------------------------------------------------

[root@rac1 ~]# crsctl stat res ora.DATA.dg -t
----------------------------------------------------------------
Name           Target  State        Server         State details      
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1         STABLE
               ONLINE  ONLINE       rac2         STABLE
----------------------------------------------------------------


[root@rac1 ~]# crsctl stat res ora.DATA.VOL1.advm -t
----------------------------------------------------------------
Name           Target  State        Server   State details      
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       rac1  Volume device /dev/a
                                            sm/vol1-114 is
                                            online,STABLE
               ONLINE  ONLINE       rac2  Volume device /dev/a
                                            sm/vol1-114 is
                                            online,STABLE


[root@rac1 ~]# crsctl stat res ora.data.vol1.acfs

NAME=ora.data.vol1.acfs
TYPE=ora.acfs.type
TARGET=ONLINE          , ONLINE
STATE=ONLINE on rac1, ONLINE on rac2

Confirm Flex ASM Proxy Instances

Let’s also verify that ASM proxy instances +APX1 running on rac1 and +APX2 running on rac2 are using ASM instances +ASM1 and +ASM2 running locally on their corresponding nodes.

SQL> col client_instance_name format a20
     col asm_host_name for a20

SELECT DISTINCT
    i.instance_name asm_instance_name,
    i.host_name asm_host_name,
    c.instance_name client_instance_name,
    c.status
  FROM gv$instance i, gv$asm_client c
 WHERE i.inst_id = c.inst_id;

ASM_INSTANCE_NAM ASM_HOST_NAME        CLIENT_INSTANCE_NAME STATUS
---------------- -------------------- -------------------- ------------
+ASM1            rac1.vbtech.com   +APX1                CONNECTED
+ASM1            rac1.vbtech.com   +ASM1                CONNECTED
+ASM2            rac2.vbtech.com   +APX2                CONNECTED

To see ACFS / ADVM leverage these Flex ASM capabilities, we can perform a simple test: after stopping the ASM instance on node rac2, we should observe that ACFS / ADVM services continue to run on rac2, using the ASM proxy instance to satisfy metadata requests from a remote ASM instance – in this case, +ASM1 running on rac1.
Halt ASM Instance On rac2

Let’s halt ASM instance +ASM2 running on node rac2.

[root@rac1 ~]# srvctl stop asm -n rac2

PRCR-1014 : Failed to stop resource ora.asm
PRCR-1145 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATA.dg', but the force option was not specified

[root@rac1 ~]# srvctl stop asm -n rac2 -f

[root@rac1 ~]# srvctl status asm
ASM is running on rac1

Verify Availability of VOL1 On rac2

If we now check, we'll find that even though the +ASM2 instance and the DATA disk group resource are offline on rac2, volume VOL1 created on the DATA disk group is still online on rac2 because of Flex ASM. Node rac2 connects to the remote ASM instance +ASM1 running on rac1 through its ASM proxy instance +APX2 to access the metadata, and that keeps volume VOL1 available.

 [root@rac1 ~]# crsctl stat res ora.asm -t
----------------------------------------------------------------
Name           Target  State        Server         State details      
----------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1          STABLE
      2        ONLINE  OFFLINE      rac2          STABLE
----------------------------------------------------------------

[root@rac1 ~]# crsctl stat res ora.DATA.dg -t
----------------------------------------------------------------
Name           Target  State        Server         State details      
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1         STABLE
               OFFLINE OFFLINE      rac2         STABLE
----------------------------------------------------------------


[root@rac1 ~]# crsctl stat res ora.DATA.VOL1.advm -t
----------------------------------------------------------------
Name           Target  State        Server   State details      
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       rac1  Volume device /dev/a
                                            sm/vol1-114 is
                                            online,STABLE
               ONLINE  ONLINE       rac2  Volume device /dev/a
                                            sm/vol1-114 is
                                            online,STABLE

Verify Availability of Cloud File System acfs1 On rac2

We can also verify that the file on cloud file system acfs1 is visible from both rac1 and rac2.

[root@rac1 acfs1]# cat /acfs1/testfile.txt

Test File on ACFS

[root@rac2 asm]# cat /acfs1/testfile.txt

Test File on ACFS

Confirm Flex ASM Proxy Instances

We can also verify that, since ASM instance +ASM2 on rac2 is no longer available, ACFS / ADVM leverages Flex ASM: the ASM proxy instance +APX2 on rac2 now obtains its metadata from the remote ASM instance +ASM1 running on rac1 in order to access volume VOL1 on the DATA disk group.

SQL> col client_instance_name format a20
     col asm_host_name for a20

SELECT DISTINCT
    i.instance_name asm_instance_name,
    i.host_name asm_host_name,
    c.instance_name client_instance_name,
    c.status
  FROM gv$instance i, gv$asm_client c
 WHERE i.inst_id = c.inst_id;


ASM_INSTANCE_NAM ASM_HOST_NAME        CLIENT_INSTANCE_NAME STATUS
---------------- -------------------- -------------------- ------------
+ASM1            rac1.vbtech.com   +APX1                CONNECTED
+ASM1            rac1.vbtech.com   +ASM1                CONNECTED
+ASM1            rac1.vbtech.com   +APX2                CONNECTED
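To return the cluster to its original state after the test, the ASM instance on rac2 can be restarted with the same srvctl syntax used earlier; the GV$ASM_CLIENT query above can then be re-run to see which ASM instance each proxy is using.

# restart ASM on rac2 and confirm it is running again
srvctl start asm -n rac2
srvctl status asm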

   

TFA

!! TFA - Trace File Analyzer Collector !!



Trace File Analyzer Collector (tfactl commands)


./tfactl -h



./tfactl diagcollect -h

./tfactl diagcollect

./tfactl diagcollect -crs

./tfactl diagcollect -asm -tag ASM_COLLECTIONS

./tfactl diagcollect -since 1h

./tfactl diagcollect -all -since 4h

./tfactl diagcollect -database hrdb,findb -since 1d

./tfactl diagcollect -crs -os -node node2,node3 -since 8h

./tfactl diagcollect -asm -node node7 -from Aug/11/2017 -to "Aug/11/2017 22:10:00"

./tfactl diagcollect -for Aug/11/2017

./tfactl diagcollect -for "Aug/11/2017 07:30:00"



./tfactl print -h

./tfactl print config

./tfactl print status

./tfactl print hosts

./tfactl print actions

./tfactl print repository



./tfactl analyze

./tfactl analyze -since 4h

./tfactl analyze -comp os -since 1d

./tfactl analyze -comp osw -since 5h

./tfactl analyze -search "ORA-" -since 3d

./tfactl analyze -search "/Starting/c"

./tfactl analyze -comp os -for "Aug/11/2017 02" -search "." 

./tfactl analyze -since 2h -type generic



./tfactl access -h
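The access subcommand controls which non-root OS users may run TFA collections. As a hedged example (the user name is just an illustration):

# allow the oracle OS user to run tfactl on this node
./tfactl access add -user oracle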

CRS-4639 CRS-4124 Oracle High Availability Services startup failed in 11gR2 RAC

!! CRS-4639 CRS-4124 Oracle High Availability Services startup failed in 11gR2 RAC !!
 
NOTE: As with any code, be sure to test these commands in a development environment before attempting to run them in production.


Today I faced an issue where the cluster OHASD service did not start automatically when the server started. When I then tried to start the cluster manually, I received the "CRS-4124: Oracle High Availability Services startup failed" error.

1.  All oracleasm disks were verified and are available:
     ls /dev/oracleasm/disks
     oracleasm listdisks




[oracle@red1 ~]$ cd /u01/app/11.2.0/grid_1/bin
[oracle@red1 bin]$ ./crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
[oracle@red1 bin]$
[oracle@red1 bin]$ su - root
Password:
[root@red1 ~]# cd /u01/app/11.2.0/grid_1/bin
[root@red1 bin]# ./crsctl start crs
CRS-4124: Oracle High Availability Services startup failed
CRS-4000: Command Start failed, or completed with errors
[root@red1 bin]#


Then I verified whether Oracle High Availability Services autostart is configured:

[root@red1 bin]# ./crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@red1 bin]# 
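Autostart is enabled, so the next thing to check is whether the OHASD processes are actually running (a quick sketch; the grep pattern is only illustrative):

# check for the init.ohasd wrapper and the ohasd.bin daemon
ps -ef | grep -E 'init.ohasd|ohasd.bin' | grep -v grep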





Since the init.ohasd process was not running, I started it manually:

[root@red1 ~]# nohup /etc/init.d/init.ohasd run &
[1] 765
[root@red1 ~]#

[root@red1 bin]# ./crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@red1 bin]#
[root@red1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online  



If the cluster resources do not all start properly, run the command below:


crsctl start resource -all 
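The state of all resources can then be reviewed from the same Grid Infrastructure bin directory:

# show the status of all cluster resources in tabular form
./crsctl stat res -t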




Upgrade RMAN Catalog

!! How to Upgrade RMAN Catalog in 11gR2 after applying a PSU Patch !!

NOTE: As with any code, be sure to test these commands in a development environment before attempting to run them in production.




This post discusses how to upgrade the RMAN catalog in 11gR2 after applying a PSU patch.

Note: This is not a database upgrade; this post covers upgrading the RMAN catalog for a database after a PSU patch has been applied on the database server.

“The recovery catalog schema version must be greater than or equal to the RMAN client version”.


For an 11.2 database, if you are running RMAN from the same ORACLE_HOME, your RMAN client is 11.2, so the catalog schema must be at least 11.2. Although the catalog schema can reside in an 11.1 database, it is preferable to host it in an 11.2 database. The best practice is to upgrade the RMAN catalog database to at least the highest version among all of the target databases, and the catalog schema to at least the highest RMAN client version.

-- Connect to the target database on the server

oracle > rman target /
Recovery Manager: Release 11.2.0.2.0 - Production on Sat Aug 18 02:09:16 CDT 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: BHUVAN (DBID=3984568789)

-- Connect to the Recover catalog database

RMAN> connect catalog rman/xxxxx@RMAN_CATALOG
connected to recovery catalog database

-- upgrade the catalog by connecting to the target database and catalog database.

RMAN> UPGRADE CATALOG;

recovery catalog owner is RMAN
enter UPGRADE CATALOG command again to confirm catalog upgrade


RMAN> UPGRADE CATALOG;

recovery catalog upgraded to version 11.02.00.02
DBMS_RCVMAN package upgraded to version 11.02.00.02
DBMS_RCVCAT package upgraded to version 11.02.00.02

RMAN>
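After the upgrade it is worth reconnecting and confirming that the catalog is usable; a minimal check (reusing the same connection strings as above) is:

-- Reconnect to the target and the upgraded catalog
oracle > rman target / catalog rman/xxxxx@RMAN_CATALOG

-- Confirm the target registration and that the catalog responds
RMAN> REPORT SCHEMA;
RMAN> LIST INCARNATION;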

To determine the current release of the catalog schema, run a SQL query against the rcver table in the catalog owner's schema:


oracle> sqlplus rman/xxxx@RMAN_CATALOG

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 18 02:09:16 CDT 2018

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> SELECT * FROM rcver;

VERSION
------------
11.02.00.02


Note:
1)  If multiple versions are listed, then the last row is the current version, and the rows before it are prior versions.
2) For releases 11.2 and later, the last two digits in the rcver output indicate patch level. For earlier releases, they are always zeros.

-MGMTDB -Management Repository

!! 12c feature: Using Management Repository (-MGMTDB) !!
NOTE: As with any code, be sure to test these commands in a development environment before attempting to run them in production.


Overview
.The Management Repository is a single-instance database that is managed by Oracle Clusterware in 12c.
.In 11g this data was stored in a Berkeley DB database, but starting with Oracle Database 12c it is configured as an Oracle database instance.
.If the option is selected during GI installation, the database is configured and managed by GI.
.As it is a single-instance database, it is up and running on one node of the cluster at a time.
.As it is managed by GI, if the hosting node goes down the database is automatically failed over to another node.
.The Management Database is the central repository used to store Cluster Health Monitor (aka CHM/OS, ora.crf) and other data in 12c.
.The Management Database uses the same shared storage as the OCR/voting files; the voting disk location therefore needs to be larger than 5 GB (3.2 GB+ is used by MGMTDB, of which 2 GB is for CHM).
.If the Management Database is not selected during installation/upgrade in OUI, all features that depend on it (such as Cluster Health Monitor (CHM/OS)) are disabled. Note: there is no supported procedure to enable the Management Database once the GI stack is configured.


How to start the Management Database
The Management Database is managed by GI and should normally be up and running automatically. If it is down for some reason, the following srvctl commands can be used to start it:
Usage: srvctl start mgmtdb [-startoption <start_option>] [-node <node_name>]
Usage: srvctl start mgmtlsnr [-node <node_name>]
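For example, to start the repository database on a specific node (the node name here is simply the one used elsewhere in this post) and confirm its status:

srvctl start mgmtdb -node RAC1
srvctl status mgmtdb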


[oracle@RAC1]~% ps -ef|grep mdb_pmon
oracle    7580     1  0 04:57 ?        00:00:00 mdb_pmon_-MGMTDB


This is a single-instance Oracle database managed by Grid Infrastructure; it fails over to a surviving node if the hosting node crashes. You can identify the current master node using the command below:

-bash-4.1$ oclumon manage -get MASTER

Master = RAC1


This database instance can be managed using srvctl commands. The current master can also be identified using the status command:

$srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node RAC1







We can look at the mgmtdb configuration using:

$srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /home/oragrid
Oracle user: oracle
Spfile: +OCR/_mgmtdb/spfile-MGMTDB.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management


Replace config with start or stop to start or stop the database.
The data files for the repository database are stored in the same location as the OCR/voting disk:

SQL> select file_name from dba_data_files union select member file_name from V$logfile;

FILE_NAME
------------------------------------------------------------
+OCR/_MGMTDB/DATAFILE/sysaux.258.819384615
+OCR/_MGMTDB/DATAFILE/sysgridhomedata.261.819384761
+OCR/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687
+OCR/_MGMTDB/DATAFILE/system.259.819384641
+OCR/_MGMTDB/DATAFILE/undotbs1.257.819384613
+OCR/_MGMTDB/ONLINELOG/group_1.263.819384803
+OCR/_MGMTDB/ONLINELOG/group_2.264.819384805
+OCR/_MGMTDB/ONLINELOG/group_3.265.819384807



We can verify the same using the oclumon command:

-bash-4.1$ oclumon manage -get reppath

CHM Repository Path = +OCR/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687




Since this repository is stored in the same location as the voting disk, if you have opted to configure the Management Database you will need a voting disk location larger than 5 GB (3.2 GB+ is used by MGMTDB). During GI installation I had tried adding a 2 GB voting disk, but it failed with an insufficient-size error. The error did not indicate that the space was needed for the Management Repository, but I now think this is because the repository shares the same location as the OCR/voting disk.
The default (and minimum) size for the CHM repository is 2048 MB. We can increase the repository size by issuing the following command:

-bash-4.1$ oclumon manage -repos changerepossize 4000
The Cluster Health Monitor repository was successfully resized.The new retention is 266160 seconds.
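The retention period can also be adjusted directly in seconds (a sketch; the value below is only an example):

# set the CHM retention to roughly three days, expressed in seconds
oclumon manage -repos changeretentiontime 260000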

Move the central Inventory

 !! Move the central Inventory (oraInventory) to another location !!


NOTE: As with any code, be sure to test these commands in a development environment before attempting to run them in production.



Solution On Unix:

Find the current location of the central inventory (normally $ORACLE_BASE/oraInventory):

For example:

find /u01/app/oracle -name oraInventory -print

 /u01/app/oracle/oraInventory



Open the oraInst.loc file in /etc and check the value of inventory_loc

cat /etc/oraInst.loc

inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall


Remark: The oraInst.loc file is simply a pointer to the location of the central inventory (oraInventory)



Copy the oraInventory directory to the destination directory

cp -Rp /u01/app/oracle/oraInventory /app/oracle



Edit the oraInst.loc file to point to the new location

For example:

vi /etc/oraInst.loc

inventory_loc=/app/oracle/oraInventory
inst_group=oinstall
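To confirm that the relocated inventory is readable, a quick check (a sketch; run it from any Oracle home installed on the host) is to list it with OPatch, which reads /etc/oraInst.loc to locate the central inventory:

$ORACLE_HOME/OPatch/opatch lsinventory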
 

OSWatcher

!! Oracle Tool: setup OSWatcher !!


NOTE: As with any code, be sure to test these commands in a development environment before attempting to run them in production.



OSWatcher (Includes: [Video]) (Doc ID 301137.1)


/stage:(+ASM1)$ tar xvf oswbb4.0.tar
oswbb/
oswbb/htop.sh
oswbb/docs/
oswbb/docs/OSWbba_README.txt
oswbb/docs/OSW_bb_README.txt
oswbb/docs/OSW_Black_Box_Analyzer_Overview.pdf
oswbb/docs/OSWatcher_Black_Box_UG.pdf
oswbb/docs/OSWatcher_Black_Box_Analyzer_UG.pdf
oswbb/Exampleprivate.net
oswbb/stopOSWbb.sh
oswbb/iosub.sh
oswbb/profile/
oswbb/OSWatcherFM.sh
oswbb/mpsub.sh
oswbb/gif/
oswbb/pssub.sh
oswbb/oswnet.sh
oswbb/vmsub.sh
oswbb/oswlnxio.sh
oswbb/oswib.sh
oswbb/startOSWbb.sh
oswbb/oswsub.sh
oswbb/analysis/
oswbb/oswbba.jar
oswbb/locks/
oswbb/tmp/
oswbb/OSWatcher.sh
oswbb/topaix.sh
oswbb/tarupfiles.sh
oswbb/xtop.sh
oswbb/src/
oswbb/src/Thumbs.db
oswbb/src/OSW_profile.htm
oswbb/src/tombody.gif
oswbb/src/missing_graphic.gif
oswbb/src/coe_logo.gif
oswbb/src/watch.gif
oswbb/src/oswbba_input.txt
oswbb/oswrds.sh


/stage/oswbb:(+ASM1)$ nohup ./startOSWbb.sh SnapshotInterval ArchiveInterval &


Here SnapshotInterval is the number of seconds between OS data samples and ArchiveInterval is the number of hours of archive data to retain. With SnapshotInterval = 60 and ArchiveInterval = 10:

/stage/oswbb:(+ASM1)$ nohup ./startOSWbb.sh 60 10 &



/stage/oswbb/archive:(+ASM1)$ pwd;file *
/stage/oswbb/archive
oswiostat:   directory
oswmeminfo:  directory
oswmpstat:   directory
oswnetstat:  directory
oswprvtnet:  directory
oswps:       directory
oswslabinfo: directory
oswtop:      directory
oswvmstat:   directory
/stage/oswbb/archive:(+ASM1)$




/stage/oswbb/archive:(+ASM1)$ ps -ef|grep OSW
oracle    8100     1  0 12:20 pts/3    00:00:00 /usr/bin/ksh ./OSWatcher.sh 60 10
oracle   10263  8100  0 12:21 pts/3    00:00:00 /usr/bin/ksh ./OSWatcherFM.sh 10
oracle   25568 21378  0 14:33 pts/3    00:00:00 grep OSW
/stage/oswbb/archive:(+ASM1)$
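When the collection period is over, OSWatcher can be stopped with the stopOSWbb.sh script extracted above, and the archive can be fed to the bundled Black Box Analyzer (a sketch; it assumes Java is available on the host):

# stop all OSWatcher background collectors
/stage/oswbb/stopOSWbb.sh

# analyze the collected archive with the analyzer jar shipped in the tar file
cd /stage/oswbb
java -jar oswbba.jar -i /stage/oswbb/archive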

Moving schema (11g to 12c)


 !! Oracle 12C: Moving schema from Oracle 11G to 12C. !!

 

 

 

NOTE: As with any code, be sure to test these commands in a development environment before attempting to run them in production.


From the source server running Oracle 11g:
expdp ORAVB/password dumpfile=ORAVBexport.dmp directory=ORAVB_DIR schemas=ORAVB logfile=ORAVB.log


On the target database server - Oracle 12C

Setting up the dump directory.

SQL> !mkdir /u01/backup/ORAVB_DIR

SQL> create or replace directory ORAVB_DIR as '/u01/backup/ORAVB_DIR';

Directory created.


SQL> GRANT read, write ON DIRECTORY ORAVB_DIR TO public;

SQL> SELECT directory_path FROM dba_directories WHERE directory_name = 'ORAVB_DIR';

DIRECTORY_PATH
----------------------------------------------------------------------------------------------------
/u01/backup/ORAVB_DIR


Verify that the pluggable database is open:

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBORCL                        MOUNTED
         4 PDBORAVB                       MOUNTED

SQL> col pdb_name format a20
SQL> col status format a20
SQL> select pdb_name, status from dba_pdbs;

PDB_NAME             STATUS
-------------------- --------------------
PDBORCL              NORMAL
PDB$SEED             NORMAL
PDBORAVB             NEW


The PDBORAVB is in MOUNTED mode. It needs to be opened.

SQL> alter session set container=PDBORAVB;

Session altered.

SQL> startup;

Pluggable Database opened.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDBORAVB                        READ WRITE NO
SQL>


SQL> select count(*) from user_objects;
  COUNT(*)
----------
      900
     

SQL> !lsnrctl status|grep PDBORAVB
Service "PDBORAVB" has 1 instance(s).


The datapump import into Oracle 12C



The ORAVB user/schema has already been created in the pluggable database.

NOTE: Make sure a PDBORAVB service entry exists in tnsnames.ora (see the sketch below).
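A minimal tnsnames.ora entry for the PDB service might look like the following (the host name and port are assumptions for this environment – adjust them to your listener):

PDBORAVB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle12c)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDBORAVB)
    )
  )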

Here we are using the PDBORAVB service for the impdp: a single container instance can host many pluggable databases, and the same schema name can exist in several of them, so we must connect through the service of the specific PDB.

Note: No schema mapping or tablespace mapping is needed here, because we are importing the schema into a schema of the same name in Oracle 12c.

impdp ORAVB/password@PDBORAVB DIRECTORY=ORAVB_DIR dumpfile=ORAVBexport.dmp schemas=ORAVB logfile=11g_to_12c_ORAVB_impdp.log

Import: Release 12.1.0.1.0 - Production on Tue Aug 14 16:01:25 2018

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "ORAVB"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "ORAVB"."SYS_IMPORT_SCHEMA_01":  ORAVB/********@PDBORAVB DIRECTORY=ORAVB_DIR dumpfile=ORAVBexport.dmp schemas=ORAVB logfile=11g_to_12c_ORAVB_impdp.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"ORAVB" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "ORAVB"."IMP_DATA"                       3.418 GB 1167153 rows
. . imported "ORAVB"."SCHOOL_DATA"                    239.8 MB 2891749 rows
......
..................... and so on ...................


[oracle@oracle12c ORAVB_DIR]$ sqlplus ORAVB/password@PDBORAVB

SQL*Plus: Release 12.1.0.1.0 Production on Tue Aug 14 16:11:25 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Tue Aug 14 16:13:25 2018 -09:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "ORAVB"

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDBORAVB                        READ WRITE NO

SQL> select count(*) from user_objects;

  COUNT(*)
----------
      900

SQL>

Create disk & add asm disk




!! How to create ASM disks and add a disk to an ASM disk group !!



1. Check the disks (LUNs)

[root@oravb ~]# hostname
oravb.vbtech.com
[root@oravb ~]# fdisk -l

Disk /dev/sdc: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@oravb ~]#


2. Create partitions on /dev/sdc

        [root@oravb ~]# fdisk /dev/sdc
        Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
        Building a new DOS disklabel with disk identifier 0x60993f20.
        Changes will remain in memory only, until you decide to write them.
        After that, of course, the previous content won't be recoverable.
       
        Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
       
        WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
                switch off the mode (command 'c') and change display units to
                sectors (command 'u').
       
        Command (m for help): n
        Command action
        e   extended
        p   primary partition (1-4)
        p
        Partition number (1-4): 1
        First cylinder (1-5221, default 1):
        Using default value 1
        Last cylinder, +cylinders or +size{K,M,G} (1-5221, default 5221): 1000
       
        Command (m for help): n
        Command action
        e   extended
        p   primary partition (1-4)
        p
        Partition number (1-4): 2
        First cylinder (1001-5221, default 1001):
        Using default value 1001
        Last cylinder, +cylinders or +size{K,M,G} (1001-5221, default 5221): 2000
       
        Command (m for help): n
        Command action
        e   extended
        p   primary partition (1-4)
        p
        Partition number (1-4): 3
        First cylinder (2001-5221, default 2001):
        Using default value 2001
        Last cylinder, +cylinders or +size{K,M,G} (2001-5221, default 5221): 3000
       
        Command (m for help): n
        Command action
        e   extended
        p   primary partition (1-4)
        p
        Selected partition 4
       
        First cylinder (3001-5221, default 3001):
        Using default value 3001
        Last cylinder, +cylinders or +size{K,M,G} (3001-5221, default 5221):
        Using default value 5221
       
        Command (m for help): p
       
        Disk /dev/sdc: 42.9 GB, 42949672960 bytes
        255 heads, 63 sectors/track, 5221 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x60993f20
       
        Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1        1000     8032468+  83  Linux
        /dev/sdc2            1001        2000     8032500   83  Linux
        /dev/sdc3            2001        3000     8032500   83  Linux
        /dev/sdc4            3001        5221    17840182+  83  Linux
       
        Command (m for help): w
        The partition table has been altered!
       
        Calling ioctl() to re-read partition table.
        Syncing disks.
        [root@oravb ~]# partprobe
        Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
        Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.
        [root@oravb ~]#

       
       
3. CREATE ASM DISKS FROM THE OS PARTITIONS :

            [root@oravb ~]# oracleasm createdisk  test1 /dev/sdc1
            Writing disk header: done
            Instantiating disk: done
            [root@oravb ~]# oracleasm createdisk  test2 /dev/sdc2
            Writing disk header: done
            Instantiating disk: done
            [root@oravb ~]# oracleasm createdisk  test3 /dev/sdc3
            Writing disk header: done
            Instantiating disk: done
            [root@oravb ~]# oracleasm createdisk  test4 /dev/sdc4
            Writing disk header: done
            Instantiating disk: done
           
            Check disks :
           
            [root@oravb ~]# oracleasm listdisks
            ACFS1
            ACFS2
            DATA
            OCR
            TEST1
            TEST2
            TEST3
            TEST4
           
            Check the disks at the OS level
           
            [root@oravb ~]# ls -altr /dev/oracleasm/disks
            total 0
            drwxr-xr-x. 4 root   root     0 Jul  5 22:52 ..
            drwxr-xr-x. 1 root   root     0 Jul  5 22:52 .
            brw-rw----. 1 oracle dba  8, 33 Jul  5 23:01 TEST1
            brw-rw----. 1 oracle dba  8, 34 Jul  5 23:01 TEST2
            brw-rw----. 1 oracle dba  8, 35 Jul  5 23:01 TEST3
            brw-rw----. 1 oracle dba  8, 36 Jul  5 23:01 TEST4
            brw-rw----. 1 oracle dba  8, 17 Jul  5 23:01 OCR
            brw-rw----. 1 oracle dba  8, 19 Jul  5 23:01 ACFS2
            brw-rw----. 1 oracle dba  8, 18 Jul  5 23:01 ACFS1
            brw-rw----. 1 oracle dba  8, 20 Jul  5 23:01 DATA
            [root@oravb ~]#
           
           
            CHECK FROM DB LEVEL :
           
            SQL> select disk_number, header_status, mode_status, path, voting_file
                    from v$asm_disk;  2

            DISK_NUMBER HEADER_STATU MODE_ST PATH                                     V
            ----------- ------------ ------- ---------------------------------------- -
                    0 PROVISIONED  ONLINE  /dev/oracleasm/disks/TEST3               N
                    1 PROVISIONED  ONLINE  /dev/oracleasm/disks/TEST2               N
                    2 PROVISIONED  ONLINE  /dev/oracleasm/disks/TEST4               N
                    3 PROVISIONED  ONLINE  /dev/oracleasm/disks/TEST1               N
           
                       

4. CREATE ASM DISKGROUP :

       
        CREATE DISKGROUP TEST EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/TEST1';


        Check if disk added or not :
       
        SQL>col PATH for a40
        set lines 150
        set pages 100
        col NAME for a20
        col ASM_GROUP for a10

        select adg.name ASM_GROUP, ad.disk_number, ad.name,ad.header_status, ad.mode_status,ad.REDUNDANCY, ad.path, ad.voting_file
        from v$asm_diskgroup adg, v$asm_disk ad where   adg.group_number = ad.group_number and adg.name='TEST' order by adg.name,ad.disk_number;
        SQL> SQL> SQL> SQL> SQL> SQL>   2
        ASM_GROUP  DISK_NUMBER NAME                 HEADER_STATU MODE_ST REDUNDA PATH                                     V
        ---------- ----------- -------------------- ------------ ------- ------- ---------------------------------------- -
        TEST                 0 TEST_0000            MEMBER       ONLINE  UNKNOWN /dev/oracleasm/disks/TEST1               N



       
       
5. Add a disk to the disk group :

        Check the OS disk's header_status; if it is 'PROVISIONED' (or 'CANDIDATE'), you can proceed. The possible header_status values are (see the query sketch below):

        --where header_status = 'FORMER'      ---- the disk was previously part of a disk group
        --where header_status = 'MEMBER'      ---- the disk is currently part of a disk group
        --where header_status = 'CANDIDATE'   ---- a fresh disk that is not part of any disk group
        --where header_status = 'PROVISIONED' ---- the disk is not part of a disk group and may be added to a disk group with the ALTER DISKGROUP statement
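
        A quick way to list the disks that are eligible to be added (a small sketch, run in the ASM instance):

        SQL> SELECT path, header_status, mode_status
               FROM v$asm_disk
              WHERE header_status IN ('CANDIDATE', 'PROVISIONED', 'FORMER');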
   
   
       
        SQL> ALTER DISKGROUP TEST ADD DISK '/dev/oracleasm/disks/TEST2';

                Diskgroup altered.

        SQL>

        ASM_GROUP  DISK_NUMBER NAME                 HEADER_STATU MODE_ST REDUNDA PATH                                     V
        ---------- ----------- -------------------- ------------ ------- ------- ---------------------------------------- -
        TEST                 0 TEST_0000            MEMBER       ONLINE  UNKNOWN /dev/oracleasm/disks/TEST1               N
        TEST                 1 TEST_0001            MEMBER       ONLINE  UNKNOWN /dev/oracleasm/disks/TEST2               N
       
        SQL>

       
6 . CHECK THE REBALANCING STATE :


    SQL> select operation, state,power, actual, sofar, est_work, est_minutes from v$asm_operation;

            no rows selected
    SQL>

   
    CHANGE THE REBALANCE POWER LIMIT :
   
        SQL> ALTER DISKGROUP TEST REBALANCE POWER 4;

            Diskgroup altered.
           
            SQL> select operation, state,power, actual, sofar, est_work, est_minutes from v$asm_operation;
           
            OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK EST_MINUTES
            ----- ---- ---------- ---------- ---------- ---------- -----------
            REBAL REAP          4          4          0          0           0
            REBAL DONE          4          4          0          0           0
            REBAL DONE          4          4          0          0           0
           
            SQL>
           

7 . DROP A DISK FROM THE DISKGROUP :

        Note: a disk is dropped using its ASM disk name (the NAME column in V$ASM_DISK, e.g. TEST_0001), not its OS path.

    SQL> ALTER DISKGROUP TEST  DROP  DISK 'TEST_0001' ;

    Diskgroup altered.

    SQL>
   
    SQL> col PATH for a40
        set lines 150
        set pages 100
        col NAME for a20
        col ASM_GROUP for a10

        select adg.name ASM_GROUP, ad.disk_number, ad.name,ad.header_status, ad.mode_status,ad.REDUNDANCY, ad.path, ad.voting_file
        from v$asm_diskgroup adg, v$asm_disk ad where   adg.group_number = ad.group_number and adg.name='TEST' order by adg.name,ad.disk_number;

SQL> SQL> SQL> SQL> SQL> SQL>   2
ASM_GROUP  DISK_NUMBER NAME                 HEADER_STATU MODE_ST REDUNDA PATH                                     V
---------- ----------- -------------------- ------------ ------- ------- ---------------------------------------- -
TEST                 0 TEST_0000            MEMBER       ONLINE  UNKNOWN /dev/oracleasm/disks/TEST1               N
TEST                 2 TEST_0002            MEMBER       ONLINE  UNKNOWN /dev/oracleasm/disks/TEST3               N
TEST                 3 TEST_0003            MEMBER       ONLINE  UNKNOWN /dev/oracleasm/disks/TEST4               N

SQL>

   
    SQL> ALTER DISKGROUP TEST REBALANCE POWER 4;

        Diskgroup altered.

        SQL>  select operation, state,power, actual, sofar, est_work, est_minutes from v$asm_operation;
       
        OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK EST_MINUTES
        ----- ---- ---------- ---------- ---------- ---------- -----------
        REBAL REAP          4          4          0          0           0
        REBAL DONE          4          4          0          0           0
        REBAL DONE          4          4          0          0           0
       
        SQL>
       
       
       

   

Table 4-2 Mirroring options for Oracle ASM disk group types

Disk Group Type        Supported Mirroring Levels                 Default Mirroring Level
-------------------    ---------------------------------------    -----------------------
External redundancy    Unprotected (none)                         Unprotected
Normal redundancy      Two-way, three-way, unprotected (none)     Two-way
High redundancy        Three-way                                  Three-way
Flex redundancy        Two-way, three-way, unprotected (none)     Two-way (newly-created)
Extended redundancy    Two-way, three-way, unprotected (none)     Two-way
For normal and high disk group types, the redundancy level controls how many disk failures are tolerated without dismounting the disk group or losing data. Each file is allocated based on its own redundancy, but the default comes from the disk group.
For the flex group type, the number of failures tolerated before dismount depends on the number of failure groups. For five or more failure groups, two disk failures are tolerated. For three or four failure groups, one disk failure is tolerated.
For the extended disk group type, each site is similar to a flex disk group. If the site has five or more failure groups, two disk failures within a site can be tolerated before the site becomes compromised. If the site has three or four failure groups, the site can tolerate one disk failure before it is compromised. When two sites are compromised, the disk group dismounts. An extended disk group requires a minimum of three failure groups for each data site.
For flex and extended disk groups, mirroring describes the availability of the files within a disk group, not the disk group itself. For example: If a file is unprotected in a flex disk group that has five failure groups, then after one failure the disk group is still mounted, but the file becomes unavailable.
The redundancy levels are:
  • External redundancy
    Oracle ASM does not provide mirroring redundancy and relies on the storage system to provide RAID functionality. Any write error causes a forced dismount of the disk group. All disks must be located to successfully mount the disk group.
  • Normal redundancy
    Oracle ASM provides two-way mirroring by default, which means that all files are mirrored so that there are two copies of every extent. A loss of one Oracle ASM disk is tolerated. You can optionally choose three-way or unprotected mirroring.
    A file specified with HIGH redundancy (three-way mirroring) in a NORMAL redundancy disk group provides additional protection from a bad disk sector in one disk, plus the failure of another disk. However, this scenario does not protect against the failure of two disks.
  • High redundancy
    Oracle ASM provides three-way (triple) mirroring by default. A loss of two Oracle ASM disks in different failure groups is tolerated.
  • Flex redundancy
    Oracle ASM provides two-way mirroring by default for newly-created flex disk groups. For migrated flex disk groups, the default values are obtained from the template values in the normal or high redundancy disk groups before migration. For migration from normal redundancy, if the template defaults were not changed, then the flex defaults are two-way mirroring. For migration from high redundancy, if the template defaults were not changed, then the flex defaults are three-way mirroring.
  • Extended redundancy
    Oracle ASM provides two-way mirroring by default. The redundancy setting describes redundancy within a data site. For example: If there is a two-way mirrored file in a two-data-site extended disk group, then there are four copies of the file, two in each data site.
Oracle ASM file groups in a flex or extended disk group can have different redundancy levels.
If there are not enough online failure groups to satisfy the file mirroring (redundancy attribute value) specified in the disk group file type template, Oracle ASM allocates as many mirrors copies as possible and subsequently allocates the remaining mirrors when sufficient online failure groups are available.
Failure groups enable the mirroring of metadata and user data. System reliability can diminish if your environment has an insufficient number of failure groups.
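As a hedged illustration of these redundancy types (the disk group name, disk paths and compatibility values below are placeholders for a 12.2 environment, not commands taken from this cluster), a flex redundancy disk group could be created like this:

-- a flex disk group needs at least three failure groups and compatible.* set to 12.2 or higher
SQL> CREATE DISKGROUP FLEXDG FLEX REDUNDANCY
       DISK '/dev/oracleasm/disks/FLEX1',
            '/dev/oracleasm/disks/FLEX2',
            '/dev/oracleasm/disks/FLEX3'
       ATTRIBUTE 'compatible.asm'   = '12.2',
                 'compatible.rdbms' = '12.2';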