OHASD error : When cleanup is not done properly

!! OHASD error : When cleanup is not done properly !!




[root@uscaav ~]#
[root@uscaav ~]# /etc/init.d/ohasd start
Starting ohasd:

Waiting 10 minutes for filesystem containing /u01/app/oracle/software/bin/crsctl.

Waiting 9 minutes for filesystem containing /u01/app/oracle/software/bin/crsctl.
^C



Solution:

1. Check the CRS home set in /etc/init.d/ohasd and change it to match the current Grid home (a sketch of the check follows the script listing below):


[root@uscaav ~]# cat /etc/init.d/ohasd
#!/bin/sh
#
# Copyright (c) 2001, 2013, Oracle and/or its affiliates. All rights reserved.
#
# ohasd.sbs  - Control script for the Oracle HA Services daemon
# This script is invoked by the rc system.
#
# Note:
#   For security reason,  all cli tools shipped with Clusterware should be
# executed as HAS_USER in init.ohasd and ohasd rc script for SIHA. (See bug
# 9216334 for more details)
#


######### Shell functions #########
# Function : Log message to syslog and console
log_console () {
  $ECHO "$*"
  $LOGMSG "$*"
}

tolower_host()
{
  #If the hostname is an IP address, let hostname
  #remain as IP address
  H1=`$HOSTN`
  len1=`$EXPRN "$H1" : '.*'`
  len2=`$EXPRN match $H1 '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*'`

  # Strip off domain name in case /bin/hostname returns
  # FQDN hostname
  if [ $len1 != $len2 ]; then
   H1=`$ECHO $H1 | $CUT -d'.' -f1`
  fi

  $ECHO $H1 | $TR '[:upper:]' '[:lower:]'
}

# Invoke crsctl as root in case of clusterware, and HAS_USER in case of SIHA.
# Note: Argument with space might be problemactic (my_crsctl 'hello world')
my_crsctl()
{
  if [ oracle = "root" ]; then
    $CRSCTL $*
  else
    $SU oracle -c "$CRSCTL $*"
  fi
}
###################################

######### Instantiated Variables #########
ORA_CRS_HOME=/u01/app/grid/product/11.2.0.4
export ORA_CRS_HOME

# Definitions
HAS_USER=oracle
SCRBASE=/etc/oracle/scls_scr
PROG="ohasd"

#limits
CRS_LIMIT_CORE=unlimited
CRS_LIMIT_MEMLOCK=unlimited
CRS_LIMIT_OPENFILE=65536
##########################################
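A minimal sketch of locating and correcting the home in the rc script (the old/new paths below are placeholders; use the Grid home that is actually installed on the server, and keep a backup of the file):

[root@uscaav ~]# grep -n 'ORA_CRS_HOME' /etc/init.d/ohasd
[root@uscaav ~]# cp -p /etc/init.d/ohasd /etc/init.d/ohasd.bak
[root@uscaav ~]# sed -i 's|/old/grid/home|/current/grid/home|g' /etc/init.d/ohasd
[root@uscaav ~]# /etc/init.d/ohasd start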



SQL prompt




!! Setting SQL prompt in Oracle !!

The default prompt in SQL*Plus is SQL>, which does not provide any information such as who the user is or what the user is connected as. Prior to Oracle9i, we had to do elaborate coding to get this information into the SQL prompt; from Oracle 9.2.0 we can use SET SQLPROMPT along with SQL*Plus predefined variables.

Whenever SQL*Plus starts up, it looks for a file named glogin.sql under the directory $ORACLE_HOME/sqlplus/admin. If the file is found, it is read and the statements it contains are executed. This allows settings to be stored across SQL*Plus sessions. From Oracle 10g, after reading glogin.sql, SQL*Plus also looks for a file named login.sql in the directory from where SQL*Plus was started and in the directory that the environment variable SQLPATH points to, and reads and executes it. Settings from login.sql take precedence over settings from glogin.sql.

In Oracle9i, whenever a user connects through SQL*Plus, Oracle executes only glogin.sql; from 10g Oracle executes login.sql as well. From Oracle 10g, the login.sql file is executed not only at SQL*Plus startup time but also at connect time, so the SQL prompt is refreshed after a CONNECT command.

To set the SQL prompt permanently, update $ORACLE_HOME/sqlplus/admin/glogin.sql or login.sql (a sample entry follows the variable list below). The syntax is:

SQLP[ROMPT] {SQL>|text}

TEXT can include predefined substitution variables, which are prefixed with an underscore:


_connect_identifier ======>     display connection identifier.
_date               ======>     display date.
_editor             ======>     display editor name used by the EDIT command.
_o_version          ======>     display Oracle version.
_o_release          ======>     display Oracle release.
_privilege          ======>     display privilege such as SYSDBA, SYSOPER, SYSASM
_sqlplus_release    ======>     display SQL*PLUS release.
_user               ======>     display current user name.
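For example, a minimal permanent setting in glogin.sql (the prompt string below is only an illustration; combine whichever variables you prefer) could be:

-- $ORACLE_HOME/sqlplus/admin/glogin.sql
set sqlprompt "_user'@'_connect_identifier> "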



The variable _CONNECT_IDENTIFIER was introduced in SQL*Plus 9.2 and _DATE, _PRIVILEGE and _USER were introduced in SQL*Plus 10.1.

_USER
The variable _USER contains the current user name given by SHOW USER. If SQL*Plus is not connected, the variable is defined as an empty string.

SQL> set sqlprompt "_user>"

The SQL*Plus prompt will show:
SYSTEM>


_PRIVILEGE
When SQL*Plus is connected as a privileged user, the variable _PRIVILEGE contains the connection privilege "AS SYSDBA", "AS SYSOPER" or "AS SYSASM". If SQL*Plus is connected as a normal user, the variable is defined as an empty string.
SQL> set sqlprompt "_user _privilege>"

The SQL*Plus prompt will show:
SYS AS SYSDBA>
ASMADM AS SYSASM>

_CONNECT_IDENTIFIER
The variable _CONNECT_IDENTIFIER contains the connection identifier used to start SQL*Plus. For example, if the SQL*Plus connection string is "hr/my_password@MYSID" then the variable contains MYSID. If you use a complete Oracle Net connection string like "hr/my_password@(DESCRIPTION=(ADDRESS_LIST=...(SERVICE_NAME=MYSID.MYDOMAIN)))" then _CONNECT_IDENTIFIER will be set to MYSID. If the connect identifier is not explicitly specified then _CONNECT_IDENTIFIER contains the default connect identifier Oracle uses for connection. For example, on UNIX it will contain the value in the environment variable ORACLE_SID or TWO_TASK. If SQL*Plus is not connected then the variable is defined as an empty string.

SQL> set sqlprompt "&_user@&_connect_identifier>"
or
SQL> set sqlprompt "_user'@'_connect_identifier>"
The SQL*Plus prompt will show:
SYS@east>
SYSTEM@east>

_DATE
The variable _DATE can either be dynamic, showing the current date, or it can be set to a fixed string. The date is formatted using the value of NLS_DATE_FORMAT and will show time information. By default a DEFINE or dereference using &_DATE will give the date at the time of use. _DATE can be UNDEFINED, or set to a fixed string with an explicit DEFINE command. Dynamic date behavior is re-enabled by defining _DATE to an empty string. If we want to display the current date:
SQL> set sqlprompt "_user _privilege 'on' _date>"
SYS AS SYSDBA on 29-AUG-18>


If we want to display the current date & time:
SYS AS SYSDBA on 29-AUG-18>alter session set nls_date_format = 'mm/dd/yyyy hh24:mi:ss';

Session altered.

SYS AS SYSDBA on 08/29/2018 13:13:45>

_EDITOR
The variable _EDITOR contains the external text editor executable name.
set sqlprompt _editor>

The SQL*Plus prompt will show:
SYS AS SYSDBA on 08/29/2018 13:13:45>set sqlprompt _editor>
ed>define _editor=vi
vi>define _editor=notepad
notepad>


_O_VERSION
The variable _O_VERSION contains a text string showing the database version and available options.
set sqlprompt _o_version>
notepad>set sqlprompt _o_version>
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options>



_O_RELEASE
The variable _O_RELEASE contains a string representation of the Oracle database version number. If Oracle database version is 11.1.0.7.0 then the variable contains "1101000700". The Oracle version may be different from the SQL*Plus version if you use Oracle Net to connect to a remote database.




SYS@east> set sqlprompt _o_release>
1102000400>

SQL> set sqlprompt "_user'@'_connect_identifier:SQL> "
SYS@ptp29:SQL>


To reset to the default SQL prompt,
SQL> set sqlprompt 'SQL>'








For a 12c SQL prompt that shows the current container and database name:

edit glogin.sql

define gname=idle
column global_name new_value gname
set heading off
set termout off
col global_name noprint
select upper(sys_context ('userenv', 'con_name') || '@' || sys_context('userenv', 'db_name')) global_name from dual;
set sqlprompt '&gname> '
set heading on
set termout on


Test it now.


Connect to the root container of the container database:

sqlplus / as sysdba

CDB$ROOT@CDB1> show con_name
CDB$ROOT

CDB$ROOT@CDB1> sho parameter db_name
db_name                              string      cdb1



Now connect to the pluggable database:

conn sys/*****@pdb1 as sysdba

SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDB1

PDB1@CDB1> sho con_name
PDB1

PDB1@CDB1> sho parameter db_name
db_name                              string      cdb1


SQL> ALTER SESSION SET container = pdb1;

Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDB1

SQL> sho con_name
PDB1

SQL> sho parameter db_name
db_name                              string      cdb1


Note: The SQL prompt will not change, and hence will not reflect the current PDB name, if ALTER SESSION SET CONTAINER = ... is used to switch the current container.









SELECT SYS_CONTEXT('userenv','authenticated_identity') authenticated_identity FROM dual;

SELECT SYS_CONTEXT('userenv','enterprise_identity') enterprise_identity FROM dual;

SELECT SYS_CONTEXT('userenv','authentication_method') authentication_method FROM dual;

SELECT SYS_CONTEXT('userenv','current_schema') CURRENT_SCHEMA FROM dual;
SELECT SYS_CONTEXT('userenv','current_schemaid') current_schema_id FROM dual;

SELECT SYS_CONTEXT('userenv','current_user') CURRENT_USER FROM dual;

SELECT SYS_CONTEXT('userenv','session_user') session_user FROM dual;

-- CURRENT_SQL returns the first 4K bytes of the current SQL that triggered the fine-grained auditing event.
SELECT SYS_CONTEXT('userenv','current_sql') current_sql FROM dual;

-- CURRENT_SQLn attributes return subsequent 4K-byte increments, where n can be an integer from 1 to 7, inclusive
SELECT SYS_CONTEXT('userenv','current_sql1') current_sql1 FROM dual;

-- The length of the current SQL statement that triggers fine-grained audit or row-level security (RLS) policy functions or event handlers
SELECT SYS_CONTEXT('userenv','current_sql_length') current_sql_length FROM dual;

-- role is one of the following: PRIMARY, PHYSICAL STANDBY, LOGICAL STANDBY, SNAPSHOT STANDBY.
SELECT SYS_CONTEXT('userenv','database_role') database_role FROM dual;

SELECT SYS_CONTEXT('userenv','db_name') db_name FROM dual;

SELECT SYS_CONTEXT('userenv','db_unique_name') DB_UNIQUE_NAME FROM dual;

-- Returns the source of a database link session.
SELECT SYS_CONTEXT('userenv','dblink_info') dlink_info FROM dual;


SELECT SYS_CONTEXT('userenv','identification_type') identification_type FROM dual;


-- Domain of the database as specified in the DB_DOMAIN initialization parameter
SELECT SYS_CONTEXT('userenv','db_domain') db_domain FROM dual;

SELECT SYS_CONTEXT('userenv','sid') SID FROM dual;

SELECT SYS_CONTEXT('userenv','terminal') terminal FROM dual;


SELECT SYS_CONTEXT('userenv','instance') INSTANCE FROM dual;

SELECT SYS_CONTEXT('userenv','instance_name') instance_name FROM dual;

SELECT SYS_CONTEXT('userenv','ip_address') ip_address FROM dual;

SELECT SYS_CONTEXT('userenv','isdba') isdba FROM dual;

SELECT SYS_CONTEXT('userenv','language') language FROM dual;

SELECT SYS_CONTEXT('userenv','cdb_name') cdb FROM dual;
SELECT SYS_CONTEXT('userenv','con_id') CON_ID FROM dual;
SELECT SYS_CONTEXT('userenv','con_name') con_name FROM dual;
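Several of these attributes can, of course, be pulled back in a single statement; a small convenience example (the selection of attributes is arbitrary):

SELECT SYS_CONTEXT('userenv','db_name')        db_name,
       SYS_CONTEXT('userenv','instance_name')  inst_name,
       SYS_CONTEXT('userenv','con_name')       con_name,
       SYS_CONTEXT('userenv','session_user')   sess_user,
       SYS_CONTEXT('userenv','ip_address')     ip_addr
  FROM dual;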


GIMR 12C

!!  Grid Infrastructure Management Repository (GIMR) !!

Every Oracle Standalone Cluster and Oracle Domain Services Cluster contains a Grid Infrastructure Management Repository (GIMR), or the Management Database (MGMTDB).

The Grid Infrastructure Management Repository (GIMR) is a multitenant database with a pluggable database (PDB) for the GIMR of each cluster. The GIMR stores the following information about the cluster:

    Real time performance data the Cluster Health Monitor collects

    Fault, diagnosis, and metric data the Cluster Health Advisor collects

    Cluster-wide events about all resources that Oracle Clusterware collects

    CPU architecture data for Quality of Service Management (QoS)

    Metadata required for Rapid Home Provisioning

The Oracle Standalone Cluster locally hosts the GIMR on an Oracle ASM disk group; this GIMR is a multitenant database with a single pluggable database (PDB).

The global GIMR runs in an Oracle Domain Services Cluster. The Oracle Domain Services Cluster locally hosts the GIMR in a separate Oracle ASM disk group. Client clusters, such as an Oracle Member Cluster for Database, use the remote GIMR located on the Domain Services Cluster. For two-node or four-node clusters, hosting the GIMR for a cluster on a remote cluster reduces the overhead of running an extra infrastructure repository on a cluster. The GIMR for an Oracle Domain Services Cluster is a multitenant database with one PDB, plus an additional PDB for each member cluster that is added.

When you configure an Oracle Domain Services Cluster, the installer prompts to configure a separate Oracle ASM disk group for the GIMR, with the default name as MGMT.
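On an existing 12c cluster, a quick way to see where the GIMR is currently hosted and where its repository files live is to use the standard GI tools (a generic sketch; run as the Grid user):

$ srvctl status mgmtdb
$ oclumon manage -get reppath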



About Oracle Standalone Clusters
   

 An Oracle Standalone Cluster hosts all Oracle Grid Infrastructure services and Oracle ASM locally and requires direct access to shared storage.
           
Oracle Standalone Clusters contain two types of nodes arranged in a hub and spoke architecture: Hub Nodes and Leaf Nodes. The number of Hub Nodes in an Oracle Standalone Cluster can be as many as 64. The number of Leaf Nodes can be many more. Hub Nodes and Leaf Nodes can host different types of applications. Oracle Standalone Cluster Hub Nodes are tightly connected, and have direct access to shared storage. Leaf Nodes do not require direct access to shared storage. Hub Nodes can run in an Oracle Standalone Cluster configuration without having any Leaf Nodes as cluster member nodes, but Leaf Nodes must be members of a cluster with a pool of Hub Nodes. Shared storage is locally mounted on each of the Hub Nodes, with an Oracle ASM instance available to all Hub Nodes.
           
Oracle Standalone Clusters host Grid Infrastructure Management Repository (GIMR) locally. The GIMR is a multitenant database, which stores information about the cluster. This information includes the real time performance data the Cluster Health Monitor collects, and includes metadata required for Rapid Home Provisioning.
           
When you deploy an Oracle Standalone Cluster, you can also choose to configure it as an Oracle Extended cluster. An Oracle Extended Cluster consists of nodes that are located in multiple locations or sites.
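On a running cluster, the configured role of each node (Hub or Leaf) and the overall cluster mode can be checked with crsctl; a minimal sketch:

$ crsctl get cluster mode status
$ crsctl get node role config -all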



   
   
About Oracle Cluster Domain and Oracle Domain Services Cluster
   
An Oracle Cluster Domain is a choice of deployment architecture for new clusters, introduced in Oracle Clusterware 12c Release 2.
           
Oracle Cluster Domain enables you to standardize, centralize, and optimize your Oracle Real Application Clusters (Oracle RAC) deployment for the private database cloud. Multiple cluster configurations are grouped under an Oracle Cluster Domain for management purposes and make use of shared services available within that Oracle Cluster Domain. The cluster configurations within that Oracle Cluster Domain include Oracle Domain Services Cluster and Oracle Member Clusters.
           
The Oracle Domain Services Cluster provides centralized services to other clusters within the Oracle Cluster Domain. These services include:
           
A centralized Grid Infrastructure Management Repository (housing the MGMTDB for each of the clusters within the Oracle Cluster Domain)
           
Trace File Analyzer (TFA) services, for targeted diagnostic data collection for Oracle Clusterware and Oracle Database
           
Consolidated Oracle ASM storage management service, including the use of Oracle ACFS.
           
An optional Rapid Home Provisioning (RHP) Service to install clusters, and provision, patch, and upgrade Oracle Grid Infrastructure and Oracle Database homes. When you configure the Oracle Domain Services Cluster, you can also choose to configure the Rapid Home Provisioning Server.
           
An Oracle Domain Services Cluster provides these centralized services to Oracle Member Clusters. Oracle Member Clusters use these services for centralized management and to reduce their local resource usage.
    


 





About Oracle Member Clusters
   
   
   
   
Oracle Member Clusters use centralized services from the Oracle Domain Services Cluster and can host databases or applications. Oracle Member Clusters can be of two types - Oracle Member Clusters for Oracle Databases or Oracle Member Clusters for applications.
       
Oracle Member Clusters do not need direct connectivity to shared disks. Using the shared Oracle ASM service, they can leverage network connectivity to the IO Service or the ACFS Remote Service to access a centrally managed pool of storage. To use shared Oracle ASM services from the Oracle Domain Services Cluster, the member cluster needs connectivity to the Oracle ASM networks of the Oracle Domain Services Cluster.
       
Oracle Member Clusters cannot provide services to other clusters. For example, you cannot configure and use a member cluster as a GNS server or Rapid Home Provisioning Server.
       
*. Oracle Member Cluster for Oracle Databases
       
An Oracle Member Cluster for Oracle Databases supports Oracle Real Application Clusters (Oracle RAC) or Oracle RAC One Node database instances. This cluster registers with the management repository service and uses the centralized TFA service. It can use additional services as needed. An Oracle Member Cluster for Oracle Databases can be configured with local Oracle ASM storage management or make use of the consolidated Oracle ASM storage management service offered by the Oracle Domain Services Cluster.
       
An Oracle Member Cluster for Oracle Database always uses remote Grid Infrastructure Management Repository (GIMR) from its Oracle Domain Services Cluster. For two-node or four-node clusters, hosting the GIMR on a remote cluster reduces the overhead of running an extra infrastructure repository on a cluster.
       
*. Oracle Member Cluster for Applications
       
Oracle Member Cluster for Applications hosts applications other than Oracle Database, as part of an Oracle Cluster Domain. The Oracle Member Cluster requires connectivity to Oracle Cluster Domain Services for centralized management and resource efficiency. The Oracle Member Cluster uses remote Oracle ASM storage, with any required shared storage provided through the Oracle ACFS Remote service. This cluster configuration enables high availability of any software application.
       
Unlike other cluster configurations that require public and private network interconnects, the Oracle Member Cluster for Application can be configured to use a single public network interface.
       

Proxy instance 12c

 !! Oracle 12c Cluster: ACFS Leverages Flex ASM !!




NOTE: As with any code, be sure to test these steps in a development environment before attempting to run them in production.


 

In Oracle Database Releases earlier than 12c, an Automatic Storage Management (ASM) instance runs on every node in the cluster, and ASM Cluster File System (ACFS) Service on a node connects to the local ASM instance running on the same host. If the ASM instance on a node were to fail, then the shared Disk Groups and hence ACFS file systems can no longer be accessed on that node.

With the introduction of Flex ASM in Oracle 12c, the hard dependency between ASM and its clients has been relaxed, and a smaller number of ASM instances need to run on a subset of servers in the cluster. Because there can now be nodes without an ASM instance, Flex ASM introduces a new instance type – the ASM proxy instance – which obtains metadata from an ASM instance on behalf of ADVM/ACFS. If an ASM instance is not available locally, the ASM proxy instance connects to another ASM instance over the network to fetch the metadata. Moreover, if the local ASM instance fails, the ASM proxy instance can fail over to a surviving ASM instance on a different server, resulting in uninterrupted availability of shared storage and ACFS file systems. Note that I/O to the underlying storage does not pass through the ASM instance; the proxy obtains the metadata and the I/O is issued directly against the disks. An ASM proxy instance must be running on every node that needs to provide ACFS services.


*. A new instance type has been introduced by Flex ASM – the ASM proxy instance, which gets metadata information from an ASM instance on behalf of ACFS.
*. If an ASM instance is not available locally, the ASM proxy instance connects to other ASM instances over the network, resulting in uninterrupted availability of shared storage and ACFS file systems.
*. This provides a much higher degree of flexibility, scalability and availability for file services to clients.
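Whether a cluster is actually running Flex ASM, and with how many ASM instances, can be checked with the commands below (a generic sketch; the cardinality change on the last line is only an example):

$ asmcmd showclustermode
$ srvctl config asm
$ srvctl modify asm -count 3     # example only: raise the ASM instance cardinality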







Figure 1 shows that, as an ASM instance is not running on Hub Node 2, the ADVM / ACFS services on Hub Node 2 utilize the ASM proxy instance (+APX2) to access the metadata from the remote ASM instance (+ASM1) running on Hub Node 1.




Figure 1.
Flex ASM can be configured on either a standard cluster or a Flex Cluster. When Flex ASM runs on a standard cluster, ASM services can run on a subset of cluster nodes, servicing clients across the cluster. When Flex ASM runs on a Flex Cluster, ASM services can run on a subset of Hub Nodes, servicing clients across all of the Hub Nodes in the Flex Cluster. Also, in a Flex Cluster, only Hub Nodes can host ACFS services, because only Hub Nodes have direct access to storage.
A Demonstration

To explore how ADVM / ACFS can leverage Flex ASM features, we will use an ASM Flex Cluster that is configured with two hub nodes (rac1 and rac2). Our first set of tasks is to create an ACFS file system resource.
Check Prerequisites

First, let’s verify that all kernel modules needed for ACFS and ADVM are loaded on both the nodes.

 [root@rac1 ~]# lsmod |grep oracle
oracleacfs           2837904  1
oracleadvm            342512  1
oracleoks             409560  2 oracleacfs,oracleadvm
oracleasm              84136  1

[root@rac2 ~]# lsmod |grep oracle
oracleacfs           2837904  1
oracleadvm            342512  1
oracleoks             409560  2 oracleacfs,oracleadvm
oracleasm              84136  1

The ASM Dynamic Volume Manager (ADVM) proxy instance is a special Oracle instance which enables ADVM to connect to Flex ASM and is required to run on the same node as ADVM and ACFS. For a volume device to be visible on a node, an ASM proxy instance must be running on that node. Let’s verify that an ASM proxy instance is running on both the nodes.

 [root@rac2 ~]# crsctl stat res ora.proxy_advm -t
----------------------------------------------------------------
Name           Target  State        Server                   State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.proxy_advm
               ONLINE  ONLINE       rac1                   STABLE
               ONLINE  ONLINE       rac2                   STABLE

Create the ADVM Volume

We'll next modify the compatible.advm attribute of the DATA ASM disk group to enable the newer ASM Dynamic Volume Manager (ADVM) features, and then create a new volume – VOL1 – within the DATA disk group with a volume size of 300 MB.

 [grid@rac1 root]$ asmcmd setattr -G DATA compatible.advm 12.2.0.0.0

[grid@rac1 root]$ asmcmd volcreate -G DATA -s 300m VOL1

[grid@rac1 root]$ asmcmd volinfo -G DATA VOL1

Diskgroup Name: DATA
         Volume Name: VOL1
         Volume Device: /dev/asm/vol1-114
         State: ENABLED
         Size (MB): 320
         Resize Unit (MB): 32
         Redundancy: MIRROR
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath:

Create ACFS File System and Corresponding Mount Point

Now let’s construct an ACFS file system on the newly-created volume VOL1 and also create a mount point on both the nodes to mount the ACFS file system.

[root@rac1 ~]# mkfs -t acfs /dev/asm/vol1-114
[root@rac1 ~]# mkdir -p /acfs1
[root@rac2 ~]# mkdir -p /acfs1

Configure Cloud File System Resource For ACFS File System

Now using the srvctl commands, we will create an Oracle Cloud File System resource on the volume device VOL1 with the mount point /acfs1:

[root@rac1 ~]# srvctl add filesystem -m /acfs1 -d /dev/asm/vol1-114

[root@rac1 ~]# srvctl status filesystem -d /dev/asm/vol1-114

ACFS file system /acfs1 is not mounted

[root@rac1 ~]# mount | grep vol1

[root@rac2 ~]# mount | grep vol1

We need to start the file system resource to mount the ACFS file system.

[root@rac1 ~]# srvctl start  filesystem -d /dev/asm/vol1-114

[root@rac1 ~]# srvctl status  filesystem -d /dev/asm/vol1-114

ACFS file system /acfs1 is mounted on nodes rac1,rac2

[root@rac1 ~]# mount | grep vol1

/dev/asm/vol1-114 on /acfs1 type acfs (rw)

[root@rac2 ~]# mount | grep vol1

/dev/asm/vol1-114 on /acfs1 type acfs (rw)

Verify The Cloud File System Resource

It can be verified that the new Cloud File System is indeed working properly as a small text file created on it from rac1 can be successfully accessed from rac2.

[root@rac1 ~]# echo "Test File on ACFS" > /acfs1/testfile.txt

[root@rac2 asm]#  cat /acfs1/testfile.txt

Test File on ACFS

Leveraging Flex ASM

To demonstrate how ACFS / ADVM leverages these Flex ASM capabilities, let us first have a look at the current state of various cluster resources in our two node cluster.
View Baseline Status of Various Cluster Resources

It can be seen that currently:

    An ASM instance is running on both the nodes
    DATA disk group, volume VOL1 on DATA diskgroup and ACFS file system on VOL1 are all online on both the nodes.

 [root@rac1 ~]# crsctl stat res ora.asm -t
----------------------------------------------------------------
Name           Target  State        Server         State details      
----------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1          STABLE
      1        ONLINE  ONLINE       rac2          STABLE
----------------------------------------------------------------

[root@rac1 ~]# crsctl stat res ora.DATA.dg -t
----------------------------------------------------------------
Name           Target  State        Server         State details      
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1         STABLE
               ONLINE  ONLINE       rac2         STABLE
----------------------------------------------------------------


[root@rac1 ~]# crsctl stat res ora.DATA.VOL1.advm -t
----------------------------------------------------------------
Name           Target  State        Server   State details      
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       rac1  Volume device /dev/a
                                            sm/vol1-114 is
                                            online,STABLE
               ONLINE  ONLINE       rac2  Volume device /dev/a
                                            sm/vol1-114 is
                                            online,STABLE


[root@rac1 ~]# crsctl stat res ora.data.vol1.acfs

NAME=ora.data.vol1.acfs
TYPE=ora.acfs.type
TARGET=ONLINE          , ONLINE
STATE=ONLINE on rac1, ONLINE on rac2

Confirm Flex ASM Proxy Instances

Let’s also verify that ASM proxy instances +APX1 running on rac1 and +APX2 running on rac2 are using ASM instances +ASM1 and +ASM2 running locally on their corresponding nodes.

SQL> col client_instance_name format a20
     col asm_host_name for a20

SELECT DISTINCT
    i.instance_name asm_instance_name,
    i.host_name asm_host_name,
    c.instance_name client_instance_name,
    c.status
  FROM gv$instance i, gv$asm_client c
 WHERE i.inst_id = c.inst_id;

ASM_INSTANCE_NAM ASM_HOST_NAME        CLIENT_INSTANCE_NAME STATUS
---------------- -------------------- -------------------- ------------
+ASM1            rac1.vbtech.com   +APX1                CONNECTED
+ASM1            rac1.vbtech.com   +ASM1                CONNECTED
+ASM2            rac2.vbtech.com   +APX2                CONNECTED

To see ACFS / ADVM leverage Flex ASM in action, we can perform a simple test: stop the ASM instance on node rac2 and observe that ACFS / ADVM services continue to run on rac2, with the ASM proxy instance satisfying metadata requests from a remote ASM instance – in this case, +ASM1 running on rac1.
Halt ASM Instance On rac2

Let’s halt ASM instance +ASM2 running on node rac2.

[root@rac1 ~]# srvctl stop asm -n rac2

PRCR-1014 : Failed to stop resource ora.asm
PRCR-1145 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATA.dg', but the force option was not specified

[root@rac1 ~]# srvctl stop asm -n rac2 -f

[root@rac1 ~]# srvctl status asm
ASM is running on rac1

Verify Availability of VOL1 On rac2

If we now check we’ll find that even though the +ASM2 instance and the DATA disk group are not available on rac2, volume VOL1 created on the DATA disk group is still mounted on rac2 because of Flex ASM. Node rac2 connects to the remote ASM instance +ASM1 running on rac1 using its ASM proxy instance +APX2 to access the metadata, and that keeps volume VOL1 online.

 [root@rac1 ~]# crsctl stat res ora.asm -t
----------------------------------------------------------------
Name           Target  State        Server         State details      
----------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1          STABLE
      2        ONLINE  OFFLINE      rac2          STABLE
----------------------------------------------------------------

[root@rac1 ~]# crsctl stat res ora.DATA.dg -t
----------------------------------------------------------------
Name           Target  State        Server         State details      
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1         STABLE
               OFFLINE OFFLINE      rac2         STABLE
----------------------------------------------------------------


[root@rac1 ~]# crsctl stat res ora.DATA.VOL1.advm -t
----------------------------------------------------------------
Name           Target  State        Server   State details      
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       rac1  Volume device /dev/a
                                            sm/vol1-114 is
                                            online,STABLE
               ONLINE  ONLINE       rac2  Volume device /dev/a
                                            sm/vol1-114 is
                                            online,STABLE

Verify Availability of Cloud File System acfs1 On rac2

We can also verify that the file on cloud file system acfs1 is visible from both rac1 and rac2.

[root@rac1 acfs1]# cat /acfs1/testfile.txt

Test File on ACFS

[root@rac2 asm]# cat /acfs1/testfile.txt

Test File on ACFS

Confirm Flex ASM Proxy Instances

We can also verify that, since ASM instance +ASM2 on rac2 is no longer available, ACFS / ADVM leverages Flex ASM: the ASM proxy instance +APX2 running on rac2 now accesses the metadata from the remote ASM instance +ASM1 running on rac1 in order to serve volume VOL1 on the DATA disk group.

SQL> col client_instance_name format a20
     col asm_host_name for a20

SELECT DISTINCT
    i.instance_name asm_instance_name,
    i.host_name asm_host_name,
    c.instance_name client_instance_name,
    c.status
  FROM gv$instance i, gv$asm_client c
 WHERE i.inst_id = c.inst_id;


ASM_INSTANCE_NAM ASM_HOST_NAME        CLIENT_INSTANCE_NAME STATUS
---------------- -------------------- -------------------- ------------
+ASM1            rac1.vbtech.com   +APX1                CONNECTED
+ASM1            rac1.vbtech.com   +ASM1                CONNECTED
+ASM1            rac1.vbtech.com   +APX2                CONNECTED

   

TFA

!! TFA - Trace File Analyzer Collector !!



Trace File Analyzer Collector (tfactl commands)


./tfactl -h



./tfactl diagcollect -h

./tfactl diagcollect

./tfactl diagcollect -crs

./tfactl diagcollect -asm -tag ASM_COLLECTIONS

./tfactl diagcollect -since 1h

./tfactl diagcollect -all -since 4h

./tfactl diagcollect -database hrdb,findb -since 1d

./tfactl diagcollect -crs -os -node node2,node3 -since 8h

./tfactl diagcollect -asm -node node7 -from Aug/11/2017 -to "Aug/11/2017 22:10:00"

./tfactl diagcollect -for Aug/11/2017

./tfactl diagcollect -for "Aug/11/2017 07:30:00"



./tfactl print -h

./tfactl print config

./tfactl print status

./tfactl print hosts

./tfactl print actions

./tfactl print repository



./tfactl analyze

./tfactl analyze -since 4h

./tfactl analyze -comp os -since 1d

./tfactl analyze -comp osw -since 5h

./tfactl analyze -search "ORA-" -since 3d

./tfactl analyze -search "/Starting/c"

./tfactl analyze -comp os -for "Aug/11/2017 02" -search "." 

./tfactl analyze -since 2h -type generic



./tfactl access -h

CRS-4639 CRS-4124 Oracle High Availability Services startup failed in 11gR2 RAC

!! CRS-4639 CRS-4124 Oracle High Availability Services startup failed in 11gR2 RAC !!
 
NOTE: As with any code, be sure to test these steps in a development environment before attempting to run them in production.


Today I faced an issue where the cluster OHASD service did not start automatically when the server started. When I then tried to start the cluster manually, I received the "CRS-4124: Oracle High Availability Services startup failed" error.

1. All oracleasm disks were verified and are available:
     /dev/oracleasm/disks
     oracleasm listdisks




[oracle@red1 ~]$ cd /u01/app/11.2.0/grid_1/bin
[oracle@red1 bin]$ ./crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
[oracle@red1 bin]$
[oracle@red1 bin]$ su - root
Password:
[root@red1 ~]# cd /u01/app/11.2.0/grid_1/bin
[root@red1 bin]# ./crsctl start crs
CRS-4124: Oracle High Availability Services startup failed
CRS-4000: Command Start failed, or completed with errors
[root@red1 bin]#


Then I verified whether Oracle High Availability Services autostart is configured:

[root@red1 bin]# ./crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@red1 bin]# 





Next, start the init.ohasd daemon manually:

[root@red1 ~]# nohup /etc/init.d/init.ohasd run &
[1] 765
[root@red1 ~]#

[root@red1 bin]# ./crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@red1 bin]#
[root@red1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online  



If the cluster resources do not start properly, run the command below:


crsctl start resource -all 
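To confirm that the lower-stack (init) resources came up after such a manual start, they can be listed from the Grid home (a generic sketch):

[root@red1 bin]# ./crsctl stat res -t -init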




Upgrade RMAN Catalog

!! How to Upgrade RMAN Catalog in 11gR2 after applying a PSU Patch !!

NOTE: As with any code, be sure to test these steps in a development environment before attempting to run them in production.




This post discusses how to upgrade the RMAN catalog in 11gR2 after applying a PSU patch.

Note: This is not a database upgrade; this post covers the RMAN catalog upgrade for a database after a PSU patch has been applied on the database server.

“The recovery catalog schema version must be greater than or equal to the RMAN client version”.


For your 11.2 database, if you are running RMAN from the same ORACLE_HOME, your RMAN client is 11.2, so the catalog schema must be at least 11.2. Although the catalog schema can live in an 11.1 database, it is preferable to keep it in an 11.2 database. The best practice is to keep the RMAN catalog database at the highest version of all the target databases, and the catalog schema at the highest RMAN client version.

-- Connect to the target database on the server

oracle > rman target /
Recovery Manager: Release 11.2.0.2.0 - Production on Sat Aug 18 02:09:16 CDT 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: BHUVAN (DBID=3984568789)

-- Connect to the Recover catalog database

RMAN> connect catalog rman/xxxxx@RMAN_CATALOG
connected to recovery catalog database

-- upgrade the catalog by connecting to the target database and catalog database.

RMAN> UPGRADE CATALOG;

recovery catalog owner is RMAN
enter UPGRADE CATALOG command again to confirm catalog upgrade


RMAN> UPGRADE CATALOG;

recovery catalog upgraded to version 11.02.00.02
DBMS_RCVMAN package upgraded to version 11.02.00.02
DBMS_RCVCAT package upgraded to version 11.02.00.02

RMAN>

To determine the current release of the catalog schema, you must run a SQL query.


oracle> sqlplus rman/xxxx@RMAN_CATALOG

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 18 02:09:16 CDT 2018

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> SELECT * FROM rcver;

VERSION
------------
11.02.00.02


Note:
1)  If multiple versions are listed, then the last row is the current version, and the rows before it are prior versions.
2) For releases 11.2 and later, the last two digits in the rcver output indicate patch level. For earlier releases, they are always zeros.
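After the catalog upgrade, it is also common practice to reconnect each target and resynchronize it with the catalog; a generic sketch (credentials and net alias are placeholders):

oracle > rman target / catalog rman/xxxxx@RMAN_CATALOG
RMAN> RESYNC CATALOG;
RMAN> EXIT;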

-MGMTDB -Management Repository

!! 12c feature: Using Management Repository (-MGMTDB) !!
NOTE: As with any code, be sure to test these steps in a development environment before attempting to run them in production.


Overview
*. The Management Repository is a single-instance database managed by Oracle Clusterware in 12c.
*. In 11g this data was stored in a Berkeley DB database; starting with Oracle Database 12c it is configured as an Oracle database instance.
*. If the option is selected during GI installation, the database is configured and managed by GI.
*. As it is a single-instance database, it is up and running on only one node of the cluster at a time.
*. As it is managed by GI, if the hosting node goes down the database is automatically failed over to another node.
*. The Management Database is the central repository for Cluster Health Monitor (aka CHM/OS, ora.crf) and other data in 12c.
*. The Management Database uses the same shared storage as the OCR/Voting File (voting disk group size > 5 GB: 3.2 GB+ is used by MGMTDB, of which 2 GB is for CHM).
*. If the Management Database is not selected during installation/upgrade in OUI, all features that depend on it (Cluster Health Monitor (CHM/OS), etc.) are disabled. Note: there is no supported procedure to enable the Management Database once the GI stack is configured.


How to start Management Database
The Management Database is managed by GI and should be up and running automatically at all times. In case it is down for some reason, the following srvctl commands can be used to start it:
Usage: srvctl start mgmtdb [-startoption <start_option>] [-node <node_name>]
Usage: srvctl start mgmtlsnr [-node <node_name>]


[oracle@RAC1]~% ps -ef|grep mdb_pmon
oracle    7580     1  0 04:57 ?        00:00:00 mdb_pmon_-MGMTDB


This is an Oracle single-instance database managed by Grid Infrastructure, and it fails over to a surviving node if the hosting node crashes. You can identify the current master node using the command below:

-bash-4.1$ oclumon manage -get MASTER

Master = RAC1


This DB instance can be managed using srvctl commands. The current master can also be identified using the status command:

$srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node RAC1







We can look at the mgmtdb configuration using:

$srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: /home/oragrid
Oracle user: oracle
Spfile: +OCR/_mgmtdb/spfile-MGMTDB.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: -MGMTDB
Type: Management


Replace config with start/stop to start or stop the database.
The datafiles for the repository database are stored in the same location as the OCR/Voting disk:

SQL> select file_name from dba_data_files union select member file_name from V$logfile;

FILE_NAME
------------------------------------------------------------
+OCR/_MGMTDB/DATAFILE/sysaux.258.819384615
+OCR/_MGMTDB/DATAFILE/sysgridhomedata.261.819384761
+OCR/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687
+OCR/_MGMTDB/DATAFILE/system.259.819384641
+OCR/_MGMTDB/DATAFILE/undotbs1.257.819384613
+OCR/_MGMTDB/ONLINELOG/group_1.263.819384803
+OCR/_MGMTDB/ONLINELOG/group_2.264.819384805
+OCR/_MGMTDB/ONLINELOG/group_3.265.819384807



We can verify the same using the oclumon command:

-bash-4.1$ oclumon manage -get reppath

CHM Repository Path = +OCR/_MGMTDB/DATAFILE/sysmgmtdata.260.819384687




Since this is stored in the same location as the Voting disk, if you have opted to configure the Management Database you will need a voting disk group of size > 5 GB (3.2 GB+ is used by MGMTDB). During GI installation I had tried adding a 2 GB voting disk, but it failed saying that it was of insufficient size. The error did not indicate that the space was needed for the Management Repository, but I now think this is because the repository shares the same location as the OCR/Voting disk.
The default (and minimum) size for the CHM repository is 2048 MB. We can increase the repository size by issuing the following command:

-bash-4.1$ oclumon manage -repos changerepossize 4000
The Cluster Health Monitor repository was successfully resized.The new retention is 266160 seconds.
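The current repository size and retention can be checked before and after such a change (a generic sketch):

-bash-4.1$ oclumon manage -get repsize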

Move the central Inventory

 !! Move the central Inventory (oraInventory) to another location !!


NOTE: As with any code, be sure to test these steps in a development environment before attempting to run them in production.



Solution On Unix:

Find the current location of the central inventory (normally $ORACLE_BASE/oraInventory):

For example:

find /u01/app/oracle -name oraInventory -print

 /u01/app/oracle/oraInventory



Open the oraInst.loc file in /etc and check the value of inventory_loc

cat /etc/oraInst.loc

inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall


Remark: The oraInst.loc file is simply a pointer to the location of the central inventory (oraInventory)



Copy the oraInventory directory to the destination directory

cp -Rp /u01/app/oracle/oraInventory /app/oracle



Edit the oraInst.loc file to point to the new location

For example:

vi /etc/oraInst.loc

inventory_loc=/app/oracle/oraInventory
inst_group=dba
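After editing oraInst.loc, it is worth confirming that the tools can still read the relocated inventory; one simple check (assuming OPatch is present under the Oracle home) is:

$ $ORACLE_HOME/OPatch/opatch lsinventory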
 

OSWatcher

!! Oracle Tool: setup OSWatcher !!


NOTE: As with any code, be sure to test these steps in a development environment before attempting to run them in production.



OSWatcher (Includes: [Video]) (Doc ID 301137.1)


/stage:(+ASM1)$ tar xvf oswbb4.0.tar
oswbb/
oswbb/htop.sh
oswbb/docs/
oswbb/docs/OSWbba_README.txt
oswbb/docs/OSW_bb_README.txt
oswbb/docs/OSW_Black_Box_Analyzer_Overview.pdf
oswbb/docs/OSWatcher_Black_Box_UG.pdf
oswbb/docs/OSWatcher_Black_Box_Analyzer_UG.pdf
oswbb/Exampleprivate.net
oswbb/stopOSWbb.sh
oswbb/iosub.sh
oswbb/profile/
oswbb/OSWatcherFM.sh
oswbb/mpsub.sh
oswbb/gif/
oswbb/pssub.sh
oswbb/oswnet.sh
oswbb/vmsub.sh
oswbb/oswlnxio.sh
oswbb/oswib.sh
oswbb/startOSWbb.sh
oswbb/oswsub.sh
oswbb/analysis/
oswbb/oswbba.jar
oswbb/locks/
oswbb/tmp/
oswbb/OSWatcher.sh
oswbb/topaix.sh
oswbb/tarupfiles.sh
oswbb/xtop.sh
oswbb/src/
oswbb/src/Thumbs.db
oswbb/src/OSW_profile.htm
oswbb/src/tombody.gif
oswbb/src/missing_graphic.gif
oswbb/src/coe_logo.gif
oswbb/src/watch.gif
oswbb/src/oswbba_input.txt
oswbb/oswrds.sh


/stage/oswbb:(+ASM1)$ nohup ./startOSWbb.sh SnapshotInterval ArchiveInterval &


With SnapshotInterval = 60 (seconds between OS data snapshots)

With ArchiveInterval = 10 (hours of archive data to retain)

/stage/oswbb:(+ASM1)$ nohup ./startOSWbb.sh 60 10 &



/stage/oswbb/archive:(+ASM1)$ pwd;file *
/stage/oswbb/archive
oswiostat:   directory
oswmeminfo:  directory
oswmpstat:   directory
oswnetstat:  directory
oswprvtnet:  directory
oswps:       directory
oswslabinfo: directory
oswtop:      directory
oswvmstat:   directory
/stage/oswbb/archive:(+ASM1)$




/stage/oswbb/archive:(+ASM1)$ ps -ef|grep OSW
oracle    8100     1  0 12:20 pts/3    00:00:00 /usr/bin/ksh ./OSWatcher.sh 60 10
oracle   10263  8100  0 12:21 pts/3    00:00:00 /usr/bin/ksh ./OSWatcherFM.sh 10
oracle   25568 21378  0 14:33 pts/3    00:00:00 grep OSW
/stage/oswbb/archive:(+ASM1)$
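When collection is no longer needed, OSWatcher can be stopped with the bundled script (visible in the tar listing above):

/stage/oswbb:(+ASM1)$ ./stopOSWbb.sh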

Moving schema (11g to 12c)


 !! Oracle 12C: Moving schema from Oracle 11G to 12C. !!

 

 

 

NOTE: As with any code, be sure to test these steps in a development environment before attempting to run them in production.


From the source server with Oracle 11g:
expdp ORAVB/password dumpfile=ORAVBexport.dmp directory=ORAVB_DIR schemas=ORAVB logfile=ORAVB.log


On the target database - Oracle 12c

Setting up the dump directory.

SQL> !mkdir /u01/backup/ORAVB_DIR

SQL> create or replace directory ORAVB_DIR as '/u01/backup/ORAVB_DIR';

Directory created.


SQL> GRANT read, write ON DIRECTORY ORAVB_DIR TO public;

SQL> SELECT directory_path FROM dba_directories WHERE directory_name = 'ORAVB_DIR';

DIRECTORY_PATH
----------------------------------------------------------------------------------------------------
/u01/backup/ORAVB_DIR


Verify that the directory is visible and that the pluggable database is open.


SQL> SELECT directory_path FROM dba_directories WHERE directory_name = 'ORAVB_DIR';

DIRECTORY_PATH
------------------------
/u01/backup/ORAVB_DIR

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBORCL                        MOUNTED
         4 PDBORAVB                       MOUNTED

SQL> col pdb_name format a20
SQL> col status format a20
SQL> select pdb_name, status from dba_pdbs;

PDB_NAME             STATUS
-------------------- --------------------
PDBORCL              NORMAL
PDB$SEED             NORMAL
PDBORAVB             NEW

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBORCL                        MOUNTED
         4 PDBORAVB                       MOUNTED

SQL> alter session set container=PDBORAVB;

Session altered.

The PDBORAVB is in MOUNTED mode. It needs to be opened.


SQL> alter session set container=PDBORAVB;

Session altered.

SQL> startup;

Pluggable Database opened.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDBORAVB                        READ WRITE NO
SQL>


SQL> select count(*) from user_objects;
  COUNT(*)
----------
      900
     

SQL> !lsnrctl status|grep PDBORAVB
Service "PDBORAVB" has 1 instance(s).


The Data Pump import into Oracle 12c



User/Schema already set up in advance:
ORAVB

NOTE: Make sure the PDBORAVB service entry exists in tnsnames.ora; a sample entry is shown below.
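A typical tnsnames.ora entry for the PDB service might look like the following (host and port are placeholders for your environment):

PDBORAVB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle12c)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PDBORAVB)
    )
  )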

Here we are using the PDBORAVB service for the impdp, since a single container instance can host many pluggable databases and the same schema name can exist in several of them.

Note: No schema or tablespace remapping is needed; we are moving the schema to a schema of the same name in Oracle 12c.

impdp ORAVB/password@PDBORAVB DIRECTORY=ORAVB_DIR dumpfile=ORAVBexport.dmp schemas=ORAVB logfile=11g_to_12c_ORAVB_impdp.log

Import: Release 12.1.0.1.0 - Production on Tue Aug 14 16:01:25 2018

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "ORAVB"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "ORAVB"."SYS_IMPORT_SCHEMA_01":  ORAVB/********@PDBORAVB DIRECTORY=ORAVB_DIR dumpfile=ORAVBexport.dmp schemas=ORAVB logfile=11g_to_12c_ORAVB_impdp.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"ORAVB" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "ORAVB"."IMP_DATA"                       3.418 GB 1167153 rows
. . imported "ORAVB"."SCHOOL_DATA"                    239.8 MB 2891749 rows
......
..................... and so on ...................


[oracle@oracle12c ORAVB_DIR]$ sqlplus ORAVB/password@PDBORAVB

SQL*Plus: Release 12.1.0.1.0 Production on Tue Aug 14 16:11:25 2018
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Tue Aug 14 16:13:25 2018 -09:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "ORAVB"

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         4 PDBORAVB                        READ WRITE NO

SQL> select count(*) from user_objects;

  COUNT(*)
----------
      900

SQL>