Create N Logical Partitions in Linux

Hi All,
With the help of the script below we can create multiple logical partitions (inside an extended partition).

Usage  : Mk_partion.sh <disk name>
   Ex. : Mk_partion.sh /dev/sdb
Params :
   1. "PARTITION_SIZE"   -- size of each logical partition
   2. "PARTITION_SIZE_P" -- size of each primary partition

Download: Mk_partion.sh
   
#----------------------------Start Mk_partion.sh-------------------------------------

#!/bin/bash
#-- +----------------------------------------------------------------------------+
#-- |                               By OraVR                                     |
#-- |                            info@gmail.com                                  |
#-- |                              www.oravr.com                                 |
#-- |----------------------------------------------------------------------------|
#-- | DATABASE : Oracle                                                          |
#-- | FILE     : Mk_partion.sh                                                   |
#-- | CLASS    : Storage                                                         |
#-- | PURPOSE  : Create multiple logical partitions                              |
#-- | NOTE     : As with any code, ensure to test this script in a development   |
#-- |            environment before attempting to run it in production.          |
#-- +----------------------------------------------------------------------------+
if [ $# -eq 0 ]
then
  echo "Usage: $0 <device>   (e.g. $0 /dev/sdb)"
  exit 1
fi
NUM_PARTITIONS=50          # total number of partitions to create
PARTITION_SIZE="+4096M"    # size of each logical partition
PARTITION_SIZE_P="+100M"   # size of each primary partition
SED_STRING="o"             # fdisk command: create a new empty DOS partition table
TAIL="p
w
q
"                          # fdisk commands: print the table, write changes, quit
NEW_LINE="
"
LETTER_n="n"               # fdisk command: add a new partition
EXTENDED_PART_NUM=4        # partition number at which the extended partition is created
TGTDEV=$1
SED_STRING="$SED_STRING$NEW_LINE"
for i in $(seq $NUM_PARTITIONS)
do
  # Partitions 1-3: small primary partitions
  if [ $i -lt $EXTENDED_PART_NUM ]
  then
    SED_STRING="$SED_STRING$LETTER_n$NEW_LINE$NEW_LINE$NEW_LINE$NEW_LINE$PARTITION_SIZE_P$NEW_LINE"
  fi
  # Partition 4: the extended partition (accepting fdisk defaults for its size),
  # immediately followed by the first logical partition
  if [ $i -eq $EXTENDED_PART_NUM ]
  then
    SED_STRING="$SED_STRING$LETTER_n$NEW_LINE$NEW_LINE$NEW_LINE$NEW_LINE"
    SED_STRING="$SED_STRING$LETTER_n$NEW_LINE$NEW_LINE$PARTITION_SIZE$NEW_LINE"
  fi
  # Partitions 5 and up: further logical partitions inside the extended partition
  if [ $i -gt $EXTENDED_PART_NUM ]
  then
    SED_STRING="$SED_STRING$LETTER_n$NEW_LINE$NEW_LINE$PARTITION_SIZE$NEW_LINE"
  fi
done
SED_STRING="$SED_STRING$TAIL"
# Strip leading whitespace from each here-document line and pipe the commands to fdisk
sed -e 's/\s*\([\+0-9a-zA-Z]*\).*/\1/' << EOF | fdisk ${TGTDEV}
  $SED_STRING
EOF
########## To wipe the partition table if anything goes wrong (DESTRUCTIVE) ##########
###### dd if=/dev/zero of=<device> bs=512 count=1 conv=notrunc ######

#---------------------------- End  Mk_partion.sh-------------------------------------
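
A minimal usage sketch, assuming a spare disk /dev/sdb (the device name is illustrative, and the script will destroy any existing partition table on it):

# Make the script executable and run it against the spare disk
chmod +x Mk_partion.sh
./Mk_partion.sh /dev/sdb      # creates 3 small primary partitions, an extended
                              # partition, and the remaining logical partitions
# Verify the result
lsblk /dev/sdb                # or: fdisk -l /dev/sdb
partprobe /dev/sdb            # ask the kernel to re-read the partition table if needed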

Enable DDL Logging

DDL logging is a very useful feature for monitoring changes to the database design.
You can monitor or audit all DDL changes by reviewing the log file.
Set enable_ddl_logging=true to enable DDL logging.
Oracle will then record all DDL statements in a log.xml file. This file has the same format and basic behavior as the alert log, but it contains only the DDL statements.
The DDL log file is created in the $ADR_HOME/log/ddl directory; it contains the DDL statements that are extracted from the alert log file.
Log file path:
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/log/ddl/log.xml
NOTE: The "Oracle Change Management Pack" license is required to use this feature;
the enable_ddl_logging parameter is licensed as part of the Change Management Pack.
The following DDL statements are written to the log:

•    ALTER/CREATE/DROP/TRUNCATE CLUSTER
•    ALTER/CREATE/DROP FUNCTION
•    ALTER/CREATE/DROP INDEX
•    ALTER/CREATE/DROP OUTLINE
•    ALTER/CREATE/DROP PACKAGE
•    ALTER/CREATE/DROP PACKAGE BODY
•    ALTER/CREATE/DROP PROCEDURE
•    ALTER/CREATE/DROP PROFILE
•    ALTER/CREATE/DROP SEQUENCE
•    CREATE/DROP SYNONYM
•    ALTER/CREATE/DROP/RENAME/TRUNCATE TABLE
•    ALTER/CREATE/DROP TRIGGER
•    ALTER/CREATE/DROP TYPE
•    ALTER/CREATE/DROP TYPE BODY
•    DROP USER
•    ALTER/CREATE/DROP VIEW

The following is an example of DDL logging:


VRTEST$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Tue Feb 11 04:41:39 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

VB_SQL>
VB_SQL> show parameter enable_ddl_logging
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_ddl_logging                   boolean     FALSE
VB_SQL> alter system set enable_ddl_logging=true;
System altered.
VB_SQL>
VB_SQL> show parameter enable_ddl_logging;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_ddl_logging                   boolean     TRUE
VB_SQL> conn VB/VB
Connected.
VB_SQL>
VB_SQL> create table vb_ddltest (id number, name varchar2(50));
Table created.
VB_SQL>
VB_SQL> alter table vb_ddltest add ( address varchar2(100));
Table altered.
VB_SQL>
VB_SQL> insert into vb_ddltest values (1,'CBT','XYZ');
1 row created.
VB_SQL>
VB_SQL> commit;
Commit complete.
VB_SQL> drop table vb_ddltest;
Table dropped.
VB_SQL>
Check the log file for all DDL commands that were run by the user.

VRTEST$ pwd
/opt/oracle/app/orcl/diag/rdbms/VRTEST/VRTEST/log/ddl

VRTEST$ cat log.xml
<msg time='2020-02-11T12:23:22.395+05:30'
 msg_id='opiexe:4383:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='OraLinuxNode' host_addr='10.184.150.107'
 version='1'>
 <txt>create table vb_ddltest (id number, name varchar2(50))
 </txt>
</msg>
<msg time='2020-02-11T12:24:05.301+05:30'
 msg_id='opiexe:4383:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='OraLinuxNode' host_addr='10.184.150.107'>
 <txt>alter table vb_ddltest add ( address varchar2(100))
 </txt>
</msg>
<msg time='2020-02-11T12:25:14.003+05:30'
 msg_id='opiexe:4383:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='OraLinuxNode' host_addr='10.184.150.107'>
 <txt>drop table vb_ddltest
 </txt>
</msg>

VRTEST$
If you want a clearer, more readable view of the DDL log, just invoke the ADRCI utility and issue the show log command as shown below:
VRTEST$ adrci
ADRCI: Release 19.0.0.0.0 - Production on Tue Feb 11 04:44:53 2020
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
ADR base = "/opt/oracle/app/orcl"

adrci> show log -l ddl
ADR Home = /opt/oracle/app/orcl/diag/rdbms/VRTEST/VRTEST:
*************************************************************************
Output the results to file: /tmp/utsout_19321_13991_1.ado
adrci>

When you issue the "show log" command, adrci opens the log.xml file in an editor (i.e. vi on Linux/UNIX) and shows the contents in the following format:


2020-02-11 12:23:22.395000 +05:30
create table vb_ddltest (id number, name varchar2(50))
2020-02-11 12:24:05.301000 +05:30
alter table vb_ddltest add ( address varchar2(100))
2020-02-11 12:25:14.003000 +05:30
drop table vb_ddltest
2020-02-11 12:30:33.272000 +05:30
truncate table wri$_adv_addm_pdbs
You can also view the debug log with the same adrci command, i.e. show log -l debug.



SQL Query Parsing




A SQL statement generally goes through the following stages:

    * PARSE 
    * BIND 
    * EXECUTE 
    * FETCH

You may have multiple fetches per execution (fetch the first 10 rows, the next 10 rows, and so on). Equally, some SQL statements have no fetch phase (e.g. an INSERT).
A transaction consists of a number of SQL statements. If you have 20-30 SQL statements per transaction, you have some reasonable complexity. Not every statement is an isolated transaction in its own right.

How Oracle processes a query
1. Parse: In this stage, the user process sends the query to the server process with a request to parse or compile the query. The server process checks the validity of the command and uses the area in the SGA known as the shared pool to compile the statement. At the end of this phase, the server process returns the status—that is, success or failure of the parse phase—to the user process.
2. Execute: During this phase in the processing of a query, the server process prepares to retrieve the data.
3. Fetch: The rows that are retrieved by the query are returned by the server to the user during this phase. Depending on the amount of memory used for the transfer, one or more fetches are required to transfer the results of a query to the user.
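
These per-phase counts are visible in the shared pool. As a minimal sketch, V$SQL records how often each cursor was parsed, executed, and fetched (the WHERE predicate below is purely illustrative):

SELECT sql_id,
       parse_calls,
       executions,
       fetches
  FROM v$sql
 WHERE sql_text LIKE 'select%vb_ddltest%';

A well-behaved application parses once and executes many times, so executions well above parse_calls is usually a good sign.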




The following example describes Oracle database operations at the most basic level. It illustrates an Oracle database configuration in which the user and associated server process are on separate computers, connected through a network.




1. An instance has started on a node where Oracle Database is installed, often called the host or database server.
2. A user starts an application spawning a user process. The application attempts to establish a connection to the server. (The connection may be local, client/server, or a three-tier connection from a middle tier.)
3. The server runs a listener that has the appropriate Oracle Net Services handler. The listener detects the connection request from the application and creates a dedicated server process on behalf of the user process.
4. The user runs a DML-type SQL statement and commits the transaction. For example, the user changes the address of a customer in a table and commits the change.
5. The server process receives the statement and checks the shared pool (an SGA component) for any shared SQL area that contains an identical SQL statement. If a shared SQL area is found, the server process checks the user's access privileges to the requested data, and the existing shared SQL area is used to process the statement. If a shared SQL area is not found, a new shared SQL area is allocated for the statement so that it can be parsed and processed.
6. The server process retrieves any necessary data values, either from the actual data file (table) or from values stored in the database buffer cache.
7. The server process modifies data in the SGA. Because the transaction is committed, the Log Writer process (LGWR) immediately records the transaction in the redo log file. The Database Writer process (DBWn) writes modified blocks permanently to disk when it is efficient to do so.
8. If the transaction is successful, the server process sends a message across the network to the application. If it is not successful, an error message is transmitted.
9. Throughout this entire procedure, the other background processes run, watching for conditions that require intervention. In addition, the database server manages other users’ transactions and prevents contention between transactions that request the same data.
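
To observe these phases for a real workload, one simple approach is SQL trace plus tkprof, which reports parse, execute, and fetch counts per statement. A minimal sketch (trace file names and locations vary by installation):

-- In the session you want to observe
ALTER SESSION SET sql_trace = TRUE;
-- ... run the application SQL ...
ALTER SESSION SET sql_trace = FALSE;

The raw trace file lands in the diagnostic trace directory and can be formatted with tkprof, e.g. "tkprof <tracefile> report.txt".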


Patches CPU/PSU/SPU


CPU, PSU, SPU - Oracle Critical Patch Update Terminology Update

Critical Patch Update (CPU) now refers to the overall release of security fixes each quarter rather than the cumulative database security patch for the quarter.  Think of the CPU as the overarching quarterly release and not as a single patch.

Patch Set Updates (PSU) are the same cumulative patches that include both the security fixes and priority fixes.  The key with PSUs is they are minor version upgrades (e.g., 11.2.0.1.1 to 11.2.0.1.2).  Once a PSU is applied, only PSUs can be applied in future quarters until the database is upgraded to a new base version.

Security Patch Update (SPU) terminology is introduced in the October 2012 Critical Patch Update as the term for the quarterly security patch.  SPU patches are the same as previous CPU patches, just a new name.  For the database, SPUs cannot be applied once PSUs have been applied until the database is upgraded to a new base version.

Bundle Patches are the quarterly patches for Windows and Exadata which include both the quarterly security patches as well as recommended fixes.
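
To check which patches a database has already received, a minimal sketch (assuming Oracle 12c or later, where datapatch records its actions in the data dictionary):

-- Patch actions recorded by datapatch
SELECT patch_id, action, status, action_time, description
  FROM dba_registry_sqlpatch
 ORDER BY action_time;

At the OS level, "opatch lsinventory" lists the interim patches applied to the Oracle home itself.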

OGG Processes


Manager:-
The Manager process must be running on both the source and the target system before the Extract or Replicat processes can be started. It performs several functions, including monitoring and starting other GoldenGate processes, managing the trail files, and reporting.

Extract:-
The Extract process runs on the source system and is the data capture mechanism of GoldenGate. It can be configured both for the initial loading of the source data as well as to synchronize the changed data on the source with the target. This can be configured to also propagate any DDL changes on those databases where DDL change support is available.

Replicat:-
The Replicat process runs on the target system and reads transactional data changes as well as DDL changes and replicates them to the target database. Like the Extract process, the Replicat process can also be configured for Initial Load as well as Change Synchronization.

Collector:-
The Collector is a background process which runs on the target system and is started automatically by the Manager (Dynamic Collector), or it can be configured to start manually (Static Collector). It receives extracted data changes that are sent via TCP/IP and writes them to the trail files, from where they are processed by the Replicat process.

Trails:-
Trails are a series of files that GoldenGate temporarily stores on disk; these files are written to and read from by the Extract and Replicat processes. Depending on the configuration chosen, trail files can exist on the source as well as on the target system. A trail on the local system is known as an Extract Trail (or local trail); one on the target system is known as a Remote Trail.

Data Pumps:-
Data Pumps are secondary extract mechanisms which exist in the source configuration. This is an optional component: if a Data Pump is not used, Extract sends data via TCP/IP to the remote trail on the target. When a Data Pump is configured, the primary Extract process writes to the Local Trail, and this trail is then read by the Data Pump, which sends the data over the network to Remote Trails on the target system.

In the absence of a Data Pump, the data that the Extract process extracts resides in memory alone, and there is no storage of this data anywhere on the source system. In case of network or target failures, the primary Extract process can abort or abend. A Data Pump can also be useful where we are doing complex filtering and transformation of data, as well as when we are consolidating data from many sources to a central target.

Data source:-
When processing transactional data changes, the Extract process can obtain data directly from the database transaction logs (Oracle, DB2, SQL Server, MySQL etc) or from a GoldenGate Vendor Access Module (VAM) where the database vendor  (for example Teradata) will provide the required components that will be used by Extract to extract the data changes.

Groups:-
To differentiate between the different Extract and Replicat groups which can potentially co-exist on a system, we can define processing groups. For instance, if we want to replicate different sets of data in parallel, we can create two Replicat groups.

A processing group consists of a process (either an Extract or a Replicat process), a corresponding parameter file, a checkpoint file or checkpoint table (for Replicat), and any other files associated with the process.
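
As a hypothetical sketch of how these pieces fit together in GGSCI, creating an Extract group that captures from the transaction log into a local trail plus a Data Pump that ships it to a remote trail (the group names and trail paths are made up for illustration):

GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1
GGSCI> ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pump1
GGSCI> START EXTRACT ext1
GGSCI> INFO ALL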

Background Processes - 01



SMON - The System Monitor process recovers after instance failure and monitors temporary segments and extents.  In RAC, SMON in a surviving instance can also perform failed-instance recovery for another failed instance.



PMON - The Process Monitor process recovers failed process resources. If MTS (also called Shared Server Architecture) is being utilized, PMON monitors and restarts any failed dispatcher or server processes.  In RAC, PMON's role as service registration agent is particularly important.



DBWR - The Database Writer (or Dirty Buffer Writer) process is responsible for writing dirty buffers from the database buffer cache to the database data files.  Note that DBWR does not write on commit (recording the commit is LGWR's job); it writes when a checkpoint occurs or when free buffer space must be made available for more blocks.  The possible multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to ensure efficient processing.



LGWR - The Log Writer process is responsible for writing the log buffers out to the redo logs. In RAC, each instance has its own LGWR process that maintains that instance's thread of redo logs.



ARCH - (Optional) Archive process writes filled redo logs to the archive log location(s). In RAC, the various ARCH processes can be utilized to ensure that copies of the archived redo logs for each instance are available to the other instances in the RAC setup should they be needed for recovery.



CKPT - Checkpoint process writes checkpoint information to control files and data file headers.



Pnnn - (Optional) Parallel Query Slaves are started and stopped as needed to participate in parallel query operations.



CJQ0 - The job queue coordinator process wakes up periodically and checks the job queue.  If a job is due, it spawns Jnnn processes to handle the jobs.



Jnnn - (Optional) Job processes used by the Oracle9i job queues to process internal Oracle9i jobs. The CJQ0 process controls them automatically.



QMN - (Optional) Advanced Queuing process is used to control the advanced queuing jobs.



Snnn - (Optional) Pre-spawned shared server processes are used by the multi-threaded server (MTS) process to handle connection requests from users, and act as connection pools for user processes.  These user processes also handle disk reads from database datafiles into the database block buffers.



Dnnn - (Optional) Dispatcher process for shared server (MTS) - It accepts connection requests and portions them out to the pre-spawned server processes.



MMON - This process performs various manageability-related background tasks, for example taking AWR snapshots and issuing alerts when a tracked metric violates its threshold.



MMNL - This process performs frequent and lightweight manageability-related tasks, such as session history capture and metrics computation.



MMAN - is used for internal database tasks that manage the automatic shared memory. MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components.



RBAL - This process coordinates rebalance activity for disk groups in an Automatic Storage Management instance.



ORBn - performs the actual rebalance data extent movements in an Automatic Storage Management instance.  There can be many of these at a time, called ORB0, ORB1, and so forth.



OSMB - is present in a database instance using an Automatic Storage Management disk group.  It communicates with the Automatic Storage Management instance.



FMON - The database communicates with the mapping libraries provided by storage vendors through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information.  When you specify the FILE_MAPPING initialization parameter for mapping data files to physical devices on a storage subsystem, then the FMON process is spawned.



LMON - The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage the global enqueues and the resources.  LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES).  In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as Cluster Group Services (CGS).



LMDx - The Global Enqueue Service Daemon (LMD) is the lock agent process that manages enqueue manager service requests for Global Cache Service enqueues to control access to global enqueues and resources.  The LMD process also handles deadlock detection and remote enqueue requests.  Remote resource requests are the requests originating from another instance.
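
To see which of these background processes are actually running in an instance, you can query V$BGPROCESS; a minimal sketch:

SELECT name, description
  FROM v$bgprocess
 WHERE paddr <> '00'   -- a non-null process address means the process is running
 ORDER BY name;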



RAC Background Processes



LMSx - The Global Cache Service Processes (LMSx) are the processes that handle remote Global Cache Service (GCS) messages. Real Application Clusters software provides for up to 10 Global Cache Service Processes.  The number of LMSx varies depending on the amount of messaging traffic among nodes in the cluster. The LMSx handles the acquisition interrupt and blocking interrupt requests from the remote instances for Global Cache Service resources. For cross-instance consistent read requests, the LMSx will create a consistent read version of the block and send it to the requesting instance. The LMSx also controls the flow of  messages to remote instances.



LMSn - The LMSn processes handle the blocking interrupts from the remote instance for the Global Cache Service resources by:

•    Managing the resource requests and cross-instance call operations for the shared resources.
•    Building a list of invalid lock elements and validating the lock elements during recovery.
•    Handling global lock deadlock detection and monitoring for lock conversion timeouts.

LCKx - This process manages the global enqueue requests and the cross-instance broadcast. Workload is automatically shared and balanced when there are multiple Global Cache Service Processes (LMSx).



DIAG: Diagnosability Daemon – Monitors the health of the instance and captures the data for instance process failures.
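
In a RAC database, the GV$ views let you check these processes across all instances at once; a small sketch:

SELECT inst_id, name, description
  FROM gv$bgprocess
 WHERE name LIKE 'LM%'
   AND paddr <> '00'
 ORDER BY inst_id, name;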



 

Reclaim Datafile Size

We can reclaim unused data file space with the SQL query below. First, check whether there really is free space above the high-water mark of each data file, and how much can be reclaimed.

SELECT File_ID, Tablespace_name, file_name, High_Water_Mark, current_size_in_MB,
    'ALTER DATABASE DATAFILE '''||file_name||''' resize '|| High_Water_Mark|| 'M;' script_reclaim
FROM
(
    WITH v_file_info
         AS (SELECT FILE_NAME, FILE_ID, BLOCK_SIZE
               FROM dba_tablespaces tbs, dba_data_files df
              WHERE tbs.tablespace_name = df.tablespace_name)
    SELECT A.FILE_ID,
           A.FILE_NAME,
           A.TABLESPACE_NAME,
           CEIL ( (NVL (hwm, 1) * v_file_info.block_size) / 1024 / 1024) High_Water_Mark,  -- MB
           CEIL (BLOCKS * v_file_info.block_size / 1024 / 1024) current_size_in_MB         -- MB
      FROM dba_data_files A,
           v_file_info,
           (  SELECT file_id, MAX (block_id + BLOCKS - 1) hwm
                FROM dba_extents
            GROUP BY file_id) b
     WHERE A.file_id = b.file_id(+)
       AND A.file_id = v_file_info.file_id
       AND tablespace_name = 'CLAIM_INDEX' -- << change the tablespace name to reclaim the datafile size
)
WHERE High_Water_Mark <> current_size_in_MB;



The script_reclaim column generates the resize command for each file, for example:

ALTER DATABASE DATAFILE '+ASMTXNDATA/txnd/datafile/all_temp_tables.284.882569903' RESIZE 6M;

If a resize fails with ORA-03297 (file contains used data beyond requested RESIZE value), segments still reside above the target size and must be moved or shrunk before the file can be made smaller.
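
After resizing, you can verify the new file sizes; a minimal check (the tablespace name is the one from the example above):

SELECT file_name, bytes/1024/1024 AS size_mb
  FROM dba_data_files
 WHERE tablespace_name = 'CLAIM_INDEX';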