HANDLECOLLISIONS - NOHANDLECOLLISIONS

         



HANDLECOLLISIONS was added to the Replicat process to allow collisions caused by the uncertainty of an initial load to be handled for a short period of time. It is designed to handle changes made to rows while the initial load was running; it is not designed to handle collisions that occur because of bi-directional replication or because of logic or architectural problems.

Never use HANDLECOLLISIONS unless it is absolutely necessary. Even then, use it for the shortest time possible.




If there is an update to a column that is used as part of the GoldenGate key, the following will occur:

If the row that has the old key is not found in the target database, the update is converted to an insert.
If a row is found with the new key, the row with the old key is deleted and the update overlays the row with the new key.

Both of these cases require that all of the columns are logged. Ensure that supplemental logging (TRANDATA) is complete and use the Extract parameter NOCOMPRESSUPDATES, as in the sketch below.
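A minimal sketch of that setup (credential alias, group, trail and table names here are only illustrative): supplemental logging is enabled per table from GGSCI, and NOCOMPRESSUPDATES is added to the Extract parameter file.

GGSCI> DBLOGIN USERIDALIAS gg_alias
GGSCI> ADD TRANDATA soe.emp

-- Extract parameter file: capture full before/after images of updated rows
EXTRACT ext1
USERIDALIAS gg_alias
NOCOMPRESSUPDATES
EXTTRAIL ./dirdat/lt
TABLE soe.emp;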

If a duplicate record is found, the value in the trail file is used and the record that previously existed is discarded.

CONDITION                                                                  ACTION (HANDLECOLLISIONS)

INSERT collision - insert from source whose key column already exists     Converted to an UPDATE
                   on the target
UPDATE collision - updated in source but row not present in the target    Missing row on the target is converted to an INSERT
DELETE collision - deleted in source but row not present in the target    Ignored

Discard file:

CONDITION                                              HANDLING                                               ERROR MESSAGE
Duplicate insert                                       Sent to the discard file - REPERROR (0001, DISCARD)    Unique constraint violation
Updated in source but row not present on the target   Sent to the discard file - REPERROR (1403, DISCARD)    No data found
Deleted in source but row not present on the target   Sent to the discard file - REPERROR (1403, DISCARD)    No data found


If a missing record is found during an update or delete operation that does not affect the GoldenGate key value, it is simply discarded.
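The discard behavior shown above comes from REPERROR rules in the Replicat parameter file. A minimal sketch, assuming the group name, credential alias and discard file path are adjusted to your environment:

REPLICAT rep1
USERIDALIAS gg_alias
DISCARDFILE ./dirrpt/rep1.dsc, APPEND, MEGABYTES 100
REPERROR (DEFAULT, ABEND)
REPERROR (0001, DISCARD)   -- duplicate insert (unique constraint violation)
REPERROR (1403, DISCARD)   -- update/delete of a row that is not present on the target
MAP soe.emp, TARGET soe.emp;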











Enabling HANDLECOLLISIONS:

Globally: Enable global HANDLECOLLISIONS for ALL MAP statements

/*
HANDLECOLLISIONS
MAP soe.emp, TARGET soe.emp;
MAP soe.device, TARGET soe.device;
MAP soe.deviceinfo, TARGET soe.deviceinfo;
MAP soe.orders, TARGET soe.orders;

*/



Group : Enable HANDLECOLLISIONS for some MAP statements

/*
HANDLECOLLISIONS
MAP soe.emp, TARGET soe.emp;
MAP soe.device, TARGET soe.device;

NOHANDLECOLLISIONS  -- MAP statements from here on use NOHANDLECOLLISIONS

MAP soe.deviceinfo, TARGET soe.deviceinfo;
MAP soe.orders, TARGET soe.orders;
*/

Tables: Enable global HANDLECOLLISIONS but disable it for specific tables

/*
HANDLECOLLISIONS
MAP soe.emp, TARGET soe.emp;
MAP soe.device, TARGET soe.device;
MAP soe.deviceinfo, TARGET soe.deviceinfo, NOHANDLECOLLISIONS; -- important: overrides the global HANDLECOLLISIONS for this table
MAP soe.orders, TARGET soe.orders, NOHANDLECOLLISIONS;         -- important: overrides the global HANDLECOLLISIONS for this table

*/

Detailed record counts: use STATS REPLICAT <group> in GGSCI (the output can be redirected to a file if required).

Don't forget to remove the HANDLECOLLISIONS parameter once the Replicat has moved past the old CSN at which it was previously abending.
Also make sure to restart the Replicat after removing this parameter.
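A sketch of the clean-up steps (the Replicat group name rep1 is a placeholder); collision handling can also be switched off at runtime from GGSCI before the parameter is removed from the file:

GGSCI> STATS REPLICAT rep1, TOTAL             -- detailed record counts
GGSCI> INFO REPLICAT rep1                     -- confirm the Replicat has moved past the old CSN
GGSCI> SEND REPLICAT rep1, NOHANDLECOLLISIONS -- turn collision handling off without a restart
GGSCI> STOP REPLICAT rep1
GGSCI> EDIT PARAMS rep1                       -- remove the HANDLECOLLISIONS parameter
GGSCI> START REPLICAT rep1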


Create N Logical Partitions in Linux

Hi All,
 With the help of the script below we can create multiple logical (extended) partitions.

 Usage : Mk_partion.sh <disk name>
    Ex. Mk_partion.sh /dev/sdb
 Params :
    1. "PARTITION_SIZE"   -- logical partition size
    2. "PARTITION_SIZE_P" -- primary partition size


   
#----------------------------Start Mk_partion.sh-------------------------------------

#!/bin/bash
#-- +----------------------------------------------------------------------------+
#-- |                               By OraVR                                     |
#-- |                            info@gmail.com                                 |
#-- |                              www.oravr.com                                 |
#-- |----------------------------------------------------------------------------|
#-- |                                                                            |
#-- |----------------------------------------------------------------------------|
#-- | DATABASE : Oracle                                                          |
#-- | FILE     : Mk_partion.sh                                                   |
#-- | CLASS    : Storage                                                         |
#-- | PURPOSE  : Create multiple logical disk partitions                         |
#-- | NOTE     : As with any code, ensure to test this script in a development   |
#-- |            environment before attempting to run it in production.          |
#-- +----------------------------------------------------------------------------+
# The target disk device must be supplied as the only argument
if [ $# -eq 0 ]
then
  echo "Usage: $0 <device>   e.g. $0 /dev/sdb"
  exit 1
fi
NUM_PARTITIONS=50          # total number of partitions to create
PARTITION_SIZE="+4096M"    # size of each logical partition
PARTITION_SIZE_P="+100M"   # size of each primary partition
SED_STRING="o"
TAIL="p
w
q
"
NEW_LINE="
"
LETTER_n="n"
EXTENDED_PART_NUM=4
TGTDEV=$1
SED_STRING="$SED_STRING$NEW_LINE"
for i in $(seq $NUM_PARTITIONS)
do
  # Partitions 1 to 3: primary partitions of size PARTITION_SIZE_P (defaults accepted for type and number)
  if [ $i -lt $EXTENDED_PART_NUM ]
  then
    SED_STRING="$SED_STRING$LETTER_n$NEW_LINE$NEW_LINE$NEW_LINE$NEW_LINE$PARTITION_SIZE_P$NEW_LINE"
  fi
  # Partition 4: the extended partition spanning the rest of the disk, followed by the first logical partition
  if [ $i -eq $EXTENDED_PART_NUM ]
  then
    SED_STRING="$SED_STRING$LETTER_n$NEW_LINE$NEW_LINE$NEW_LINE$NEW_LINE"
    SED_STRING="$SED_STRING$LETTER_n$NEW_LINE$NEW_LINE$PARTITION_SIZE$NEW_LINE"
  fi
  # Partitions 5 and above: logical partitions of size PARTITION_SIZE
  if [ $i -gt $EXTENDED_PART_NUM ]
  then
    SED_STRING="$SED_STRING$LETTER_n$NEW_LINE$NEW_LINE$PARTITION_SIZE$NEW_LINE"
  fi
done
SED_STRING="$SED_STRING$TAIL"
# Strip leading whitespace/extra text from each line and pipe the command sequence to fdisk
sed -e 's/\s*\([\+0-9a-zA-Z]*\).*/\1/' << EOF | fdisk ${TGTDEV}
  $SED_STRING
EOF
########## Remove the partition table if anything goes wrong ###############################
###### dd if=/dev/zero of=${TGTDEV} bs=512 count=1 conv=notrunc ######

#---------------------------- End  Mk_partion.sh-------------------------------------
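
A quick way to verify the result after the script finishes (the device name below is just an example):

sudo ./Mk_partion.sh /dev/sdb   # create the primary, extended and logical partitions
lsblk /dev/sdb                  # list the partitions that were created
fdisk -l /dev/sdb               # show the full partition table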












 

 

 

Enable DDL Logging






DDL logging is a very useful feature for monitoring changes to the database design.
All DDL changes can be monitored or audited by viewing the log file.
One can set enable_ddl_logging=true to enable DDL logging.
Oracle will then start recording all DDL statements in the log.xml file. This file has the same format and basic behavior as the alert log, but it contains only the DDL statements.
The DDL log file is created in the $ADR_HOME/log/ddl directory and contains the DDL statements recorded by the database.
Log file path:
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/log/ddl/log.xml
NOTE: The "Oracle Change Management Pack" license is required to use this feature.
The enable_ddl_logging parameter is licensed as part of the Change Management Pack.
The following DDL statements are written to the log:

•    ALTER/CREATE/DROP/TRUNCATE CLUSTER
•    ALTER/CREATE/DROP FUNCTION
•    ALTER/CREATE/DROP INDEX
•    ALTER/CREATE/DROP OUTLINE
•    ALTER/CREATE/DROP PACKAGE
•    ALTER/CREATE/DROP PACKAGE BODY
•    ALTER/CREATE/DROP PROCEDURE
•    ALTER/CREATE/DROP PROFILE
•    ALTER/CREATE/DROP SEQUENCE
•    CREATE/DROP SYNONYM
•    ALTER/CREATE/DROP/RENAME/TRUNCATE TABLE
•    ALTER/CREATE/DROP TRIGGER
•    ALTER/CREATE/DROP TYPE
•    ALTER/CREATE/DROP TYPE BODY
•    DROP USER
•    ALTER/CREATE/DROP VIEW

The following is an example of DDL logging:


VRTEST$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Tue Feb 11 04:41:39 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

VB_SQL>
VB_SQL> show parameter enable_ddl_logging
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_ddl_logging                   boolean     FALSE
VB_SQL> alter system set enable_ddl_logging=true;
System altered.
VB_SQL>
VB_SQL> show parameter enable_ddl_logging;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_ddl_logging                   boolean     TRUE
VB_SQL> conn VB/VB
Connected.
VB_SQL>
VB_SQL> create table vb_ddltest (id number, name varchar2(50));
Table created.
VB_SQL>
VB_SQL> alter table vb_ddltest add ( address varchar2(100));
Table altered.
VB_SQL>
VB_SQL> insert into vb_ddltest values (1,'CBT','XYZ');
1 row created.
VB_SQL>
VB_SQL> commit;
Commit complete.
VB_SQL> drop table vb_ddltest;
Table dropped.
VB_SQL>
Check the log file for all DDL commands that were run by the user.

VRTEST$ pwd
"/opt/oracle/app/orcl/diag/rdbms/VRTEST/VRTEST/log/ddl"

VRTEST$ cat log.xml
<msg ... msg_id='opiexe:4383:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='OraLinuxNode' host_addr='10.184.150.107'
 version='1'>
 create table vb_ddltest (id number, name varchar2(50))

<msg ... msg_id='opiexe:4383:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='OraLinuxNode' host_addr='10.184.150.107'>
 alter table vb_ddltest add ( address varchar2(100))

<msg ... msg_id='opiexe:4383:2946163730' type='UNKNOWN' group='diag_adl'
 level='16' host_id='OraLinuxNode' host_addr='10.184.150.107'>
 drop table vb_ddltest

VRTEST$
If you want a clear and readable view of the DDL log file, just invoke the ADRCI utility and issue the show log command, as shown below:
VRTEST$ adrci
ADRCI: Release 19.0.0.0.0 - Production on Tue Feb 11 04:44:53 2020
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
ADR base = "/opt/oracle/app/orcl"

ADR base = "/mnt/Oracle"
adrci> show log -l ddl
ADR Home = /opt/oracle/app/orcl/diag/rdbms/VRTEST/VRTEST:
*************************************************************************
Output the results to file: /tmp/utsout_19321_13991_1.ado
adrci>

When you issue the "show log" command, adrci opens the log.xml file in an editor (vi on Linux/UNIX) and shows the contents in the following format:


2020-02-11 12:23:22.395000 +05:30
create table vb_ddltest (id number, name varchar2(50))
2020-02-11 12:24:05.301000 +05:30
alter table vb_ddltest add ( address varchar2(100))
2020-02-11 12:25:14.003000 +05:30
drop table vb_ddltest
2020-02-11 12:30:33.272000 +05:30
truncate table wri$_adv_addm_pdbs
You can also view the debug log with the adrci show log command, i.e. show log -l debug.
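
The same log can also be dumped without opening an editor, for example in one shot from the shell (a sketch; the ADR home is the one shown above):

VRTEST$ adrci exec="set home diag/rdbms/VRTEST/VRTEST; show log -l ddl -term"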



SQL Query Parsing




A SQL statement generally goes through the following phases:

    * PARSE 
    * BIND 
    * EXECUTE 
    * FETCH

You may have multiple fetches per execution (fetch the first 10 rows, then the next 10 rows, and so on). Equally, some SQL statements have no fetch phase (e.g. an INSERT).
A transaction consists of a number of SQL statements. If you have 20-30 SQL statements per transaction, you have reasonable complexity; not every statement is an isolated transaction in its own right.
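
As a small illustration of the bind phase (the emp table and values are only placeholders), the same statement text can be parsed once and then executed repeatedly with different bind values:

SQL> variable deptno number
SQL> exec :deptno := 10
SQL> select ename from emp where deptno = :deptno;
SQL> exec :deptno := 20
SQL> select ename from emp where deptno = :deptno;   -- same shared cursor, new bind value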

How Oracle processes a query
1. Parse: In this stage, the user process sends the query to the server process with a request to parse or compile the query. The server process checks the validity of the command and uses the area in the SGA known as the shared pool to compile the statement. At the end of this phase, the server process returns the status—that is, success or failure of the parse phase—to the user process.
2. Execute: During this phase in the processing of a query, the server process prepares to retrieve the data.
3. Fetch: The rows that are retrieved by the query are returned by the server to the user during this phase. Depending on the amount of memory used for the transfer, one or more fetches are required to transfer the results of a query to the user.
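
These phases can be observed per statement in V$SQL, which is one way to check whether a statement is being hard parsed again or simply re-executed and fetched (the filter below is only an example):

SQL> select sql_text, parse_calls, executions, fetches
     from v$sql
     where sql_text like 'select ename from emp%';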




The following example describes Oracle database operations at the most basic level. It illustrates an Oracle database configuration in which the user and associated server process are on separate computers, connected through a network.




1. An instance has started on a node where Oracle Database is installed, often called the host or database server.
2. A user starts an application spawning a user process. The application attempts to establish a connection to the server. (The connection may be local, client/server, or a three-tier connection from a middle tier.)
3. The server runs a listener that has the appropriate Oracle Net Services handler. The listener detects the connection request from the application and creates a dedicated server process on behalf of the user process.
4. The user runs a DML-type SQL statement and commits the transaction. For example, the user changes the address of a customer in a table and commits the change.
5. The server process receives the statement and checks the shared pool (an SGA component) for any shared SQL area that contains an identical SQL statement. If a shared SQL area is found, the server process checks the user's access privileges to the requested data, and the existing shared SQL area is used to process the statement. If a shared SQL area is not found, a new shared SQL area is allocated for the statement so that it can be parsed and processed (see the query sketch after this list).
6. The server process retrieves any necessary data values, either from the actual data file (table) or from values stored in the database buffer cache.
7. The server process modifies data in the SGA. Because the transaction is committed, the Log Writer process (LGWR) immediately records the transaction in the redo log file. The Database Writer process (DBWn) writes modified blocks permanently to disk when it is efficient to do so.
8. If the transaction is successful, the server process sends a message across the network to the application. If it is not successful, an error message is transmitted.
9. Throughout this entire procedure, the other background processes run, watching for conditions that require intervention. In addition, the database server manages other users’ transactions and prevents contention between transactions that request the same data.
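
A minimal sketch of the shared SQL area check in step 5, using V$SQLAREA (the statement text and emp table are only examples): running the identical text twice reuses the existing shared SQL area (a soft parse) instead of building a new one (a hard parse).

SQL> select ename from emp where deptno = 10;
SQL> select ename from emp where deptno = 10;   -- identical text: the existing shared SQL area is reused

SQL> select sql_text, version_count, parse_calls, executions
     from v$sqlarea
     where sql_text = 'select ename from emp where deptno = 10';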