
Oracle RAC and MTU

In a RAC environment, the UDP protocol is used for Cache Fusion traffic. (UDP and RDS are the supported protocols on UNIX platforms, with RDS used on Exadata, whereas Windows uses TCP.)


UDP is a higher-level protocol implemented on top of IP (UDP/IP). Cache Fusion uses UDP to send packets over the wire (Exadata uses the RDS protocol instead).

UDP protocol, fragmentation, and assembly

The Maximum Transmission Unit (MTU) size is the maximum packet size in bytes that a network interface can send over a network device. This size includes protocol information such as the TCP and IP headers, but not the Ethernet header. The MTU can be set in the interface configuration files; on SLES, for example, in /etc/sysconfig/network/ifcfg-qeth-bus-ccw-0.0.0xxx for each device. The effective MTU size is also negotiated during connection creation with the other end point, and the MTU can be reduced to the capacity of that device.

When the MTU is smaller than the packet size, the data must be divided into MTU-sized packets. On the receiving side the packets must be reassembled, creating overhead that scales with the number of packets required. When the MTU is only slightly too small, bandwidth can also be wasted: if the MTU is slightly smaller than the average packet size, the second packet of each pair is almost empty.

An MTU size bigger than the packet size should have no impact on network bandwidth, provided all intermediate nodes on the path to the final destination support that MTU size, because only the real data are sent. However, the number of packets processed in parallel is derived from the TCP/IP window size and the MTU size. When the window size stays constant and the MTU size is increased, this number decreases, which might lead to performance degradation in cases where the smaller MTU size would have been sufficient.

Let us consider an example with MTU set to 3000 on a network interface. One 8K block transfer cannot be performed with just one IP packet, as the MTU (3000 bytes) is less than 8K. So one UDP packet of 8K is fragmented into 3 IP packets and sent over the wire. On the receiving side, those 3 packets are reassembled to create one UDP buffer of 8K. After reassembly, that UDP buffer is delivered to the UDP port of a UNIX process. Usually, a foreground process listens on that port to receive the UDP buffer.

Consider what happens if MTU is set to 9000 on the network interface: an 8K buffer can then be transmitted over the wire with just one IP packet. There is no need for fragmentation or reassembly with MTU=9000 as long as the block size is 8K or less. MTU=9000 is also known as a jumbo frame configuration. (But if the database block size is greater than the jumbo frame size, fragmentation and reassembly are still required. For example, a 16KB block with MTU=9000 is transmitted as one roughly 9K IP packet and one roughly 7K IP packet.)

Fragmentation and reassembly are performed at the OS kernel layer, so it is the responsibility of the kernel and the stack below it to complete them. Oracle code simply calls the send and receive system calls and passes the buffers to populate.

UDP vs TCP

 From a RAC point of view, UDP is the right choice over TCP for Cache Fusion traffic. With TCP/IP, every packet transfer has overhead: a connection must be set up, the packet sent, and the process must wait for a TCP acknowledgement before the send is considered complete. On a busy RAC system we are talking about 2-3 milliseconds per packet transfer, and with TCP/IP we probably could not achieve that level of performance. With UDP, the packet transfer is considered complete as soon as the packet is sent, and error handling is done by the Oracle code itself. As you know, a reliable network is key to RAC stability; if nearly 100% of packets are sent without any drops, UDP is a good choice over TCP/IP for performance reasons.

If there are reassembly failures, they are a function of an unreliable network, the kernel, or something else, but have nothing to do with the choice of the UDP protocol itself. Of course, RDS is better than UDP because error handling is offloaded to the fabric, but it usually requires an InfiniBand fabric for a proper RDS setup. For that matter, VPN connections use the UDP protocol too.

What happens if an IP packet is lost, assuming MTU=3000 bytes?

From a Wireshark capture taken with MTU=3000, you will see that each packet has a fragment offset. That fragment offset and the IP identification are used to reassemble the IP packets into the 8K UDP buffer. Think of 3 puzzle pieces, each with markings; the kernel uses those markings (offset and IP ID) to reassemble the packets. Now consider the case where one of the 3 packets never arrives: the kernel threads keep the other 2 IP packets in memory for the time controlled by the Linux kernel parameter ipfrag_time (30 seconds by default, as shown in the sysctl output below) before declaring a reassembly failure. Without the missing IP packet, the kernel cannot reassemble the UDP buffer, and so a reassembly failure is declared.

The Oracle foreground process will wait 30 seconds (it used to be 300 seconds or so in older versions of RAC), and if the packet has not arrived within that timeout period, the FG process declares a "gc cr block lost" and/or "gc current block lost" wait and re-requests the block. Of course, the kernel memory allocated for IP fragmentation and reassembly is constrained by the kernel parameters ipfrag_high_thresh and ipfrag_low_thresh, and values that are too low for these parameters can lead to reassembly failures too (which is why it is important to follow all the best practices from the RAC installation guides).

[root@west02 ~]# netstat -s |grep assem

    453433 reassemblies required

    43380 packets reassembled ok

    3180797 packet reassembles failed    <-- this counter increases when gc block loss occurs and cluster waits increase


Oracle Clusterware and Oracle RAC instances rely on heartbeats for node membership. If network heartbeats are consistently dropped, instance or node eviction may occur. The symptoms below are therefore also relevant for node/instance evictions.

1. "gc cr block lost" / "gc current block lost" in top 5 or significant wait event

2. SQL traces report multiple gc cr requests / gc current request /

3. gc cr multiblock requests with long and uniform elapsed times

4. Poor application performance / throughput

5. Packet send/receive errors as displayed in ifconfig or vendor supplied utility

6. Netstat reports errors/retransmits/reassembly failures

SQL SCRIPT : 

SELECT A.VALUE "GC BLOCKS LOST 1",
       B.VALUE "GC BLOCKS CORRUPT 1",
       C.VALUE "GC BLOCKS LOST 2",
       D.VALUE "GC BLOCKS CORRUPT 2"
  FROM GV$SYSSTAT A,
       GV$SYSSTAT B,
       GV$SYSSTAT C,
       GV$SYSSTAT D
 WHERE     A.INST_ID = 1
       AND A.NAME = 'gc blocks lost'
       AND B.INST_ID = 1
       AND B.NAME = 'gc blocks corrupt'
       AND C.INST_ID = 2
       AND C.NAME = 'gc blocks lost'
       AND D.INST_ID = 2
       AND D.NAME = 'gc blocks corrupt';
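
In addition to the cumulative gc lost-block statistics above, the corresponding wait events can also be checked from the database side. A minimal sketch, assuming the standard GV$SYSTEM_EVENT view and the 11g+ wait event names:

SELECT inst_id,
       event,
       total_waits,
       ROUND(time_waited_micro / 1000000, 1) AS seconds_waited
  FROM gv$system_event
 WHERE event IN ('gc cr block lost', 'gc current block lost')
 ORDER BY inst_id, event;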


The parameters in /proc/sys/net/ipv4 control various aspects of the network, including the size of the reassembly buffer.

ipfrag_high_thresh specifies the maximum amount of memory used to reassemble IP fragments. When the memory used by fragments reaches ipfrag_high_thresh, old entries are removed until the memory used declines to ipfrag_low_thresh.

If the output of netstat shows an increasing number of IP fragment reassembly failures, we recommend increasing ipfrag_high_thresh. The impact can be significant: in some use cases, increasing this buffer space improved throughput from 32MB/sec to 80MB/sec.

First check the current values as root:

[root@west01 ~]# cat /proc/sys/net/ipv4/ipfrag_high_thresh

4194304

[root@west01 ~]# cat /proc/sys/net/ipv4/ipfrag_low_thresh

3145728

To change the parameter temporarily:

[root@west01 ~]# echo "Value" > /proc/sys/net/ipv4/ipfrag_high_thresh

Or view all the IP fragmentation settings with sysctl:

[root@west02 ~]# sysctl -a|grep ipfrag

net.ipv4.ipfrag_high_thresh = 4194304

net.ipv4.ipfrag_low_thresh = 3145728

net.ipv4.ipfrag_max_dist = 64

net.ipv4.ipfrag_secret_interval = 0

net.ipv4.ipfrag_time = 30


BTW, there are a few other reasons for 'gc lost packets' too. High CPU usage can also lead to lost packets: the receiving process may not get enough CPU time to drain its buffers, the network buffers allocated for that process fill up, and the kernel drops incoming packets.

It is probably better to explain these concepts visually, so I created a video. When you watch it, notice that there is an HD button at the top of the video; play it in HD mode for a better learning experience.

Oracle Enterprise Session Border Controller (E-SBC) : connects disparate Internet Protocol (IP) communications networks while mitigating security threats, curing interoperability problems, and ensuring reliable communications. It protects and controls real-time voice, video, and Unified Communications (UC) as they traverse IP network borders.


Errors : 

Process termination requested for pid 6440 [source = rdbms], [info = 2] [request issued by pid: 52630, uid: 211819489]


Errors in file /opt/oracle/app/orcl/diag/rdbms/regtxndb/regtxndb1/trace/regtxndb1_m000_52909.trc  (incident=121694) (PDBNAME=CDB$ROOT):

ORA-00603: ORACLE server session terminated by fatal error

ORA-27504: IPC error creating OSD context

ORA-27300: OS system dependent operation:sendmsg failed with status: 105

ORA-27301: OS failure message: No buffer space available

ORA-27302: failure occurred at: sskgxpsnd2





Logging/ No logging



The main question about NOLOGGING I hear all the time is: does creating a table with the NOLOGGING option mean there is no generation of redo ever, or just that the initial creation operation generates no redo while DML down the road does generate redo? How and when can the NOLOGGING option be employed?

Redo generation is a vital part of the Oracle recovery mechanism. Without it, an instance will not recover when it crashes and will not start in a consistent state. Excessive redo generation is the result of excessive work on the database.

This paper covers the subject of reducing redo generation using the LOGGING and NOLOGGING options: the differences between them, how redo generation happens, how to reduce it, and when to use each option. You will also find examples and tips regarding each one. Using NOLOGGING offers the following benefits:

•      Space is saved in the redo log files
•      The time it takes to create the table is decreased
•      Performance improves for parallel creation of large tables

"A very important rule with respect to data is to never put yourself into an unrecoverable situation. The importance of this guideline cannot be stressed enough, but it does not mean that you can never use time saving or performance enhancing options. "









Let's briefly summarize the redo process. When Oracle blocks are changed, including undo blocks, Oracle records the changes in the form of change vectors, which are referred to as redo entries or redo records. The changes are written by the server process to the redo log buffer in the SGA. The redo log buffer is then flushed to the online redo logs in near real time by the log writer (LGWR).

The redo logs are written by LGWR when:

•     A user issues a commit.
•     The log buffer is 1/3 full.
•     The amount of redo entries reaches 1MB.
•     Every three seconds.
•     A database checkpoint takes place. The redo entries are written before the checkpoint to ensure recoverability.

"If the log buffer is too small, then log buffer space waits will be seen during bursts of redo generation. LGWR may not begin to write redo until the _log_io_size threshold (by default, 1/3 of the log buffer or 1M whichever is less) has been exceeded, and the remainder of the log buffer may be filled before LGWR can complete its writes and free some space in the log buffer. 

Redo log files record changes to the database as a result of transactions and internal Oracle server actions. (A transaction is a logical unit of work, consisting of one or more SQL statements run by a user.) Redo log files protect the database from the loss of integrity because of system failures caused by power outages, disk failures, and so on. Redo log files must be multiplexed to ensure that the information stored in them is not lost in the event of a disk failure. The redo log consists of groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of that group, and each group is identified by a number. The Log Writer (LGWR) process writes redo records from the redo log buffer to all members of a redo log group until the file is filled or a log switch operation is requested. Then, it switches and writes to the files in the next group. Redo log groups are used in a circular fashion.
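
To see how the redo log groups and their members are laid out on your own database, the standard V$LOG and V$LOGFILE views can be queried; a minimal sketch:

SELECT group#, thread#, members, bytes / 1024 / 1024 AS size_mb, status
  FROM v$log
 ORDER BY group#;

SELECT group#, member
  FROM v$logfile
 ORDER BY group#;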



Important points about LOGGING and NOLOGGING

Despite the importance of the redo entries, Oracle gives users the ability to limit redo generation on tables and indexes by setting them to NOLOGGING mode.

NOLOGGING affects recoverability. Before going into how to limit redo generation, it is important to clear up the misunderstanding that NOLOGGING is a way out of redo generation altogether. Here are some points regarding it:

•      NOLOGGING is designed to handle bulk inserts of data which can be easily reproduced.

•      Regardless of LOGGING status, writing to undo blocks causes generation of redo.

•      LOGGING should not be disabled on a primary database if it has one or more standby databases. For this reason Oracle introduced the ALTER DATABASE FORCE LOGGING command in Oracle 9i R2, which means that the NOLOGGING attribute has no effect on segments if the database is in FORCE LOGGING mode. The NOLOGGING attribute can also be overridden at the tablespace level using ALTER TABLESPACE … FORCE LOGGING.

•      Any change to the data dictionary will cause redo generation, to protect the data dictionary. For example, if we allocate space above the HWM for a table and the system fails in the middle of an INSERT /*+ APPEND */, Oracle will need to roll back that data dictionary update. There will be redo generated, but it protects the data dictionary, not your newly inserted data (Oracle will undo the space allocation if it fails, whereas your data will disappear).

•      Data that is not logged cannot be recovered. The data should be backed up after the modification.

•      Tables and indexes should be set back to LOGGING mode when NOLOGGING is no longer needed.

•      NOLOGGING is not needed for Direct Path Insert if the database is in NOARCHIVELOG mode. (See Table 1.1.)

                                                  
Table 1.1

Table Mode   Insert Mode   ArchiveLog Mode   Result
LOGGING      APPEND        ARCHIVE LOG       REDO GENERATED
NOLOGGING    APPEND        ARCHIVE LOG       NO REDO
LOGGING      NO APPEND     ARCHIVE LOG       REDO GENERATED
NOLOGGING    NO APPEND     ARCHIVE LOG       REDO GENERATED
LOGGING      APPEND        NO ARCHIVE LOG    NO REDO
NOLOGGING    APPEND        NO ARCHIVE LOG    NO REDO
LOGGING      NO APPEND     NO ARCHIVE LOG    REDO GENERATED
NOLOGGING    NO APPEND     NO ARCHIVE LOG    REDO GENERATED


Data that cannot be reproduced should not be loaded in NOLOGGING mode. If data that cannot be reloaded was loaded using NOLOGGING, it cannot be recovered if the database crashes before the data is backed up. NOLOGGING does not apply to normal UPDATE, DELETE, and INSERT statements; it works only during certain operations, and subsequent DML will still generate redo. Some of these operations are:

 o direct load INSERT (using the APPEND hint)
 o CREATE TABLE ... AS SELECT
 o CREATE INDEX

If the LOGGING or NOLOGGING clause is not specified when creating a table, partition, or index, the default is the LOGGING attribute of the tablespace in which it resides.

•      Table redefinition cannot be done in NOLOGGING mode.
•      Temp files are always set to NOLOGGING mode.

The FORCE LOGGING mode is a persistent attribute of the database. That is, if the database is shut down and restarted, it remains in the same logging mode. FORCE LOGGING must be configured again after recreating the control file.
If the database has a physical standby database, then NOLOGGING operations render the corresponding data blocks on the standby "logically corrupt" because of the missing redo log entries. If the standby database ever switches to the primary role, errors will occur when trying to access objects that were previously written with the NOLOGGING option; you will see an error like this:

ORA-01578: ORACLE data block corrupted (file # 3, block # 2527)
ORA-01110: data file 1: '/u1/oracle/dbs/stdby/tbs_nologging_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
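
On the primary database, one way to spot datafiles that have been touched by NOLOGGING operations (and therefore need a fresh backup) is to look at the UNRECOVERABLE_CHANGE# and UNRECOVERABLE_TIME columns of V$DATAFILE. A minimal sketch:

SELECT file#,
       name,
       unrecoverable_change#,
       unrecoverable_time
  FROM v$datafile
 WHERE unrecoverable_time IS NOT NULL;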



Why Oracle generates redo and undo for DML

When you issue an insert, update or delete, Oracle actually makes the change to the data blocks that contain the affected data even though you have not issued a commit.  To ensure database integrity, Oracle must write information necessary to reverse the change (UNDO) into the log to handle transaction failure or rollback.  Recovery from media failure is ensured by writing information necessary to re-play database changes (REDO) into the log.  So, UNDO and REDO information logically MUST be written into the transaction log of the RDBMS (see below regarding temporary tables).

While the RDBMS logically would only need to write UNDO and REDO into the transaction log, the UNDO portion must also be kept online (on disk and accessible to the RDBMS engine) to enable rollback of failed transactions. If UNDO data was only stored in the transaction log, the log could get archived and the RDBMS would have to try and read it from tape.  On some platforms, the tape could be sitting in the DBA's desk drawer, so there are practical problems with this solution.  Every RDBMS must meet the basic requirement of online access to undo data, and Oracle does this by storing UNDO data in what we call Rollback Segments (rollback = undo).

Because Oracle places UNDO data into a rollback segment and also must (logically) place this data into the transaction log, it is simpler to just treat rollback tablespaces like any other tablespace from a log generation perspective.  That is, Oracle generates REDO for a Rollback Segment, which is logically the same as UNDO for a data block (i.e. your table, index, etc.). 

Oracle's transaction log is really called the REDO log because it only contains redo records.  There logically MUST be UNDO records stored in the log, but they are stored in the form of redo for rollback segments.

For temporary tables, Oracle will need to do things like facilitate rollback, but it is not necessary to bring back temporary tables following media failure.

The undo data is also needed for things like rollback to save point and read consistency, not just to reclaim space that was used by that temporary table.

Flashback and NOLOGGING

When using Flashback Database with a target time at which a NOLOGGING operation was in progress, block corruption is likely in the database objects and data files affected by the NOLOGGING operation. For example, if you perform a direct-path INSERT operation in NOLOGGING mode, and that operation runs from 8:00 to 8:15 on April 7, 2008, and you later need to use Flashback Database to return to the target time 08:07 on that date, the objects and data files updated by the direct-path INSERT may be left with block corruption after the Flashback Database operation completes.

If possible, avoid using Flashback Database with a target time or SCN that coincides with a NOLOGGING operation. Also, perform a full or incremental backup of the affected data files immediately after any NOLOGGING operation to ensure recoverability to points in time after the operation. If you expect to use Flashback Database to return to a point in time during an operation such as a direct-path INSERT, consider performing the operation in LOGGING mode.

Performance and Recovery considerations 

The NOLOGGING mode improves performance because it generates much less data in the redo log files, eliminating much of the time needed for redo generation (latch acquisition, redo log writing, etc.). The user is responsible for backing up the data after a NOLOGGING insert operation in order to be able to perform media recovery.

Be aware that this feature disables the recovery mechanism for the operation: it will be necessary to repeat the process from the very beginning in case of a database or instance failure.

Disabling Redo Generation (NOLOGGING)

The NOLOGGING attribute tells Oracle that the operation being performed does not need to be recoverable in the event of a failure. In this case Oracle generates only the minimal number of redo log entries needed to protect the data dictionary, and the operation will probably run faster. Oracle relies on the user to recover the data manually in the event of a media failure.

Logging can be disabled at the table level or the tablespace level. If it is done at the tablespace level, then every newly created index or table in that tablespace will be in NOLOGGING mode (you can still have LOGGING tables inside a NOLOGGING tablespace). A table or an index can be created in NOLOGGING mode, or it can be altered using ALTER TABLE/INDEX NOLOGGING. It is important to note that just because an index or a table was created with NOLOGGING does not mean that redo generation has been stopped for that table or index. NOLOGGING is active only in the following situations and only while one of the following commands is running, not afterwards.

This is a partial list (a dictionary query to review which of your objects are currently set to NOLOGGING follows the list):

•      DIRECT LOAD (SQL*Loader)
•      DIRECT LOAD  INSERT (using APPEND hint) 
•      CREATE TABLE ... AS SELECT
•      CREATE INDEX
•      ALTER TABLE MOVE
•      ALTER TABLE ... MOVE PARTITION  
•      ALTER TABLE ... SPLIT PARTITION 
•      ALTER TABLE … ADD PARTITION (if HASH partition)
•      ALTER TABLE … MERGE PARTITION
•      ALTER TABLE … MODIFY PARTITION (ADD SUBPARTITION, COALESCE SUBPARTITION, REBUILD UNUSABLE INDEXES)
•      ALTER INDEX ... SPLIT PARTITION 
•      ALTER INDEX ... REBUILD 
•      ALTER INDEX ... REBUILD PARTITION 
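
To review which of your objects currently carry the NOLOGGING attribute, the dictionary views can be queried directly; a minimal sketch using USER_TABLES and USER_INDEXES:

SELECT 'TABLE' AS object_type, table_name AS object_name, logging
  FROM user_tables
 WHERE logging = 'NO'
UNION ALL
SELECT 'INDEX', index_name, logging
  FROM user_indexes
 WHERE logging = 'NO';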


Logging is stopped only while one of the commands above is running, so if a user runs this:

•      ALTER INDEX new_index REBUILD NOLOGGING;

The actual rebuild of the index does not generate redo (although all the data dictionary changes associated with the rebuild will), but after that any DML on the index will generate redo; this includes direct load inserts into the table to which the index belongs.

Here is another example to make this point more clear:

•      CREATE TABLE new_table_nolog_test NOLOGGING(….);

All the following statements will generate redo despite the fact that the table is in NOLOGGING mode:

•      INSERT INTO new_table_nolog_test ..., 

•      UPDATE new_table_nolog_test SET …, 

•      DELETE FROM new_table_nolog_test ..

The following will not generate redo (except for dictionary changes and index maintenance):


•      INSERT /*+ APPEND */ …

•      ALTER TABLE new_table_nolog_test MOVE …

•      ALTER TABLE new_table_nolog_test MOVE PARTITION …


Consider the following example:



SQL> select name,value from v$sysstat where name like '%redo size%';

NAME                                                         VALUE 
----------------------------------------------------------- ---------- 
redo size                                                    27556720



SQL> insert into scott.redo1 select * from scott.redotesttab;
50000 rows created.
SQL> select name,value from v$sysstat where name like '%redo size%';

NAME                                                          VALUE 
----------------------------------------------------------- ---------- 
redo size                                                     28536820

SQL> insert /*+ APPEND */ into scott.redo1 select * from scott.redotesttab;
SQL> select name,value from v$sysstat where name like '%redo size%';
NAME                                                        VALUE 
----------------------------------------------------------- ---------- 
redo size                                                    28539944


To activate the NOLOGGING for one of the ALTER commands above add the NOLOGGING clause after the end of the ALTER command. 

For example:

•      ALTER TABLE new_table_nolog_test MOVE PARTITION parti_001 TABLESPACE new_ts_001 NOLOGGING;

The same applies to CREATE INDEX, but for CREATE TABLE the NOLOGGING clause should come right after the table name.

•      CREATE TABLE new_table_nolog_test NOLOGGING AS SELECT * FROM big_table;

"It is a common mistake to add the NOLOGGING option at the end of the SQL


(Because oracle will consider it an alias and the table will generate a lot of logging)."
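
A small illustration of that mistake, using the table names from the examples above (drop the table in between if you test both forms):

-- Wrong: NOLOGGING at the end is parsed as a table alias for BIG_TABLE,
-- so the new table is created with the LOGGING attribute of its tablespace:
CREATE TABLE new_table_nolog_test AS SELECT * FROM big_table NOLOGGING;

-- Right: the NOLOGGING clause goes right after the table name:
CREATE TABLE new_table_nolog_test NOLOGGING AS SELECT * FROM big_table;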


To use Direct Path Load in SQL*Loader you must run the $ORACLE_HOME/rdbms/admin/catldr.sql script before your first sqlldr run in direct path mode. To run sqlldr in direct path mode, use direct=true.

Note: Even though direct path load reduces the generation of redo, it is not totally eliminated, because those inserts still generate undo, which in turn generates redo.







REDUCING REDO GENERATION TIPS

While Backing Up

As mentioned in the redo generation and recoverability section, user-managed backups can generate a lot of redo. The best way to eliminate this problem is to use RMAN. RMAN does not need to write entire blocks to redo because it knows when a block is being copied. If you need to use user-managed backups, the following steps reduce redo generation:

•     Do not back up all the tablespaces in one go. This puts every tablespace in backup mode for longer than it needs to be, and therefore generates redo for longer than it needs to.
•     Automate the backup of the busy tablespaces so they spend as little time as possible in backup mode.
•     Back up a tablespace at a time when it is least busy in terms of DML.


Bulk Inserts

By bulk we mean a large percentage compared to the existing data.

To reduce the amount of redo generated by a bulk data load, disable the indexes before the load (when making a direct load into a table that has indexes, the index maintenance still produces redo) and then rebuild them afterwards, as follows (a SQL sketch with example object names follows the steps):

o Alter index index_name unusable ;   # do this for every index
o Alter session set skip_unusable_indexes=true ; (*)
o Insert into table_name select …
o Alter index index_name rebuild;

(*) skip_unusable_indexes is an instance initialization parameter in 10g and defaults to true.
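
A sketch of the pattern above, using hypothetical objects SALES_STG (target table), SALES_STG_IX (its index) and SALES_SRC (source of the bulk data):

ALTER INDEX sales_stg_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;   -- already the default from 10g onwards
INSERT INTO sales_stg SELECT * FROM sales_src;
COMMIT;
ALTER INDEX sales_stg_ix REBUILD;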

Bulk Delete

1.      Create new_table with LOGGING.
2.      Insert into new_table the records you want to keep from current_table.
3.      Create the indexes on new_table. (*)
4.      Create constraints, grants, etc.
5.      Drop current_table.
6.      Rename new_table to current_table (see the SQL sketch after this list).

(*) If the data left is very small, or there are a lot of dependencies on the table (views, procedures, functions, etc.), the following steps can be used instead of steps 3-6 above:

3.      Disable constraints on current_table.
4.      Truncate current_table.
5.      Insert into current_table select * from new_table.
6.      Commit.
7.      Enable constraints.
8.      Drop table new_table.
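
A sketch of the create-and-swap steps (1-6) above, using hypothetical tables ORDERS and ORDERS_KEEP and a hypothetical retention predicate:

CREATE TABLE orders_keep AS
  SELECT * FROM orders WHERE order_date >= DATE '2024-01-01';   -- the rows you want to keep
-- recreate indexes, constraints and grants on ORDERS_KEEP here
DROP TABLE orders;
RENAME orders_keep TO orders;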

Bulk Update
  
Use this method if indexes are going to be affected by the update, because mass updating indexes is more expensive than rebuilding them. If only a small portion of the data is updated, use this method:
1.      Disable constraints.
2.      Alter index index_name unusable ;
3.      Alter session set skip_unusable_indexes=true ;
4.      Update the table.
5.      Commit;
6.      Alter index index_name rebuild ;
7.      Enable constraints.


If the update causes a good portion of the data to be updated then follow this method:

1.      Create new_table as select (updating statement)
2.      create indexes on the new_table,
3.      create grants, constraints etc on the new_table
4.      Drop current table
5.      Rename the new_table to current_table.




TIPS USING NOLOGGING MODE


DIRECT PATH INSERT

To use Direct Path Insert, use the /*+ APPEND */ hint as follows:

•      INSERT /*+ APPEND */ into … SELECT …


When direct path insert is used, Oracle does the following:

•      Formats the data to be inserted as Oracle blocks.

•      Inserts the blocks above the High Water Mark (HWM).

•      When the commit takes place, the HWM is moved to the new position (the whole process bypasses the buffer cache).


It is clear that direct load is useful for bulk inserts. Using it to insert a few hundred records at a time can have a bad effect on space and performance.

It is very important to understand how Direct Path Insert affects redo generation. As mentioned above, it does not affect indexes, but it is affected by the following factors:

•      The database ARCHIVELOG mode.
•      Using the /*+ APPEND */ hint.
•      The LOGGING mode of the table.
•      The FORCE LOGGING mode of the database (from 9i R2).

If the database is in FORCE LOGGING mode then Oracle will treat the table as if it were in LOGGING mode regardless of its attribute. To find out whether the database is in FORCE LOGGING mode, run:
•      select FORCE_LOGGING from v$database ;

If the /*+ APPEND */ Hint is not used then the insertion will generate the normal amount of redo regardless of the other factors.

The following table shows the relation between the ARCHIVELOG mode and the table LOGGING mode when the /*+ APPEND */ hint is used. This does not include redo generated for indexes and data dictionary changes. (A query to measure session-level redo follows the table.)

LOGGING MODE   ARCHIVELOG   NOARCHIVELOG
LOGGING        Redo         No Redo
NOLOGGING      No Redo      No Redo
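
To verify this behaviour on your own system, session-level redo can be measured before and after the insert; a minimal sketch assuming the standard V$MYSTAT and V$STATNAME views:

-- Run once before and once after the INSERT /*+ APPEND */;
-- the difference between the two values is the redo generated by this session.
SELECT n.name, s.value
  FROM v$mystat s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';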


For Bulk DML

Bulk Inserts

To load bulk data using Direct Path:

o Set the table in NOLOGGING mode: Alter table table_name nologging;
o Alter index index_name unusable ;
o Alter session set skip_unusable_indexes=true ; (*)
o Insert /*+ APPEND */ into table_name select …
o Alter index index_name rebuild nologging;
o Alter table table_name logging ;
o Alter index index_name logging ;
o Backup the data.

(*) skip_unusable_indexes is an instance initialization parameter in 10g and defaults to true.

There is no direct way (at the time of writing this document) of reducing redo generation for bulk update and delete. The user needs to reduce the workload on the database. 

Bulk Delete

1.      Create new_table with NOLOGGING: CREATE TABLE new_table NOLOGGING (….); (the NOLOGGING clause comes after the table name, not at the end of the statement).
2.      Insert /*+ APPEND */ into new_table the records you want to keep from current_table.
3.      Create the indexes on new_table with NOLOGGING. (*)
4.      Create constraints, grants, etc.
5.      Drop current_table.
6.      Rename new_table to current_table.
7.      Alter the new table and its indexes to LOGGING.
8.      Back up the data.

(*) If the data left is very small, or there are a lot of dependencies on the table (views, procedures, functions), the following steps can be used instead of steps 3-6 above:

3.      Disable constraints on current_table.
4.      Truncate current_table.
5.      Make the indexes on current_table unusable.
6.      Alter current_table NOLOGGING.
7.      Insert /*+ APPEND */ into current_table select * from new_table.
8.      Commit.
9.      Rebuild the indexes with NOLOGGING.
10.     Enable constraints.
11.     Put current_table and its indexes back in LOGGING mode.
12.     Back up the data.
13.     Drop table new_table.


Bulk Update
Follow the steps for Bulk Delete above, but integrate the update into the SELECT statement. Let's say that you want to update the value column in the goods table by increasing it by 10%; the steps would be:

1.      Create new_table with NOLOGGING: CREATE TABLE new_table NOLOGGING (….); (the NOLOGGING clause comes after the table name, not at the end of the statement).
2.      Insert /*+ APPEND */ into new_table select the updated values (e.g. col1, col2 * 1.1, …) from current_table.
3.      Create the indexes on new_table with NOLOGGING. (*)
4.      Create constraints, grants, etc.
5.      Drop current_table.
6.      Rename new_table to current_table.
7.      Alter the new table and its indexes to LOGGING.

Redo Logging I/O-Related Wait Events
There are a number of Wait Events that happen during Redo Logging activities and most of them are I/O-related.
The two most important ones are 'log file parallel write' and 'log file sync'. Oracle foreground processes wait for 'log file sync' whereas the LGWR process waits for 'log file parallel write'.
Although we usually find 'log file sync' in the "Top 5 Wait/Timed Events" section of the Statspack report, in order to understand it we will first look at 'log file parallel write':


'log file parallel write'  (See Metalink Note: 34583.1)                  
-------------------------------------------------------------------

The LGWR background process waits for this event while it is copying redo records from the memory Log Buffer cache to the current redo group's member log files on disk.
Asynchronous I/O will be used if available to make the write parallel, otherwise these writes will be done sequentially one member after the other. However, LGWR has to wait until the I/Os to all member log files are complete before the wait is completed. Hence, the factor that determines the length of this wait is the speed with which the I/O subsystem can perform the writes to the log file members.
To reduce the time waited for this event, one approach is to reduce the amount of redo generated by the database:

•      Make use of UNRECOVERABLE/NOLOGGING options.
•      Reduce the number of redo group members to the minimum necessary to ensure that not all members can be lost at the same time.
•      Do not leave tablespaces in BACKUP mode for longer than necessary.
•      Only use the minimal level of Supplemental Logging required to achieve the required functionality, e.g. in LogMiner, Logical Standby or Streams.

Another approach is to tune the I/O itself:

•      Place redo group members on storage locations so that parallel writes do not contend with each other.
•      Do not use RAID-5 for redo log files.
•      Use raw devices for redo log files.
•      Use faster disks for redo log files.
•      If archiving is being used, set up redo storage so that writes for the current redo group members do not contend with reads for the group(s) currently being archived.


'log file sync'   (See Metalink Note: 34592.1)
--------------------------------------------------------
This Wait Event occurs in Oracle foreground processes when they have issued a COMMIT or ROLLBACK operation and are waiting for it to complete.
Part (but not all) of this wait includes waiting for LGWR to copy the redo records for the session's transaction from Log Buffer memory to disk.
So, in the time that a foreground process is waiting for 'log file sync', LGWR will also wait for a portion of this time on 'log file parallel write'.
The key to understanding what is delaying 'log file sync' is to compare the average times waited for 'log file sync' and 'log file parallel write' (a query to do this follows the two points below):

•      If they are almost similar, then redo logfile I/O is causing the delay and the guidelines for tuning it should be followed.
•      If 'log file parallel write' is significantly different, i.e. smaller, then the delay is caused by the other parts of the Redo Logging mechanism that occur during a COMMIT/ROLLBACK (and are not I/O-related). Sometimes there will be latch contention on redo latches, evidenced by 'latch free' or 'LGWR wait for redo copy' wait events.
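
A quick way to make that comparison directly from the database; a minimal sketch assuming the standard V$SYSTEM_EVENT view:

SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_wait_ms
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write');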

Metalink Notes:
28814.1, 49739.1, 69739.1, 94402.1, 115079.1, 116522.1, 118300.1, 147474.1, 150694.1, 160092.1,
167492.1, 174951.1, 188691.1, 199298.1, 211125.1, 212119.1, 223117.1, 234096.1, 268395.1, 268476.1,
269274.1, 272503.1, 285285.1, 290161.1, 334878.1, 373804.1, 403747.1, 427456.1, 427805.1, 432763.1,
435998.1, 4047608.8






Dynamic sampling (DS)

Dynamic sampling (DS) was introduced to improve the optimizer's ability to generate good execution plans. This feature was enhanced and renamed Dynamic Statistics in Oracle Database 12c. The most common misconception is that DS can be used as a substitute for optimizer statistics, whereas the goal of DS is to augment optimizer statistics; it is used when regular statistics are not sufficient to get good quality cardinality estimates.

So how and when will dynamic statistics be used? During the compilation of a SQL statement, the optimizer decides whether to use DS or not by considering whether the available statistics are sufficient to generate a good execution plan. If the available statistics are not enough, DS will be used. It is typically used to compensate for missing or insufficient statistics that would otherwise lead to a very bad plan. For the case where one or more of the tables in the query does not have statistics, DS is used by the optimizer to gather basic statistics on these tables before optimizing the statement. The statistics gathered in this case are not as high a quality or as complete as the statistics gathered using the DBMS_STATS package. This trade off is made to limit the impact on the compile time of the statement.

DS may be used if a SQL statement contains a complex predicate expression and extended statistics are not available. Extended statistics were introduced in Oracle Database 11g Release 1 with the goal of helping the optimizer get good quality cardinality estimates for complex predicate expressions. For example, if you had a simple query that has where clause predicates on two correlated columns, standard statistics would not be sufficient. Consider the following SQL statement and its execution plan :

EXPLAIN PLAN FOR
SELECT *
FROM   CUSTOMERS
WHERE  cust_city           = 'Los Angeles'
AND    cust_state_province = 'CA';

SELECT * FROM table(dbms_xplan.display(format=>'basic rows'));

-----------------------------------------------
| Id  | Operation         | Name      | Rows  |
-----------------------------------------------
|   0 | SELECT STATEMENT  |           |     8 |
|   1 |  TABLE ACCESS FULL| CUSTOMERS |     8 |
-----------------------------------------------
With standard statistics the optimizer estimates the cardinality as 8 rows, while the actual number of rows returned by this query is 916. Without extended statistics, the optimizer does not know that there is a perfect correlation between the city 'Los Angeles' and the state 'CA'. By setting optimizer_dynamic_sampling to level 4, the optimizer will use dynamic sampling to gather additional information about this predicate expression. The additional information provided by dynamic sampling allows the optimizer to generate a more accurate cardinality estimate.

ALTER SESSION SET optimizer_dynamic_sampling=4;

EXPLAIN PLAN FOR
SELECT *
FROM   customers
WHERE  cust_city           = 'Los Angeles'
AND    cust_state_province = 'CA';

SELECT * FROM table(dbms_xplan.display(format=>'basic rows'));

-----------------------------------------------
| Id  | Operation         | Name      | Rows  |
-----------------------------------------------
|   0 | SELECT STATEMENT  |           |   916 |
|   1 |  TABLE ACCESS FULL| CUSTOMERS |   916 |
-----------------------------------------------
In this case the improved cardinality estimate has not affected our SQL execution plan, but for more complicated queries a better cardinality estimate will often result in a better SQL execution plan, which will in turn result in a faster query execution time.

You may now be wondering why we had to set the parameter optimizer_dynamic_sampling to 4. The dynamic statistics feature is controlled by the parameter optimizer_dynamic_sampling, which can be set to different levels (0-11). These levels control two things: when dynamic sampling kicks in, and how large a sample size will be used to gather the statistics. The greater the sample size, the bigger the impact DS has on the compilation time of a query.


Level, when dynamic statistics will be used, and the sample size in blocks:

Level 0  - Switches off dynamic statistics. Sample size: N/A
Level 1  - At least one non-partitioned table in the statement has no statistics. Sample size: 32
Level 2 (default) - One or more tables in the statement have no statistics. This setting is suitable for the majority of systems. Sample size: 64
Level 3  - Any statement that meets level 2 criteria, plus any statement that has one or more expressions used in the where clause predicates, e.g. WHERE substr(CUSTLASTNAME,1,3) or WHERE a + b = 5. Sample size: 64
Level 4  - Any statement that meets level 3 criteria, plus any statement that has complex predicates (an OR or AND operator between multiple predicates on the same table). Sample size: 64
Level 5  - Any statement that meets level 4 criteria. Sample size: 128
Level 6  - Any statement that meets level 4 criteria. Sample size: 256
Level 7  - Any statement that meets level 4 criteria. Sample size: 512
Level 8  - Any statement that meets level 4 criteria. Sample size: 1024
Level 9  - Any statement that meets level 4 criteria. Sample size: 4086
Level 10 - All statements. Sample size: all blocks
Level 11 - The database determines automatically if dynamic statistics are required. Sample size: automatically determined

For more information on dynamic sampling levels, refer to Chapter 12 of the SQL Tuning Guide for Oracle Database 12c Release 2.

In our example we had an AND operator between two simple predicates on the CUSTOMERS table. From the above table you can now see why I chose to set optimizer_dynamic_sampling to level 4 in order to have dynamic sampling kick in for our example.

From Oracle Database 11g Release 2 onwards, the optimizer automatically decides whether dynamic sampling will be useful and what dynamic sampling level will be used for SQL statements executed in parallel. This decision is based on the size of the tables in the statement and the complexity of the predicates. You can tell whether dynamic sampling kicked in by looking at the Note section of the execution plan.



For serial SQL statements the dynamic sampling level is controlled by the optimizer_dynamic_sampling parameter but note that from Oracle Database 12c Release 1 the existence of SQL plan directives can also initiate dynamic statistics gathering when a query is compiled. This is a feature of adaptive statistics and is controlled by the database parameter optimizer_adaptive_features (OAF) in Oracle Database 12c Release 1 and  optimizer_adaptive_statistics (OAS) in Oracle Database 12c Release 2.  In other words, from Oracle Database 12c Release 1 onwards, DS will be used if certain adaptive features are enabled by setting the relevant parameter to TRUE. To summarize:
Database Version                          Parameter Controlling the Relevant Adaptive Features    Default Value
Oracle Database 12c Release 1             optimizer_adaptive_features (OAF)                        TRUE
Oracle Database 12c Release 2 onwards     optimizer_adaptive_statistics (OAS)                      FALSE


Serial statements are typically short running and any DS overhead at compile time can have a large impact on overall system performance (if statements are frequently hard parsed). For systems that match this profile, setting OAF=FALSE is recommended. For Oracle Database 12c Release 2 onwards, using the default OAS=FALSE is recommended.

Parallel statements are generally more resource intensive, so it's often worth investing in additional overhead at compile time to potentially find a better SQL execution plan.

In our original example the SQL statement is serial, which is why we needed to manually set the value of the optimizer_dynamic_sampling parameter (and we'll assume that there were no relevant SQL plan directives). If we were to issue a similar style of query against a larger table that had the PARALLEL attribute set, we would see dynamic sampling kicking in.
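
For example, a sketch against a hypothetical CUSTOMERS_BIG table that has the PARALLEL attribute set (the Note section of the plan output reports whether dynamic sampling was used):

EXPLAIN PLAN FOR
  SELECT *
    FROM customers_big
   WHERE cust_city           = 'Los Angeles'
     AND cust_state_province = 'CA';

SELECT * FROM table(dbms_xplan.display(format => 'basic note'));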

When should you use dynamic sampling? DS is typically recommended when you know you are getting a bad execution plan due to complex predicates. However, you should try to use an ALTER SESSION statement to set the value of the optimizer_dynamic_sampling parameter, as it can be extremely difficult to come up with a system-wide setting.

When is it not a good idea to use dynamic sampling? When query compile times need to be as fast as possible, for example for unrepeated OLTP queries where you cannot amortize the additional cost of compilation over many executions.