The question about NOLOGGING I hear all the time is: does creating a table with the NOLOGGING option mean there is no generation of redo ever, or only that the initial creation operation generates no redo, while DML down the road still generates redo? And how and when can the NOLOGGING option be employed?
Redo generation is a vital part of the Oracle recovery mechanism. Without it, an instance will not recover when it crashes and will not start in a consistent state. Excessive redo generation is the result of excessive work on the database.
This paper covers reducing redo generation using the LOGGING and NOLOGGING options: the differences between them, how redo is generated, how to reduce it, and when to use each option. You will also find examples and tips for each.
The benefits of the NOLOGGING option are:
• Space is saved in the redo log files.
• The time it takes to create the table is decreased.
• Performance improves for parallel creation of large tables.
"A very important rule with respect to data is to never put yourself into an unrecoverable situation. The importance of this guideline cannot be stressed enough, but it does not mean that you can never use time saving or performance enhancing options. "
Let's briefly review the redo process. When Oracle blocks are changed, including undo blocks, Oracle records the changes in the form of change vectors, which are referred to as redo entries or redo records. The changes are written by the server process to the redo log buffer in the SGA. The redo log buffer is then flushed to the online redo logs in near real time by the log writer (LGWR).
LGWR writes the redo log buffer to the online redo logs when:
• A user issues a commit.
• The log buffer is one third full.
• The amount of unwritten redo entries reaches 1 MB.
• Every three seconds.
• A database checkpoint takes place. The redo entries are written before the checkpoint to ensure recoverability.
"If the log buffer is too small, then log buffer space waits will be seen during bursts of redo generation. LGWR may not begin to write redo until the _log_io_size threshold (by default, 1/3 of the log buffer or 1M whichever is less) has been exceeded, and the remainder of the log buffer may be filled before LGWR can complete its writes and free some space in the log buffer.
Redo log files record changes to the database as a result of transactions and internal Oracle server actions. (A transaction is a logical unit of work, consisting of one or more SQL statements run by a user.) Redo log files protect the database from the loss of integrity because of system failures caused by power outages, disk failures, and so on. Redo log files must be multiplexed to ensure that the information stored in them is not lost in the event of a disk failure. The redo log consists of groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of that group, and each group is identified by a number. The Log Writer (LGWR) process writes redo records from the redo log buffer to all members of a redo log group until the file is filled or a log switch operation is requested. Then, it switches and writes to the files in the next group. Redo log groups are used in a circular fashion.
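A quick way to see the current group layout and the cumulative amount of redo the instance has generated is to query the dictionary. This is only a small sketch; the views V$LOG, V$LOGFILE and V$SYSSTAT are standard, and the output depends on your instance:

SELECT group#, members, bytes/1024/1024 AS size_mb, status
FROM   v$log;

SELECT group#, member
FROM   v$logfile
ORDER  BY group#;

-- total redo generated (in bytes) since instance startup
SELECT value AS total_redo_bytes
FROM   v$sysstat
WHERE  name = 'redo size';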
Important points about LOGGING and NOLOGGING
Despite the importance of redo entries, Oracle gives users the ability to limit redo generation on tables and indexes by setting them to NOLOGGING mode.
NOLOGGING affects recoverability. Before going into how to limit redo generation, it is important to clear up the misunderstanding that NOLOGGING is a way to stop redo generation entirely. Here are some points to keep in mind:
• NOLOGGING is designed for bulk inserts of data that can easily be reproduced.
• Regardless of the LOGGING status, writing to undo blocks always generates redo.
• LOGGING should not be disabled on a primary database if it has one or more standby databases. For this reason Oracle introduced the ALTER DATABASE FORCE LOGGING command in Oracle 9i R2; if the database is in FORCE LOGGING mode, the NOLOGGING attribute has no effect on the segments. NOLOGGING can also be overridden at the tablespace level using ALTER TABLESPACE … FORCE LOGGING (see the example after Table 1.1).
• Any change to the data dictionary causes redo generation, in order to protect the data dictionary. For example, if we allocate space above the HWM for a table and the system fails in the middle of an INSERT /*+ APPEND */, Oracle needs to roll back that data dictionary update. Redo is generated, but it protects the data dictionary, not your newly inserted data: Oracle undoes the space allocation if the load fails, whereas your data disappears.
• Data that is not logged cannot be recovered. It should be backed up after the NOLOGGING operation.
• Tables and indexes should be set back to LOGGING mode when NOLOGGING is no longer needed.
• NOLOGGING is not needed for direct path inserts if the database is in NOARCHIVELOG mode (see Table 1.1).
Table 1.1

Table Mode   Insert Mode   ArchiveLog Mode   Result
LOGGING      APPEND        ARCHIVE LOG       REDO GENERATED
NOLOGGING    APPEND        ARCHIVE LOG       NO REDO
LOGGING      NO APPEND     ARCHIVE LOG       REDO GENERATED
NOLOGGING    NO APPEND     ARCHIVE LOG       REDO GENERATED
LOGGING      APPEND        NO ARCHIVE LOG    NO REDO
NOLOGGING    APPEND        NO ARCHIVE LOG    NO REDO
LOGGING      NO APPEND     NO ARCHIVE LOG    REDO GENERATED
NOLOGGING    NO APPEND     NO ARCHIVE LOG    REDO GENERATED
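As referenced in the points above, here is a minimal sketch of enabling and checking FORCE LOGGING at the database and tablespace level; the USERS tablespace is only an example:

-- put the whole database in FORCE LOGGING mode (NOLOGGING requests are then ignored)
ALTER DATABASE FORCE LOGGING;

-- check the current setting
SELECT force_logging FROM v$database;

-- force logging for a single tablespace only
ALTER TABLESPACE users FORCE LOGGING;

-- revert when the protection is no longer required
ALTER DATABASE NO FORCE LOGGING;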
Data that cannot be reproduced should not be loaded with NOLOGGING. If irreplaceable data is loaded with NOLOGGING and the database crashes before that data is backed up, it cannot be recovered. NOLOGGING does not apply to ordinary UPDATE, DELETE, and INSERT statements; it takes effect only during certain operations, and subsequent DML will generate redo. Some of these operations are:
• direct load INSERT (using the APPEND hint),
• CREATE TABLE ... AS SELECT,
• CREATE INDEX.
If the LOGGING or NOLOGGING clause is not specified when creating a table, partition, or index, it defaults to the LOGGING attribute of the tablespace in which it resides.
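When the clause is omitted, a quick way to see which logging attribute an object or tablespace currently has is to query the dictionary. A small sketch; the SCOTT schema and USERS tablespace are only examples:

SELECT tablespace_name, logging FROM dba_tablespaces WHERE tablespace_name = 'USERS';

SELECT table_name, logging FROM dba_tables  WHERE owner = 'SCOTT';

SELECT index_name, logging FROM dba_indexes WHERE owner = 'SCOTT';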
Table redefinition cannot be done in NOLOGGING mode.
Temp files are always set to NOLOGGING mode.
The FORCE LOGGING mode is a persistent attribute of the database. That is, if the database is shut down and restarted, it remains in the same logging mode state. FORCE LOGGING must be configured again after recreating the control file.
If the database has a physical standby database, NOLOGGING operations render the corresponding data blocks on the standby logically corrupt because the redo needed to rebuild them is missing. If the standby database ever switches to the primary role, errors will occur when trying to access objects that were previously written with the NOLOGGING option; you will see an error like this:
ORA-01578: ORACLE data block corrupted (file # 3, block # 2527)
ORA-01110: data file 1: '/u1/oracle/dbs/stdby/tbs_nologging_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
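A small sketch of how to identify the datafiles touched by NOLOGGING operations after the fact: V$DATAFILE records the SCN and time of the most recent unrecoverable operation against each file, which tells you which files need a fresh backup or need to be refreshed on the standby:

SELECT file#, unrecoverable_change#, unrecoverable_time
FROM   v$datafile
WHERE  unrecoverable_change# > 0;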
Why Oracle generates redo and undo for DML
When you issue an insert, update or delete, Oracle actually makes the change to the data blocks that contain the affected data even though you have not issued a commit. To ensure database integrity, Oracle must write information necessary to reverse the change (UNDO) into the log to handle transaction failure or rollback. Recovery from media failure is ensured by writing information necessary to re-play database changes (REDO) into the log. So, UNDO and REDO information logically MUST be written into the transaction log of the RDBMS (see below regarding temporary tables).
While the RDBMS logically would only need to write UNDO and REDO into the transaction log, the UNDO portion must also be kept online (on disk and accessible to the RDBMS engine) to enable rollback of failed transactions. If UNDO data was only stored in the transaction log, the log could get archived and the RDBMS would have to try and read it from tape. On some platforms, the tape could be sitting in the DBA's desk drawer, so there are practical problems with this solution. Every RDBMS must meet the basic requirement of online access to undo data, and Oracle does this by storing UNDO data in what we call Rollback Segments (rollback = undo).
Because Oracle places UNDO data into a rollback segment and also must (logically) place this data into the transaction log, it is simpler to just treat rollback tablespaces like any other tablespace from a log generation perspective. That is, Oracle generates REDO for a Rollback Segment, which is logically the same as UNDO for a data block (i.e. your table, index, etc.).
Oracle's transaction log is really called the REDO log because it only contains redo records. There logically MUST be UNDO records stored in the log, but they are stored in the form of redo for rollback segments.
For temporary tables, Oracle will need to do things like facilitate rollback, but it is not necessary to bring back temporary tables following media failure.
The undo data is also needed for things like rollback to save point and read consistency, not just to reclaim space that was used by that temporary table.
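A small sketch that illustrates the point: even a rolled-back transaction increases the session's redo, because the undo it produces is itself protected by redo. The SCOTT.EMP table is only an example; the query joins V$MYSTAT and V$STATNAME to read the session's 'redo size' statistic:

SELECT s.value AS redo_size
FROM   v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';

DELETE FROM scott.emp;
ROLLBACK;

-- run the same query again: 'redo size' has increased
-- even though the delete was rolled back
SELECT s.value AS redo_size
FROM   v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';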
Flashback and NOLOGGING
When using Flashback Database with a target time at which a NOLOGGING operation was in progress, block corruption is likely in the database objects and data files affected by the NOLOGGING operation. For example, if you perform a direct-path INSERT operation in NOLOGGING mode, and that operation runs from 8:00 to 8:15 on April 7, 2008, and you later need to use Flashback Database to return to the target time 08:07 on that date, the objects and data files updated by the direct-path INSERT may be left with block corruption after the Flashback Database operation completes.
If possible, avoid using Flashback Database with a target time or SCN that coincides with a NOLOGGING operation. Also, perform a full or incremental backup of the affected data files immediately after any NOLOGGING operation to ensure recoverability to points in time after the operation. If you expect to use Flashback Database to return to a point in time during an operation such as a direct-path INSERT, consider performing the operation in LOGGING mode.
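For example, a minimal RMAN sketch of that post-NOLOGGING backup; the tablespace name and datafile number are only examples:

RMAN> BACKUP TABLESPACE users;

RMAN> BACKUP INCREMENTAL LEVEL 1 DATAFILE 4;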
Performance and Recovery considerations
The NOLOGGING mode improves performance because it generates much less redo in the redo log files, which reduces the time spent on redo generation (latch acquisition, redo log writing, and so on). The user is responsible for backing up the data after a NOLOGGING insert operation in order to be able to perform media recovery.
Be aware that this feature disables the recovery mechanisms for the transaction: in case of a database or instance failure, the process has to be repeated from the very beginning.
Disabling Redo Generation (NOLOGGING)
The NOLOGGING attribute tells Oracle that the operation being performed does not need to be recoverable in the event of a failure. In this case Oracle generates only the minimal number of redo entries needed to protect the data dictionary, and the operation will probably run faster. Oracle relies on the user to recover the data manually in the event of a media failure.
Logging can be disabled at the table level or the tablespace level. If it is done at the tablespace level, every newly created index or table in that tablespace will be in NOLOGGING mode (you can still have LOGGING tables inside a NOLOGGING tablespace). A table or an index can be created with NOLOGGING, or it can be altered afterwards using ALTER TABLE/INDEX … NOLOGGING. It is important to note that just because an index or a table was created with NOLOGGING does not mean that redo generation has been stopped for that object: NOLOGGING is in effect only while one of the following operations is running, not afterwards.
This is a partial list:
• DIRECT LOAD (SQL*Loader)
• DIRECT LOAD INSERT (using APPEND hint)
• CREATE TABLE ... AS SELECT
• CREATE INDEX
• ALTER TABLE MOVE
• ALTER TABLE ... MOVE PARTITION
• ALTER TABLE ... SPLIT PARTITION
• ALTER TABLE … ADD PARTITION (if HASH partition)
• ALTER TABLE … MERGE PARTITION
• ALTER TABLE … MODIFY PARTITION
  o ADD SUBPARTITION
  o COALESCE SUBPARTITION
  o REBUILD UNUSABLE INDEXES
• ALTER INDEX ... SPLIT PARTITION
• ALTER INDEX ... REBUILD
• ALTER INDEX ... REBUILD PARTITION
Logging is stopped only while one of the commands above is running, so if a user runs this:
• ALTER INDEX new_index REBUILD NOLOGGING;
The actual rebuild of the index does not generate redo (the data dictionary changes associated with the rebuild still do), but after that any DML on the index will generate redo, including direct load inserts into the table the index belongs to.
Here is another example to make this point clearer:
• CREATE TABLE new_table_nolog_test (…) NOLOGGING;
All of the following statements will generate redo despite the fact that the table is in NOLOGGING mode:
• INSERT INTO new_table_nolog_test ...,
• UPDATE new_table_nolog_test SET …,
• DELETE FROM new_table_nolog_test ..
The following will not generate redo (apart from data dictionary changes and index maintenance):
• INSERT /*+ APPEND */ …
• ALTER TABLE new_table_nolog_test MOVE …
• ALTER TABLE new_table_nolog_test MOVE PARTITION …
Consider the following example:
SQL> select name,value from v$sysstat where name like '%redo size%';
NAME VALUE
----------------------------------------------------------- ----------
redo size 27556720
SQL> insert into scott.redo1 select * from scott.redotesttab;
50000 rows created.
SQL> select name,value from v$sysstat where name like '%redo size%';
NAME VALUE
----------------------------------------------------------- ----------
redo size 28536820
SQL> insert /*+ APPEND */ into scott.redo1 select * from scott.redotesttab;
SQL> select name,value from v$sysstat where name like '%redo size%';
NAME VALUE
----------------------------------------------------------- ----------
redo size 28539944
To activate NOLOGGING for one of the ALTER commands above, add the NOLOGGING clause at the end of the ALTER command.
For example:
• ALTER TABLE new_table_nolog_test MOVE PARTITION parti_001 TABLESPACE new_ts_001 NOLOGGING;
The same applies to CREATE INDEX, but for CREATE TABLE … AS SELECT the NOLOGGING keyword should come directly after the table name:
• CREATE TABLE new_table_nolog_test NOLOGGING AS SELECT * FROM big_table;
"It is a common mistake to add the NOLOGGING option at the end of the SQL
(Because oracle will consider it an alias and the table will generate a lot of logging)."
To use direct path load in SQL*Loader you must run the $ORACLE_HOME/rdbms/admin/catldr.sql script before sqlldr is run in direct path mode for the first time. To run sqlldr in direct path mode, specify direct=true.
Note: Even though direct path load reduces the generation of redo, it is not totally eliminated. That's because those inserts still generate undo which in turn generates redo.
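As an illustration, a minimal direct path load sketch; the control file emp.ctl, the data file emp.dat and the SCOTT.EMP column list are hypothetical:

-- emp.ctl
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE scott.emp
FIELDS TERMINATED BY ','
(empno, ename, sal)

and then, from the operating system prompt:

sqlldr scott/tiger control=emp.ctl log=emp.log direct=true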
REDUCING REDO GENERATION TIPS
While Backing Up
As mentioned in the redo generation and recoverability section, user-managed backups can generate a lot of redo. The best way to eliminate this problem is to use RMAN. RMAN does not need to write entire blocks to the redo log because it knows when a block is being copied. If you need to use user-managed backups, the following steps reduce redo generation (a sketch follows this list):
• Do not back up all the tablespaces in one go. Doing so keeps every tablespace in backup mode longer than necessary and therefore generates redo for longer than it needs to.
• Automate the backup of the busy tablespaces.
• Back up a tablespace at a time when it is least busy in terms of DML.
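A minimal sketch of a user-managed hot backup of a single tablespace, keeping the time in backup mode as short as possible; the USERS tablespace and file paths are only examples:

ALTER TABLESPACE users BEGIN BACKUP;
-- copy only this tablespace's datafiles at the OS level, e.g.
-- cp /u01/oradata/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;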
Bulk Inserts
By bulk we mean a large percentage compared to the existing data.
To reduce the amount of redo generated by a bulk data load, disable the table's indexes before the load (a direct load into a table that has indexes still produces redo for index maintenance), then rebuild them afterwards, as follows:
o Alter index index_name unusable ; (do this for every index)
o Alter session set skip_unusable_indexes=true ; (*)
o Insert into table_name select …
o Alter index index_name rebuild ;
(*) skip_unusable_indexes is an instance initialization parameter in 10g and defaults to TRUE.
Bulk Delete
1. Create new_table, with LOGGING.
2. Insert into new_table, selecting only the records you want to keep from current_table.
3. Create the indexes on new_table. (*)
4. Create constraints, grants, etc.
5. Drop current_table.
6. Rename new_table to current_table.
(*) If the data left is very small, or there are a lot of dependencies on the table (views, procedures, functions, etc.), the following steps can be used instead of steps 3-6 above:
3. Disable constraints on current_table.
4. Truncate current_table.
5. Insert into current_table select * from new_table.
6. Commit.
7. Enable constraints.
8. Drop table new_table.
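A hedged sketch of the same approach with hypothetical ORDERS data, keeping only recent rows (the table name, column and date are examples only):

CREATE TABLE orders_keep AS
  SELECT * FROM orders WHERE order_date >= DATE '2008-01-01';

-- recreate indexes, constraints and grants on orders_keep here

DROP TABLE orders;
RENAME orders_keep TO orders;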
Bulk Update
Use the first method below if indexes are going to be affected by the update, because mass updating indexes is more expensive than rebuilding them. If only a small portion of the data is updated, use this method:
1. Disable constraints.
2. Alter index index_name unusable ;
3. Alter session set skip_unusable_indexes=true ;
4. update the table.
5. commit;
6. Alter index index_name rebuild ;
7. Enable constraints.
If the update affects a large portion of the data, follow this method instead:
1. Create new_table as select (with the update applied in the SELECT list).
2. Create the indexes on new_table.
3. Create grants, constraints, etc. on new_table.
4. Drop current_table.
5. Rename new_table to current_table.
TIPS USING NOLOGGING MODE
DIRECT PATH INSERT
To use direct path insert, use the /*+ APPEND */ hint as follows:
• INSERT /*+ APPEND */ into … SELECT …
When a direct path insert is used, Oracle does the following:
• Formats the data to be inserted as Oracle blocks.
• Inserts the blocks above the High Water Mark (HWM), bypassing the buffer cache.
• Moves the HWM to its new position when the commit takes place.
It is clear that direct load is useful for bulk inserts; using it to insert a few hundred records at a time can have a bad effect on space and performance.
It is very important to understand how direct path inserts affect redo generation. As mentioned above, they do not reduce redo for index maintenance, and the redo they do generate depends on the following factors:
• The database ARCHIVELOG mode.
• Using the /*+ APPEND */ hint.
• The LOGGING mode of the table.
• The FORCE LOGGING mode of the database (from 9i R2).
If the database is in FORCE LOGGING mode, Oracle treats the table as if it were in LOGGING mode regardless of its own attribute. To find out whether the database is in FORCE LOGGING mode, run:
• select FORCE_LOGGING from v$database ;
If the /*+ APPEND */ Hint is not used then the insertion will generate the normal amount of redo regardless of the other factors.
The following table shows the relationship between the ARCHIVELOG mode and the table's LOGGING mode when the /*+ APPEND */ hint is used. It does not include redo generated for indexes and data dictionary changes.
LOGGING MODE   ARCHIVELOG   NOARCHIVELOG
LOGGING        Redo         No Redo
NOLOGGING      No Redo      No Redo
For Bulk DML

Bulk Inserts
To load bulk data using direct path (a concrete sketch follows the note below):
o Set the table in NOLOGGING mode: Alter table table_name nologging ;
o Alter index index_name unusable ;
o Alter session set skip_unusable_indexes=true ; (*)
o Insert /*+ APPEND */ into table_name select …
o Alter index index_name rebuild nologging ;
o Alter table table_name logging ;
o Alter index index_name logging ;
o Back up the data.
(*) skip_unusable_indexes is an instance initialization parameter in 10g and defaults to TRUE.
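A hedged, concrete version of the recipe above, using a hypothetical SALES table with one index SALES_IDX, loaded from a staging table SALES_STAGE:

ALTER TABLE sales NOLOGGING;
ALTER INDEX sales_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

INSERT /*+ APPEND */ INTO sales SELECT * FROM sales_stage;
COMMIT;

ALTER INDEX sales_idx REBUILD NOLOGGING;
ALTER TABLE sales LOGGING;
ALTER INDEX sales_idx LOGGING;
-- take a backup of the affected tablespace or datafiles now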
There is no direct way (at the time of writing this document) of reducing redo generation for bulk update and delete. The user needs to reduce the workload on the database.
Bulk Delete
1. Create new_table in NOLOGGING mode: CREATE TABLE new_table (….) NOLOGGING; (in a CREATE TABLE … AS SELECT, the NOLOGGING keyword goes right after the table name, never at the end of the statement).
2. Insert /*+ Append */ into new_table select the records you want to keep from current_table.
3. Create the indexes on the new table with NOLOGGING (*)
4. Create constraints, grants etc.
5. Drop current_table.
6. Rename new_table to current_table.
7. Alter new_table and indexes logging.
8. Backup the data.
(*) If the data left is very small, or there are a lot of dependencies on the table (views, procedures, functions), the following steps can be used instead of steps 3-6 above:
3. Disable constraints on current_table.
4. Truncate current_table.
5. Make its indexes unusable.
6. Alter current_table to NOLOGGING.
7. Insert /*+ APPEND */ into current_table select * from new_table.
8. Commit.
9. Rebuild the indexes with NOLOGGING.
10. Enable constraints.
11. Put current_table and its indexes back in LOGGING mode.
12. Back up the data.
13. Drop table new_table.
Bulk Update
Follow the steps for Bulk Delete, but integrate the update into the SELECT statement. Let's say you want to update the VALUE column in the GOODS table by increasing it by 10%; the statements will look like this:
1. Create new_table in NOLOGGING mode: CREATE TABLE new_table (….) NOLOGGING; (again, NOLOGGING goes after the table name, never at the end of a CREATE TABLE … AS SELECT statement).
2. Insert /*+ APPEND */ into new_table select (with the update applied in the select list, e.g. col1, col2 * 1.1, …).
3. Create the indexes on the new table with NOLOGGING (*)
4. Create constraints, grants etc.
5. Drop current_table.
6. Rename new_table to current_table.
7. Alter new_table and indexes logging.
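A hedged sketch of the GOODS example above; column names other than VALUE are hypothetical:

CREATE TABLE goods_new NOLOGGING AS
  SELECT goods_id, goods_name, value * 1.1 AS value
  FROM   goods;

-- create indexes (NOLOGGING), constraints and grants on goods_new,
-- then drop GOODS, rename GOODS_NEW to GOODS, set everything back
-- to LOGGING and back up the data, as in the steps above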
Redo Logging I/O-Related Wait Events
There are a number of Wait Events that happen during Redo Logging activities and most of them are I/O-related.
The two most important ones are 'log file parallel write' and 'log file sync'. Oracle foreground processes wait for 'log file sync' whereas the LGWR process waits for 'log file parallel write'.
Although we usually find 'log file sync' in the "Top 5 Wait/Timed Events" section of the Statspack report, in order to understand it we will first look at 'log file parallel write':
'log file parallel write' (See Metalink Note: 34583.1)
-------------------------------------------------------------------
The LGWR background process waits for this event while it is copying redo records from the memory Log Buffer cache to the current redo group's member log files on disk.
Asynchronous I/O will be used if available to make the write parallel, otherwise these writes will be done sequentially one member after the other. However, LGWR has to wait until the I/Os to all member log files are complete before the wait is completed. Hence, the factor that determines the length of this wait is the speed with which the I/O subsystem can perform the writes to the log file members.
To reduce the time waited for this event, one approach is to reduce the amount of redo generated by the database:
• Make use of UNRECOVERABLE/NOLOGGING options.
• Reduce the number of redo group members to the minimum necessary to ensure not all members can be lost at the same time.
• Do not leave tablespaces in BACKUP mode for longer than necessary.
• Only use the minimal level of Supplemental Logging required to achieve the required functionality e.g. in LogMiner, Logical Standby or Streams.
Another approach is to tune the I/O itself:
• Place redo group members on storage locations so that parallel writes do not contend with each other.
• Do not use RAID-5 for redo log files.
• Use Raw Devices for redo log files.
• Use faster disks for redo log files.
• If archiving is being used, set up redo storage so that writes to the current redo group members do not contend with reads from the group(s) currently being archived.
'log file sync' (See Metalink Note: 34592.1)
--------------------------------------------------------
This Wait Event occurs in Oracle foreground processes when they have issued a COMMIT or ROLLBACK operation and are waiting for it to complete.
Part (but not all) of this wait includes waiting for LGWR to copy the redo records for the session's transaction from Log Buffer memory to disk.
So, in the time that a foreground process is waiting for 'log file sync', LGWR will also wait for a portion of this time on 'log file parallel write'.
The key to understanding what is delaying 'log file sync' is to compare average times waited for 'log file sync' and 'log file parallel write':
• If they are almost similar, then redo logfile I/O is causing the delay and the guidelines for tuning it should be followed.
• If 'log file parallel write' is significantly smaller, then the delay is caused by the other parts of the redo logging mechanism that occur during a COMMIT/ROLLBACK (and are not I/O-related). Sometimes there will be latch contention on redo latches, evidenced by 'latch free' or 'LGWR wait for redo copy' wait events.
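A sketch of how the two averages can be compared directly from V$SYSTEM_EVENT (the same figures appear in Statspack and AWR reports):

SELECT event, total_waits, time_waited, average_wait
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');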
Metalink Notes:
285285.1, 373804.1, 272503.1, 212119.1, 199298.1, 188691.1, 94402.1, 160092.1,
268476.1, 427456.1, 223117.1, 427805.1, 115079.1, 432763.1, 116522.1, 49739.1,
147474.1, 269274.1, 4047608.8, 211125.1, 174951.1, 234096.1, 69739.1, 28814.1,
150694.1, 167492.1, 268395.1, 334878.1, 118300.1, 290161.1, 403747.1, 435998.1