Oracle GoldenGate is software for real-time data integration and replication across heterogeneous IT systems. Logdump is a standard utility that comes with the Oracle GoldenGate (OGG) application. Logdump enables the user to view and scan for transactions and records, obtain transaction statistics, and save data contained within OGG Trails. In this article we present examples of Oracle GoldenGate Log File Dump Utility (version 12.1.2) functionality that aid in troubleshooting data issues, determining database recovery points, and OGG group load balancing.
What are Trail Files?
The Extract process (a GoldenGate component) writes all operations of each committed transaction to a file called a Trail File.
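For context, a trail is created and associated with an Extract group from GGSCI before the Extract is started; the group name EXT1 and the trail prefix ./dirdat/aa below are illustrative values only:

GGSCI> ADD EXTRACT EXT1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/aa, EXTRACT EXT1, MEGABYTES 100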
Logdump Utility
Oracle GoldenGate Software includes the Logdump Utility for viewing data directly from the trail files.
The File Header contains a wealth of information about the physical environment that produced the Trail file and the Trail file contents. This information includes:

GroupID         | Tokens
Trail Info      | Signature, Compatibility, Character Set, Creation Time, URI, URI History, File Name, Multi-part, Trail Sequence Number, File Size, First CSN, Last CSN, First IO Time, Last IO Time, Log BSN
Machine Info    | Operating System Name, Node Name, Release, Version, Hardware
Database Info   | Vendor, Name, Character Set, Major Version, Minor Version, Version String, Client Character Set, Client Version String, Client NCharacter Set, Db Locale, Db NCharacter Set, Db Object Metadata, Db Time Zone
Producer Info   | Name, Data Source, Major Version, Minor Version, Maintenance Level, Bug Fix Level, Build Number, Version String
Continuity Info | Recovery Mode
Each source operation captured is encapsulated into a unique transaction record. This record consists of a GoldenGate Header (GHDR), a variable-length data area, and GGS and User Tokens.
GHDR
The GoldenGate
Header, or GHDR, area details information about the source operation and
transaction. This includes the source commit timestamp (IO Time token), record
length, database operation type (IO Type), whether this record is a before or
after data image, and source table name. The GHDR also contains information
that denotes each record as the beginning record of a transaction, the ending
record of a transaction, a record between the beginning and end of a
transaction, or a single record transaction.
Data
Area
The variable length
data area contains the atomic data for the source database operation. This
includes the column index number and data changed for an after image, or
original column data for a before image.
GGS
Tokens
GoldenGate Tokens
written for each transaction record include the source database row id, source
database change number (LOGCSN), and source database transaction id (TRANID).
User
Tokens
User Tokens are data defined by the user and written to the Trail via the TOKENS clause of the Extract TABLE parameter; they can later be retrieved with the @TOKEN column-conversion function.
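As a sketch of how such tokens may be defined and later read back (the table, column, and token names below are hypothetical), the Extract TABLE statement can carry a TOKENS clause populated via @GETENV, and a Replicat MAP statement can retrieve the values with @TOKEN:

-- Extract parameter file (illustrative)
TABLE HR.EMPLOYEES, TOKENS (TK_HOST = @GETENV ('GGENVIRONMENT', 'HOSTNAME'), TK_GROUP = @GETENV ('GGENVIRONMENT', 'GROUPNAME'));

-- Replicat parameter file (illustrative)
MAP HR.EMPLOYEES, TARGET HR.EMPLOYEES, COLMAP (USEDEFAULTS, SRC_HOST = @TOKEN ('TK_HOST'));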
Without Logdump it is not possible to read the content of Oracle GoldenGate trail files, as the trail files are in a binary format. With Logdump we can open a trail file, read its content, navigate through the file, view transactions at different RBAs (relative byte addresses, i.e., file positions), and identify the type of commands (DML or DDL) issued on the source, including delete, insert, update, alter, and create statements. This makes Logdump one of the most important utilities for troubleshooting GoldenGate issues.
To invoke the
utility, go to the GoldenGate home directory and type “logdump”, as shown in
the following example.
[oracle@dbserver1
ggs]$ ./logdump
LogDump Commands:
1. How to open Logdump: Navigate to the directory where the Oracle GoldenGate software is installed and execute Logdump.
[GoldenGate]$
$GG_HOME/logdump
2. Opening a Trail
File: To open a trail file and read its content, specify the trail file
at the logdump prompt. Trail files are usually found in the GoldenGate dirdat
directory.
$ ls -lrt $GG_HOME/dirdat
-rw-rw-rw- 1 oracle oinstall 78325 Dec 7 10:38 AA000001
-rw-rw-rw- 1 oracle oinstall 78325 Dec 7 10:42 AA000002
-rw-rw-rw- 1 oracle oinstall 78325 Dec 7 10:55 AA000003
You can also
determine the current trail file directory/name by running the “INFO
process_name” command at the ggsci prompt.
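For example, against a hypothetical Extract group named EXT1, the DETAIL option of INFO lists the trail the group writes to, along with the current trail sequence number and RBA:

GGSCI> INFO EXTRACT EXT1, DETAIL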
3. Open and view the details of a local trail file:
Logdump> OPEN
./dirdat/AA000001
Change the file name
and location as required.
4. Set Output Format: Enable the following options so that you are able to view the results in a readable format in your Logdump session; a combined example of these settings appears at the end of this section.
a) Set trail file header details on: The FILEHEADER contains the header details of the currently opened trail file. Its FirstCSN and LastCSN tokens identify the beginning and ending CSN (SCN for Oracle databases) of the transaction records in the trail, which may come in handy should you ever need to perform a database recovery or start an OGG group that reads the trail from a specific CSN. Note that the LastCSN token is only populated once the trail has been closed by its Extract, which happens only when the Extract opens and begins writing to a new trail; if the Extract is in the STOPPED or ABEND state, or you are viewing the most current trail, this token will contain no data.
Logdump> FILEHEADER DETAIL
b) Set the record header on: The GoldenGate header (GHDR) describes each record: whether it is a before or after image (BeforeAfter), its length in bytes (RecLength), the source commit timestamp (IO Time), whether it begins, ends, falls inside, or makes up an entire transaction (TransInd), the database operation type (IOType), and the source table name (Name). For example, a record may be shown as an after-image insert that starts a multi-record transaction (TransInd 0) against the table EAST.CATEGORIES. With only the GHDR displayed, the remaining data is the hexadecimal representation of the record and table column index information, from which we cannot determine much, so enable the additional options below.
Logdump> GHDR ON
c) Set Column Details on: This displays the data for each record column (column index, length, and hex/ASCII values), along with the Oracle row id of the source data, the source database CSN for the transaction, and the source database transaction id.
Logdump> DETAIL ON
d) User Token Details: User tokens are user-defined information stored in the trail, associated with the table mapping statements. The CSN (SCN in an Oracle database) associated with the transaction is available in this section.
Logdump> USERTOKEN
DETAIL
e) Set length of the
record to be displayed: In this case, it is 128 characters.
Logdump> RECLEN
128
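Putting section 4 together, a typical setup sequence (reusing the trail file opened earlier; your trail name will differ) might look like the following. The exact output of each command depends on your environment.

Logdump> OPEN ./dirdat/AA000001
Logdump> FILEHEADER DETAIL
Logdump> GHDR ON
Logdump> DETAIL ON
Logdump> USERTOKEN DETAIL
Logdump> RECLEN 128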
5. Viewing the records: To view particular records in a trail file, navigate through the local trail file as shown below; a combined navigation example follows at the end of this section.
a) First record in
the trail file: Here “0” is the beginning of the trail file
Logdump> POS 0
b) Move to a specific
record, at a particular RBA
The “xxxx” is the RBA number.
Logdump> POS xxxx
c) Next record in the
opened trail file
Logdump> N
Or
Logdump> NEXT
d) Moving forward or
reverse in the trail file
Logdump> POS FORWARD
or
Logdump> POS REVERSE
e) Skip a certain number of records
Here ‘x’ is the number of records you want to skip.
Logdump> SKIP x
f) Last record in the
trail file
Logdump> POS last
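As an illustration of the navigation commands above, the following sequence (the RBA 1456 is a made-up value) positions at the start of the trail, reads a record, skips ahead, jumps to a specific RBA, and then reads backwards from the end of the file:

Logdump> POS 0
Logdump> NEXT
Logdump> SKIP 5
Logdump> POS 1456
Logdump> N
Logdump> POS LAST
Logdump> POS REVERSE
Logdump> N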
6. Filter Commands:
We can use filter
commands to view the specific operations or data records, a record at a
specific RBA, the record length, record type, etc. using the commands below.
To start filtering, use the “filter” keyword, followed by include or exclude. These options allow data to be shown or suppressed based on the filter criteria. Then apply other conditions such as file name, rectype, iotype, etc. Here rectype is the record type and iotype is the input/output (operation) type.
There are a number of operation types we can filter on using Logdump. To view the list of operation types and the numbers assigned to them, run the command below. A combined filter example appears at the end of this section.
a) Show the Record
Types
Logdump> SHOW
RECTYPE
b) Enable or disable filtering:
Logdump> FILTER [
ENABLE | DISABLE ]
Filter Records by
Table Name
Logdump> FILTER
INCLUDE FILENAME CC_APP.IMAGE_DETAIL
c) Filter Records by
Operation Type: Operation types are Insert, Update, and Delete.
Logdump> FILTER
INCLUDE IOTYPE INSERT
d) Filter Records
using the operation number
You can specify the IOTYPE by using the equivalent operation number.
Logdump 374>
FILTER INCLUDE IOTYPE 160
Logdump 374> N
Sample Output:
2013/02/18 00:36:05.000.000 DDLOP Len 1169 RBA 3049
Name:
After Image: Partition 0 G s
2c43 353d 2733 3135 3435 272c 2c42 373d 2733 3135 | ,C5='31545',,B7='315
3735 272c 2c42 323d 2727 2c2c 4233 3d27 5331 272c | 75',,B2='',,B3='S1',
2c42 343d 2754 4553 545f 3132 272c 2c43 3132 3d27 | ,B4='TEST_12',,C12='
272c 2c43 3133 3d27 272c 2c42 353d 2754 4142 4c45 | ',,C13='',,B5='TABLE
272c 2c42 363d 2743 5245 4154 4527 2c2c 4238 3d27 | ',,B6='CREATE',,B8='
4747 5553 4552 2e47 4753 5f44 444c 5f48 4953 5427 | GGUSER.GGS_DDL_HIST'
2c2c 4239 3d27 5331 272c 2c43 373d 2731 312e 322e | ,,B9='S1',,C7='11.2.
Filtering suppressed 2 records
Note: Here 160 represents a DDL operation, and in the detail we can see the DDL type, which in the output above is “CREATE”. “Filtering suppressed 2 records” means two records were skipped to reach the next record matching the filter.
e) View currently
applied filters
Logdump> FILTER
SHOW
Sample output:
Data filters are ENABLED
Include Match ANY
Rectypes: DDLOP
Exclude Match ANY
f) Filter on multiple
conditions:
We can filter the data of a trail file using multiple conditions together.
For that we can
string multiple FILTER commands together, separating each one with a semicolon,
as shown in the below example:
Logdump>FILTER
INCLUDE FILENAME [SCHEMA].[TABLE]; FILTER RECTYPE 5; FILTER INCLUDE IOTYPE
INSERT
The above example will display only record type 5, insert statement records from the specified table.
Note: [SCHEMA] and [TABLE] are the names of the schema and table, and should be in upper case.
g) Clear the filter
in the session
Logdump> FILTER CLEAR
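Tying the filter commands together, here is a sketch of a session that restricts output to insert records for the CC_APP.IMAGE_DETAIL table used earlier, verifies the active filters, reads a matching record, and then clears the filters. Adjust the schema and table name for your environment.

Logdump> FILTER INCLUDE FILENAME CC_APP.IMAGE_DETAIL
Logdump> FILTER INCLUDE IOTYPE INSERT
Logdump> FILTER SHOW
Logdump> N
Logdump> FILTER CLEAR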
Save Records To New Trail File
To save the records to a new Trail file, first let's make sure we are at the initial record associated with the CSN. As shown prior, position Logdump at the beginning of the Trail file with the command pos 0 and then scan to the first record via the n command. This displays the Insert record for vishalbh.vb at RBA 73658 in the Trail. To make sure this record is saved to the new Trail, we need to position the Logdump read pointer to the record prior to this one.
Set Logdump to read the Trail in reverse, pos rev, and then scan for the header of the previous record, sfh. This displays the previous record, a DDL operation performed on the vishalbh.vb table at RBA 73495 in the Trail.
From this point in the Trail file, have Logdump read forward via the pos for command. To save the records to a new trail named zz999999, issue the command save ./dirdat/zz999999 2 records. The new Trail file will be created and will contain the header record and the two data records associated with the source database CSN.
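Summarized as Logdump commands, the sequence described above looks like the following. The trail names, table name, and RBAs are the ones from this example and will differ in your environment.

Logdump> POS 0
Logdump> N
Logdump> POS REVERSE
Logdump> SFH
Logdump> POS FORWARD
Logdump> SAVE ./dirdat/zz999999 2 RECORDS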


