A. Introduction
According to Oracle's documentation, GoldenGate is a best-of-breed, easy-to-deploy product used to replicate and integrate transactional data with subsecond speed among a variety of enterprise systems. Oracle GoldenGate provides the flexibility to move data between like-to-like and heterogeneous systems, including different versions of Oracle Database, different hardware platforms, and between Oracle and non-Oracle databases including Microsoft SQL Server, IBM DB2 for open systems and z/OS, Sybase, and more.
In this article, I would like to present a simple use of GoldenGate to replicate the TEST_GOLDENGATE table owned by the user COLLECTE in the database DB1 to the TEST_GOLDENGATE table owned by the user USER_GG in the database DB2.
The main components of the GoldenGate architecture are :
EXTRACT : Collects the change vectors from the redo log and archived log files of the source database and sends them to the Manager process (like the capture process for Streams).
It regularly checkpoints its read/write position to a file.
It can have the following statuses :
- STOPPED
- RUNNING
- STARTING
- ABENDED (abnormal end)
TRAIL FILES : GoldenGate converts the data to be replicated into a special format that is written to trail files on both the source and target systems. They are called the local trail on the source system and the remote trail on the target system.
DATA PUMP : Reading from the local trail, GoldenGate uses an additional Extract process called the data pump to send the data in large blocks across TCP/IP to the target system.
SERVER COLLECTOR : This process runs on the target system; it accepts data from the source and writes it to the remote trail (like the propagation process for Streams).
MANAGER : Runs on both the source and target systems. It controls the starting, monitoring and restarting of the other processes, manages trail file purging, and reports errors and events. It can have the following statuses :
- STOPPED
- RUNNING
REPLICAT : Reads the change vectors from the trail files and applies them to the target database (like the apply process for Streams). It regularly checkpoints its read/write position to a file.
It can have the following statuses :
- STOPPED
- RUNNING
- STARTING
- ABENDED (abnormal end)
GGSCI : GoldenGate Software Command Interface. It is a tool that provides a set of commands to create, configure and monitor GoldenGate.
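For example, a minimal GGSCI session to check the statuses of the processes described above (the same commands are used throughout this article):
[oracle@xxxx goldengate]$ ./ggsci
GGSCI (xxxx) 1> INFO ALL
GGSCI (xxxx) 2> VIEW GGSEVT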
B. Installation of GoldenGate
For my example, on both the source system S_S and the target system S_T I used Oracle 11.2.0.1 on Linux x86_64, so I downloaded fbo_ggs_Linux_x64_ora11g_64bit.zip for my development environments and installed it in /u01/Golden_Gate/goldengate.
[oracle@xxxx]$ mkdir -p /u01/Golden_Gate/goldengate
[oracle@xxxx]$ unzip fbo_ggs_Linux_x64_ora11g_64bit.zip
[oracle@xxxx]$ tar -xvof fbo_ggs_Linux_x64_ora11g_64bit.tar
C. Configuration Principle
1. Manager
For both the source system S_S and the target system S_T :
1.1 Export LD_LIBRARY_PATH=$ORACLE_HOME/lib, then launch the ggsci interface
./ggsci
GGSCI (xxxx) 1> CREATE SUBDIRS
Parameter files /u01/Golden_Gate/goldengate/dirprm: created
Report files /u01/Golden_Gate/goldengate/dirrpt: created
Checkpoint files /u01/Golden_Gate/goldengate/dirchk: created
Process status files /u01/Golden_Gate/goldengate/dirpcs: created
SQL script files /u01/Golden_Gate/goldengate/dirsql: created
Database definitions files /u01/Golden_Gate/goldengate/dirdef: created
Extract data files /u01/Golden_Gate/goldengate/dirdat: created
Temporary files /u01/Golden_Gate/goldengate/dirtmp: created
Veridata files /u01/Golden_Gate/goldengate/dirver: created
Veridata Lock files /u01/Golden_Gate/goldengate/dirver/lock: created
Veridata Out-Of-Sync files /u01/Golden_Gate/goldengate/dirver/oos: created
Veridata Out-Of-Sync XML files /u01/Golden_Gate/goldengate/dirver/oosxml: created
Veridata Parameter files /u01/Golden_Gate/goldengate/dirver/params: created
Veridata Report files /u01/Golden_Gate/goldengate/dirver/report: created
Veridata Status files /u01/Golden_Gate/goldengate/dirver/status: created
Veridata Trace files /u01/Golden_Gate/goldengate/dirver/trace: created
Stdout files /u01/Golden_Gate/goldengate/dirout: created
dirchk:
default location of extract/replicat checkpoint files that provide persistence of R/W operations.
Extension is cpe for Extract and cpr for Replicat.
dirdat:
default location for trail files and extract files created by the Extract processes. These files are processed by the Replicat process and other utilities.
dirpcs:
default location for process status files. Created while the processes are running.
Extension is pce for Extract, pcr for Replicat and pcm for Manager processes.
dirprm:
default location for parameter files created by administrators.
dirrpt:
default location for report files.
1.2 Edit the mgr parameter file
GGSCI (xxxx) > edit params mgr
port 7809
PURGEOLDEXTRACTS /ggs/dirdat/sa*, USECHECKPOINTS, MINKEEPHOURS 2
PURGEOLDEXTRACTS : do not delete the trail files until the checkpoints confirm they have been processed.
MINKEEPHOURS : minimum number of hours to keep the files on disk.
1.3 Start mgr
./mgr paramfile /u01/Golden_Gate/goldengate/dirprm/mgr.prm
***********************************************************************
Oracle GoldenGate Manager for Oracle
Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
Linux, x64, 64bit (optimized), Oracle 11g on Oct 4 2011 23:46:15
Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
Starting at 2012-02-06 10:41:20
***********************************************************************
Operating System Version:
Linux
Version #1 SMP Wed Dec 17 11:41:38 EST 2008, Release 2.6.18-128.el5
Node: SRVD-SSM
Machine: x86_64
soft limit hard limit
Address Space Size : unlimited unlimited
Heap Size : unlimited unlimited
File Size : unlimited unlimited
CPU Time : unlimited unlimited
Process id: 15767
Parameters...
port 7809
2. Configuring Change Data Capture : Extract (Source system)
2.0 At the source database DB1, create the master goldengate user : master_gg
Source Database : DB1
Source table : COLLECTE.TEST_GOLDENGATE
Create a master GoldenGate admin user:
create user master_gg identified by master_gg
default tablespace users temporary tablespace temp;
grant create session, resource, dba to master_gg;
2.1 Launch ggsci interface and connect to the source database DB1
./ggsci
GGSCI (S_S) > DBLOGIN USERID master_gg@localhost:1521/DB1 PASSWORD master_gg
Successfully logged into database.
GGSCI (S_S) 3> ADD TRANDATA COLLECTE.TEST_GOLDENGATE
Logging of supplemental redo log data enabled for table COLLECTE.TEST_GOLDENGATE.
2.2 Archivelog mode
The source Database DB1 must be in archivelog mode and support the supplemental logging.
SQL> alter database add supplemental log data;
SQL> alter system switch logfile; -- switch to a new redo log file so that supplemental logging takes effect
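If DB1 is not already in archivelog mode, you can check the current settings and enable it with the standard sequence below (this requires a short outage):
SQL> select log_mode, supplemental_log_data_min from v$database;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;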
2.3 Primary Key
Ensure that each source table to replicate has a primary key.
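A quick way to spot the source tables of a schema that have no primary key, using the standard dictionary views:
SQL> select t.table_name
     from dba_tables t
     where t.owner = 'COLLECTE'
     and not exists (select 1 from dba_constraints c
                     where c.owner = t.owner
                     and c.table_name = t.table_name
                     and c.constraint_type = 'P');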
2.4 Edit EXTRACT my_extra param file
GGSCI (S_S) > EDIT PARAM my_extra
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost localhost, mgrport 7809
EXTTRAIL ./dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;
2.5 Create the EXTRACT group : my_extra
GGSCI (S_S) > ADD EXTRACT my_extra , TRANLOG, BEGIN NOW
EXTRACT added.
TRANLOG means that the transaction (redo) log is the data source.
2.6 Add an EXTTRAIL
EXTTRAIL creates a local trail; it is recommended when, as here, the primary Extract writes to the local machine. Otherwise use RMTTRAIL to write directly to the target system.
GGSCI (S_S) > ADD EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex, EXTRACT my_extra
EXTTRAIL added.
The local trail files have the prefix ex; the default maximum size is 10 MB per file, as shown by INFO EXTRACT below.
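If you want larger trail files, the MEGABYTES option of ADD EXTTRAIL can be used when creating the trail; a sketch, assuming 50 MB files are wanted:
GGSCI (S_S) > ADD EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex, EXTRACT my_extra, MEGABYTES 50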
2.7 Start EXTRACT my_extra and more details
GGSCI (S_S) > START EXTRACT my_extra
Sending START request to MANAGER ...
EXTRACT my_extra starting
GGSCI (S_S) > STATUS EXTRACT my_extra
EXTRACT my_extra: RUNNING
GGSCI (S_S) > INFO EXTRACT my_extra
EXTRACT MY_EXTRA Last Started 2012-08-30 16:18 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:03 ago)
Log Read Checkpoint Oracle Redo Logs
2012-09-05 12:34:36 Seqno 1065, RBA 11402752
Target Extract Trails:
Remote Trail Name Seqno RBA Max MB
/u01/Golden_Gate/goldengate/dirdat/ex 1 1686 10
Extract Source Begin End
/u03/oracle/oradata/DB1:/redo/redo030.log 2012-08-30 16:15 2012-09-05 12:34
Not Available * Initialized * 2012-08-30 16:15
Current directory /u01/Golden_Gate/goldengate
Report file /u01/Golden_Gate/goldengate/dirrpt/MY_EXTRA.rpt
Parameter file /u01/Golden_Gate/goldengate/dirprm/my_extra.prm
Checkpoint file /u01/Golden_Gate/goldengate/dirchk/MY_EXTRA.cpe
Process file /u01/Golden_Gate/goldengate/dirpcs/MY_EXTRA.pce
Stdout file /u01/Golden_Gate/goldengate/dirout/MY_EXTRA.out
Error log /u01/Golden_Gate/goldengate/ggserr.log
2.8 Add a data pump process
The data pump process is an Extract that reads the local trail and routes the changes to GoldenGate on the target system.
2.8.1 Param file
GGSCI (S_S) > EDIT PARAM my_pump
EXTRACT my_pump
PASSTHRU
rmthost S_T, mgrport 7809
RMTTRAIL ./dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;
2.8.2 Add the data pump extract group :
GGSCI (S_S) > ADD EXTRACT my_pump, EXTTRAILSOURCE /u01/Golden_Gate/goldengate/dirdat/ex
2.8.3 Define the remote trail, located on the target system:
GGSCI (S_S) > ADD RMTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex, EXTRACT MY_PUMP
2.8.4 start ...
GGSCI (S_S) > START EXTRACT my_pump
GGSCI (S_S) > info all
Program Status Group Lag Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING MY_EXTRA 00:00:00 00:00:09
EXTRACT RUNNING MY_PUMP 00:00:00 00:00:02
3. Configuring Change Delivery : Replicat (Target system)
Define a Manager for the target system as was done for the source system (section C.1).
3.1 At the target database DB2, create the master goldengate user : master_gg
Target Database : DB2
Target table : USER_GG.TEST_GOLDENGATE
Create a master GoldenGate admin user:
create user master_gg identified by master_gg
default tablespace users temporary tablespace tbs_temp;
grant create session, resource, dba to master_gg;
3.2 Launch ggsci interface and define the checkpoint table
./ggsci
GGSCI (S_T) > EDIT PARAMS ./GLOBALS
CHECKPOINTTABLE master_gg.gg_checkpoint
Exit and restart ggsci so that the GLOBALS file is taken into account.
3.3 Under ggsci, connect to the target database DB2
GGSCI (S_T) > DBLOGIN USERID master_gg@localhost:1521/DB2 PASSWORD master_gg
GGSCI (S_T) > add checkpointtable
No checkpoint table specified, using GLOBALS specification (master_gg.gg_checkpoint)...
Successfully created checkpoint table MASTER_GG.GG_CHECKPOINT.
3.4 Create the REPLICAT group : MY_REPLI
GGSCI (S_T) >ADD REPLICAT my_repli, EXTTRAIL ./dirdat/ex
3.5 Edit the REPLICAT param file
GGSCI (S_T) > EDIT PARAM my_repli
REPLICAT my_repli
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/my_dest.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE;
We use the ASSUMETARGETDEFS parameter because the source and target tables have the same DDL structure.
3.6 Start REPLICAT my_repli
GGSCI (S_T) > start REPLICAT my_repli
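To check that change delivery works end to end, you can insert a test row on the source and verify it on the target; a quick sketch, assuming the TEST_GOLDENGATE column layout shown later in section F2:
-- on DB1 (source)
SQL> insert into COLLECTE.TEST_GOLDENGATE (ID1, LABELE, ID2, ID3, LABEL3) values (1, 'HELLO', 10, 20, 'WORLD');
SQL> commit;
-- on DB2 (target), after a few seconds
SQL> select count(*) from USER_GG.TEST_GOLDENGATE; -----> 1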
4. Troubleshooting
4.1 status
GGSCI (xxx) > info all
Program Status Group Lag Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING SOURCE 00:00:00 00:00:09
REPLICAT RUNNING DESTINAT 00:00:00 00:00:03
4.2 Errors
Use the VIEW GGSEVT command to view the event/error log:
GGSCI (xxx) 25> view ggsevt
D. Initial Load
We can synchronize the target database with the source database with an Initial Load.
Several methods are possible for the Initial Load :
File to Replicat : the Extract writes the data into a file; the Replicat reads the data from the file and applies it.
Direct Bulk Load : the Replicat process uses the SQL*Loader utility (a sketch is shown after this list).
Direct Load : no file is used; the Extract sends the data directly to the Replicat, which applies it.
File to Database Utility : the Extract process writes the data into a file to be used by a database bulk load utility.
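The Direct Bulk Load method is not demonstrated in this article. As a rough sketch (assuming a hypothetical COLLECTE.GG_BULK_LOAD source table and the same master_gg user), the source Extract is configured as in D2 below (SOURCEISTABLE and rmttask replicat), and only the Replicat differs, through the BULKLOAD parameter that makes it load via the SQL*Loader API:
GGSCI (S_T) > ADD REPLICAT repblkld, SPECIALRUN
GGSCI (S_T) > edit param repblkld
REPLICAT repblkld
BULKLOAD
ASSUMETARGETDEFS
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
MAP COLLECTE.GG_BULK_LOAD, TARGET USER_GG.GG_BULK_LOAD;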
D1. File to replicat
I would like to replicate all the data for the source table COLLECTE.GG_FILE_REP to the target table USER_GG.GG_FILE_REP.
Verification :
select count(*) from COLLECTE.GG_FILE_REP; -----> 3
select count(*) from USER_GG.GG_FILE_REP;---->0
Source System :
1.
GGSCI (S_S) >
ADD EXTRACT exinitld, SOURCEISTABLE
2.
GGSCI (S_S) > edit param exinitld
extract exinitld
sourceistable
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost S_T, mgrport 7809
rmtfile ./dirdat/exinitld.dat, purge
TABLE COLLECTE.GG_FILE_REP;
3. Linux Command
./extract paramfile /u01/Golden_Gate/goldengate/dirprm/exinitld.prm reportfile /u01/Golden_Gate/goldengate/dirrpt/exinitld.rpt
>>>>>>>> the file exinitld.dat is created on the target system
GGSCI (S_S) >VIEW REPORT exinitld
.....
***********************************************************************
* ** Run Time Statistics ** *
***********************************************************************
Report at 2012-09-11 13:52:53 (activity since 2012-09-11 13:52:48)
Output to ./dirdat/exinitld.dat:
From Table COLLECTE.GG_FILE_REP:
# inserts: 3
# updates: 0
# deletes: 0
# discards: 0
REDO Log Statistics
Bytes parsed 0
Bytes output 267
Target System
4. GGSCI (S_T) > ADD REPLICAT repintld, EXTFILE ./dirdat/exinitld.dat
5. GGSCI (S_T) > edit param repintld
REPLICAT repintld
SPECIALRUN
ASSUMETARGETDEFS ---------> parameter used when the two tables have the same DDL structure
HANDLECOLLISIONS -----------> resolves duplicate and missing-record errors
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
EXTFILE ./dirdat/exinitld.dat
DISCARDFILE ./dirrpt/exinitld.dsc, PURGE
MAP COLLECTE.GG_FILE_REP, TARGET USER_GG.GG_FILE_REP;
END RUNTIME
6. Linux Command
./replicat paramfile /u01/Golden_Gate/goldengate/dirprm/repintld.prm reportfile /u01/Golden_Gate/goldengate/dirrpt/repintld.rpt
Wait a few seconds...
Verification :
select count(*) from USER_GG.GG_FILE_REP;-----> 3
D2. Direct Load
I would like to replicate all the data from the source table COLLECTE.GG_DIRECT_LOAD to the target table USER_GG.GG_DIRECT_LOAD.
Verification :
select count(*) from COLLECTE.GG_DIRECT_LOAD;-----> 3
select count(*) from USER_GG.GG_DIRECT_LOAD;-----> 0
Source System :
1.
GGSCI (S_S) >
ADD EXTRACT exdirtld, SOURCEISTABLE
2.
GGSCI (S_S) > edit param exdirtld
extract exdirtld
sourceistable
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost S_T, mgrport 7809
rmttask replicat, group repdrtld
TABLE COLLECTE.GG_DIRECT_LOAD;
Target System
3. GGSCI (S_T) > ADD REPLICAT repdrtld , SPECIALRUN
4. GGSCI (S_T) > edit param repdrtld
REPLICAT repdrtld
ASSUMETARGETDEFS
HANDLECOLLISIONS
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
MAP COLLECTE.GG_DIRECT_LOAD, TARGET USER_GG.GG_DIRECT_LOAD;
Source System :
5. GGSCI (S_S) > start EXTRACT exdirtld
The output of INFO EXTRACT exdirtld shows the task progressing :
EXTRACT EXDIRTLD Initialized 2012-09-11 15:08 Status STOPPED
Checkpoint Lag Not Available
Log Read Checkpoint Not Available
First Record Record 0
Task SOURCEISTABLE
....
EXTRACT EXDIRTLD Initialized 2012-09-11 15:08 Status RUNNING
Checkpoint Lag Not Available
Log Read Checkpoint Not Available
First Record Record 3
Task SOURCEISTABLE
...
EXTRACT EXDIRTLD Initialized 2012-09-11 15:08 Status STOPPED
Checkpoint Lag Not Available
Log Read Checkpoint Not Available
First Record Record 3
Task SOURCEISTABLE
Verification :
select count(*) from USER_GG.GG_DIRECT_LOAD;-----> 3
E. GoldenGate Options
E1. BATCHSQL & SQL CACHE:
To make Replicat more performant, GoldenGate uses the BATCHSQL option to organize similar SQL statements into batches and apply them all at once.
The SQL statements are cached in the SQL cache memory.
The following parameters can be used to manage the SQL cache memory :
BATCHESPERQUEUE : maximum number of batches per queue (default 50).
BYTESPERQUEUE : maximum number of bytes per queue (default 20 MB).
OPSPERBATCH : maximum number of row operations per batch (default 200).
OPSPERQUEUE : maximum number of row operations a memory queue with multiple batches can contain (default 1200).
MAXSQLSTATEMENTS : number of statements that are cached; old ones are recycled.
Example :
REPLICAT my_repli
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/my_dest.rpt, PURGE
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
E2. DATA ENCRYPTION
One feature is message encryption, which uses the Blowfish block cipher to encrypt the individual messages sent over TCP/IP.
The Server Collector process decrypts the data on the target machine.
Steps to enable data encryption :
1. On both the source and target machines, generate a key with the GoldenGate keygen utility :
[oracle@S_T goldengate]$ ./keygen 128 1
0x4D63FF2B15612542CD2B9D4389CCAF60
2. On both source and target machine, create a file named ENCKEYS as :
MyKey1 0x4D63FF2B15612542CD2B9D4389CCAF60
3. On the source machine, start ggsci and add the ENCRYPT and KEYNAME parameters to RMTHOST in the source Extract or Data Pump parameter file :
EXTRACT my_pump
PASSTHRU
rmthost S_T, mgrport 7809, ENCRYPT BLOWFISH, KEYNAME MyKey1
RMTTRAIL /u01/Golden_Gate/goldengate/dirdat/re
TABLE COLLECTE.TEST_GOLDENGATE;
4. On the target system, obtain the port for the Server Collector as :
[oracle@S_T]$ ps -ef | grep ora | grep server
oracle 31038 30302 0 Sep10 ? 00:00:06 ./server -w 300 -p 7840 -k -l /u01/Gol den_Gate/goldengate/ggserr.log
5. On both the source and target machines, stop the Extract and Replicat processes.
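For example, with the group names used in this article :
GGSCI (S_S) > STOP EXTRACT my_extra
GGSCI (S_S) > STOP EXTRACT my_pump
GGSCI (S_T) > STOP REPLICAT my_repli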
6. On the target system, configure the Server Collector and start it manually as :
[oracle@S_T]$ ./server -p 7840 -ENCRYPT BLOWFISH -KEYNAME MyKey1 &
[1] 18250
[oracle@S_T]$
2012-09-12 11:59:50 INFO OGG-01678 Listening for requests.
7. Start extract and replicat processes
8. On the target machine, check the Server Collector as
[oracle@S_T]$ ps -ef | grep ora | grep server
oracle 18250 30317 0 11:59 pts/2 00:00:00 ./server -p 7840 -ENCRYPT BLOWFISH -KEYNAME MyKey1
E3. PASSWORD ENCRYPTION
You can prevent access to the GoldenGate parameter files by users other than the oracle user by restricting the OS permissions : chmod 600 *.prm.
You can also encrypt the database password in the parameter files, using either the default method or the named key method :
Default method
1.
GGSCI (S_S) 1> encrypt password master_gg
No key specified, using default key...
Encrypted password: AACAAAAAAAAAAAJAIGRCGIIGZAWEZEGBRCDDPELASDJETACE
2. Copy the encrypted password into each parameter file :
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD AACAAAAAAAAAAAJAIGRCGIIGZAWEZEGBRCDDPELASDJETACE
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;
Named method
Here you generate a named encryption key:
1. ./keygen 128 1
0x0ADBD071B1520F6E36118C56A6286D75
2. Add to the ENCKEYS file :
MyKey2 0x0ADBD071B1520F6E36118C56A6286D75
3. Encrypt the database password:
GGSCI (S_S) 1> encrypt password master_gg, ENCRYPTKEY MyKey2
Encrypted password: AACAAAAAAAAAAAJATHDAQFMJTADESHLBSFRJRINFQITBXAKH
4. Copy the encrypted password and the key name into each parameter file :
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD AACAAAAAAAAAAAJATHDAQFMJTADESHLBSFRJRINFQITBXAKH, ENCRYPTKEY MyKey2
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;
E4. DATA COMPRESSION:
If the compression option is configured in the Extract or Data Pump process, the Server Collector process decompresses the data before writing it to the remote trail files.
The compression statistics reported (for example via SEND EXTRACT ... GETTCPSTATS) are :
Compression CPU time : the time the process used the CPU resource (seconds).
Compress time : the overall time the compression takes, including waits on the CPU resource.
Uncompressed bytes and compressed bytes : details on the amount of compression taking place.
Example 1 :
GGSCI (S_S) 75> send extract MY_PUMP , gettcpstats
Sending GETTCPSTATS request to EXTRACT MY_PUMP ...
RMTTRAIL /u01/Golden_Gate/goldengate/dirdat/re000000, RBA 1926
OK
Session Index 0
Stats started 2012/09/10 10:43:48.470803 23:47:16.144857
Local address 172.18.205.236:28277 Remote address 172.18.200.138:40990
Inbound Msgs 188752 Bytes 1698708, 9 bytes/second
Outbound Msgs 188753 Bytes 13969552, 81 bytes/second
Recvs 377504
Sends 188753
Avg bytes per recv 4, per msg 8
Avg bytes per send 74, per msg 74
Recv Wait Time 265252973, per msg 1405, per recv 702
Send Wait Time 1683785, per msg 8, per send 8
Example 2 :
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost localhost, mgrport 7809, COMPRESS, COMPRESSTHRESHOLD 512 ---------> minimum number of bytes in a block before compression occurs
EXTTRAIL ./dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;
You can use the SHOWCH option (the "Write Checkpoint" section) to derive the block size.
GGSCI (S8t) 77> info MY_PUMP, showch
EXTRACT MY_PUMP Last Started 2012-09-10 10:43 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:08 ago)
Log Read Checkpoint File /u01/Golden_Gate/goldengate/dirdat/ex000000
2012-09-12 09:50:42.000000 RBA 1876
Current Checkpoint Detail:
Read Checkpoint #1
GGS Log Trail
Startup Checkpoint (starting position in the data source):
Sequence #: 0
RBA: 0
Timestamp: Not Available
Extract Trail: /u01/Golden_Gate/goldengate/dirdat/ex
Current Checkpoint (position of last record read in the data source):
Sequence #: 0
RBA: 1876
Timestamp: 2012-09-12 09:50:42.000000
Extract Trail: /u01/Golden_Gate/goldengate/dirdat/ex
Write Checkpoint #1
GGS Log Trail
Current Checkpoint (current write position):
Sequence #: 0
RBA: 1926
Timestamp: 2012-09-12 10:43:29.827268
Extract Trail: /u01/Golden_Gate/goldengate/dirdat/re
Header:
Version = 2
Record Source = A
Type = 1
# Input Checkpoints = 1
# Output Checkpoints = 1
File Information:
Block Size = 2048
Max Blocks = 100
Record Length = 2048
Current Offset = 0
....
E5. Triggers & Events
You can trace certain DML on the source table into the target reports using the EVENTACTIONS option; here is an example :
REPLICAT my_repli
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE;
TABLE COLLECTE.TEST_GOLDENGATE, FILTER (@GETENV ("GGHEADER", "OPTYPE") = "DELETE"), &
EVENTACTIONS (TRACE order_1.trc TRANSACTION);
So I get the following Replicat report :
...
***********************************************************************
* ** Run Time Statistics ** *
***********************************************************************
Last record for the last committed transaction is the following:
___________________________________________________________________
Report at 2012-09-12 14:53:58 (activity since 2012-09-12 14:52:58)
From Table COLLECTE.TEST_GOLDENGATE to USER_GG.TEST_GOLDENGATE:
# inserts: 1
# updates: 0
# deletes: 1
# discards: 0
....
For more information about all the options, please refer to the Oracle GoldenGate Reference.
F. Clauses & Functions
F1. DEFGEN & DEFSFILE
You can store the definition of the source table in a definitions file, using the DEFSFILE parameter. Here is an example (source system) :
GGSCI (S_S) 1> edit param my_definition
DEFSFILE /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB1, PASSWORD AACAAAAAAAAAAAJATHDAQFMJTADESHLBSFRJRINFQITBXAKH, ENCRYPTKEY MyKey2
TABLE COLLECTE.TEST_GOLDENGATE;
Now, run the defgen utility:
[oracle@S_S goldengate]$ ./defgen paramfile /u01/Golden_Gate/goldengate/dirprm/my_definition.prm
***********************************************************************
Oracle GoldenGate Table Definition Generator for Oracle
Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
Linux, x64, 64bit (optimized), Oracle 11g on Oct 5 2011 00:08:57
Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
Starting at 2012-09-18 11:39:56
***********************************************************************
Operating System Version:
Linux
Version #1 SMP Wed Dec 17 11:41:38 EST 2008, Release 2.6.18-128.el5
Node: S_S
Machine: x86_64
soft limit hard limit
Address Space Size : unlimited unlimited
Heap Size : unlimited unlimited
File Size : unlimited unlimited
CPU Time : unlimited unlimited
Process id: 31547
***********************************************************************
** Running with the following parameters **
***********************************************************************
DEFSFILE /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB1, PASSWORD *************************************************
ENCRYPTKEY MyKey2
F2. SOURCEDEFS & COLMAP
Suppose that the source and target tables do not have the same definition; you can use the COLMAP & SOURCEDEFS clauses to map the source columns to the target columns.
source table :
SQL> desc TEST_GOLDENGATE
Name NULL ? Type
----------------------------------------- -------- -------------
ID1 NOT NULL NUMBER
LABELE VARCHAR2(10)
ID2 NUMBER
ID3 NUMBER
LABEL3 VARCHAR2(10)
target table :
SQL> desc TEST_GOLDENGATE
Name NULL ? Type
----------------------------------------- -------- -------------
ID1 NOT NULL NUMBER
LABELE VARCHAR2(10)
ID2 NUMBER
ID3 NUMBER
LABEL2 VARCHAR2(10)
First, from the source system, generate the definition of the source table with DEFGEN and copy the output file (test_gg.sql) to the target system.
Second, modify (or create) the Replicat parameter file as :
REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE, &
COLMAP (usedefaults, LABEL2=LABEL3);
F3. Calling Stored procedure & Query
Suppose that you want to call a stored procedure to retrieve data from another table, or to do some complex calculation.
You can use the SQLEXEC, SPNAME, PARAMS and GETVAL clauses to call the procedure.
Example :
On the master admin (target system) :
CREATE OR REPLACE PROCEDURE "PRO_COMPUTE_ID4" (
PN_ID2 IN NUMBER,
PN_ID4 OUT NUMBER
) as
begin
SELECT ID_VALUE4
INTO PN_ID4
FROM My_Table
WHERE ID_VALUE2 = 2*PN_ID2;
end;
/
For the replicat :
REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE, &
SQLEXEC (SPNAME PRO_COMPUTE_ID4, PARAMS (PN_ID2 = ID2)), &
COLMAP (usedefaults, &
LABEL2 = LABEL3, &
MY_DATE = @DATE("YYYY-MM-DD:HH:MI:SS", "YYYYMMDDHHMISS", 20120919145320), &
ID4 = @GETVAL(PRO_COMPUTE_ID4.PN_ID4));
The same lookup can also be done with an inline query (SQLEXEC with ID and QUERY) instead of a stored procedure :
REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE, &
SQLEXEC (ID LOOKUP, &
QUERY "SELECT ID4 FROM USER_GG.REF_GOLDENGATE WHERE ID2 = :V_ID2", &
PARAMS (V_ID2 = ID2)), &
COLMAP (usedefaults, &
LABEL2 = LABEL3, &
MY_DATE = @DATE("YYYY-MM-DD:HH:MI:SS", "YYYYMMDDHHMISS", 20120919145320), &
ID4 = @GETVAL(lookup.id4));
F4. WHERE
You can filter the data to replicate using the WHERE clause.
Example :
REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE, &
COLMAP (usedefaults, LABEL2=LABEL3), &
WHERE (ID2 > 6);
F5. FILTER
You can filter the data to replicate using the FILTER clause.
Example : we filter updates to records having ID3 > 5 and inserts to records having ID2 > 15.
Deletes are not filtered, so any record can be deleted.
REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE, &
COLMAP (usedefaults, LABEL2=LABEL3), &
FILTER (ON UPDATE, ID3 > 5), &
FILTER (ON INSERT, ID2 > 15);
The filters can also be applied in the Extract process :
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD AACAAAAAAAAAAAJATHDAQFMJTADESHLBSFRJRINFQITBXAKH, ENCRYPTKEY MyKey2
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE, FILTER (ON UPDATE, ID3 > 5);
F6. STRFIND
It determines the position of a string within a string column, and returns 0 if the string does not exist.
Example :
LABEL3 = 'MY_TEST' , @strfind(label3, "TES") = 4
LABEL3 = 'MY_TEST' , @strfind(label3, "HK") = 0
LABEL3 = 'TES_MY_TEST' , @strfind(label3, "TES", 3 ) = 8 : the search starts at the third character.
Example with the extract process :
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost 172.18.200.138, mgrport 7809
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE, FILTER (@STRFIND (LABEL3, "3")>0);
F7. COMPUTE
Returns the result of an arithmetic expression.
Example with the Extract process : extract only the records having ID2 - ID3 > 10
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1,PASSWORD master_gg
rmthost 172.18.200.138, mgrport 7809
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE, FILTER (@STRFIND (LABEL3, "3")>0), &
FILTER (@COMPUTE(ID2 - ID3)>10);
F8. DATE
Returns a date and time in the specified format.
Example:
REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE, &
COLMAP (usedefaults, LABEL2=LABEL3, MY_DATE=@DATE("YYYY-MM-DD:HH:MI:SS", "YYYYMMDDHHMISS", 20120919145320));
F9. MACROS
coming ...