Tuesday, September 25, 2012

GoldenGate Configuration Example : From Installation To Troubleshooting



A. Introduction
According to the Oracle documentation, GoldenGate is a best-of-breed, easy-to-deploy product used to replicate and integrate transactional data with subsecond speed among a variety of enterprise systems. Oracle GoldenGate provides the flexibility to move data between like-to-like and heterogeneous systems, including different versions of Oracle Database, different hardware platforms, and between Oracle and non-Oracle databases including Microsoft SQL Server, IBM DB2 for open systems and z/OS, Sybase, and more.
In this article, I would like to present a simple use of GoldenGate: replicating the TEST_GOLDENGATE table owned by the user COLLECTE in the database DB1 to the TEST_GOLDENGATE table owned by the user USER_GG in the database DB2.

The main components of the GoldenGate architecture are :
EXTRACT : Collects the change vectors from the redo logs and archived log files of the source database and writes them to trail files (like the capture process for Streams).
        It regularly checkpoints its R/W position to a file.
        Can have the following statuses :
        - STOPPED
        - RUNNING
        - STARTING
        - ABENDED (abnormal end)
TRAIL FILES : GoldenGate converts the data to be replicated into a special format which is written to trail files on both the source and target systems. They are called the local trail on the source system and the remote trail on the target system.
DATA PUMP : On the source system, GoldenGate uses an additional Extract process called the data pump, which reads the local trail and sends the data in large blocks across TCP/IP.
SERVER COLLECTOR : This process runs on the target system; it accepts data from the source and writes it to the remote trail (like the propagation process for Streams).
MANAGER : Runs on both source and target systems. It controls the starting, monitoring and restarting of the other processes, trail file maintenance, and the reporting of errors and events. Can have the following statuses :
                    - STOPPED
                    - RUNNING
REPLICAT :  Reads the change vectors from the trail files and applies them to the target database (like the apply process for Streams). It regularly checkpoints its R/W position to a file.
        Can have the following statuses :
        - STOPPED
        - RUNNING
        - STARTING
        - ABENDED (abnormal end)
GGSCI : GoldenGate Software Command Interface, a tool that provides a set of commands to create, configure and monitor GoldenGate.
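Putting these components together, the path of a change record can be sketched as follows (a simplified view; the exact topology varies with the configuration):

```
source DB redo/archive logs
        |
     EXTRACT  --writes-->  local trail (dirdat/ex)
        |
    DATA PUMP  --TCP/IP (Manager port)-->  SERVER COLLECTOR
                                                 |
                                      remote trail (dirdat/ex)
                                                 |
                                             REPLICAT  --applies-->  target DB
```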






















B. Installing GoldenGate
For my example, on both the source system S_S and the target system T_S I used Oracle 11.2.0.1 on Linux x86_64, so for my development environments I downloaded fbo_ggs_Linux_x64_ora11g_64bit.zip and installed it in /u01/Golden_Gate/goldengate.

[oracle@xxxx]$ mkdir -p /u01/Golden_Gate/goldengate
[oracle@xxxx]$ unzip fbo_ggs_Linux_x64_ora11g_64bit.zip
[oracle@xxxx]$ tar -xvof fbo_ggs_Linux_x64_ora11g_64bit.tar


C. Configuration Principle
 1. Manager
Both for source system S_S and target system T_S :

1.1 Export LD_LIBRARY_PATH and launch the ggsci interface
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
./ggsci
 GGSCI (xxxx) 1> CREATE SUBDIRS
 Parameter files                                       /u01/Golden_Gate/goldengate/dirprm: created
 Report files                                             /u01/Golden_Gate/goldengate/dirrpt: created
 Checkpoint files                                      /u01/Golden_Gate/goldengate/dirchk: created
 Process status files                                 /u01/Golden_Gate/goldengate/dirpcs: created
 SQL script files                                       /u01/Golden_Gate/goldengate/dirsql: created
 Database definitions files                      /u01/Golden_Gate/goldengate/dirdef: created
 Extract data files                                   /u01/Golden_Gate/goldengate/dirdat: created
 Temporary files                                     /u01/Golden_Gate/goldengate/dirtmp: created
 Veridata files                                        /u01/Golden_Gate/goldengate/dirver: created
 Veridata Lock files                              /u01/Golden_Gate/goldengate/dirver/lock: created
 Veridata Out-Of-Sync files                 /u01/Golden_Gate/goldengate/dirver/oos: created
 Veridata Out-Of-Sync XML files      /u01/Golden_Gate/goldengate/dirver/oosxml: created
 Veridata Parameter files                   /u01/Golden_Gate/goldengate/dirver/params: created
 Veridata Report files                        /u01/Golden_Gate/goldengate/dirver/report: created
 Veridata Status files                         /u01/Golden_Gate/goldengate/dirver/status: created
 Veridata Trace files                         /u01/Golden_Gate/goldengate/dirver/trace: created
 Stdout files                                       /u01/Golden_Gate/goldengate/dirout: created

dirchk:
default location of extract/replicat checkpoint files that provide persistence of R/W operations.
Extension is cpe for Extract and cpr for Replicat.

dirdat:
default location for trail files and extract files created by the Extract processes. These files are processed by the Replicat process and other utilities.

dirpcs:
default location for process status files. Created while the processes are running.
Extension is pce for Extract, pcr for Replicat and pcm for Manager processes.

dirprm:
default location for parameter files created by administrators.

dirrpt:
default location for report files.

 1.2 Edit the mgr parameter file
GGSCI (xxxx) > edit params mgr
 port 7809
PURGEOLDEXTRACTS /u01/Golden_Gate/goldengate/dirdat/ex*, USECHECKPOINTS, MINKEEPHOURS 2

PURGEOLDEXTRACTS : purges old trail files; with USECHECKPOINTS, they are not deleted until the checkpoints confirm they have been processed.
MINKEEPHOURS : minimum number of hours to keep a trail file on disk.
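A fuller mgr.prm sketch, assuming you also want the Manager to start and restart the other processes automatically (AUTOSTART and AUTORESTART are standard Manager parameters; the retry values below are only illustrative):

```
port 7809
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5
PURGEOLDEXTRACTS /u01/Golden_Gate/goldengate/dirdat/ex*, USECHECKPOINTS, MINKEEPHOURS 2
```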


 1.3 Start mgr
./mgr paramfile /u01/Golden_Gate/goldengate/dirprm/mgr.prm
***********************************************************************
                 Oracle GoldenGate Manager for Oracle
     Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
   Linux, x64, 64bit (optimized), Oracle 11g on Oct  4 2011 23:46:15
Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.

                    Starting at 2012-02-06 10:41:20
***********************************************************************
Operating System Version:
Linux
Version #1 SMP Wed Dec 17 11:41:38 EST 2008, Release 2.6.18-128.el5
Node: SRVD-SSM
Machine: x86_64
soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited
Process id: 15767
Parameters...
port 7809


2. Configuring Change Data Capture : Extract (Source system)

2.0 At the source database DB1, create the master GoldenGate user : master_gg
Source Database  :  DB1
Source table     :  COLLECTE.TEST_GOLDENGATE
Create a master GoldenGate admin:

 create user master_gg  identified by master_gg
 default tablespace users  temporary tablespace temp;
 grant create session, resource, dba to master_gg;

2.1 Launch ggsci interface and connect to the source database DB1
./ggsci
GGSCI (S_S) > DBLOGIN USERID master_gg@localhost:1521/DB1 PASSWORD master_gg

Successfully logged into database.
GGSCI (S_S) 3> ADD TRANDATA COLLECTE.TEST_GOLDENGATE
Logging of supplemental redo log data enabled for table COLLECTE.TEST_GOLDENGATE.


2.2 Archivelog mode
The source database DB1 must be in archivelog mode and have supplemental logging enabled.
SQL> alter database add supplemental log data;
SQL> alter system switch logfile; -- switch logfile so that supplemental logging takes effect
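You can verify both settings from SQL*Plus (these are standard V$DATABASE columns):

```
SQL> select log_mode, supplemental_log_data_min from v$database;
-- LOG_MODE should be ARCHIVELOG
-- SUPPLEMENTAL_LOG_DATA_MIN should be YES
```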

2.3 Primary Key
Ensure that each source table to replicate has a primary key.
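A quick way to spot source tables without a primary key is a query like the following sketch, run as a DBA user (for a table that really has no usable key, the KEYCOLS option of the TABLE/MAP clauses can designate substitute key columns):

```
SQL> select t.table_name
     from   dba_tables t
     where  t.owner = 'COLLECTE'
     and    not exists (select 1
                        from   dba_constraints c
                        where  c.owner = t.owner
                        and    c.table_name = t.table_name
                        and    c.constraint_type = 'P');
```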


2.4 Edit EXTRACT my_extra param file

GGSCI (S_S) > EDIT PARAM my_extra
EXTRACT  my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost localhost, mgrport 7809
EXTTRAIL ./dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;



2.5 Create the EXTRACT group : my_extra
GGSCI (S_S) >  ADD EXTRACT my_extra , TRANLOG, BEGIN NOW
EXTRACT added.
TRANLOG means that the transaction (redo) log is the data source.


2.6 Add an EXTTRAIL
EXTTRAIL creates a local trail, which is appropriate when, as here, the trail is written on the local machine; use RMTTRAIL when the trail is written directly to the target system.

GGSCI (S_S) >  ADD EXTTRAIL  /u01/Golden_Gate/goldengate/dirdat/ex, EXTRACT  my_extra
EXTTRAIL added.

The local trail files use the prefix ex; the default maximum file size is 10 MB (see the Max MB column in the INFO output below).
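If the default trail file size is too small, the MEGABYTES option of ADD EXTTRAIL can raise it, for example to 50 MB:

```
GGSCI (S_S) > ADD EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex, EXTRACT my_extra, MEGABYTES 50
```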


2.7 Start EXTRACT my_extra and more details
GGSCI (S_S) >  START EXTRACT  my_extra
Sending START request to MANAGER ...
EXTRACT my_extra starting

GGSCI (S_S) > STATUS EXTRACT my_extra
EXTRACT my_extra: RUNNING

GGSCI (S_S) > INFO EXTRACT  my_extra

EXTRACT    MY_EXTRA  Last Started 2012-08-30 16:18   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2012-09-05 12:34:36  Seqno 1065, RBA 11402752
  Target Extract Trails:
  Remote Trail Name                                                     Seqno        RBA        Max MB
  /u01/Golden_Gate/goldengate/dirdat/ex                1                1686         10
  Extract Source                          Begin             End
  /u03/oracle/oradata/DB1:/redo/redo030.log  2012-08-30 16:15  2012-09-05 12:34
  Not Available                           * Initialized *   2012-08-30 16:15
Current directory      /u01/Golden_Gate/goldengate
Report file                  /u01/Golden_Gate/goldengate/dirrpt/MY_EXTRA.rpt
Parameter file           /u01/Golden_Gate/goldengate/dirprm/my_extra.prm
Checkpoint file         /u01/Golden_Gate/goldengate/dirchk/MY_EXTRA.cpe
Process file                /u01/Golden_Gate/goldengate/dirpcs/MY_EXTRA.pce
Stdout file                  /u01/Golden_Gate/goldengate/dirout/MY_EXTRA.out
Error log                    /u01/Golden_Gate/goldengate/ggserr.log


2.8 Add a data pump process
The data pump process is an Extract that reads data from the local trail and routes the changes to the GoldenGate installation on the target system.

2.8.1 Param file
GGSCI (S_S) > EDIT PARAM my_pump
EXTRACT  my_pump
PASSTHRU
rmthost T_S, mgrport 7809
RMTTRAIL ./dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;

2.8.2 Add the data pump extract group :
GGSCI (S_S) >  ADD EXTRACT my_pump, EXTTRAILSOURCE  /u01/Golden_Gate/goldengate/dirdat/ex

2.8.3 Define the remote trail, located on the target system:
GGSCI (S_S) > ADD RMTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex, EXTRACT MY_PUMP


2.8.4 start ...
GGSCI (S_S) >   START EXTRACT  my_pump


GGSCI (S_S) > info all

Program     Status      Group       Lag           Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     MY_EXTRA    00:00:00      00:00:09
EXTRACT     RUNNING     MY_PUMP     00:00:00      00:00:02


3.  Configuring Change Delivery : Replicat (Target system)

Define a Manager process on the target system, just as was done for the source system.

3.1 At the target database DB2, create the master GoldenGate user : master_gg
Target Database  :  DB2
Target table     :  USER_GG.TEST_GOLDENGATE
Create a master GoldenGate admin:

 create user master_gg  identified by master_gg
 default tablespace users  temporary tablespace tbs_temp;
 grant create session, resource, dba to master_gg;

3.2 Launch ggsci interface and define the checkpoint table
./ggsci
GGSCI (S_T) > EDIT PARAMS ./GLOBALS
CHECKPOINTTABLE master_gg.gg_checkpoint

Exit and restart ggsci so that the GLOBALS file is taken into account.

3.3 Under ggsci, connect to the target database DB2
GGSCI (S_T) > DBLOGIN USERID master_gg@localhost:1521/DB2 PASSWORD master_gg
GGSCI (S_T) > add checkpointtable
No checkpoint table specified, using GLOBALS specification (master_gg.gg_checkpoint)...
Successfully created checkpoint table MASTER_GG.GG_CHECKPOINT.


3.4 Create the REPLICAT group : MY_REPLI
GGSCI (S_T) >ADD REPLICAT my_repli, EXTTRAIL ./dirdat/ex

3.5 Edit the REPLICAT param file
GGSCI (S_T) > EDIT PARAM my_repli
REPLICAT my_repli
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/my_dest.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE;

We use the ASSUMETARGETDEFS parameter because the two tables have the same DDL structure.

 3.6 Start REPLICAT my_repli
GGSCI (S_T) > start REPLICAT my_repli

4. Troubleshooting

 4.1 status
GGSCI (xxx) > info all
Program          Status             Group            Lag              Time Since Chkpt
MANAGER      RUNNING
EXTRACT       RUNNING      SOURCE       00:00:00      00:00:09
REPLICAT      RUNNING      DESTINAT    00:00:00      00:00:03

4.2 Errors
Use VIEW GGSEVT to view the event and error log:
GGSCI (xxx) 25> view ggsevt
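Other standard GGSCI commands that are useful when a process abends (group names here are from this example):

```
GGSCI (xxx) > VIEW REPORT my_extra          -- last report file of the group
GGSCI (xxx) > STATS EXTRACT my_extra        -- per-table operation counts
GGSCI (xxx) > SEND EXTRACT my_extra, STATUS -- current processing position
GGSCI (xxx) > INFO EXTRACT my_extra, DETAIL -- files and checkpoints in use
```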


D. Initial Load

We can synchronize the target database and the source database with an Initial Load.
Several methods are possible for the Initial Load :
File to Replicat : Extract writes the data into a file; Replicat reads the data from the file and applies it.
Direct Bulk Load : The Replicat process uses the SQL*Loader utility.
Direct Load : No file is used; Extract sends the data directly to Replicat, which applies it.
File to Database Utility : The Extract process writes the data into a file to be used by a database bulk-load utility.
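For comparison, here is a minimal sketch of the Direct Bulk Load variant, which is not demonstrated below (the group names exbulk/repbulk are hypothetical; BULKLOAD makes Replicat load the data through the SQL*Loader API):

```
-- Source (extract parameter file):
EXTRACT exbulk
SOURCEISTABLE
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost T_S, mgrport 7809
rmttask replicat, group repbulk
TABLE COLLECTE.TEST_GOLDENGATE;

-- Target (replicat parameter file):
REPLICAT repbulk
SPECIALRUN
BULKLOAD
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
ASSUMETARGETDEFS
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE;
```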

 D1. File to replicat
I would like to replicate all the data for the source table COLLECTE.GG_FILE_REP to the target table USER_GG.GG_FILE_REP.

Verification :
 select count(*) from COLLECTE.GG_FILE_REP; -----> 3
 select count(*) from USER_GG.GG_FILE_REP;---->0

Source System  :
1.
GGSCI (S_S) >
ADD EXTRACT exinitld, SOURCEISTABLE

2.
GGSCI (S_S) > edit param exinitld
extract exinitld
sourceistable
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost S_T, mgrport 7809
rmtfile ./dirdat/exinitld.dat, purge
TABLE COLLECTE.GG_FILE_REP;


3. Linux Command
./extract paramfile /u01/Golden_Gate/goldengate/dirprm/exinitld.prm reportfile /u01/Golden_Gate/goldengate/dirrpt/exinitld.rpt
>>>>>>>> the file  exinitld.dat  is created on the target system

GGSCI (S_S) >VIEW REPORT exinitld
.....
***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************
Report at 2012-09-11 13:52:53 (activity since 2012-09-11 13:52:48)
Output to ./dirdat/exinitld.dat:
From Table COLLECTE.GG_FILE_REP:
       #                   inserts:         3
       #                   updates:         0
       #                   deletes:         0
       #                  discards:         0
 REDO Log Statistics
  Bytes parsed                    0
  Bytes output                  267


Target System
4.  GGSCI (S_T) > ADD REPLICAT repintld, EXTFILE ./dirdat/exinitld.dat

5. GGSCI (S_T) > edit param repintld
REPLICAT repintld
SPECIALRUN
ASSUMETARGETDEFS ---------> parameter used when the two tables have the same DDL structure
HANDLECOLLISIONS -----------> resolves duplicate and missing records
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
EXTFILE ./dirdat/exinitld.dat
DISCARDFILE ./dirrpt/exinitld.dsc, PURGE
MAP COLLECTE.GG_FILE_REP, TARGET
USER_GG.GG_FILE_REP;
END RUNTIME

6. Linux Command
./replicat paramfile /u01/Golden_Gate/goldengate/dirprm/repintld.prm reportfile /u01/Golden_Gate/goldengate/dirrpt/repintld.rpt
wait a few seconds ....

Verification :
 select count(*) from USER_GG.GG_FILE_REP;-----> 3


D2. Direct Load
I would like to replicate all the data from the source table COLLECTE.GG_DIRECT_LOAD to the target table USER_GG.GG_DIRECT_LOAD.

Verification :
 select count(*) from COLLECTE.GG_DIRECT_LOAD;-----> 3
 select count(*) from USER_GG.GG_DIRECT_LOAD;-----> 0

Source System  :
1.
GGSCI (S_S) >
ADD EXTRACT exdirtld, SOURCEISTABLE

2.
GGSCI (S_S) >  edit param exdirtld
extract exdirtld
sourceistable
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost S_T, mgrport 7809
rmttask replicat, group  repdrtld
TABLE COLLECTE.GG_DIRECT_LOAD;

Target System
3.  GGSCI (S_T) > ADD REPLICAT  repdrtld , SPECIALRUN


4. GGSCI (S_T) > edit param repdrtld
REPLICAT repdrtld
ASSUMETARGETDEFS
HANDLECOLLISIONS
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
MAP COLLECTE.GG_DIRECT_LOAD, TARGET USER_GG.GG_DIRECT_LOAD;

Source System  :
5. GGSCI (S_S) >  start  EXTRACT exdirtld
EXTRACT    EXDIRTLD  Initialized   2012-09-11 15:08   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 0
Task                 SOURCEISTABLE
....
EXTRACT    EXDIRTLD  Initialized   2012-09-11 15:08   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 3
Task                 SOURCEISTABLE
...
EXTRACT    EXDIRTLD  Initialized   2012-09-11 15:08   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 3
Task                 SOURCEISTABLE


Verification :
 select count(*) from USER_GG.GG_DIRECT_LOAD;-----> 3


E. GoldenGate Options

E1. BATCHSQL & SQL CACHE:
To make Replicat more performant, GoldenGate uses the BATCHSQL option to organize similar SQL statements into batches and apply them all at once.
The SQL statements are cached in the SQL cache memory.
Several parameters can be used to manage the SQL cache memory :
BATCHESPERQUEUE : maximum number of batches per queue (default 50).
BYTESPERQUEUE : maximum number of bytes per queue (default 20 MB).
OPSPERBATCH : maximum number of row operations per batch (default 200).
OPSPERQUEUE : maximum number of row operations a memory queue with multiple batches can contain (default 1200).
MAXSQLSTATEMENTS : number of statements that are cached; old ones are recycled.
Example :
REPLICAT my_repli
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/my_dest.rpt, PURGE
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000


E2. DATA ENCRYPTION
One feature is message encryption, which uses the Blowfish block cipher to encrypt the individual messages sent over TCP/IP.
The Server Collector process decrypts the data on the target machine.

Steps to enable data encryption :


1. On both the source and target machines, generate a key with the GoldenGate keygen utility :
    [oracle@S_T goldengate]$ ./keygen 128 1
    0x4D63FF2B15612542CD2B9D4389CCAF60

2. On both source and target machine, create a file named ENCKEYS as :
    MyKey1  0x4D63FF2B15612542CD2B9D4389CCAF60

3. On source machine, start ggsci and add the ENCRYPT and KEYNAME parameters to RMTHOST in the source Extract or Data Pump parameter file :
EXTRACT  my_pump
PASSTHRU
rmthost T_S, mgrport 7809, ENCRYPT BLOWFISH, KEYNAME MyKey1
RMTTRAIL /u01/Golden_Gate/goldengate/dirdat/re
TABLE COLLECTE.TEST_GOLDENGATE;

4. On the target system, obtain the port for the Server Collector as :
[oracle@S_T]$ ps -ef | grep ora | grep server
oracle   31038 30302  0 Sep10 ?        00:00:06 ./server -w 300 -p 7840 -k -l /u01/Golden_Gate/goldengate/ggserr.log

5. On both source and target machine, stop extracts and replicat processes.

6. On the target system, configure the Server Collector and start it manually as :

[oracle@S_T]$  ./server -p 7840 -ENCRYPT BLOWFISH -KEYNAME MyKey1 &
[1] 18250
[oracle@S_T]$
2012-09-12 11:59:50  INFO    OGG-01678  Listening for requests.

7. Start extract and replicat processes

 8. On the target machine, check the Server Collector as
[oracle@S_T]$ ps -ef | grep ora | grep server
oracle   18250 30317  0 11:59 pts/2    00:00:00 ./server -p 7840 -ENCRYPT BLOWFISH -KEYNAME MyKey1


E3. PASSWORD ENCRYPTION
You can prevent access to GoldenGate files by restricting the OS rights so that users other than the oracle user cannot read them : chmod 600 *.prm.
You can also encrypt the database password in the different parameter files, using the default method or the named method:

Default method
1.
GGSCI (S_S) 1> encrypt password master_gg
No key specified, using default key...
Encrypted password:  AACAAAAAAAAAAAJAIGRCGIIGZAWEZEGBRCDDPELASDJETACE

2. Copy the encrypted password into each parameter file
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD AACAAAAAAAAAAAJAIGRCGIIGZAWEZEGBRCDDPELASDJETACE
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;

Named method
Here you generate a named encryption key:

1. ./keygen 128 1
0x0ADBD071B1520F6E36118C56A6286D75



2. Add to the ENCKEYS file :
MyKey2  0x0ADBD071B1520F6E36118C56A6286D75


3. Encrypt the database pwd:
GGSCI (S_S) 1> encrypt password master_gg,  ENCRYPTKEY MyKey2
Encrypted password:  AACAAAAAAAAAAAJATHDAQFMJTADESHLBSFRJRINFQITBXAKH


4. Copy the encrypted password into each parameter file :
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD AACAAAAAAAAAAAJATHDAQFMJTADESHLBSFRJRINFQITBXAKH, ENCRYPTKEY MyKey2
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE;


E4. DATA COMPRESSION:
If the compression option is configured in the Extract or Data Pump process, the Server Collector process decompresses the data before writing it to the remote trail files.
The different statistics reported for compression are :
Compression CPU time : the CPU time used by the compression (seconds).
Compress time : the overall time the compression takes, including waits on the CPU.
Uncompressed bytes and compressed bytes : detail on the amount of compression taking place.
Example 1 :

GGSCI (S_S) 75>  send extract MY_PUMP , gettcpstats

Sending GETTCPSTATS request to EXTRACT MY_PUMP ...
RMTTRAIL /u01/Golden_Gate/goldengate/dirdat/re000000, RBA      1926
OK

Session Index   0
Stats started 2012/09/10 10:43:48.470803         23:47:16.144857
Local address 172.18.205.236:28277   Remote address 172.18.200.138:40990
Inbound Msgs       188752    Bytes      1698708,        9 bytes/second
Outbound Msgs      188753    Bytes     13969552,       81 bytes/second
Recvs              377504
Sends              188753
Avg bytes per recv          4, per msg      8
Avg bytes per send         74, per msg     74
Recv Wait Time      265252973, per msg         1405, per recv      702
Send Wait Time        1683785, per msg            8, per send        8

Example 2 :

EXTRACT  my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost localhost, mgrport 7809
EXTTRAIL ./dirdat/ex
COMPRESS, COMPRESSTHRESHOLD 512 ---------> minimum block size (in bytes) for compression to occur
TABLE COLLECTE.TEST_GOLDENGATE;


You can use the SHOWCH option (the "Write Checkpoint" section) to derive the block size.
GGSCI (S8t) 77> info MY_PUMP, showch

EXTRACT    MY_PUMP   Last Started 2012-09-10 10:43   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:08 ago)
Log Read Checkpoint  File /u01/Golden_Gate/goldengate/dirdat/ex000000
                     2012-09-12 09:50:42.000000  RBA 1876


Current Checkpoint Detail:

Read Checkpoint #1

  GGS Log Trail

  Startup Checkpoint (starting position in the data source):
    Sequence #: 0
    RBA: 0
    Timestamp: Not Available
    Extract Trail: /u01/Golden_Gate/goldengate/dirdat/ex

  Current Checkpoint (position of last record read in the data source):
    Sequence #: 0
    RBA: 1876
    Timestamp: 2012-09-12 09:50:42.000000
    Extract Trail: /u01/Golden_Gate/goldengate/dirdat/ex

Write Checkpoint #1

  GGS Log Trail

  Current Checkpoint (current write position):
    Sequence #: 0
    RBA: 1926
    Timestamp: 2012-09-12 10:43:29.827268
    Extract Trail: /u01/Golden_Gate/goldengate/dirdat/re

Header:
  Version = 2
  Record Source = A
  Type = 1
  # Input Checkpoints = 1
  # Output Checkpoints = 1

File Information:
  Block Size = 2048
  Max Blocks = 100
  Record Length = 2048
  Current Offset = 0
....

E5. Triggers & Events
You can trace certain DML operations on the source table into trace files on the target using the EVENTACTIONS option; here is an example that traces delete operations :

REPLICAT my_repli
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE;
TABLE COLLECTE.TEST_GOLDENGATE,  FILTER (@GETENV ("GGHEADER", "OPTYPE")= "DELETE"),
EVENTACTIONS (TRACE order_1.trc TRANSACTION);

The replicat report then shows :
...
***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************
Last record for the last committed transaction is the following:
___________________________________________________________________

Report at 2012-09-12 14:53:58 (activity since 2012-09-12 14:52:58)

From Table COLLECTE.TEST_GOLDENGATE to USER_GG.TEST_GOLDENGATE:
       #                   inserts:         1
       #                   updates:         0
       #                   deletes:         1
       #                  discards:         0
....
For more information about all the options, please refer to the Oracle GoldenGate Reference.


F.  Clauses & Functions

F1. DEFGEN & DEFSFILE
You can store the definition of the source table in a definitions file, using the DEFSFILE parameter. Here is an example (source system) :

GGSCI (S_S) 1> edit param my_definition
DEFSFILE /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/STATMAC, PASSWORD AACAAAAAAAAAAAJATHDAQFMJTADESHLBSFRJRINFQITBXAKH, ENCRYPTKEY MyKey2
TABLE COLLECTE.TEST_GOLDENGATE;

now, run the defgen utility:
[oracle@S_S goldengate]$ ./defgen paramfile /u01/Golden_Gate/goldengate/dirprm/my_definition.prm

***********************************************************************
        Oracle GoldenGate Table Definition Generator for Oracle
     Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
   Linux, x64, 64bit (optimized), Oracle 11g on Oct  5 2011 00:08:57

Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2012-09-18 11:39:56
***********************************************************************
Operating System Version:
Linux
Version #1 SMP Wed Dec 17 11:41:38 EST 2008, Release 2.6.18-128.el5
Node: S_S
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited
Process id: 31547
***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
DEFSFILE /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/STATMAC, PASSWORD ************************************************* 
ENCRYPTKEY MyKey2

F2. SOURCEDEFS & COLMAP
If the source and the target tables do not have the same definition, you can use the COLMAP and SOURCEDEFS clauses to map the source columns to the target columns.

source table :
SQL> desc TEST_GOLDENGATE
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 ID1                                       NOT NULL NUMBER
 LABELE                                             VARCHAR2(10)
 ID2                                                NUMBER
 ID3                                                NUMBER
 LABEL3                                             VARCHAR2(10)


target table :
SQL> desc TEST_GOLDENGATE
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 ID1                                       NOT NULL NUMBER
 LABELE                                             VARCHAR2(10)
 ID2                                                NUMBER
 ID3                                                NUMBER
 LABEL2                                             VARCHAR2(10)

First, on the source system, generate the definition of the source table with DEFGEN and copy the output file (test_gg.sql) to the target system.
Second, modify (or create) the replicat process as :
REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE,
COLMAP (usedefaults, LABEL2=LABEL3);



F3. Calling Stored procedure & Query
Suppose that you want to call a stored procedure to retrieve a data from any tables, or doing some complex calculation...
You can use the SQLEXEC, SPNAME, PARAMS and GETVAL clauses to call the procedure.

Example :
On the master admin (target system) :
CREATE OR REPLACE PROCEDURE PRO_COMPUTE_ID4 (
  PN_ID2 IN  NUMBER,
  PN_ID4 OUT NUMBER
) as
begin
  SELECT ID_VALUE4
  INTO   PN_ID4
  FROM   My_Table
  WHERE  ID_VALUE2 = 2*PN_ID2;
end;
/

For the replicat :

REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE, &
SQLEXEC (
         SPNAME PRO_COMPUTE_ID4, &
         PARAMS (PN_ID2=ID2)
        ), &
COLMAP (usedefaults, 
        LABEL2=LABEL3, 
        MY_DATE=@DATE("YYYY-MM-DD:HH:MI:SS", "YYYYMMDDHHMISS", 20120919145320),
        ID4=@GETVAL(PRO_COMPUTE_ID4.PN_ID4)
       );

You can also execute a query :

REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE, &
SQLEXEC (ID LOOKUP, &
         QUERY "SELECT ID4 from 
         USER_GG.REF_GOLDENGATE where ID2=:V_ID2", &
         PARAMS (V_ID2=ID2)
        ),& 
COLMAP (usedefaults, 
        LABEL2=LABEL3, 
        MY_DATE=@DATE("YYYY-MM-DD:HH:MI:SS", "YYYYMMDDHHMISS", 20120919145320),
        ID4=@GETVAL(lookup.id4)
       );


F4. WHERE
You can filter the data to replicate using the clause WHERE.
Example :

REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE,
COLMAP (usedefaults, LABEL2=LABEL3),
WHERE (ID2 > 6);


F5. FILTER
You can filter the data to replicate using the clause FILTER.
Example : we filter so that updates are replicated only for records having ID3 > 5 and inserts only for records having ID2 > 15.
Deletes are not filtered, so any record can be deleted.

REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/DB2, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE,
COLMAP (usedefaults, LABEL2=LABEL3),
FILTER (ON UPDATE, ID3 > 5),
FILTER (ON INSERT, ID2 > 15);

The filters can also be applied in the extract process :
EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD AACAAAAAAAAAAAJATHDAQFMJTADESHLBSFRJRINFQITBXAKH, ENCRYPTKEY MyKey2
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE, FILTER (ON UPDATE, ID3 > 5);


F6. STRFIND
It determines the position of a string within a string column, and returns 0 if the string does not exist.

Example :
LABEL3    = 'MY_TEST'  , @strfind(label3, "TES") = 4
LABEL3    = 'MY_TEST'  , @strfind(label3, "HK")  = 0
LABEL3    = 'TES_MY_TEST'  , @strfind(label3, "TES", 3 ) = 8  : the search starts at the third character.

Example with the extract process :

EXTRACT my_extra
USERID master_gg@localhost:1521/DB1, PASSWORD master_gg
rmthost 172.18.200.138, mgrport 7809
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE, FILTER (@STRFIND (LABEL3, "3")>0);

F7. COMPUTE
Returns the result of an arithmetic expression.
Example with the extract process : extract only the records having ID2 - ID3 > 10

EXTRACT my_extra
USERID master_gg@localhost:1521/DB1,PASSWORD master_gg
rmthost 172.18.200.138, mgrport 7809
EXTTRAIL /u01/Golden_Gate/goldengate/dirdat/ex
TABLE COLLECTE.TEST_GOLDENGATE, FILTER (@STRFIND (LABEL3, "3")>0),
FILTER (@COMPUTE(ID2 - ID3)>10);
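With two FILTER clauses on the same TABLE entry, a record is extracted only if both conditions hold. A hedged Python sketch of the combined condition (field names from the example):

```python
def strfind(s, sub):
    """1-based @STRFIND, 0 if absent."""
    pos = s.find(sub)
    return pos + 1 if pos >= 0 else 0

def pass_filters(row):
    """Both clauses must hold: @STRFIND(LABEL3,"3")>0 and @COMPUTE(ID2-ID3)>10."""
    return strfind(row["LABEL3"], "3") > 0 and (row["ID2"] - row["ID3"]) > 10

print(pass_filters({"LABEL3": "v3", "ID2": 20, "ID3": 5}))  # True
print(pass_filters({"LABEL3": "v3", "ID2": 12, "ID3": 5}))  # False (12 - 5 = 7)
```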

F8. DATE
Converts dates and times from one format to another.
Example:

REPLICAT my_repli
SOURCEDEFS /u01/Golden_Gate/goldengate/dirprm/test_gg.sql
USERID master_gg@localhost:1521/MISPROD, PASSWORD master_gg
DISCARDFILE ./dirrpt/my_repli.rpt, PURGE
MAP COLLECTE.TEST_GOLDENGATE, TARGET USER_GG.TEST_GOLDENGATE,
COLMAP (usedefaults, LABEL2=LABEL3, MY_DATE=@DATE("YYYY-MM-DD:HH:MI:SS", "YYYYMMDDHHMISS", 20120919145320));
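The @DATE call above takes the target format first, then the source format, then the value: it parses 20120919145320 as YYYYMMDDHHMISS and renders it as YYYY-MM-DD:HH:MI:SS. The same conversion in Python (an illustration of the transformation, not GoldenGate code):

```python
from datetime import datetime

def gg_date(value):
    """Sketch of @DATE("YYYY-MM-DD:HH:MI:SS", "YYYYMMDDHHMISS", value):
    parse a YYYYMMDDHHMISS value and reformat it."""
    return datetime.strptime(str(value), "%Y%m%d%H%M%S").strftime("%Y-%m-%d:%H:%M:%S")

print(gg_date(20120919145320))  # 2012-09-19:14:53:20
```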







F9. MACROS

coming ...

jeudi 9 août 2012

Manual Upgrade of a 10gr2 Database to an 11gr2 Database

Introduction
The goal of this article is to give an example of a manual upgrade from a 10gr2 database to an 11gr2 database.
In my example, the 10gr2 database is BDTEST.

A. Check list to do Source Database

1. Clean up duplicate objects in the SYS and SYSTEM schemas
Some objects created under the SYS schema during the installation of the database can also be created under the SYSTEM schema by manually launched scripts (for example for replication).
Exceptions (do not drop the following) :
            AQ$_SCHEDULES TABLE
            AQ$_SCHEDULES_PRIMARY INDEX
            DBMS_REPCAT_AUTH PACKAGE
            DBMS_REPCAT_AUTH PACKAGE BODY


1-1 To find duplicate objects, run as SYS:
   select object_name, object_type 
   from dba_objects  
   where object_name||object_type in 
   (select object_name||object_type  
    from dba_objects 
    where owner = 'SYS') 
    and owner = 'SYSTEM';



1-2 To drop duplicate objects, run as SYSTEM:
set pause off 
set heading off 
set pagesize 0 
set feedback off 
set verify off 
spool duplicate_objects.sql 
select 'DROP ' || object_type || ' SYSTEM.' || object_name || ';' 
from dba_objects 
where object_name||object_type in 
   (select object_name||object_type  
    from dba_objects 
    where owner = 'SYS')
and object_name not in ('AQ$_SCHEDULES', 'AQ$_SCHEDULES_PRIMARY', 'DBMS_REPCAT_AUTH')
and owner = 'SYSTEM'; 
spool off 
exit 


2. Purge Recycle bin
Purge the recycle bin before the upgrade.
SQL> purge DBA_RECYCLEBIN;

3. Cluster
If CLUSTER_DATABASE is set to TRUE, change it to FALSE before upgrading the database.


4. Pre-Upgrade Tool
To prepare your 10gr2 database for upgrade, you must first download the script utility utlu112i.sql. Refer to My Oracle Support Document 884522.1: How to Download and Run Oracle's Database Pre-Upgrade Utility for details on downloading and running the Pre-Upgrade Utility. This SQL script checks the following:
1. Database configuration: Determines if the logfiles and datafiles are sized adequately for the upgrade
2. Initialization parameters: Reports which initialization parameters need changing, replacing or removing before the upgrade
3. Components: Which installed components will be upgraded
4. Miscellaneous Warnings: Any other situations requiring attention before the upgrade
5. Required tablespace: Ensure that the SYSAUX tablespace is created in the current database BEFORE the upgrade is carried out.
6. Timezone file version: Reports which file version is used and when/how to upgrade the timezone version.


Failure to run the pre-upgrade tool (utlu112i.sql) will result in the following error while running the catupgrd.sql script :
SQL> SELECT TO_NUMBER('MUST_BE_SAME_TIMEZONE_FILE_VERSION')
2 FROM registry$database
3 WHERE tz_version != (SELECT version from v$timezone_file);
SELECT TO_NUMBER('MUST_BE_SAME_TIMEZONE_FILE_VERSION')
*
ERROR at line 1:
ORA-01722: invalid number



From source server (10gr2) as sys:

SQL> spool /u03/upgrade/utlu112i_1.log
SQL>  @/u03/upgrade/utlu112i_1.sql
Oracle Database 11.2 Pre-Upgrade Information Tool 08-09-2012 11:25:35
Script Version: 11.2.0.1.0 Build: 004
**********************************************************************
Database:
**********************************************************************
--> name:          BDTEST
--> version:       10.2.0.5.0
--> compatible:    10.2.0.1.0
--> blocksize:     8192
--> platform:      Linux x86 64-bit
--> timezone file: V4
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 1331 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 34 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 721 MB
--> TBS_ADMIN_STREAM tablespace is adequate for the upgrade.
.... minimum required size: 3 MB
--> TBS_STREAM tablespace is adequate for the upgrade.
.... minimum required size: 3 MB
--> TBS_TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 64-bit database.
**********************************************************************
--> If Target Oracle is 32-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
--> If Target Oracle is 64-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
--> background_dump_dest         11.1       DEPRECATED   replaced by  "diagnostic_dest"
--> user_dump_dest               11.1       DEPRECATED   replaced by  "diagnostic_dest"
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views                      [upgrade]  VALID
--> Oracle Packages and Types            [upgrade]  VALID
--> JServer JAVA Virtual Machine       [upgrade]  VALID
--> Oracle XDK for Java                         [upgrade]  VALID
--> Oracle Workspace Manager           [upgrade]  VALID
--> OLAP Analytic Workspace              [upgrade]  VALID
--> OLAP Catalog                                   [upgrade]  VALID
--> EM Repository                                   [upgrade]  VALID
--> Oracle Text                                         [upgrade]  VALID
--> Oracle XML Database                      [upgrade]  VALID
--> Oracle Java Packages                     [upgrade]  VALID
--> Oracle interMedia                            [upgrade]  VALID
--> Spatial                                                 [upgrade]  VALID
--> Data Mining                                       [upgrade]  VALID
--> Expression Filter                               [upgrade]  VALID
--> Rule Manager                                    [upgrade]  VALID
--> Oracle OLAP API                              [upgrade]  VALID
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Database is using a timezone file older than version 11.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 10.2.0.5.0 database timezone version
.... to the latest version which comes with the new release.
WARNING: --> Database contains INVALID objects prior to upgrade.
.... The list of invalid SYS/SYSTEM objects was written to
.... registry$sys_inv_objs.
.... The list of non-SYS/SYSTEM objects was written to
.... registry$nonsys_inv_objs.
.... Use utluiobj.sql after the upgrade to identify any new invalid
.... objects due to the upgrade.
.... USER BEMIS has 33 INVALID objects.
.... USER PUBLIC has 1 INVALID objects.
.... USER ADMINSTRM has 2 INVALID objects.
.... USER SYS has 2 INVALID objects.
WARNING: --> EM Database Control Repository exists in the database.
.... Direct downgrade of EM Database Control is not supported. Refer to the
.... Upgrade Guide for instructions to save the EM data prior to upgrade.
WARNING: --> There are files which need media recovery.
.... Ensure no files need media recovery prior to upgrade.
WARNING: --> Sync standby database prior to upgrade.
WARNING: --> Your recycle bin contains 2359 object(s).
.... It is REQUIRED that the recycle bin is empty prior to upgrading
.... your database.  The command:
        PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your upgrade.
**********************************************************************
Recommendations
**********************************************************************
Oracle recommends gathering dictionary statistics prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;
**********************************************************************
Oracle recommends removing all hidden parameters prior to upgrading.
To view existing hidden parameters execute the following command
while connected AS SYSDBA:
    SELECT name,description from SYS.V$PARAMETER WHERE name
        LIKE '\_%' ESCAPE '\'
Changes will need to be made in the init.ora or spfile.
**********************************************************************
Oracle recommends reviewing any defined events prior to upgrading.
To view existing non-default events execute the following commands
while connected AS SYSDBA:
  Events:
    SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2
      WHERE  UPPER(name) ='EVENT' AND  isdefault='FALSE'
  Trace Events:
    SELECT (translate(value,chr(13)||chr(10),' ')) from sys.v$parameter2
      WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE'
Changes will need to be made in the init.ora or spfile.
**********************************************************************
SQL> spool off

Now, on the 10gr2 database BDTEST, perform the recommendations from utlu112i_1.log.


5. Compile invalid objects :
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> @utlrp.sql


6. Gather dictionary statistics :

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

B. Upgrade
  • Do a cold backup of BDTEST (10gr2) and move all the files to the 11gr2 server.
  • On the 11gr2 server :
  • [oracle@**** ~]$ export ORACLE_SID=BDTEST



  • [oracle@**** ~]$ sqlplus '/as sysdba'

  • SQL*Plus: Release 11.2.0.1.0 Production on Thu Aug 9 10:31:25 2012

    Copyright (c) 1982, 2009, Oracle.  All rights reserved.

    Connected to an idle instance.
    SQL> startup UPGRADE;
    SQL> spool catupgrd.log
    SQL> @?/rdbms/admin/catupgrd.sql
    ......

    SQL>

    SQL>@?/rdbms/admin/utlrp.sql

    SQL>shutdown immediate;


    SQL>startup

    Execute the script utlu112i.sql, which provides a summary of the upgrade :
    SQL>  @?/rdbms/admin/utlu112i.sql
    Oracle Database 11.2 Pre-Upgrade Information Tool    08-09-2012 14:27:06

    **********************************************************************
    Database:
    **********************************************************************
    --> name:          BDTEST
    --> version:       11.2.0.1.0
    --> compatible:    11.2.0.0.0
    --> blocksize:     8192
    --> timezone file: V4
    Database already upgraded; to rerun upgrade use rdbms/admin/catupgrd.sql.
    PL/SQL procedure successfully completed.
    SQL>

    Most important changes
    According to the Oracle note Oracle 11gR2 Upgrade Companion [ID 785351.1], the main changes
    in behavior between 10g and 11g are :

    New Background Processes (Oracle Database 10g to 11g Change)

    The following are the new database background processes that you could be
    running, depending upon the features being used.
    • DBRM (database resource manager) process is responsible for setting resource plans and other resource manager related tasks.
    • DIA0 (diagnosability process 0) (only 0 is currently being used) is responsible for hang detection and deadlock resolution.
    • EMNC (event monitor coordinator) is the background server process used for database event management and notifications.
    • FBDA (flashback data archiver process) archives the historical rows of tracked tables into flashback data archives. Tracked tables are tables which are enabled for flashback archive. When a transaction containing DML on a tracked table commits, this process stores the pre-image of the rows into the flashback archive. It also keeps metadata on the current rows.
    • FBDA is also responsible for automatically managing the flashback data archive for space, organization, and retention and keeps track of how far the archiving of tracked transactions has occurred.
    • GTX0-j (global transaction) processes provide transparent support for XA global transactions in an Oracle RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions. Global transaction processes are only seen in an Oracle RAC environment.
    • GEN0 (general task execution process) performs required tasks, including SQL and DML.
    • GMON maintains disk membership in ASM disk groups.
    • MARK marks ASM allocation units as stale following a missed write to an offline disk.
    • SMCO (space management coordinator) process coordinates the execution of various space management related tasks, such as proactive space allocation and space reclamation. It dynamically spawns slave processes (Wnnn) to implement the task.
    • VKTM (virtual keeper of time) is responsible for providing a wall-clock time (updated every second) and a reference-time counter (updated every 20 ms and available only when running at elevated priority).
    • VKRM (virtual scheduler for resource manager process) manages the CPU scheduling for all managed Oracle processes. The process schedules managed processes in accordance with an active resource plan.



    Initialization Parameters (Oracle Database 10g to Oracle Database 11g Change)

    1. OPTIMIZER_USE_INVISIBLE_INDEXES
    OPTIMIZER_USE_INVISIBLE_INDEXES was introduced in Oracle Database 11g
    Release 1. This parameter enables or disables the use of invisible indexes. The default
    value for OPTIMIZER_USE_INVISIBLE_INDEXES is FALSE.

    2. OPTIMIZER_USE_PENDING_STATISTICS
    OPTIMIZER_USE_PENDING_STATISTICS was introduced in Oracle Database 11g
    Release 1. This parameter specifies whether or not the optimizer uses pending
    statistics when compiling SQL statements. The default value for
    OPTIMIZER_USE_PENDING_STATISTICS is FALSE.

    3. OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES
    OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES was introduced in Oracle Database
    11g Release 1. This parameter enables or disables the automatic recognition of
    repeatable SQL statements, as well as the generation of SQL plan baselines for such
    statements. The default value for OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is
    FALSE.

    4. OPTIMIZER_USE_SQL_PLAN_BASELINES
    OPTIMIZER_USE_SQL_PLAN_BASELINES was introduced in Oracle Database 11g
    Release 1. This parameter enables or disables the use of SQL plan baselines stored in
    SQL Management Base. When enabled, the optimizer looks for a SQL plan baseline for
    the SQL statement being compiled. If one is found in SQL Management Base, then the
    optimizer will cost each of the baseline plans and pick one with the lowest cost. The
    default value for OPTIMIZER_USE_SQL_PLAN_BASELINES is TRUE.


    Automatic Optimizer Statistics Collection

    In Oracle Database 11g Automatic optimizer statistics collection runs as part of the automated
    maintenance tasks infrastructure (AutoTask) and is enabled by default to run in all predefined
    maintenance windows. The AutoTask statistics collection replaces the Oracle Database 10g
    GATHER_STATS_JOB. Automatic optimizer statistics collection is enabled by default. If for some 
    reason automatic optimizer statistics collection needs to be disabled, you can disable it using the 

    DISABLE procedure in the DBMS_AUTO_TASK_ADMIN package.

    BEGIN
      DBMS_AUTO_TASK_ADMIN.DISABLE(
        client_name => 'auto optimizer stats collection',
        operation   => NULL,
        window_name => NULL);
    END;
    /
    To re-enable automatic optimizer statistics collection, you can enable it using the ENABLE
    procedure in the DBMS_AUTO_TASK_ADMIN package :
    BEGIN
      DBMS_AUTO_TASK_ADMIN.ENABLE(
        client_name => 'auto optimizer stats collection',
        operation   => NULL,
        window_name => NULL);
    END;
    /



    For complete information, please refer to the note [ID 785351.1].