Wednesday, December 30, 2015

GoldenGate : ERROR OGG-01705 Input checkpoint position xxx is greater than the size of the file yyyy



A. Introduction :

Following a crash of the server hosting the database, the Datapump extract ABENDED.
In general, Datapump Extracts and Replicats read the current trail file data from the disk cache instead of from the physical file when the read checkpoint is very
close to the current EOF of the trail file being read, in other words when the process keeps up with the trail. There is therefore a chance that the Datapump or
Replicat will checkpoint an RBA which is still in the cache. If there is a disk or system outage or a similar issue, the data in the cache may be lost, because the
system does not get a chance to flush it to disk.


B. Version & Status :



Version:


Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.4


Status:

EXTRACT RUNNING EXT_MGR 00:00:00 00:00:09
EXTRACT ABENDED EXT_PMP 66:31:22 02:12:53

In the ggserr.log, we can find:
2015-12-29 11:33:27  ERROR   OGG-01705  Input checkpoint position 106570861 for input trail file '/apps/trail/DBPROD/ab00125' is greater than the size of the file (106570847).  Please consult Oracle Knowledge Management Doc ID 1138409.1. for instructions.


C. Solution:

I used Metalink note 1138409.1, option 2.
The goal is to compute the new RBA using the following formula:

New datapump / Replicat RBA =   (Reader's too-big checkpoint RBA)                               (Step 1)
                              - (Actual size of datapump / replicat trail file (seqno X))       (Step 2)
                              + (First record in the new trail file, after the RestartAbend)    (Step 3)
  
The size of the current trail and the next are :

-rw-rw-rw-    1 oracle   dba       106570847 Dec 27 01:14 /apps/trail/DBPROD/ab00125   << where the checkpoint is pointing
-rw-rw-rw-    1 oracle   dba       106570836 Dec 27 04:33 /apps/trail/DBPROD/ab00126  << the next available trail file.

Step 1: Reader's too-big checkpoint RBA

We know the Reader's too-big checkpoint RBA from the details of the datapump:
 
 GGSCI  >
   info EXT_PMP
   EXTRACT    EXT_PMP    Last Started 2015-12-27 13:52   Status ABENDED
   Checkpoint Lag       25:06:45 (updated 00:28:04 ago)
   Log Read Checkpoint  File /apps/trail/DBPROD/ab00125
        2015-12-26 12:46:07.000000  RBA 106570861


Here the actual trail file size, i.e. the End of File (EOF), is 106570847, while the datapump read checkpoint RBA is 106570861.
   
So  Reader's too-big checkpoint RBA = 106570861
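The read checkpoint can also be displayed in more detail with the SHOWCH option of INFO (just an alternative way to confirm the seqno and RBA above):

GGSCI > info extract EXT_PMP, showch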

Step 2: Actual size of the datapump / replicat trail file (seqno X)

ls -l /apps/trail/DBPROD/ab00125:
-rw-rw-rw-    1 oracle   dba       106570847 Dec 27 01:14 /apps/trail/DBPROD/ab00125   << where the checkpoint is pointing
 So Actual size of datapump / replicat trail file = 106570847.
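The size can also be read directly from the shell; a minimal sketch (stat -c %s is the GNU coreutils form, use ls -l as above on other platforms):

stat -c %s /apps/trail/DBPROD/ab00125
106570847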

Step 3: First record in the new trail file (after the RestartAbend)

Use the logdump utility and go to the next file :
./logdump

Logdump 61 >open /apps/trail/DBPROD/ab00126
Current LogTrail is /apps/trail/DBPROD/ab00126
Logdump 62 >ghdr on
Logdump 63 >ggstoken detail
Logdump 64 >detail data on
Logdump 65 >n



2015/12/27 01:15:56.628.828 FileHeader           Len  1075 RBA 0
Name: *FileHeader*
......
.....
Logdump 66 >n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x00)
UndoFlag : . (x00) BeforeAfter: A (x41)
...
..... RestartAbend         Len     0 RBA 1083
Name:
After  Image:   Partition 0   G  s
Logdump 67 >n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x04)
.....
....  FieldComp            Len   502 RBA 1144
Name: MY_SCHEMA.MY_TABLE
....
Logdump 68 >


So First record in the new trail file (after the restart abend)  = 1144



D. New RBA



New datapump / Replicat RBA = (Reader's too-big checkpoint RBA) - (Actual size of datapump / replicat trail file (seqno X)) + First record in the new trail file (after the restart abend)


1) Reader's too-big checkpoint RBA                               = 106570861   (step 1)
2) Actual size of datapump / replicat trail file (seqno X)       = 106570847   (step 2)
3) First record in the new trail file (after the RestartAbend)   = 1144        (step 3)



New datapump / Replicat RBA = 106570861 - 106570847 + 1144 = 1158
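The same computation as a quick shell sketch (the variable names are illustrative only):

CHKPT_RBA=106570861     # step 1: reader's too-big checkpoint RBA
TRAIL_SIZE=106570847    # step 2: actual size of trail ab00125
FIRST_RBA=1144          # step 3: first record in trail ab00126
echo $(( CHKPT_RBA - TRAIL_SIZE + FIRST_RBA ))   # prints 1158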





E. Restart the Datapump


Ensure that there is a good record at sequence number 126 and RBA 1158, with TransInd x00 or x03.
If so, the datapump / replicat can be altered to trail file sequence number 126 and RBA 1158 using the following commands:


Logdump 80 >open /apps/trail/DBPROD/ab00126
Logdump 81 >pos 1158
Reading forward from RBA 1158
Logdump 82 >n
2015/12/26 12:46:10.003.297 FieldComp            Len  1785 RBA 1158
Name: *FileHeader*
 .......
Logdump 82 >n
___________________________________________________________________
....
TransInd   :     .  (x03)     FormatType :     R  (x52)
...

GGSCI > alter extract EXT_PMP, extseqno 126, extrba 1158
GGSCI > start EXT_PMP
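After the restart, it is worth checking that the datapump is RUNNING again and that its read checkpoint is now advancing in trail ab00126, for example:

GGSCI > info EXT_PMP
GGSCI > send extract EXT_PMP, status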

Friday, November 13, 2015

Migration From 11gr2 To 12c Using Rman Duplicate NoOpen New Feature

Introduction:

The goal of this article is to show how I migrated an 11gR2 database to 12c, using RMAN DUPLICATE with the new NOOPEN feature.


We assume that the first database is DB1 on server S1 (11.2.0.3),
and that the second database is DB2 on server S2 (12.1.0.2).




A. Checklist for the Source Database


I. Pre-Upgrade Steps:

Complete Checklist for Manual Upgrades to Oracle Database 12c Release 1 (12.1)
 (Doc ID 1503653.1)

The Pre-Upgrade tool must be run on the first database, DB1.
Run the Pre-Upgrade Information Tool to collect pre-upgrade information:
* Log into the system as the owner of the Oracle Database 12c Release 1 (12.1) Oracle Home directory.
* Copy the Pre-Upgrade Information Tool scripts preupgrd.sql and utluppkg.sql from the Oracle Database 12c Release 1 (12.1) $ORACLE_HOME/rdbms/admin directory to the $ORACLE_HOME/rdbms/admin directory of the source Oracle Home.
* Run the new Pre-Upgrade Information Tool. For example, if you copied preupgrd.sql to the /admin directory of the source Oracle Home:

SQL> @$ORACLE_HOME/rdbms/admin/preupgrd.sql  <==============ON DB1 !!!!
The preupgrade.log, preupgrade_fixups.sql and postupgrade_fixups.sql files are created in
 $ORACLE_HOME/cfgtoollogs/$ORACLE_SID/preupgrade/, which is under the source database
 ORACLE_HOME to be upgraded.
You obtain three files: the first two are used on the source DB1 (they also list parameter changes to apply on DB2); the last one is run on the target DB2.

For DB1:
1./apps/oracle/cfgtoollogs/DB1/preupgrade/preupgrade.log
2./apps/oracle/cfgtoollogs/DB1/preupgrade/preupgrade_fixups.sql


For DB2:

Move /apps/oracle/cfgtoollogs/DB1/preupgrade/postupgrade_fixups.sql to the second server, for example to:
 /apps/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql

1./apps/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql
 Be careful: you must change
      IF con_name = 'DB1' THEN
 to
     IF con_name = 'DB2' THEN
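A minimal sed sketch of that substitution (it assumes the exact spacing shown above; review the result, for example with diff, before running the script):

sed "s/con_name = 'DB1'/con_name = 'DB2'/" /apps/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql > /tmp/postupgrade_fixups_DB2.sql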
 Other note:
 the first pre-check file may list parameters to change; these parameters must be changed on the target database, DB2.

II. RMAN Backup



Take an RMAN backup of the first database, DB1.
rman "target / nocatalog"
RUN
{
ALLOCATE CHANNEL chan_name TYPE DISK;
BACKUP DATABASE FORMAT '<db_backup_directory>%U' TAG before_upgrade;
BACKUP CURRENT CONTROLFILE TO '<controlfile_backup_directory>';
}

Move the backup to the second server S2, for example to /apps/oracle/DB2/Backup/.
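For example, with scp (the oracle account and destination path are illustrative):

scp <db_backup_directory>/* oracle@S2:/apps/oracle/DB2/Backup/
scp <controlfile_backup_directory>/* oracle@S2:/apps/oracle/DB2/Backup/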













B. Restore the backup on the second database

On the second database DB2, restore the backup using the DUPLICATE command.
Start DB2 in NOMOUNT with appropriate 12c parameters, especially COMPATIBLE set to 12.1.0 or higher.
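A minimal sketch of the NOMOUNT step (the pfile content is illustrative; add the memory, control_files / db_create_file_dest and any *_convert parameters needed by your environment, plus the changes recommended by the pre-upgrade tool):

cat > $ORACLE_HOME/dbs/initDB2.ora <<EOF
db_name=DB2
compatible=12.1.0.2
EOF
export ORACLE_SID=DB2
sqlplus / as sysdba
SQL> startup nomount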

. oraenv
DB2
rman auxiliary /








RMAN> run
{
 allocate auxiliary channel chnl1 device type disk;
 DUPLICATE DATABASE TO DB2 nofilenamecheck
 NOOPEN BACKUP LOCATION '/apps/oracle/DB2/Backup/' NOREDO ;
 }



We use NOOPEN so that RMAN stops at the MOUNT step.
We use NOREDO if the backup was COLD (consistent).
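If the backup had been a HOT (inconsistent) backup, the archived logs needed for recovery would also have to be present in the backup location and NOREDO would be omitted; a sketch of that variant (verify against the RMAN documentation for your version):

RMAN> run
{
 allocate auxiliary channel chnl1 device type disk;
 DUPLICATE DATABASE TO DB2 nofilenamecheck
 NOOPEN BACKUP LOCATION '/apps/oracle/DB2/Backup/' ;
}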



When the database is restored, open it in UPGRADE mode:


SQL> alter database open resetlogs upgrade;

C. Migration to 12c


Use catctl.pl to upgrade the database.

Run the catctl.pl script from the new Oracle home.
In this release, the new Upgrade Utility, catctl.pl, replaces catupgrd.sql.



cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl  catctl.pl -n 6 -l  /apps/orafra/DB2/migration-12c/ catupgrd.sql

Analyzing file catupgrd.sql
Log files in /apps/orafra/DB2/migration-12c/
catcon: ALL catcon-related output will be written to /apps/orafra/DB2/migration-12c//catupgrd_catcon_13566188.lst
catcon: See /apps/orafra/DB2/migration-12c//catupgrd*.log files for output generated by scripts
catcon: See /apps/orafra/DB2/migration-12c//catupgrd_*.lst files for spool files, if any
Number of Cpus        = 8
SQL Process Count     = 6
------------------------------------------------------
Phases [0-73]
Serial   Phase #: 0 Files: 1     Time: 82s
Serial   Phase #: 1 Files: 5     Time: 67s
Restart  Phase #: 2 Files: 1     Time: 1s
Parallel Phase #: 3 Files: 18    Time: 17s
Restart  Phase #: 4 Files: 1     Time: 0s
Serial   Phase #: 5 Files: 5     Time: 30s
Serial   Phase #: 6 Files: 1     Time: 12s
Serial   Phase #: 7 Files: 4     Time: 11s
Restart  Phase #: 8 Files: 1     Time: 1s
Parallel Phase #: 9 Files: 62    Time: 40s
Restart  Phase #:10 Files: 1     Time: 1s
Serial   Phase #:11 Files: 1     Time: 27s
Restart  Phase #:12 Files: 1     Time: 1s
Parallel Phase #:13 Files: 91    Time: 13s
Restart  Phase #:14 Files: 1     Time: 0s
Parallel Phase #:15 Files: 111   Time: 23s
Restart  Phase #:16 Files: 1     Time: 0s
Serial   Phase #:17 Files: 3     Time: 2s
Restart  Phase #:18 Files: 1     Time: 0s
Parallel Phase #:19 Files: 32    Time: 21s
Restart  Phase #:20 Files: 1     Time: 1s
Serial   Phase #:21 Files: 3     Time: 8s
Restart  Phase #:22 Files: 1     Time: 1s
Parallel Phase #:23 Files: 23    Time: 88s
Restart  Phase #:24 Files: 1     Time: 1s
Parallel Phase #:25 Files: 11    Time: 42s
Restart  Phase #:26 Files: 1     Time: 0s
Serial   Phase #:27 Files: 1     Time: 1s
Restart  Phase #:28 Files: 1     Time: 0s
Serial   Phase #:30 Files: 1     Time: 0s
Serial   Phase #:31 Files: 257   Time: 27s
Serial   Phase #:32 Files: 1     Time: 0s
Restart  Phase #:33 Files: 1     Time: 0s
Serial   Phase #:34 Files: 1     Time: 8s
Restart  Phase #:35 Files: 1     Time: 0s
Restart  Phase #:36 Files: 1     Time: 1s
Serial   Phase #:37 Files: 4     Time: 53s
Restart  Phase #:38 Files: 1     Time: 1s
Parallel Phase #:39 Files: 13    Time: 50s
Restart  Phase #:40 Files: 1     Time: 1s
Parallel Phase #:41 Files: 10    Time: 10s
Restart  Phase #:42 Files: 1     Time: 1s
Serial   Phase #:43 Files: 1     Time: 7s
Restart  Phase #:44 Files: 1     Time: 1s
Serial   Phase #:45 Files: 1     Time: 7s
Serial   Phase #:46 Files: 1     Time: 0s
Restart  Phase #:47 Files: 1     Time: 0s
Serial   Phase #:48 Files: 1     Time: 345s
Restart  Phase #:49 Files: 1     Time: 0s
Serial   Phase #:50 Files: 1     Time: 42s
Restart  Phase #:51 Files: 1     Time: 1s
Serial   Phase #:52 Files: 1     Time: 1s
Restart  Phase #:53 Files: 1     Time: 0s
Serial   Phase #:54 Files: 1     Time: 214s
Restart  Phase #:55 Files: 1     Time: 0s
Serial   Phase #:56 Files: 1     Time: 78s
Restart  Phase #:57 Files: 1     Time: 0s
Serial   Phase #:58 Files: 1     Time: 145s
Restart  Phase #:59 Files: 1     Time: 0s
Serial   Phase #:60 Files: 1     Time: 294s
Restart  Phase #:61 Files: 1     Time: 1s
Serial   Phase #:62 Files: 1     Time: 1s
Restart  Phase #:63 Files: 1     Time: 1s
Serial   Phase #:64 Files: 1     Time: 2s
Serial   Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/apps/oracle/12102/rdb/std/lib; export LD_LIBRARY_PATH; LIBPATH=/apps/oracle/12102/rdb/std/lib; export LIBPATH; LD_LIBRARY_PATH_64=/apps/oracle/12102/rdb/std/lib; export LD_LIBRARY_PATH_64; DYLD_LIBRARY_PATH=/apps/oracle/12102/rdb/std/lib; export DYLD_LIBRARY_PATH; /apps/oracle/12102/rdb/std/perl/bin/perl -I /apps/oracle/12102/rdb/std/rdbms/admin -I /apps/oracle/12102/rdb/std/rdbms/admin/../../sqlpatch /apps/oracle/12102/rdb/std/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only > /apps/orafra/DB2/migration-12c//catupgrd_datapatch_upgrade.log 2> /apps/orafra/DB2/migration-12c//catupgrd_datapatch_upgrade.err
returned from sqlpatch
    Time: 46s
Serial   Phase #:66 Files: 1     Time: 37s
Serial   Phase #:68 Files: 1     Time: 0s
Serial   Phase #:69 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/apps/oracle/12102/rdb/std/lib; export LD_LIBRARY_PATH; LIBPATH=/apps/oracle/12102/rdb/std/lib; export LIBPATH; LD_LIBRARY_PATH_64=/apps/oracle/12102/rdb/std/lib; export LD_LIBRARY_PATH_64; DYLD_LIBRARY_PATH=/apps/oracle/12102/rdb/std/lib; export DYLD_LIBRARY_PATH; /apps/oracle/12102/rdb/std/perl/bin/perl -I /apps/oracle/12102/rdb/std/rdbms/admin -I /apps/oracle/12102/rdb/std/rdbms/admin/../../sqlpatch /apps/oracle/12102/rdb/std/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose > /apps/orafra/DB2/migration-12c//catupgrd_datapatch_normal.log 2> /apps/orafra/DB2/migration-12c//catupgrd_datapatch_normal.err
returned from sqlpatch
    Time: 48s
Serial   Phase #:70 Files: 1     Time: 13s
Serial   Phase #:71 Files: 1     Time: 1s
Serial   Phase #:72 Files: 1     Time: 0s
Serial   Phase #:73 Files: 1     Time: 19s
Grand Total Time: 1951s




Run the Post-Upgrade Status Tool $ORACLE_HOME/rdbms/admin/utlu121s.sql which provides a summary of
the upgrade at the end of the spool log.
It displays the status of the database components in the upgraded database and the time required to
complete each component upgrade.
Any errors that occur during the upgrade are listed with each component and must be addressed.


$ sqlplus "/as sysdba"
SQL> STARTUP
SQL> @utlu121s.sql








Run catuppst.sql, located in the $ORACLE_HOME/rdbms/admin directory, to perform upgrade actions that
do not require the database to be in UPGRADE mode.

SQL> @catuppst.sql

This script can be run concurrently with utlrp.sql.

Run utlrp.sql to recompile any remaining stored PL/SQL and Java code in another session.

SQL> @utlrp.sql



At this point, DB2 is migrated to 12c:

Oracle Database 12.1 Post-Upgrade Status Tool           11-07-2015 16:37:11
Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS
Oracle Server                          UPGRADED      12.1.0.2.0  00:10:57
JServer JAVA Virtual Machine              VALID      12.1.0.2.0  00:05:43
Oracle Workspace Manager                  VALID      12.1.0.2.0  00:01:02
Oracle XDK                                VALID      12.1.0.2.0  00:00:40
Oracle Text                               VALID      12.1.0.2.0  00:01:01
Oracle XML Database                       VALID      12.1.0.2.0  00:02:30
Oracle Database Java Packages             VALID      12.1.0.2.0  00:00:13
Oracle Multimedia                         VALID      12.1.0.2.0  00:02:23
Spatial                                UPGRADED      12.1.0.2.0  00:04:52
Final Actions                                                    00:01:21
Post Upgrade                                                     00:00:09

D. Execute in the NEW environment AFTER upgrade

@/apps/oracle/cfgtoollogs/DB2/preupgrade/postupgrade_fixups.sql


Thursday, May 21, 2015

OCR / VOTING DISK MAINTENANCE OPERATIONS: MIGRATION FROM OCFS2 TO ASM


Introduction:

Starting with Oracle 11gR2, Oracle's recommendation is to use Oracle ASM to store the OCR and voting disks. With an appropriate redundancy level for the ASM disk group being used, Oracle creates the required number of voting disks as part of the installation.


Be sure to:

  • Have the root password.
  • Have a valid spfile for ASM (see the sketch below).
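For the ASM spfile, a simple way to keep a pfile copy before any change (a sketch; the instance name +ASM1 and the target path are assumptions to adapt):

export ORACLE_SID=+ASM1
sqlplus / as sysasm
SQL> create pfile='/tmp/asm_spfile_backup.ora' from spfile;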

One RAC with two nodes: Node_a and Node_b.
OS : Linux

Impact:

The service will be stopped; no connection to the database is possible during this operation.

I. Backups : Make sure there is a recent copy of the OCR/Voting disk before making any changes

1.1  OCR

 

As the root user:


root@Node_a # ocrconfig -manualbackup

/logiciels/oracle/grid/cdata/node-cluster/backup_20130409_145249.ocr
 

1.2  Voting

root@Node_a # dd if=/oracle/ocfs2/storage/vdsk of=vdsk_bkp

41025+0 records in
41025+0 records out
21004800 bytes (21 MB) copied, 0.526757 seconds, 39.9 MB/s
root@Node_a # ls -l /oracle/ocfs2/storage
total 44384
-rw-r----- 1 root    dba  272756736 Apr 10 14:51 ocr
-rw-r----- 1 oragrid dba   21004800 Feb 12 09:32 vdsk
-rw-r--r-- 1 root    root  21004800 Apr 12 15:21 vdsk_bkp

II.  Preparing the ASM Disk Group: +CRS & +FRA


The approach is:

  1. Create two ASM disk groups: +CRS and +FRA
  2. Migrate the OCR, voting disks and archive logs to the new ASM disk groups: +CRS & +FRA

2.1 Check the status of OCR/Voting

root@Node_a # ocrcheck
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3044
         Available space (kbytes) :     259076
         ID                       :  647776079
         Device/File Name         : /oracle/ocfs2/storage/ocr
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
root@Node_a # crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   11d68d19685d4f20bf026eeb463d23aa (/oracle/ocfs2/storage/vdsk)
Located 1 voting disk(s).
 

2.2  Format new SAN disks

root@uvbacko890a # fdisk /dev/sdm
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
 
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1)
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): p
 
Disk /dev/sdm: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdm1               1        1305    10482381   83  Linux
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.
Repeat the same steps for /dev/sdn, /dev/sdo and /dev/sds.
 
 
On the first node:
root@Node_a# /etc/init.d/oracleasm createdisk DISK8 /dev/sdm1
Marking disk "DISK8" as an ASM disk:                       [  OK  ]
root@Node_a # /etc/init.d/oracleasm createdisk DISK9 /dev/sdn1
Marking disk "DISK9" as an ASM disk:                       [  OK  ]
root@Node_a # /etc/init.d/oracleasm createdisk DISK10 /dev/sdo1
Marking disk "DISK10" as an ASM disk:                       [  OK  ]
root@Node_a # /etc/init.d/oracleasm createdisk DISK11 /dev/sds1
Marking disk "DISK11" as an ASM disk:                       [  OK  ]
 
 
On the second node:
/etc/init.d/oracleasm scandisks
 
Check:
On the first:
root@Node_a # /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
DISK6
DISK7
DISK8
DISK9
DISK10
DISK11
 
 On the second:
root@Node_b # /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
DISK6
DISK7
DISK8
DISK9
DISK10
DISK11

 

2.3 Create the ASM disk group +CRS

Locate the asmca binary (for example /logiciels/oracle/grid/bin/):
cd /logiciels/oracle/grid/bin
export DISPLAY=XXXXXXX.212:0.0
./asmca
(Normally the disk DISK10 should have the same size as DISK8 and DISK9.)
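If you prefer to avoid the GUI, the same disk group can be created from SQL*Plus on the ASM instance (a sketch based on the disks labelled above; note that compatible.asm must be at least 11.2 for a disk group that will hold the OCR and voting files):

SQL> create diskgroup CRS normal redundancy
     disk 'ORCL:DISK8', 'ORCL:DISK9', 'ORCL:DISK10'
     attribute 'compatible.asm' = '11.2';
-- then mount it on the second node:
SQL> alter diskgroup CRS mount;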

 


 
Do the same for the +FRA disk group.

 


 

2.4 Check that the new disk groups are mounted: on each node, as the oragrid user

 
 
SQL> select state,name,type from v$asm_diskgroup;
 
STATE       NAME                           TYPE
----------- ------------------------------ ------
MOUNTED     DATA_ASM                       EXTERN
MOUNTED     CRS                            NORMAL
MOUNTED     FRA                            EXTERN



2.5 Case when a disk group is not mounted

If, for example, on the second node the CRS disk group is not MOUNTED:
SQL> select state,name,type from v$asm_diskgroup;
STATE       NAME                           TYPE
----------- ------------------------------ ------
MOUNTED     DATA_ASM                       EXTERN
MOUNTED     FRA                            EXTERN
DISMOUNTED  CRS

Then check the status of the resource (for example with crsctl stat res -t) and start it:

NAME           TARGET  STATE        SERVER      STATE_DETAILS

Local Resources

ora.CRS.dg
               ONLINE  ONLINE       Node_a
               OFFLINE OFFLINE      Node_b
ora.DATA_ASM.dg
               ……
To start ora.CRS.dg on Node_b:
oragrid@Node_b:/oracle/ocfs2/storage> crsctl start resource ora.CRS.dg -n Node_b


III. OCR Disk

3.1 Add an OCRMIRROR device when only one OCR device is defined:

On one node

root# ocrconfig -add +CRS


3.2 Remove the old non-ASM shared OCR

root# ocrconfig -delete /oracle/ocfs2/storage/ocr

3.3 Check the status of OCR



root@Node_a # ocrcheck
Status of Oracle Cluster Registry is as follows:
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3028
         Available space (kbytes) :     259092
         ID                       :  647776079
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured

         Cluster registry integrity check succeeded
         Logical corruption check succeeded

IV. Voting Disk

4.1 Check the status of the Voting disk

 
root@Node_a # crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   11d68d19685d4f20bf026eeb463d23aa (/oracle/ocfs2/storage/vdsk) []
Located 1 voting disk(s).

4.2 Migrate Voting disk to ASM

root@Node_a # crsctl replace votedisk +CRS
(the corresponding client logs are written under /logiciels/oracle/grid/log/Node_a/client)
Successful addition of voting disk 22a1974019a04feabfddb6f6ff819926.
Successful addition of voting disk 35d5a0f35db94f07bf5774a36cae4435.
Successful addition of voting disk a09b4c45c86f4fbdbf30f0cdc0ebe446.
Successful deletion of voting disk 475d55cb7dda4f00bf32bf7b3da8cbfc.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced

4.3 Check the status of Voting disk

On one node:

root@Node_a # crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   22a1974019a04feabfddb6f6ff819926 (ORCL:DISK10) [CRS]
 2. ONLINE   35d5a0f35db94f07bf5774a36cae4435 (ORCL:DISK8)  [CRS]
 3. ONLINE   a09b4c45c86f4fbdbf30f0cdc0ebe446 (ORCL:DISK9)  [CRS]
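As a final sanity check (a suggestion, not part of the original procedure), confirm the OCR, the clusterware stack and the voting files on each node:

root@Node_a # ocrcheck
root@Node_a # crsctl check crs
root@Node_a # crsctl query css votedisk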