Thursday, May 21, 2015

OCR / VOTING DISK MAINTENANCE OPERATIONS: MIGRATION FROM OCFS2 TO ASM


Introduction:

Starting with Oracle 11gR2, Oracle's recommendation is to store the OCR and Voting Disks in Oracle ASM. Depending on the redundancy level of the ASM disk group used, Oracle creates the required number of Voting Disks as part of the installation: one file with EXTERNAL redundancy, three with NORMAL redundancy, and five with HIGH redundancy.


Be sure to:

  • Have the root password.
  • Have a valid ASM spfile.

Environment: one RAC cluster with two nodes, Node_a and Node_b.
OS: Linux

Impact:

The service will be stopped; no connection to the database is possible during this operation.
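
If the database must be stopped first, srvctl does it cluster-wide; a minimal sketch, assuming the database is registered with Clusterware under the hypothetical name MYDB:

oracle@Node_a $ srvctl stop database -d MYDB      # stops all instances across the cluster
oracle@Node_a $ srvctl status database -d MYDB    # confirm every instance is down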

I. Backups: make sure there is a recent copy of the OCR/Voting disk before making any changes

1.1  OCR

 

As the root user:


root@Node_a # ocrconfig -manualbackup

/logiciels/oracle/grid/cdata/node-cluster/backup_20130409_145249.ocr
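
To confirm that the manual backup is registered, and optionally to take a logical export as an extra safety net (the export file path below is just an example), as root:

root@Node_a # ocrconfig -showbackup
root@Node_a # ocrconfig -export /tmp/ocr_before_migration.exp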
 

1.2  Voting

root@Node_a # dd if=/oracle/ocfs2/storage/vdsk of=vdsk_bkp

41025+0 records in
41025+0 records out
21004800 bytes (21 MB) copied, 0.526757 seconds, 39.9 MB/s
root@Node_a # ls -l /oracle/ocfs2/storage
total 44384
-rw-r----- 1 root    dba  272756736 Apr 10 14:51 ocr
-rw-r----- 1 oragrid dba   21004800 Feb 12 09:32 vdsk
-rw-r--r-- 1 root    root  21004800 Apr 12 15:21 vdsk_bkp
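
Optionally, verify that the copy is byte-for-byte identical to the source (assuming md5sum is available on the host):

root@Node_a # md5sum /oracle/ocfs2/storage/vdsk /oracle/ocfs2/storage/vdsk_bkp

The two checksums must match.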

II.  Preparing the ASM Disk Group: +CRS & +FRA


The approach is:

  1. Create two ASM disk groups: +CRS and +FRA.
  2. Migrate the OCR, the voting disk and the archive logs to the new ASM disk groups: +CRS and +FRA.

2.1 Check the status of OCR/Voting

As the oragrid user (which is why the logical corruption check below is bypassed):

oragrid@Node_a > ocrcheck
Status of Oracle Cluster Registry is as follows:
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3044
         Available space (kbytes) :     259076
         ID                       :  647776079
         Device/File Name         : /oracle/ocfs2/storage/ocr
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
root@Node_a # crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   11d68d19685d4f20bf026eeb463d23aa (/oracle/ocfs2/storage/vdsk)
Located 1 voting disk(s).
 

2.2  Format new SAN disks

root@Node_a # fdisk /dev/sdm
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
 
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1)
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): p
 
Disk /dev/sdm: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdm1               1        1305    10482381   83  Linux
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.
Repeat the same steps for /dev/sdn, /dev/sdo and /dev/sds.
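
Since the partitions were created from the first node only, it is worth making sure the kernel on the second node sees the new partition tables before labelling the disks; partprobe re-reads them without a reboot (a suggested check, not in the original procedure):

root@Node_b # partprobe /dev/sdm /dev/sdn /dev/sdo /dev/sds
root@Node_b # fdisk -l /dev/sdm      # the new /dev/sdm1 partition should be listed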
 
 
On the first node:
root@Node_a# /etc/init.d/oracleasm createdisk DISK8 /dev/sdm1
Marking disk "DISK8" as an ASM disk:                       [  OK  ]
root@Node_a # /etc/init.d/oracleasm createdisk DISK9 /dev/sdn1
Marking disk "DISK9" as an ASM disk:                       [  OK  ]
root@Node_a # /etc/init.d/oracleasm createdisk DISK10 /dev/sdo1
Marking disk "DISK10" as an ASM disk:                       [  OK  ]
root@Node_a # /etc/init.d/oracleasm createdisk DISK11 /dev/sds1
Marking disk "DISK11" as an ASM disk:                       [  OK  ]
 
 
On the second node:
root@Node_b # /etc/init.d/oracleasm scandisks
 
Check:
On the first node:
root@Node_a # /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
DISK6
DISK7
DISK8
DISK9
DISK10
DISK11
 
On the second node:
root@Node_b # /etc/init.d/oracleasm listdisks
DISK1
DISK2
DISK3
DISK4
DISK5
DISK6
DISK7
DISK8
DISK9
DISK10
DISK11
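
To double-check that a given label maps to the intended device, oracleasm can also query a disk by name:

root@Node_a # /etc/init.d/oracleasm querydisk DISK8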

 

2.3 Create the ASM disk group +CRS

Locate the asmca binary (for example /logiciels/oracle/grid/bin/):

cd /logiciels/oracle/grid/bin
export DISPLAY=XXXXXXX.212:0.0
./asmca

(Normally DISK10 should have the same size as DISK8 and DISK9.)

Do the same for +FRA, selecting all the remaining disks. A command-line alternative is sketched below.
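
If no X display is available for asmca, the disk groups can also be created from SQL*Plus on the ASM instance; a sketch, assuming the disk labels created above, NORMAL redundancy for +CRS, EXTERNAL redundancy for +FRA on the remaining disk DISK11, and noting that compatible.asm must be at least 11.2 for a disk group that will hold OCR/voting files:

oragrid@Node_a > sqlplus / as sysasm

SQL> CREATE DISKGROUP CRS NORMAL REDUNDANCY
     DISK 'ORCL:DISK8', 'ORCL:DISK9', 'ORCL:DISK10'
     ATTRIBUTE 'compatible.asm' = '11.2';

SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
     DISK 'ORCL:DISK11'
     ATTRIBUTE 'compatible.asm' = '11.2';

A disk group created this way is mounted only on the local node; mount it on the other node with ALTER DISKGROUP CRS MOUNT; (and the same for FRA).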

 


 

2.4 Check that the new disk groups are mounted

On each node, as the oragrid user:

 
 
SQL> select state,name,type from v$asm_diskgroup;
 
STATE       NAME                           TYPE
----------- ------------------------------ ------
MOUNTED     DATA_ASM                       EXTERN
MOUNTED     CRS                            NORMAL
MOUNTED     FRA                            EXTERN
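
The same check can be done with asmcmd instead of SQL*Plus (with the grid environment pointing at the ASM instance):

oragrid@Node_a > asmcmd lsdg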



2.5 Case when a disk group is not mounted

If, for example, the CRS disk group is not mounted on the second node:
SQL> select state,name,type from v$asm_diskgroup;
STATE       NAME                           TYPE
----------- ------------------------------ ------
MOUNTED     DATA_ASM                       EXTERN
MOUNTED     FRA                            EXTERN
DISMOUNTED  CRS

Then check the status of the resource and start it:

oragrid@Node_b:/oracle/ocfs2/storage> crsctl status resource -t

NAME           TARGET  STATE        SERVER      STATE_DETAILS

Local Resources

ora.CRS.dg
               ONLINE  ONLINE       Node_a
               OFFLINE OFFLINE      Node_b
ora.DATA_ASM.dg
               ……
To start ora.CRS.dg on Node_b:
oragrid@Node_b:/oracle/ocfs2/storage> crsctl start resource ora.CRS.dg -n Node_b
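
Once started, the resource should show ONLINE on both nodes; one way to confirm it:

oragrid@Node_b:/oracle/ocfs2/storage> crsctl status resource ora.CRS.dg -t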


III. OCR Disk

3.1 Add an OCR mirror device when only one OCR device is defined:

On one node, as the root user:

root# ocrconfig -add +CRS
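
At this point ocrcheck should report two Device/File Name entries, +CRS and the old /oracle/ocfs2/storage/ocr; it is worth confirming this before removing the old device:

root@Node_a # ocrcheck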


3.2 Remove the old non-ASM shared OCR

root# ocrconfig -delete /oracle/ocfs2/storage/ocr

3.3 Check the status of OCR

root@Node_a # ocrcheck

Status of Oracle Cluster Registry is as follows:
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3028
         Available space (kbytes) :     259092
         ID                       :  647776079
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured

         Cluster registry integrity check succeeded
         Logical corruption check succeeded
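
Optionally, the Cluster Verification Utility can validate OCR integrity across all the cluster nodes (cluvfy lives in the bin directory of the Grid home):

oragrid@Node_a > cluvfy comp ocr -n all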

IV. Voting Disk

4.1 Check the status of the Voting disk

 
root@Node_a # crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   11d68d19685d4f20bf026eeb463d23aa (/oracle/ocfs2/storage/vdsk) []
Located 1 voting disk(s).

4.2 Migrate Voting disk to ASM

As the root user (the operation is logged under /logiciels/oracle/grid/log/Node_a/client). Because the +CRS disk group uses NORMAL redundancy, three voting files are created on it:

root@Node_a # crsctl replace votedisk +CRS
Successful addition of voting disk 22a1974019a04feabfddb6f6ff819926.
Successful addition of voting disk 35d5a0f35db94f07bf5774a36cae4435.
Successful addition of voting disk a09b4c45c86f4fbdbf30f0cdc0ebe446.
Successful deletion of voting disk 475d55cb7dda4f00bf32bf7b3da8cbfc.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced

4.3 Check the status of Voting disk

On one node, as the root user:

root@Node_a # crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   22a1974019a04feabfddb6f6ff819926 (ORCL:DISK10) [CRS]
 2. ONLINE   35d5a0f35db94f07bf5774a36cae4435 (ORCL:DISK8)  [CRS]
 3. ONLINE   a09b4c45c86f4fbdbf30f0cdc0ebe446 (ORCL:DISK9)  [CRS]
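
As a final sanity check (a suggested step, not part of the original procedure), confirm that the clusterware stack is healthy on both nodes before decommissioning the old OCFS2 storage:

root@Node_a # crsctl check cluster -all
root@Node_a # ocrcheck
root@Node_a # crsctl query css votedisk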