http://oracleinstance.blogspot.in/2010/03/oracle-10g-installation-in-linux-5.html
http://samadhandba.wordpress.com/category/administration/page/2/
BACKUP AND RECOVERY SCENARIOS
Complete Recovery With RMAN Backup
In the previous post I covered complete recovery with a user-managed backup; here we look at complete recovery using an RMAN backup. You can perform complete recovery in the following five situations.
RMAN recovery scenarios for complete recovery:
1. Complete Closed Database Recovery. System datafile is missing
2. Complete Open Database Recovery. Non system datafile is missing
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing
4. Recovery of a Datafile that has no backups.
5. Restore and Recovery of a Datafile to a different location.
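Before choosing a scenario, it helps to identify exactly which files need media recovery. A minimal sketch using standard dictionary views (run as SYSDBA):

```sql
-- Which datafiles need media recovery, and why
SELECT file#, online_status, error FROM v$recover_file;
-- Map file numbers to file names and status
SELECT file#, name, status FROM v$datafile;
```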
1. Complete Closed Database Recovery. System datafile is missing
In this case a complete recovery is performed: only the system datafile is missing, so the database can be opened without resetting the redo logs.
1. rman target /
2. startup mount;
3. restore database or datafile file#;
4. recover database or datafile file#;
5. alter database open;
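The five steps above can also be run non-interactively as a single RMAN session; a minimal sketch (assuming ORACLE_SID is already set in the environment):

```shell
# Complete closed database recovery in one RMAN session
rman target / <<EOF
startup mount;
restore database;
recover database;
alter database open;
EOF
```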
workshop1:
SQL> create user sweety identified by sweety;
Recovery Manager: Release 10.2.0.1.0 - Production on Fri May 7 23:53:51 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database (not started)
RMAN> startup mount
Oracle instance started
database mounted
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             130024128 bytes
Database Buffers          310378496 bytes
Redo Buffers                2973696 bytes
RMAN> RESTORE DATABASE;
Starting restore at 07-MAY-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u01/app/oracle/oradata/testdb/system01.dbf
restoring datafile 00002 to /u01/app/oracle/oradata/testdb/undotbs01.dbf
restoring datafile 00003 to /u01/app/oracle/oradata/testdb/sysaux01.dbf
restoring datafile 00004 to /u01/app/oracle/oradata/testdb/users01.dbf
restoring datafile 00005 to /u03/oradata/test01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/TESTDB/backupset/2010_05_07/o1_mf_nnndf_TAG20100507T232259_5y8nvxt2_.bkp
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/app/oracle/flash_recovery_area/TESTDB/backupset/2010_05_07/o1_mf_nnndf_TAG20100507T232259_5y8nvxt2_.bkp tag=TAG20100507T232259
channel ORA_DISK_1: restore complete, elapsed time: 00:02:52
Finished restore at 07-MAY-10
RMAN> RECOVER DATABASE;
Starting recover at 07-MAY-10
using channel ORA_DISK_1
starting media recovery
RMAN> sql 'alter database open';
sql statement: alter database open
RMAN>
2. Complete Open Database Recovery. Non system datafile is missing
workshop2:
SQL> conn sys/oracle as sysdba;
Connected.
SQL> col name format a45
SQL> select name , status from v$datafile;
SQL> conn sweety/sweety
Connected.
SQL> alter system flush buffer_cache;
System altered.
SQL> select * from demo;
select * from demo
*
ERROR at line 1:
ORA-01116: error in opening database file 4
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
[oracle@cdbs1 ~]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Sat May 8 01:35:09 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: TESTDB (DBID=2501713962)
RMAN> sql 'alter database datafile 4 offline';
using target database control file instead of recovery catalog
sql statement: alter database datafile 4 offline
RMAN> restore datafile 4;
Starting restore at 08-MAY-10
using channel ORA_DISK_1
...
channel ORA_DISK_1: restore complete, elapsed time: 00:00:09
Finished restore at 08-MAY-10
RMAN> recover datafile 4;
Starting recover at 08-MAY-10
using channel ORA_DISK_1
starting media recovery
......
media recovery complete, elapsed time: 00:00:05
Finished recover at 08-MAY-10
RMAN> sql 'alter database datafile 4 online';
sql statement: alter database datafile 4 online
RMAN> exit
SQL> conn sweety/sweety;
Connected.
SQL> select * from demo;
ID
----------
       123
SQL>
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing
A user datafile is reported missing when trying to start up the database. The datafile can be taken offline and the database opened. Restore and recovery are performed using RMAN. After recovery the datafile can be brought online again.
1. sqlplus /nolog
2. connect / as sysdba
3. startup mount
4. alter database datafile '<datafile_name>' offline;
5. alter database open;
6. exit;
7. rman target /
8. restore datafile '<datafile_name>';
9. recover datafile '<datafile_name>';
10. sql 'alter tablespace <tablespace_name> online';
SQL> startup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             138412736 bytes
Database Buffers          301989888 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'
SQL> alter database datafile 4 offline;
Database altered.
SQL> alter database open;
Database altered.
[oracle@cdbs1 ~]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Sat May 8 01:51:45 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: TESTDB (DBID=2501713962)
RMAN> restore datafile 4;
Starting restore at 08-MAY-10
using target database control file instead of recovery catalog
.....
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
Finished restore at 08-MAY-10
RMAN> recover datafile 4;
Starting recover at 08-MAY-10
using channel ORA_DISK_1
starting media recovery
.....
media recovery complete, elapsed time: 00:00:08
Finished recover at 08-MAY-10
RMAN> exit
SQL> alter database datafile 4 online;
Database altered.
SQL> conn sweety/sweety;
Connected.
SQL> select * from test;
TESTID
----------
     54321
4. Recovery of a Datafile that has no backups (database is up).
If a non-system datafile that has no backup is missing, recovery can be performed provided all archived logs since the creation of the missing datafile exist. Since the database is up, you can check the tablespace name and take it offline. The OFFLINE IMMEDIATE option is used to avoid the update of the datafile header.
Prerequisites: all relevant archived logs.
1. sqlplus '/ as sysdba'
2. alter tablespace <tablespace_name> offline immediate;
3. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
4. exit
5. rman target /
6. recover tablespace <tablespace_name>;
7. sql 'alter tablespace <tablespace_name> online';
If the CREATE DATAFILE command needs to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
Restriction: the controlfile creation time must be earlier than the datafile creation time.
For more detail, refer to the previous blog post (user-managed complete recovery).
workshop4:
SQL> create user john identified by john
2 default tablespace testing;
SQL> conn john/john;
Connected.
SQL> select * from test_tb;
select * from test_tb
*
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u03/oradata/test01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SQL> conn sys/oracle as sysdba;
Connected.
SQL> alter tablespace testing offline immediate;
Tablespace altered.
---if you want to create datafile in same location
SQL> alter database create datafile '/u03/oradata/test01.dbf';
Database altered.
---if you want to create a datafile in different location(disk).
SQL> alter database create datafile '/u03/oradata/test01.dbf' as '/u01/app/oracle/oradata/testdb/test01.dbf';
Database altered.
[oracle@cdbs1 ~]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Sat May 8 02:15:28 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: TESTDB (DBID=2501713962)
RMAN> recover tablespace testing;
Starting recover at 08-MAY-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=145 devtype=DISK
starting media recovery
SQL> alter tablespace testing online;
Tablespace altered.
SQL> conn john/john;
Connected.
SQL> select * from test_tb;
TESTID
----------
      1001
5. Restore and Recovery of a Datafile to a different location. Database is up.
If a non-system datafile is missing and its original location is not available, the file can be restored to a different location and recovery performed.
Prerequisites: all relevant archived logs and a complete cold or hot backup.
1. Use OS commands to restore the missing or corrupted datafile to the new location, e.g.:
cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. rman target /
5. recover tablespace <tablespace_name>;
6. sql 'alter tablespace <tablespace_name> online';
workshop5:
Follow workshop4, except that instead of creating a new datafile you copy the most recent backup of the file to the new disk location and then perform recovery; the rest of the procedure is the same.
BACKUP AND RECOVERY SCENARIOS
Complete Recovery With User-managed Backup
You can perform complete recovery in the following five situations.
User-managed recovery scenarios for complete recovery:
1. Complete Closed Database Recovery. System datafile is missing (with recent backups)
2. Complete Open Database Recovery. Non system datafile is missing (with backups)
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing (with backups)
4. Recovery of a Missing Datafile that has no backups (disk corrupted and no backups available)
Restriction: the datafile must have been created after the controlfile (i.e., the controlfile creation time is earlier than the datafile creation time). You cannot recover or create a datafile without a backup in the following situation:
SQL> select controlfile_created from v$database;
CONTROLFILE_CREATED
--------------------
07-MAY-2010 01:23:43
SQL> select creation_time, name from v$datafile;
CREATION_TIME
5. Restore and Recovery of a Datafile to a different location (disk corrupted; restore from a recent backup and recover the datafile in a new disk location)
User Managed Recovery Scenarios
User-managed recovery requires that the database is in archivelog mode and, if the database is open while the copy is made, that backups of all datafiles and controlfiles are taken with the tablespaces placed in backup mode (BEGIN BACKUP). After copying each tablespace it must be taken out of backup mode again. Alternatively, complete backups can be made with the database shut down. Online redo logs can optionally be backed up.
Files to be copied:
select name from v$datafile;
select member from v$logfile; # optional
select name from v$controlfile;
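As a sketch of the hot-backup procedure described above (the tablespace name and paths are illustrative):

```sql
-- Put the tablespace in backup mode before copying its datafiles
ALTER TABLESPACE users BEGIN BACKUP;
-- Copy the datafile at OS level, e.g. from SQL*Plus:
HOST cp -p /u01/app/oracle/oradata/testdb/users01.dbf /u01/app/oracle/oradata/backup/users01.dbf
-- Take the tablespace out of backup mode again
ALTER TABLESPACE users END BACKUP;
-- Archive the current redo log so the copy is recoverable
ALTER SYSTEM ARCHIVE LOG CURRENT;
```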
1. Complete Closed Database Recovery. System tablespace is missing
If the system tablespace is missing or corrupted, the database cannot be started up, so a complete closed database recovery must be performed.
Prerequisites: a closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted system datafile to its original location from a recent backup, e.g.:
cp -p /user/backup/uman/system01.dbf /user/oradata/u01/dbtst/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;
workshop1: system datafile recovery with recent backup
SQL> create user rajesh identified by rajesh;
User created.
SQL> grant dba to rajesh;
Grant succeeded.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
-- the datafile system01.dbf was deleted manually, for testing purposes only
SQL> startup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             138412736 bytes
Database Buffers          301989888 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: '/u01/app/oracle/oradata/testdb/system01.dbf'
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> host cp /u01/app/oracle/oradata/backup/system01.dbf /u01/app/oracle/oradata/testdb/system01.dbf
-- system datafile restored from the recent backup
SQL*Plus: Release 10.2.0.1.0 - Production on Fri May 7 12:51:16 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter user-name: sys as sysdba
Enter password:
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Database mounted.
SQL> recover datafile 1;
ORA-00280: change 454383 for thread 1 is in sequence #7
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
ORA-00279: change 456007 generated at 05/07/2010 12:46:10 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 456007 for thread 1 is in sequence #8
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_5y7hkty0_.arc' no longer
needed for this recovery
.
.
.
ORA-00279: change 456039 generated at 05/07/2010 12:46:22 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_11_%u_.arc
ORA-00280: change 456039 for thread 1 is in sequence #11
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_10_5y7hl7dr_.arc' no longer
needed for this recovery
Log applied.
Media recovery complete.
SQL> alter database open;
Database altered.
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     12
Next log sequence to archive   14
Current log sequence           14
SQL> select username from dba_users
2 where username='RAJESH';
USERNAME
------------------------------
RAJESH
2. Complete Open Database Recovery. Non system tablespace is missing
If a non-system tablespace is missing or corrupted while the database is open, recovery can be performed while the database remains open.
Prerequisites: a closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted datafile to its original location, e.g.:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
workshop2: Non-system datafile recovery from recent backup when database is open
SQL> ALTER USER rajesh DEFAULT TABLESPACE users;
User altered.
SQL> conn rajesh/rajesh;
Connected.
SQL> create table demo(id number);
Table created.
SQL> insert into demo values(123);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from demo;
ID
----------
       123
SQL> conn sys/oracle as sysdba;
Connected.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     14
Next log sequence to archive   16
Current log sequence           16
-- the datafile users01.dbf was deleted manually, for testing purposes only
SQL> conn rajesh/rajesh;
Connected.
SQL> alter system flush buffer_cache;
System altered.
SQL> select * from demo;
select * from demo
*
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'
SQL> conn sys/oracle as sysdba;
Connected.
SQL> host cp -p /u01/app/oracle/oradata/backup/users01.dbf /u01/app/oracle/oradata/testdb/users01.dbf
-- restore the users01.dbf datafile from the recent backup into the testdb folder
SQL> alter tablespace users offline immediate;
Tablespace altered.
SQL> recover tablespace users;
ORA-00279: change 454383 generated at 05/07/2010 01:40:11 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_%u_.arc
ORA-00280: change 454383 for thread 1 is in sequence #7
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
ORA-00279: change 456007 generated at 05/07/2010 12:46:10 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 456007 for thread 1 is in sequence #8
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_5y7hkty0_.arc' no longer
needed for this recovery
.....
......
ORA-00279: change 456044 generated at 05/07/2010 12:46:28 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_13_%u_.arc
ORA-00280: change 456044 for thread 1 is in sequence #13
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_12_5y7hldl2_.arc' no longer
needed for this recovery
Log applied.
Media recovery complete.
SQL> alter tablespace users online;
Tablespace altered.
SQL> conn rajesh/rajesh;
Connected.
SQL> select * from demo;
ID
----------
       123
3. Complete Open Database Recovery (when the database is initially closed). Non system datafile is missing
If a non-system tablespace is missing or corrupted and the database crashed, recovery can be performed after the database is open.
Prerequisites: a closed or open database backup and archived logs.
1. startup; (you will get ORA-01157 and ORA-01110 with the name of the missing datafile; the database will remain mounted)
2. alter database datafile 3 offline; (the tablespace cannot be used because the database is not open)
3. alter database open;
4. Use OS commands to restore the missing or corrupted datafile to its original location, e.g.:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
5. recover datafile 3;
6. alter tablespace <tablespace_name> online;
workshop 3:Non system datafile is missing
SQL> conn sys/oracle as sysdba;
Connected.
SQL> alter system switch logfile;
System altered.
SQL> select username,default_tablespace from dba_users
2 where username='RAJESH';
SQL> conn rajesh/rajesh;
Connected.
SQL> create table testtbl (id number);
Table created.
SQL> insert into testtbl values(786);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from testtbl;
ID
----------
       786
SQL> conn sys/oracle as sysdba;
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> -- manually deleting the users01.dbf datafile from the testdb folder
SQL> -- warning: for testing purposes only
SQL> host rm -rf /u01/app/oracle/oradata/testdb/users01.dbf
SQL> startup
ORACLE instance started.
Total System Global Area  444596224 bytes
Fixed Size                  1219904 bytes
Variable Size             142607040 bytes
Database Buffers          297795584 bytes
Redo Buffers                2973696 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'
SQL> alter database datafile 4 offline;
Database altered.
SQL> alter database open;
Database altered.
SQL> host cp -p /u01/app/oracle/oradata/backup/users01.dbf /u01/app/oracle/oradata/testdb/users01.dbf
-- copying users01.dbf from the recent backup into the testdb folder
SQL> recover datafile 4;
ORA-00279: change 454383 generated at 05/07/2010 01:40:11 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_%u_.arc
ORA-00280: change 454383 for thread 1 is in sequence #7
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
ORA-00279: change 456007 generated at 05/07/2010 12:46:10 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 456007 for thread 1 is in sequence #8
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_5y7hkty0_.arc' no longer
needed for this recovery
......
.........
ORA-00279: change 456046 generated at 05/07/2010 12:46:29 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_14_%u_.arc
ORA-00280: change 456046 for thread 1 is in sequence #14
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_13_5y7hlfbc_.arc' no longer
needed for this recovery
Log applied.
Media recovery complete.
SQL> alter database datafile 4 online;
Database altered.
SQL> conn rajesh/rajesh;
Connected.
SQL> select * from testtbl;
ID
----------
       786
4. Recovery of a Missing Datafile that has no backups (database is open).
If a non-system datafile that has no backup is missing, recovery can be performed provided all archived logs since the creation of the missing datafile exist.
Prerequisites: all relevant archived logs.
1. alter tablespace <tablespace_name> offline immediate;
2. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
If the CREATE DATAFILE command needs to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
Restriction: the datafile must have been created after the controlfile (i.e., the controlfile creation time is earlier than the datafile creation time).
workshop 4: Missing Non-system Datafile having no backups
SQL> alter session set nls_date_format='DD-MON-YYYY hh24:mi:ss';
---We can recover the datafile test01.dbf without a backup by using the
---create datafile command during recovery. In this example a table is
---created in the testing tablespace, the test01.dbf datafile is deleted,
---and the file is then recovered without a backup via the create datafile command.
SQL> create user jay identified by jay
2 default tablespace testing;
User created.
SQL> grant dba to jay;
Grant succeeded.
SQL> select username,default_tablespace from dba_users
2 where username='JAY';
SQL> conn jay/jay;
Connected.
SQL> select * from demo;
ID
----------
       321
SQL> alter system flush buffer_cache;
System altered.
SQL> select * from demo;
select * from demo
*
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/app/oracle/oradata/testdb/test01.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SQL> alter database datafile 5 offline;
Database altered.
----TO CREATE A NEW RECOVERED DATAFILE IN SAME LOCATION.
SQL> alter database create datafile '/u01/app/oracle/oradata/testdb/test01.dbf';
Database altered.
----TO CREATE A NEW RECOVERED DATAFILE IN DIFFERENT LOCATION.
SQL> alter database create datafile '/u01/app/oracle/oradata/testdb/test01.dbf' as '/u03/oradata/test01.dbf';
Database altered.
SQL> recover datafile 5;
ORA-00279: change 454443 generated at 05/07/2010 16:32:07 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 454443 for thread 1 is in sequence #8
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
ORA-00279: change 454869 generated at 05/07/2010 16:41:38 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_9_%u_.arc
ORA-00280: change 454869 for thread 1 is in sequence #9
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_5y7xcbrm_.arc' no longer
needed for this recovery
.....
.......
ORA-00279: change 454874 generated at 05/07/2010 16:41:45 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_11_%u_.arc
ORA-00280: change 454874 for thread 1 is in sequence #11
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_10_5y7xck8j_.arc' no longer
needed for this recovery
Log applied.
Media recovery complete.
SQL> alter database datafile 5 online;
Database altered.
SQL> conn jay/jay;
Connected.
SQL> select * from demo;
ID
----------
       321
SQL>
5. Restore and Recovery of a Datafile to a different location.
If a non-system datafile is missing and its original location is not available, the file can be restored to a different location and recovery performed.
Prerequisites: all relevant archived logs.
1. Use OS commands to restore the missing or corrupted datafile to the new location, e.g.:
cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. recover tablespace <tablespace_name>;
5. alter tablespace <tablespace_name> online;
workshop 5:
SQL> create user lachu identified by lachu
2 default tablespace users;
SQL> select * from test_tb;
select * from test_tb
*
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/u01/app/oracle/oradata/testdb/users01.dbf'
SQL> conn sys/oracle as sysdba;
Connected.
SQL> alter database datafile 4 offline;
Database altered.
SQL> host cp -p /u01/app/oracle/oradata/backup/users01.dbf /u03/oradata/users01.dbf
-- restore the datafile users01.dbf to the new disk from the recent backup of the database
SQL> alter tablespace users rename datafile
2 '/u01/app/oracle/oradata/testdb/users01.dbf' to '/u03/oradata/users01.dbf';
Tablespace altered.
SQL> recover datafile 4;
ORA-00279: change 454383 generated at 05/07/2010 01:40:11 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_%u_.arc
ORA-00280: change 454383 for thread 1 is in sequence #7
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
auto
ORA-00279: change 456007 generated at 05/07/2010 12:46:10 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_8_%u_.arc
ORA-00280: change 456007 for thread 1 is in sequence #8
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_7_5y7hkty0_.arc' no longer
needed for this recovery
....
......
ORA-00279: change 457480 generated at 05/07/2010 13:09:30 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_15_%u_.arc
ORA-00280: change 457480 for thread 1 is in sequence #15
ORA-00278: log file
'/u01/app/oracle/flash_recovery_area/TESTDB/archivelog/2010_05_07/o1_mf_1_14_5y7jxlvg_.arc' no longer
needed for this recovery
Log applied.
Media recovery complete.
SQL> alter database datafile 4 online;
Database altered.
SQL> select name from v$datafile;
NAME
---------------------------------------------
/u01/app/oracle/oradata/testdb/system01.dbf
/u01/app/oracle/oradata/testdb/undotbs01.dbf
/u01/app/oracle/oradata/testdb/sysaux01.dbf
/u03/oradata/users01.dbf      -- restored in the new location (disk)
SQL> conn lachu/lachu;
Connected.
SQL> select * from tab;
ID
----------
       123
Block Media Recovery
Block media recovery recovers an individual corrupt data block, or a set of data blocks, within a datafile. When only a small number of blocks require media recovery, you can selectively restore and recover the damaged blocks rather than whole datafiles.
It is possible to perform block media recovery with only OS-based "hot" backups and no RMAN backups.
Look at the following demonstration:
1. Create a new user antony and a table corrupt_test in that schema.
2. Take an OS backup (hot backup) of users01.dbf, where the table resides.
3. Corrupt the data in that table and get a block corruption error.
4. Connect with RMAN and try the BLOCKRECOVER command. Since there is no RMAN backup, we get an error.
5. Catalog the "hot backup" in the RMAN repository.
6. Use the BLOCKRECOVER command to recover the corrupted data block from the cataloged "hot backup" of the datafile.
7. Query the table and get the data back!
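Steps 4 to 6 can be sketched as the following RMAN commands (the file number, block number, and backup path are taken from this workshop and are illustrative):

```sql
-- Fails while no backup of file 4 is known to RMAN
BLOCKRECOVER DATAFILE 4 BLOCK 67;
-- Register the OS-level hot backup copy in the RMAN repository
CATALOG DATAFILECOPY '/u01/app/oracle/oradata/backup/users01_backup.dbf';
-- Now block recovery can use the cataloged datafile copy
BLOCKRECOVER DATAFILE 4 BLOCK 67;
```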
Here is the scenario
SQL> CREATE USER antony IDENTIFIED BY antony;
User created.
SQL> GRANT DBA TO antony;
Grant succeeded.
SQL> CONN antony/antony;
Connected.
SQL> CREATE TABLE corrupt_test (id NUMBER);
Table created.
SQL> INSERT INTO corrupt_test VALUES(123);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> COLUMN segment_name format a15
SQL> SELECT segment_name, tablespace_name from dba_segments
2 WHERE segment_name='CORRUPT_TEST';
SQL> COLUMN tablespace_name format a15
SQL> COLUMN name FORMAT a43
SQL> SELECT segment_name, a.tablespace_name, b.name
2 FROM dba_segments a, v$datafile b
3 WHERE a.header_file=b.file#
4 AND a.segment_name='CORRUPT_TEST';
SQL> SELECT header_block FROM dba_segments WHERE segment_name='CORRUPT_TEST';
HEADER_BLOCK
------------
          67
SQL>
[oracle@cdbs1 ~]$ dd of=/u01/app/oracle/oradata/orcl/users01.dbf bs=8192 conv=notrunc seek=68 << EOF
> rajeshkumar testing block corruption
> EOF
0+1 records in
0+1 records out
[oracle@cdbs1 ~]$
SQL> Conn antony/antony
Connected.
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
System altered.
SQL> select * from corrupt_test;
select * from corrupt_test
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 67)
ORA-01110: data file 4: '/u01/app/oracle/oradata/orcl/users01.dbf'
SQL> EXIT
[oracle@cdbs1 ~]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Thu May 6 01:41:46 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Starting blockrecover at 06-MAY-10
using channel ORA_DISK_1
channel ORA_DISK_1: restoring block(s) from datafile copy /u01/app/oracle/oradata/backup/users01_backup.dbf
starting media recovery
media recovery complete, elapsed time: 00:00:02
Finished blockrecover at 06-MAY-10
RMAN> EXIT
Recovery Manager complete.
[oracle@cdbs1 ~]$ sqlplus
SQL*Plus: Release 10.2.0.1.0 - Production on Thu May 6 01:45:04 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> conn antony/antony
Connected.
SQL> select * from CORRUPT_TEST;
ID
----------
       123
Command Line History and Editing in SQL*Plus and RMAN on Linux
The rlwrap (readline wrapper) utility provides command history and editing of keyboard input for any other command.
This article explains how to install rlwrap and set it up for SQL*Plus and RMAN.
Download the latest rlwrap software from the following URL.
http://utopia.knoware.nl/~hlub/uck/rlwrap/
Unzip and install the software using the following commands.
gunzip rlwrap*.gz
tar -xvf rlwrap*.tar
cd rlwrap*
./configure
make
make check
make install
Run the following commands, or better still append them to the ".bashrc" of the oracle software owner.
alias rlsqlplus='rlwrap sqlplus'
alias rlrman='rlwrap rman'
You can now start SQL*Plus or RMAN using "rlsqlplus" and "rlrman" respectively, and you will have a basic
command history and the current line will be editable using the arrow and delete keys.
[oracle@cdbs1 ~]$ rlrman
Recovery Manager: Release 10.2.0.1.0 - Production on Wed May 5 17:14:57 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> exit
Recovery Manager complete.
Instead of rlrman and rlsqlplus you can use your own alias names for rman and sqlplus. You can then use the up and down arrow keys to recall previous commands.
[oracle@cdbs1 ~]$ alias rajesh='rlwrap rman'
[oracle@cdbs1 ~]$ rajesh
Recovery Manager: Release 10.2.0.1.0 - Production on Wed May 5 17:15:27 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN>
[oracle@cdbs1 ~]$ alias lakshmi='rlwrap sqlplus'
[oracle@cdbs1 ~]$ lakshmi
SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 5 17:21:38 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     5
Next log sequence to archive   7
Current log sequence           7
SQL> select name from v$database;
NAME
---------
ORCL
Automated Storage Management (ASM) Pocket Reference Guide
by Charles Kim
ASM Diskgroups
Create Diskgroup
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;
Drop disk groups
DROP DISKGROUP DATA INCLUDING CONTENTS;
Add disks
ALTER DISKGROUP DATA ADD DISK '/dev/sda3';
Drop a disk
ALTER DISKGROUP DATA DROP DISK DATA_0001;
Resize all disks in a disk group
ALTER DISKGROUP DATA RESIZE ALL SIZE 100G;
UNDROP DISKS clause of the ALTER DISKGROUP
ALTER DISKGROUP DATA UNDROP DISKS;
Rebalance diskgroup
ALTER DISKGROUP DATA REBALANCE POWER 5;
Check Diskgroup
ALTER DISKGROUP DATA CHECK;
ALTER DISKGROUP DATA CHECK NOREPAIR;
Resetting CSS to new Oracle Home
localconfig reset /apps/oracle/product/11.1.0/ASM
ASM Dictionary Views
v$asm_alias ---list all aliases in all currently mounted diskgroups
v$asm_client ---list all the databases currently accessing the diskgroups
v$asm_disk ----lists all the disks discovered by the ASM instance.
v$asm_diskgroup ---Lists all the diskgroups discovered by the ASM instance.
v$asm_file ---Lists all files that belong to diskgroups mounted by the ASM instance.
v$asm_operation ---Reports information about current active operations. Rebalance activity is reported in this
view.
v$asm_template ---Lists all the templates currently mounted by the ASM instance.
v$asm_diskgroup_stat ---same as v$asm_diskgroup but does not discover new diskgroups. Use this view instead of v$asm_diskgroup.
v$asm_disk_stat ---same as v$asm_disk but does not discover new disks. Use this view instead of v$asm_disk.
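As a usage sketch of the *_stat views, the query below reports free space per diskgroup without triggering disk discovery; the column choice and the NULLIF guard are my own, not from the guide:

```sql
-- Free space per mounted diskgroup; NULLIF avoids division by zero
-- for a diskgroup reporting zero total size.
SELECT name, total_mb, free_mb,
       ROUND(free_mb / NULLIF(total_mb, 0) * 100, 1) AS pct_free
FROM   v$asm_diskgroup_stat
ORDER  BY name;
```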
asmcmd Commands
cd -----changes the current directory to the specified directory
du -----Displays the total disk space occupied by ASM files in the specified
ASM directory and all its subdirectories, recursively.
find -----Lists the paths of all occurrences of the specified name ( with wildcards) under the specified directory.
ls +data/testdb ----Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsct -----Lists information about current ASM clients.
lsdg ----Lists all disk groups and their attributes
mkalias ----Creates an alias for a system generated filename.
mkdir -----Creates ASM directories.
pwd --------Displays the path of the current ASM directory.
rm --------Deletes the specified ASM files or directories; rm -f forces the deletion.
rmalias ---------Deletes the specified alias, retaining the file that the alias points to
lsdsk ----------Lists disks visible to ASM.
md_backup ------Creates a backup of all of the mounted disk groups.
md_restore ------Restores disk groups from a backup.
remap ----repairs a range of physical blocks on a disk.
cp ------copies files into and out of ASM.
**ASM diskgroup to OS file system.
**OS file system to ASM diskgroup.
**ASM diskgroup to another ASM diskgroup on the same server.
**ASM disk group to ASM diskgroup on a remote server.
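For example (the system-generated file name below is hypothetical):

```
ASMCMD> cp +DATA/testdb/datafile/users.263.695918587 /tmp/users01.dbf
ASMCMD> cp /tmp/users01.dbf +DATA/testdb/users01_copy.dbf
```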
SYSASM Role (Starting in Oracle Database 11g)
SQL> GRANT SYSASM TO sys;   ---connecting as SYSDBA for ASM administration is deprecated; connect with: sqlplus / as sysasm
ASM Rolling Upgrades
Start:
ALTER SYSTEM START ROLLING MIGRATION TO '11.2.0.2';
Stop:
ALTER SYSTEM STOP ROLLING MIGRATION;
Database INIT parameters for ASM.
*.control_files='+DATA/orcl/controlfile/control1.ctl','+FRA/orcl/controlfile/control2.ctl'
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+DATA'
*.db_recovery_file_dest='+DATA'
*.log_archive_dest_1='LOCATION=+DATA'
*.log_file_name_convert='+DATA/VISKDR','+DATA/VISK' ##added for DG
MIGRATE to ASM using RMAN
run
{
backup as copy database format '+DATA';
switch database to copy;
#For each logfile
sql "alter database rename file ''/data/oracle/VISK/redo1a.rdo'' to ''+DATA''";
alter database open resetlogs;
#For each tempfile
sql "alter tablespace TEMP add tempfile" ;
}
Restore Database to ASM using SET NEWNAME
run
{
allocate channel d1 type disk;
#For each datafile
set newname for datafile 1 to '+DATA';
restore database;
switch datafile all;
release channel d1;
}
Error: ORA-16825: Fast-Start Failover and other errors or warnings detected for the database
ORA-16795: database resource guard detects that database re-creation is required
ORA-16825: Fast-Start Failover and other errors or warnings detected for the database
ORA-16817: unsynchronized Fast-Start Failover configuration
solution:
DGMGRL> show database rajesh
Database
  Name:            rajesh
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    rajesh
Current status for "rajesh":
Error: ORA-16825: Fast-Start Failover and other errors or warnings detected for the database
DGMGRL> show database jeyanthi
Database
  Name:            jeyanthi
  Role:            PHYSICAL STANDBY
  Enabled:         NO
  Intended State:  ONLINE
  Instance(s):
    jeyanthi
Current status for "jeyanthi":
Error: ORA-16661: the standby database needs to be reinstated
DGMGRL> reinstate database jeyanthi;
Reinstating database "jeyanthi", please wait...
Operation requires shutdown of instance "jeyanthi" on database "jeyanthi"
Shutting down instance "jeyanthi"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "jeyanthi" on database "jeyanthi"
Starting instance "jeyanthi"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "jeyanthi" ...
Reinstatement of database "jeyanthi" succeeded
DGMGRL> show configuration verbose;
Current status for "jeyanthi":
Warning: ORA-16607: one or more databases have failed
then,
stop and start the observer.(start from another machine)
DGMGRL> stop observer
Done.
DGMGRL> connect sys/oracle@jeyanthi
Connected.
DGMGRL> start observer
Observer started
SUCCESS
Configuration of 10g Data Guard Broker and Observer for Switchover
Configuring Data Guard Broker for Switchover, General Review.
In a previous document, 10g Data Guard, Physical Standby Creation, step by step, I described how to implement a Data Guard configuration; in this document I add how to configure the broker and observer, set the database to Maximum Availability, and manage switchover from Data Guard Manager, DGMGRL.
Data Guard Broker lets you manage a Data Guard configuration either from the Enterprise Manager Grid Control console or from a terminal in command-line mode. In this document I will explore command-line mode.
Prerequisites include the use of a 10g Oracle server, an spfile on both the primary and standby, a third server for the Observer, and listeners configured to include a service for the Data Guard Broker.
The Environment
• 2 Linux servers, Oracle Distribution 2.6.9-55 EL i686 i386 GNU/Linux; the Primary and Standby databases are located on these servers.
• 1 Linux server, RH Linux 2.6.9-42.ELsmp x86_64 GNU/Linux; the Data Guard Broker Observer is located on this server.
• Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
• ssh is configured for user oracle on both nodes
• Oracle Home is on identical path on both nodes
• Primary database ANTONY
• Standby database JOHN
Step by Step Implementation of Data Guard Broker
Enable Data Guard Broker Start on the Primary and Standby databases
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;
System altered.
Setup the Local_Listener parameter on both the Primary and Standby databases
SQL> ALTER SYSTEM SET LOCAL_LISTENER='LISTENER_VMRACTEST' SCOPE=BOTH;
System altered.
Setup the tnsnames to enable communication with both the Primary and Standby databases
The listener.ora should include a service named <global_db_name>_DGMGRL to enable the broker to start the databases in the event of a switchover. This configuration needs to be included on both servers.
Listener.ora on Node 1
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521)(IP = FIRST))
)
)
SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = antony)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1 )
(SID_NAME = antony)
)
(SID_DESC =
(SID_NAME= antony)
(GLOBAL_DBNAME = antony_DGMGRL)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1 )
)
)
Listener.ora on Node 2
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521)(IP = FIRST))
)
)
Setup the Broker configuration files
The broker configuration files are automatically created when the broker is started using ALTER SYSTEM SET
DG_BROKER_START=TRUE.
The default destination can be modified using the parameters DG_BROKER_CONFIG_FILE1 and
DG_BROKER_CONFIG_FILE2
On Primary:
SQL> SHOW PARAMETERS DG_BROKER_CONFIG
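If you want to relocate them from the defaults, the broker must be stopped first; the paths below are examples only, not from the original setup:

```sql
ALTER SYSTEM SET DG_BROKER_START=FALSE;
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='/u01/app/oracle/admin/antony/dr1antony.dat' SCOPE=BOTH;
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='/u01/app/oracle/admin/antony/dr2antony.dat' SCOPE=BOTH;
ALTER SYSTEM SET DG_BROKER_START=TRUE;
```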
Next create from within the DGMGRL the configuration
[oracle@rac1 ~]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.1.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle@antony
Connected.
DGMGRL> create configuration ANTONY AS
> PRIMARY DATABASE IS antony
> CONNECT IDENTIFIER IS antony;
Configuration "antony" created with primary database "antony"
Add the standby to the configuration and check it
DGMGRL> ADD DATABASE john AS
> CONNECT IDENTIFIER IS john
> MAINTAINED AS PHYSICAL;
Database "john" added
DGMGRL> SHOW CONFIGURATION;
Configuration
  Name:                antony
  Enabled:             NO
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    antony - Primary database
    john   - Physical standby database
These are the steps required to enable and check Fast Start Failover and the Observer:
1. Ensure standby redologs are configured on all databases.
on primary:
SQL> SELECT TYPE,MEMBER FROM V$LOGFILE;
2. Ensure the LogXptMode Property is set to SYNC.
Note: These commands will succeed only if database is configured with standby redo logs.
DGMGRL> EDIT DATABASE antony SET PROPERTY 'LogXptMode'='SYNC';
Property "LogXptMode" updated
DGMGRL> EDIT DATABASE john SET PROPERTY 'LogXptMode'='SYNC';
Property "LogXptMode" updated
3. Specify the FastStartFailoverTarget property
DGMGRL> EDIT DATABASE antony SET PROPERTY FastStartFailoverTarget='john';
Property "faststartfailovertarget" updated
DGMGRL> EDIT DATABASE john SET PROPERTY FastStartFailoverTarget='antony';
Property "faststartfailovertarget" updated
4. Upgrade the protection mode to MAXAVAILABILITY, if necessary.
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
Operation requires shutdown of instance "antony" on database "antony"
Shutting down instance "antony"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "antony" on database "antony"
Starting instance "antony"...
ORACLE instance started.
Database mounted.
Note: if this step fails with ORA-12514 (TNS:listener does not currently know of service requested in connect descriptor) followed by "You are no longer connected to ORACLE. Please connect again.", you must start the primary database instance manually:
SQL> conn / as sysdba
SQL> startup mount;
5. Enable Flashback Database on the Primary and Standby Databases.
On Both databases
To put the standby into Flashback mode you must shut down both databases; then, while the primary is down, execute the following commands on the standby:
SQL> ALTER SYSTEM SET UNDO_RETENTION=3600 SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM SET UNDO_MANAGEMENT='AUTO' SCOPE=SPFILE;
System altered.
SQL> startup mount;
SQL> ALTER DATABASE FLASHBACK ON;
Enable fast start failover
[oracle@rac1 ~]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.1.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle@antony;
Connected.
DGMGRL> show configuration verbose;
Configuration
  Name:                antony
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: DISABLED
  Databases:
    antony - Primary database
    john   - Physical standby database
Current status for "antony":
SUCCESS
DGMGRL> show database john;
Database
  Name:            john
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    john
Current status for "john":
SUCCESS
DGMGRL> ENABLE FAST_START FAILOVER;
Enabled.
Start the observer
Start the observer from a third server in the background. You may use a script like this:
---------------- script start on next line -------------------
#!/bin/ksh
# startobserver
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export BASE_PATH=/u01/app/oracle/oracle/scripts/general:/opt/CTEact/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/bin:/etc:/usr/local/maint/oracle:/usr/ccs/bin:/usr/openwin/bin:/usr/dt/bin:/usr/local/bin:.
export PATH=$ORACLE_HOME/bin:$BASE_PATH
dgmgrl << eof
connect sys/oracle@antony
START OBSERVER;
eof
---------------- script end on previous line -------------------
[oracle@rac3 ~]$ nohup ./startobserver &
nohup: appending output to `nohup.out'
[1] 27392
Verify the fast-start failover configuration.
[oracle@rac3 ~]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.1.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle@antony
Connected.
DGMGRL> show configuration verbose
Configuration
  Name:                antony
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Fast-Start Failover: ENABLED
  Databases:
    antony - Primary database
    john   - Physical standby database
Check that primary and standby are healthy
This check must return 'SUCCESS' as the status for both databases, otherwise it means there is a configuration
problem.
DGMGRL> show database antony
Database
  Name:            antony
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    antony
Current status for "antony":
SUCCESS
DGMGRL> show database john
Database
  Name:            john
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  ONLINE
  Instance(s):
    john
Current status for "john":
SUCCESS
DGMGRL>
EXECUTE THE SWITCHOVER:
DGMGRL> SWITCHOVER TO john;
Performing switchover NOW, please wait...
Operation requires shutdown of instance "antony" on database "antony"
Shutting down instance "antony"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "john" on database "john"
Shutting down instance "john"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "antony" on database "antony"
Starting instance "antony"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "john" on database "john"
Starting instance "john"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "john"
DGMGRL>
Current status for "antony":
SUCCESS
Fast-Start Failover with the Data Guard Broker
Here is an example of checking fast-start failover in a Data Guard environment.
I manually killed the mandatory background process SMON of the primary database.
The observer then automatically failed the standby database over to the primary role and reinstated the old primary database as the standby.
Number of resources: 1
Resources:
Name: whiteowl (default) (verbose name='whiteowl')
Current status for "whiteowl":
Warning: ORA-16817: unsynchronized Fast-Start Failover configuration
on observer: machine rac3
12:19:20.23 Monday, January 25, 2010
Initiating fast-start failover to database "blackowl"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "blackowl"
12:19:51.84 Monday, January 25, 2010
12:24:33.93 Monday, January 25, 2010
Initiating reinstatement for database "whiteowl"...
Reinstating database "whiteowl", please wait...
Operation requires shutdown of instance "whiteowl" on database "whiteowl"
Shutting down instance "whiteowl"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "whiteowl" on database "whiteowl"
Starting instance "whiteowl"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "whiteowl" ...
Reinstatement of database "whiteowl" succeeded
12:26:02.89 Monday, January 25, 2010
Current status for "whiteowl":
SUCCESS
Data Guard errors and solutions I faced
Today I started the primary and standby databases and got this error message on the primary database:
ORA-16649: database will open after Data Guard broker has evaluated Fast-Start Failover status
After connecting with the observer, I issued the "show configuration verbose" and "show database verbose 'whiteowl'" commands, which showed the error message below.
Note: here my database name is whiteowl.
ORA-16820: Fast-Start Failover observer is no longer observing this database
Cause: A previously started observer was no longer actively observing this database. A significant amount of time elapsed since this database last heard from the observer. Possible reasons were:
- The node where the observer was running was not available.
- The network connection between the observer and this database was not available.
- The observer process was terminated unexpectedly.
Action: Check the reason why the observer cannot contact this database. If the problem cannot be corrected,
stop the current observer by connecting to the Data Guard configuration and issue the DGMGRL "STOP OBSERVER"
command. Then restart the observer on another node. You may use the DGMGRL "START OBSERVER" command to
start the observer on the other node.
What did I do?
I checked the listeners, the tnsnames.ora files and the tnsping command on the primary, standby and observer machines,
and then, as the action above suggests, I stopped the observer and started it again from the primary database machine.
Now it is working fine.
DGMGRL> show configuration verbose;
DGMGRL>
Step by Step, document for creating Physical Standby Database, 10g DATA GUARD
10g Data Guard, Physical Standby Creation, step by step
primary database name: white on rac2 machine
standby database name: black on rac1 machine
Creating a Data Guard Physical Standby environment, General Review.
Manually setting up a Physical standby database is a simple task when all prerequisites and setup steps are
carefully met and executed.
In this example I did use 2 hosts, that host a RAC database. All RAC preinstall requisites are then in place and no
additional configuration was
necessary to implement Data Guard Physical Standby manually.
The Environment
2 Linux servers, Oracle Distribution 2.6.9-55 EL i686 i386 GNU/Linux
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
ssh is configured for user oracle on both nodes
Oracle Home is on identical path on both nodes
Implementation notes:
Once you have your primary database up and running these are the steps to follow:
1. Enable Forced Logging
2. Create a Password File
3. Configure a Standby Redo Log
4. Enable Archiving
5. Set Primary Database Initialization Parameters
Having followed these steps to implement the Physical Standby you need to follow these steps:
1. Create a Control File for the Standby Database
2. Backup the Primary Database and transfer a copy to the Standby node.
3. Prepare an Initialization Parameter File for the Standby Database
4. Configure the listener and tnsnames to support the database on both nodes
5. Set Up the Environment to Support the Standby Database on the standby node.
6. Start the Physical Standby Database
7. Verify the Physical Standby Database Is Performing Properly
Step by Step Implementation of a Physical Standby Environment
Primary Database Steps
Primary Database General View
SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     0
Current log sequence           1
SQL> select name from v$database;
NAME
---------
WHITE
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/white/system01.dbf
/u01/app/oracle/oradata/white/undotbs01.dbf
/u01/app/oracle/oradata/white/sysaux01.dbf
/u01/app/oracle/oradata/white/users01.dbf
Enable Forced Logging
In order to implement Standby Database we enable 'Forced Logging'.
This option ensures that even in the event that a 'nologging' operation is done, force logging takes precedence
and all operations are logged
into the redo logs.
SQL> ALTER DATABASE FORCE LOGGING;
Database altered.
Create a Password File
A password file must be created on the Primary and copied over to the Standby site. The sys password must be identical on both sites. This is a key prerequisite in order to be able to ship and apply archived logs from Primary to Standby.
[oracle@rac2 ~]$ cd $ORACLE_HOME/dbs
[oracle@rac2 dbs]$ orapwd file=orapwwhite password=oracle force=y
SQL> select * from v$pwfile_users;
USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
SYS                            TRUE  TRUE
Configure a Standby Redo Log
A Standby Redo log is added to enable Data Guard Maximum Availability and Maximum Protection modes. It is
important to configure the
Standby Redo Logs (SRL) with the same size as the online redo logs.
In this example I'm using Oracle Managed Files, which is why I don't need to provide the SRL path and file name. If you are not using OMF, you must pass the fully qualified name.
SQL> select group#,type,member from v$logfile;
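A sketch of adding standby redo logs with OMF (the 50M size and the number of groups are assumptions; use the size of your online redo logs and one more SRL group than online log groups):

```sql
ALTER DATABASE ADD STANDBY LOGFILE SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE SIZE 50M;
```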
Set Primary Database Initialization Parameters
Data Guard must use an spfile. To configure it, we create and set the standby parameters in a regular pfile, and once it is ready we convert it to an spfile.
Several init.ora parameters control the behavior of a Data Guard environment. In this example the Primary database init.ora is configured so that it can hold both roles, as Primary or Standby.
SQL> CREATE PFILE FROM SPFILE;
File created.
(or)
SQL> CREATE PFILE='/tmp/initwhite.ora' from spfile;
File created.
Edit the pfile to add the standby parameters for the Data Guard configuration.
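The highlighted parameters did not survive formatting here; the fragment below is a typical sketch for this environment (primary white, standby black), and every value in it should be treated as an assumption to adapt:

```
# Data Guard parameters for the primary role (white); db_name stays 'white'
db_unique_name='white'
log_archive_config='DG_CONFIG=(white,black)'
log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/white/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=white'
log_archive_dest_2='SERVICE=black LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=black'
fal_server='black'
fal_client='white'
standby_file_management='AUTO'
```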
Once the new parameter file is ready we create from it the spfile:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/initwhite.ora
ORA-16032: parameter LOG_ARCHIVE_DEST_1 destination string cannot be translated
ORA-07286: sksagdi: cannot obtain device information.
Linux Error: 2: No such file or directory
Note: create the archive log destination folder as specified in the parameter file, and then start up the database.
SQL> startup nomount pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/initwhite.ora
ORACLE instance started.
Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              96470608 bytes
Database Buffers          184549376 bytes
Redo Buffers                2973696 bytes
SQL> create spfile from pfile;
File created.
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
Enable Archiving
On 10g you can enable archive log mode by mounting the database and executing the archivelog command:
SQL> startup mount
ORACLE instance started.
Total System Global Area  285212672 bytes
Fixed Size                  1218992 bytes
Variable Size              96470608 bytes
Database Buffers          184549376 bytes
Redo Buffers                2973696 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/oradata/white/arch/
Oldest online log sequence     1
Next log sequence to archive   2
Current log sequence           2
SQL>
Standby Database Steps
Here, I am going to create the standby database using an RMAN backup of the primary database's datafiles, redo logs and controlfile. Compared with a user-managed backup, RMAN is a more comfortable and flexible method.
Create an RMAN backup which we will use later to create the standby:
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jan 20 18:41:51 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: WHITE (DBID=3603807872)
RMAN> backup full database format '/u01/app/oracle/backup/%d_%U.bckp' plus archivelog format
'/u01/app/oracle/backup/%d_%U.bckp';
Next, create a standby controlfile backup via RMAN:
RMAN> configure channel device type disk format '/u01/app/oracle/backup/%U';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT
'/u01/app/oracle/backup/%U';
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;
RMAN> BACKUP ARCHIVELOG ALL;
In this simple example, I am backing up the primary database to disk; therefore, I must make the backupsets
available to the standby host if I want to use them as the basis for my duplicate operation:
[oracle@rac2 ~]$ cd /u01/app/oracle/backup
[oracle@rac2 backup]$ ls -lart
total 636080
drwxrwxr-x 9 oracle oinstall
NOTE:
The backup folder location must be the same on the primary and the standby host,
for example: the /u01/app/oracle/backup folder.
On the standby node create the required directories to get the datafiles
mkdir -p /u01/app/oracle/oradata/black
mkdir -p /u01/app/oracle/oradata/black/arch
mkdir -p /u01/app/oracle/admin/black
mkdir -p /u01/app/oracle/admin/black/adump
mkdir -p /u01/app/oracle/admin/black/bdump
mkdir -p /u01/app/oracle/admin/black/udump
mkdir -p /u01/app/oracle/flash_recovery_area/WHITE
mkdir -p /u01/app/oracle/flash_recovery_area/WHITE/onlinelog
Prepare an Initialization Parameter File for the Standby Database
Copy from the primary pfile to the standby destination
[oracle@rac2 ~]$ cd /u01/app/oracle/product/10.2.0/db_1/dbs/
[oracle@rac2 dbs]$ scp initwhite.ora oracle@rac1:/tmp/initblack.ora
initwhite.ora                                 100% 1704     1.7KB/s   00:00
Edit the copied init.ora to set it up for the standby role.
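The standby-role edits did not survive formatting here; a sketch of the typical differences (all values are assumptions; db_name remains white, only db_unique_name changes per site):

```
db_unique_name='black'
log_archive_config='DG_CONFIG=(white,black)'
log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/black/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=black'
log_archive_dest_2='SERVICE=white LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=white'
fal_server='white'
fal_client='black'
standby_file_management='AUTO'
db_file_name_convert='/white/','/black/'
log_file_name_convert='/white/','/black/'
```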
Configure the listener and tnsnames to support the database on both nodes
Configure listener.ora on both servers to hold entries for both databases
#on RAC2 Machine
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
)
)
Configure tnsnames.ora on both servers to hold entries for both databases
#on rac2 machine
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521))
)
)
TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.localdomain)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_VMRACTEST
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac1.localdomain)(PORT=1521)))
Services Summary...
Service "black" has 1 instance(s).
Instance "black", status UNKNOWN, has 1 handler(s) for this service...
Service "black_DGMGRL" has 1 instance(s).
Instance "black", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac1 tmp]$ tnsping black
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:21
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT =
1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = black)))
OK (10 msec)
[oracle@rac1 tmp]$ tnsping white
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:00:29
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT =
1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = white)))
OK (10 msec)
TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener_vmractest.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac2.localdomain)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=rac2.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_VMRACTEST
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac2.localdomain)(PORT=1521)))
Services Summary...
Service "white" has 1 instance(s).
Instance "white", status UNKNOWN, has 1 handler(s) for this service...
Service "white_DGMGRL" has 1 instance(s).
Instance "white", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac2 dbs]$ tnsping white
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:14
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = white)))
OK (0 msec)
[oracle@rac2 dbs]$ tnsping black
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 21-JAN-2010 00:23:18
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = black)))
OK (10 msec)
Set Up the Environment to Support the Standby Database on the standby node.
Create a passwordfile for the standby:
[oracle@rac1 ~]$ orapwd file=$ORACLE_HOME/dbs/orapwblack password=oracle
note: sys password must be identical for both primary and standby database
This chapter shows how to create the RMAN catalog, how to register a database with it, and how to review some of the information contained in the catalog.
The catalog database is usually a small database; it contains and maintains the metadata of all RMAN backups performed using the catalog.
1. Creating and registering a database with the Recovery Catalog
Step 1: Create a tablespace for storing recovery catalog information in the recovery catalog database.
Here my recovery catalog database is demo1.
[oracle@rac2 bin]$ . oraenv
ORACLE_SID = [oracle] ? demo1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
[oracle@rac2 bin]$ sqlplus '/as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Thu Dec 31 10:28:22 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> startup
ORACLE instance started.
Total System Global Area 481267712 bytes
Fixed Size 1300716 bytes
Variable Size 226494228 bytes
Database Buffers 247463936 bytes
Redo Buffers 6008832 bytes
Database mounted.
Database opened.
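The CREATE TABLESPACE command for step 1 is not shown in the transcript above; a minimal sketch would be the following, where the datafile path and sizes are illustrative assumptions, not values from the original session:

```sql
-- create the tablespace that will hold the recovery catalog data
-- (path and sizes are hypothetical; adjust for your environment)
CREATE TABLESPACE rman
  DATAFILE '/u01/app/oracle/oradata/demo1/rman01.dbf' SIZE 200M
  AUTOEXTEND ON NEXT 50M MAXSIZE 2G;
```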
step 2: create a user for recovery catalog and assign a tablespace and resources to that user
SQL> create user sai identified by sai default tablespace rman quota unlimited on rman;
SQL> grant connect,resource, recovery_catalog_owner to sai;
step 3: Connect to recovery catalog and register the database with recovery catalog:
[oracle@rac2 bin]$ . oraenv
ORACLE_SID = [oracle] ? demo1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
[oracle@rac2 bin]$ rman target /
RMAN> connect catalog sai/sai@demo1;
RMAN> create catalog;
RMAN> register database;
RMAN> report schema;
2.How to register a new database with the RMAN recovery catalog
Replace username/password with the actual username and password of the recovery catalog owner,
DEMO1 with the name of the recovery catalog database, and ANTO with the new database name.
1. Change the SID to the database you want to register
. oraenv
ORACLE_SID = [oracle] ? anto
2. Connect to the target database and the RMAN catalog database
rman target / catalog username/password@DEMO1
3. Register the database
RMAN> register database;
example:
[oracle@rac2 bin]$ . oraenv
ORACLE_SID = [anto] ? anto
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
[oracle@rac2 bin]$ rman target / catalog sai/sai@demo1;
Recovery Manager: Release 11.1.0.6.0 - Production on Thu Dec 31 10:32:15 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: ANTO (DBID=2484479252)
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
Verification:
Connect to the recovery catalog database demo1 as the recovery catalog user sai:
SQL> conn sai/sai;
Connected.
SQL> select * from db;
    DB_KEY      DB_ID CURR_DBINC_KEY
---------- ---------- --------------
         1 3710360247              2
       141 2484479252            142
3. Unregister the database from the recovery catalog:
Log in as the RMAN catalog owner at the SQL*Plus prompt and look up the target's keys (replace DBID with your database's DBID, here 2484479252):
SQL> select db_key, db_id from rc_database where dbid = DBID;
With the returned DB_KEY and DB_ID, call the catalog owner's packaged procedure (pre-11g approach; from 11g you can instead run RMAN> unregister database; while connected to the target and catalog):
SQL> exec dbms_rcvcat.unregisterdatabase(db_key, db_id);
Successfully removed the database ANTO from the recovery catalog.
Recovering a Standby database from a missing archivelog
Hi friends,
today I came across an issue: recovering a standby database from missing archivelog files.
on primary database
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/archive
Oldest online log sequence 16
Next log sequence to archive 18
Current log sequence 18
on standby database
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/archive
Oldest online log sequence 13
Next log sequence to archive 0
Current log sequence 18
I tried to solve the problem using the shutdownabort.com document.
Register a missing log file
alter database register physical logfile '';
If FAL doesn't work and it says the log is already registered
alter database register or replace physical logfile '';
If that doesn't work, try this...
shutdown immediate
startup nomount
alter database mount standby database;
alter database recover automatic standby database;
wait for the recovery to finish - then cancel
shutdown immediate
startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect;
Check which logs are missing
Run this on the standby...
select local.thread#, local.sequence#
from (select thread#, sequence#
      from v$archived_log
      where dest_id=1) local
where local.sequence# not in
      (select sequence#
       from v$archived_log
       where dest_id=2
       and thread# = local.thread#)
/
   THREAD#  SEQUENCE#
---------- ----------
         1          9
         1         10
         1         11
         1         12
         1         13
         1         14
         1         15
Still the archive logs were not applied to the standby database.
Finally I tried recovering the standby database using RMAN, following the el-caro blog document,
and it worked: now my primary and standby databases have the same archives.
A physical standby database relies on continuous application of
archivelogs from the primary database to stay in sync with it. In Oracle
Database versions prior to 10g, in the event of an archivelog going
missing or becoming corrupt you had to rebuild the standby database from scratch.
From 10g onwards you can take an incremental backup and recover the standby
with it to compensate for the missing archivelogs, as shown below.
In the case below, archivelogs with sequence numbers 137 and 138, which are
required on the standby, are deleted to simulate this problem.
Step 1: On the standby database check the current scn.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
     548283
Step 2: On the primary database create the needed incremental backup from the above SCN
Log in to the primary database with rman target /, then run:
RMAN> backup device type disk incremental from scn 548283 database format '/u01/backup/bkup_%U';
Starting backup at 28-DEC-09
using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-10
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/demo1/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/demo1/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/demo1/rman01.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/demo1/rman02.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/demo1/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/demo1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 28-DEC-09
channel ORA_DISK_1: finished piece 1 at 28-DEC-09
piece handle=/u01/backup/bkup_07l21ukv_1_1 tag=TAG20091228T143302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:24:19
using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-10
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-09
channel ORA_DISK_1: finished piece 1 at 28-DEC-09
piece handle=/u01/backup/bkup_08l2202v_1_1 tag=TAG20091228T143302 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 28-DEC-09
RMAN>
Step 3: Cancel managed recovery at the standby database
SQL>recover managed standby database cancel;
Media recovery complete.
Move the backup files to a new folder called new_incr so that they are the only files in that folder.
Step 4: Catalog the Incremental Backup Files at the Standby Database
[oracle@rac1 bin]$ . oraenv
ORACLE_SID = [RAC1] ? stby
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
[oracle@rac1 bin]$ rman target /
Recovery Manager: Release 11.1.0.6.0 - Production on Mon Dec 28 15:01:33 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: DEMO1 (DBID=3710229940, not open)
RMAN> catalog start with '/u01/backup/new_incr';
using target database control file instead of recovery catalog
searching for all files that match the pattern /u01/backup/new_incr
List of Files Unknown to the Database
=====================================
File Name: /u01/backup/new_incr/bkup_08l2202v_1_1
File Name: /u01/backup/new_incr/bkup_07l21ukv_1_1
Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /u01/backup/new_incr/bkup_08l2202v_1_1
File Name: /u01/backup/new_incr/bkup_07l21ukv_1_1
Step 5: Apply the Incremental Backup to the Standby Database
RMAN> recover database noredo;
Starting recover at 28-DEC-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=141 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/app/oracle/oradata/stby/system01.dbf
destination for restore of datafile 00002: /u01/app/oracle/oradata/stby/sysaux01.dbf
destination for restore of datafile 00003: /u01/app/oracle/oradata/stby/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/stby/users01.dbf
destination for restore of datafile 00005: /u01/app/oracle/oradata/stby/rman01.dbf
destination for restore of datafile 00006: /u01/app/oracle/oradata/stby/rman02.dbf
channel ORA_DISK_1: reading from backup piece /u01/backup/new_incr/bkup_07l21ukv_1_1
channel ORA_DISK_1: piece handle=/u01/backup/new_incr/bkup_07l21ukv_1_1 tag=TAG20091228T143302
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished recover at 28-DEC-09
RMAN>
Step 6: Put the standby database back to managed recovery mode.
SQL> recover managed standby database nodelay disconnect;
Media recovery complete.
From the alert.log you will notice that the standby database is still looking for the old log files
*************************************************
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 137-137
DBID 768471617 branch 600609988
**************************************************
This is because the controlfile has not been updated.
Hence the standby controlfile has to be recreated
On the primary DATABASE
SQL> alter database create standby controlfile as
2 '/u01/control01.ctl';
Copy the standby control file to the standby site and restart the standby database in managed recovery mode...
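The restart itself can reuse the same standby commands shown earlier in this post; as a sketch:

```sql
-- on the standby, after copying the freshly created standby controlfile
-- over each of the existing controlfile locations:
SHUTDOWN IMMEDIATE
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```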
Now check the archive log list on both the primary and standby databases.
on primary database
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/archive
Oldest online log sequence 20
Next log sequence to archive 22
Current log sequence 22
on standby database
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/archive
Oldest online log sequence 20
Next log sequence to archive 0
Current log sequence 22
SQL>
Changing the database DBID
SQL> startup mount
ORACLE instance started.
Total System Global Area 481267712 bytes
Fixed Size 1300716 bytes
Variable Size 226494228 bytes
Database Buffers 247463936 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [demo2] ?
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle
[oracle@rac1 ~]$ nid target=/
DBNEWID: Release 11.1.0.6.0 - Production on Thu Dec 24 20:05:44 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to database DEMO2 (DBID=3682169720)
Connected to server version 11.1.0
Control Files in database:
/u01/app/oracle/oradata/demo2/control01.ctl
/u01/app/oracle/oradata/demo2/control02.ctl
/u01/app/oracle/oradata/demo2/control03.ctl
Change database ID of database DEMO2? (Y/[N]) => y
Proceeding with operation
Changing database ID from 3682169720 to 3682222232
Control File /u01/app/oracle/oradata/demo2/control01.ctl - modified
Control File /u01/app/oracle/oradata/demo2/control02.ctl - modified
Control File /u01/app/oracle/oradata/demo2/control03.ctl - modified
Datafile /u01/app/oracle/oradata/demo2/system01.dbf - dbid changed
Datafile /u01/app/oracle/oradata/demo2/sysaux01.dbf - dbid changed
Datafile /u01/app/oracle/oradata/demo2/undotbs01.dbf - dbid changed
Datafile /u01/app/oracle/oradata/demo2/users01.dbf - dbid changed
Datafile /u01/app/oracle/oradata/demo2/temp01.dbf - dbid changed
Control File /u01/app/oracle/oradata/demo2/control01.ctl - dbid changed
Control File /u01/app/oracle/oradata/demo2/control02.ctl - dbid changed
Control File /u01/app/oracle/oradata/demo2/control03.ctl - dbid changed
Instance shut down
Database ID for database DEMO2 changed to 3682222232.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database ID.
DBNEWID - Completed succesfully.
[oracle@rac1 ~]$
SQL> alter database open resetlogs;
Database altered.
SQL> select dbid from v$database;
      DBID
----------
3682222232
INTERNAL OPERATION OF HOT BACKUP
What Happens When A Tablespace/Database Is Kept In Begin Backup Mode
This document explains in detail about what happens when a tablespace/datafile is kept in hot backup/begin
backup mode.
To perform online/hot backup we have to put the tablespace in begin backup mode followed by copying the
datafiles and then putting the tablespace to end backup.
In 8i, 9i we have to put each tablespace individually in begin/end backup mode to perform the online backup. From
10g onwards the entire database can be put in begin/end backup mode.
Make sure that the database is in archivelog mode
Example :
Performing a single tablespace backup
+ sql>alter tablespace system begin backup;
+ Copy the corresponding datafiles using appropriate O/S commands.
+ sql>alter tablespace system end backup;
Performing a full database backup (starting from 10g)
+ sql> alter database begin backup;
+ Copy all the datafiles using appropriate O/S commands.
+ sql> alter database end backup;
One danger in making online backups is the possibility of inconsistent data within a block. For example, assume
that you are backing up block 100 in datafile users.dbf. Also, assume that the copy utility reads the entire block
while DBWR is in the middle of updating the block. In this case, the copy utility may read the old data in the top
half of the block and the new data in the bottom half of the block. The result is called a fractured block,
meaning that the data contained in this block is not consistent at a given SCN.
Therefore Oracle internally manages the consistency as below:
1. The first time a block is changed in a datafile that is in hot backup mode, the entire block is written to the redo
log files, not just the changed bytes. Normally only the changed bytes (a redo vector) is written. In hot backup
mode, the entire block is logged the first time. This is because you can get into a situation where the process
copying the datafile and DBWR are working on the same block simultaneously.
Let's say they are, and the OS blocking read factor is 512 bytes (the OS reads 512 bytes from disk at a time). The
backup program goes to read an 8k Oracle block. The OS gives it 4k. Meanwhile, DBWR has asked to rewrite this
block; the OS schedules the DBWR write to occur right now. The entire 8k block is rewritten. The backup program
starts running again (multi-tasking OS here) and reads the last 4k of the block. The backup program has now
gotten a fractured block -- the head and tail are from two points in time.
We cannot deal with that during recovery. Hence, we log the entire block image so that during recovery, this block
is totally rewritten from redo and is at least consistent with itself. We can recover it from there.
2. The datafile headers which contain the SCN of the last completed checkpoint are not updated while a file is in
hot backup mode. This lets the recovery process understand what archive redo log files might be needed to fully
recover this file.
To limit the effect of this additional logging, you should ensure you place only one tablespace at a time in backup
mode and bring the tablespace out of backup mode as soon as you have backed it up. This will reduce the number
of blocks that may have to be logged to the minimum possible.
Try to take the hot/online backups when there is less / no load on the database, so that less redo will be
generated.
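While a hot backup is running, you can confirm which datafiles are actually in backup mode by querying V$BACKUP; STATUS shows ACTIVE for files currently between begin backup and end backup:

```sql
-- list datafiles currently in hot backup mode
SELECT file#, status, change#, time
FROM   v$backup
WHERE  status = 'ACTIVE';
```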
v$ASM view, Automatic Storage Management views
The following v$ASM views describe the structure and components of ASM:
v$ASM_ALIAS
This view displays all system and user-defined aliases. There is one row for every alias present in every diskgroup
mounted by the ASM instance. The RDBMS instance displays no rows in this view.
V$ASM_ATTRIBUTE
This Oracle Database 11g view displays one row for each ASM attribute defined. These attributes are listed when
they are defined in CREATE DISKGROUP or ALTER DISKGROUP statements. DISK_REPAIR_TIME is an example of
an attribute.
V$ASM_CLIENT
This view displays one row for each RDBMS instance that has an opened ASM diskgroup.
V$ASM_DISK
This view contains specifics about all disks discovered by the ASM instance, including mount status, disk state, and
size. There is one row for every disk discovered by the ASM instance.
V$ASM_DISK_IOSTAT
This displays information about disk I/O statistics for each ASM Client. If this view is queried from the database
instance, only the rows for that instance are shown.
V$ASM_DISK_STAT
This view contains similar content to V$ASM_DISK, except that V$ASM_DISK_STAT reads disk information from
cache and thus performs no disk discovery. This view is primarily used for quick access to the disk information
without the overhead of disk discovery.
V$ASM_DISKGROUP
This view displays one row for every ASM diskgroup discovered by the ASM instance on the node.
V$ASM_DISKGROUP_STAT
This view contains similar content to V$ASM_DISKGROUP, except that V$ASM_DISKGROUP_STAT reads
diskgroup information from the cache and thus performs no disk discovery. This view is primarily used for quick access to
the diskgroup information without the overhead of disk discovery.
V$ASM_FILE
This view displays information about ASM files. There is one row for every ASM file in every diskgroup mounted by
the ASM instance. In a RDBMS instance, V$ASM_FILE displays no row.
V$ASM_OPERATION
This view describes the progress of an ongoing ASM rebalance operation. In an RDBMS instance, V$ASM_OPERATION
displays no rows.
V$ASM_TEMPLATE
This view contains information on user- and system-defined templates. V$ASM_TEMPLATE displays one row for
every template present in every diskgroup mounted by the ASM instance. In an RDBMS instance, V$ASM_TEMPLATE
displays one row for every template present in every diskgroup mounted by the ASM instance with which the
RDBMS instance communicates.
That's it.
oracle DBA Tips (PART-II)
ORACLE DBA TIPS:- PART-2
---------------------------------
26.Retrieving Threshold Information
SELECT metrics_name, warning_value, critical_value, consecutive_occurrences
FROM DBA_THRESHOLDS
WHERE metrics_name LIKE '%CPU Time%';
27.Viewing Alert Data
The following dictionary views provide information about server alerts:
DBA_THRESHOLDS lists the threshold settings defined for the instance.
DBA_OUTSTANDING_ALERTS describes the outstanding alerts in the database.
DBA_ALERT_HISTORY lists a history of alerts that have been cleared.
V$ALERT_TYPES provides information such as group and type for each alert.
V$METRICNAME contains the names, identifiers, and other information about the
system metrics.
V$METRIC and V$METRIC_HISTORY views contain system-level metric values in
memory.
28.The following views can help you to monitor locks.
To get complete lock information, first run two scripts as SYS:
catblock.sql (creates the DBA lock views) and utllockt.sql (prints the current lock wait-for tree).
V$LOCK              Lists the locks currently held by Oracle Database and outstanding requests for a lock or latch
DBA_BLOCKERS        Displays a session if it is holding a lock on an object for which another session is waiting
DBA_WAITERS         Displays a session if it is waiting for a locked object
DBA_DDL_LOCKS       Lists all DDL locks held in the database and all outstanding requests for a DDL lock
DBA_DML_LOCKS       Lists all DML locks held in the database and all outstanding requests for a DML lock
DBA_LOCK            Lists all locks or latches held in the database and all outstanding requests for a lock or latch
DBA_LOCK_INTERNAL   Displays a row for each lock or latch that is being held, and one row for each outstanding request for a lock or latch
V$LOCKED_OBJECT     Lists all locks acquired by every transaction on the system
29.Process and Session Views
v$process
v$locked_object
v$session
30.What Is a Control File?
Every Oracle Database has a control file, which is a small binary file that records the
physical structure of the database. The control file includes:
The database name
Names and locations of associated datafiles and redo log files
The timestamp of the database creation
The current log sequence number
Checkpoint information
31.The following views display information about control files:
V$DATABASE Displays database information from the control file
V$CONTROLFILE Lists the names of control files
V$CONTROLFILE_RECORD_SECTION Displays information about control file record sections
V$PARAMETER Displays the names of control files as specified in the CONTROL_FILES initialization parameter
32.Redo Log Contents
Redo log files are filled with redo records. A redo record, also called a redo entry, is
made up of a group of change vectors, each of which is a description of a change made
to a single block in the database.
Redo entries record data that you can use to reconstruct all changes made to the
database, including the undo segments. Therefore, the redo log also protects rollback
data. When you recover the database using redo data, the database reads the change
vectors in the redo records and applies the changes to the relevant blocks.
33.Log Switches and Log Sequence Numbers
A log switch is the point at which the database stops writing to one redo log file and
begins writing to another. Normally, a log switch occurs when the current redo log file
is completely filled and writing must continue to the next redo log file.
You can also force log switches manually.
Oracle Database assigns each redo log file a new log sequence number every time a
log switch occurs and LGWR begins writing to it. When the database archives redo log
files, the archived log retains its log sequence number. A redo log file that is cycled
back for use is given the next available log sequence number.
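The manual log switch mentioned above is a single statement:

```sql
-- force an immediate log switch on the current instance
ALTER SYSTEM SWITCH LOGFILE;
```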
34. Setting the Size of Redo Log Members
The minimum size permitted for a redo log file is 4 MB.
35.Setting the ARCHIVE_LAG_TARGET Initialization Parameter
The ARCHIVE_LAG_TARGET initialization parameter specifies the target of how many
seconds of redo the standby could lose in the event of a primary shutdown or failure if
the Oracle Data Guard environment is not configured in a no-data-loss mode. It also
provides an upper limit of how long (in seconds) the current log of the primary
database can span. Because the estimated archival time is also considered, this is not
the exact log switch time.
The following initialization parameter setting sets the log switch interval to 30 minutes
(a typical value).
ARCHIVE_LAG_TARGET = 1800
A value of 0 disables this time-based log switching functionality. This is the default
setting.
You can set the ARCHIVE_LAG_TARGET initialization parameter even if there is no
standby database. For example, the ARCHIVE_LAG_TARGET parameter can be set
specifically to force logs to be switched and archived.
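ARCHIVE_LAG_TARGET is a dynamic parameter, so the 30-minute value above can also be set on a running instance:

```sql
-- switch/archive the current log at least every 30 minutes
-- (SCOPE=BOTH assumes the instance was started with an spfile)
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE = BOTH;
```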
36.Verifying Blocks in Redo Log Files
If you set the initialization parameter DB_BLOCK_CHECKSUM to TRUE, the database
computes a checksum for each database block when it is written to disk, including
each redo log block as it is being written to the current log. The checksum is stored in the header of the block.
Oracle Database uses the checksum to detect corruption in a redo log block. The
database verifies the redo log block when the block is read from an archived log
during recovery and when it writes the block to an archive log file. An error is raised
and written to the alert log if corruption is detected.
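DB_BLOCK_CHECKSUM is also dynamic, so checksumming can be enabled without a restart:

```sql
-- TYPICAL is the default from 10g onwards; TRUE is accepted as a synonym
-- (SCOPE=BOTH assumes the instance was started with an spfile)
ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE SCOPE = BOTH;
```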
37.Clearing a Redo Log File
A redo log file might become corrupted while the database is open, and ultimately
stop database activity because archiving cannot continue. In this situation the ALTER
DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without
shutting down the database.
The following statement clears the log files in redo log group number 3:
ALTER DATABASE CLEAR LOGFILE GROUP 3;
This statement overcomes two situations where dropping redo logs is not possible:
+ If there are only two log groups
+ The corrupt redo log file belongs to the current group
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the
statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving them. The cleared
redo logs are available for use even though they were not archived.
If you clear a log file that is needed for recovery of a backup, then you can no longer
recover from that backup. The database writes a message in the alert log describing the
backups from which you cannot recover.
Note:
If you clear an unarchived redo log file, you should make
another backup of the database.
If you want to clear an unarchived redo log that is needed to bring an offline
tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER
DATABASE CLEAR LOGFILE statement.
38.Viewing Redo Log Information
V$LOG          Displays the redo log file information from the control file
V$LOGFILE      Identifies redo log groups and members and member status
V$LOG_HISTORY  Contains log history information
39.You can use archived redo logs to:
Recover a database
Update a standby database
Get information about the history of a database using the LogMiner utility
40.Changing the database ARCHIVING mode:
(1) shutdown
(2) startup mount
(3) alter database archivelog;
(4) alter database open;
41.Performing Manual Archiving
ALTER DATABASE ARCHIVELOG MANUAL;
ALTER SYSTEM ARCHIVE LOG ALL;
note:When you use manual archiving mode, you cannot specify any standby databases in
the archiving destinations.
42.Understanding Archive Destination Status
Each archive destination has the following variable characteristics that determine its
status:
+ Valid/Invalid: indicates whether the disk location or service name information is specified and valid
+ Enabled/Disabled: indicates the availability state of the location and whether the database can use the destination
+ Active/Inactive: indicates whether there was a problem accessing the destination
Several combinations of these characteristics are possible. To obtain the current status
and other information about each destination for an instance, query the
V$ARCHIVE_DEST view.
The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization
parameter lets you control the availability state of the specified destination (n).
+ ENABLE indicates that the database can use the destination.
+ DEFER indicates that the location is temporarily disabled.
+ ALTERNATE indicates that the destination is an alternate. The availability state of an
alternate destination is DEFER, unless there is a failure of its parent destination, in
which case its state becomes ENABLE.
43.Viewing Information About the Archived Redo Log
You can display information about the archived redo logs using the following sources:
(1)Dynamic Performance Views
(2)The ARCHIVE LOG LIST Command
Dynamic Performance Views
-------------------------
V$DATABASE            Shows if the database is in ARCHIVELOG or NOARCHIVELOG mode and if MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG        Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.
V$ARCHIVE_DEST        Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES   Displays information about the state of the various archive processes for an instance.
V$BACKUP_REDOLOG      Contains information about any backups of archived logs. If you use a recovery catalog, RC_BACKUP_REDOLOG contains similar information.
V$LOG                 Displays all redo log groups for the database and indicates which need to be archived.
V$LOG_HISTORY         Contains log history information such as which logs have been archived and the SCN range for each archived log.
44.Bigfile Tablespaces
A bigfile tablespace is a tablespace with a single, but very large (up to 4G blocks)
datafile. Traditional smallfile tablespaces, in contrast, can contain multiple datafiles,
but the files cannot be as large. The benefits of bigfile tablespaces are the following:
A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile
tablespace with 32K blocks can contain a 128 terabyte datafile. The maximum
number of datafiles in an Oracle Database is limited (usually to 64K files).
Therefore, bigfile tablespaces can significantly enhance the storage capacity of an
Oracle Database.
45.Altering a Bigfile Tablespace
Two clauses of the ALTER TABLESPACE statement support datafile transparency
when you are using bigfile tablespaces:
RESIZE: The RESIZE clause lets you resize the single datafile in a bigfile
tablespace to an absolute size, without referring to the datafile. For example:
ALTER TABLESPACE bigtbs RESIZE 80G;
AUTOEXTEND (used outside of the ADD DATAFILE clause):
With a bigfile tablespace, you can use the AUTOEXTEND clause outside of the ADD
DATAFILE clause. For example:
ALTER TABLESPACE bigtbs AUTOEXTEND ON NEXT 20G;
An error is raised if you specify an ADD DATAFILE clause for a bigfile tablespace.
46.Identifying a Bigfile Tablespace
The following views contain a BIGFILE column that identifies a tablespace as a bigfile
tablespace:
DBA_TABLESPACES
USER_TABLESPACES
V$TABLESPACE
47.Temporary Tablespaces
You can view the allocation and deallocation of space in a temporary tablespace sort
segment using the V$SORT_SEGMENT view. The V$TEMPSEG_USAGE view identifies
the current sort users in those segments.
You also use different views for viewing information about tempfiles than you would
for datafiles. The V$TEMPFILE and DBA_TEMP_FILES views are analogous to the
V$DATAFILE and DBA_DATA_FILES views.
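For example, current sort segment usage per temporary tablespace can be checked with a query like:

```sql
-- extents currently allocated vs. free in each temporary tablespace's
-- sort segment, plus the number of sessions using it
SELECT tablespace_name, current_users, used_extents, free_extents
FROM   v$sort_segment;
```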
49.Multiple Temporary Tablespaces: Using Tablespace Groups:
A tablespace group enables a user to consume temporary space from multiple
tablespaces. A tablespace group has the following characteristics:
It contains at least one tablespace. There is no explicit limit on the maximum
number of tablespaces that are contained in a group.
It shares the namespace of tablespaces, so its name cannot be the same as any
tablespace.
You can specify a tablespace group name wherever a tablespace name would
appear when you assign a default temporary tablespace for the database or a
temporary tablespace for a user.
You do not explicitly create a tablespace group. Rather, it is created implicitly when
you assign the first temporary tablespace to the group. The group is deleted when the
last temporary tablespace it contains is removed from it.
The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member
tablespaces.
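A quick way to see the group-to-tablespace mapping described above:

```sql
-- list each tablespace group and its member temporary tablespaces
SELECT group_name, tablespace_name
FROM   dba_tablespace_groups
ORDER  BY group_name, tablespace_name;
```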
50.Creating a Tablespace Group
-----------------------------
CREATE TEMPORARY TABLESPACE lmtemp2 TEMPFILE '/u02/oracle/data/lmtemp201.dbf'
SIZE 50M
TABLESPACE GROUP group1;
ALTER TABLESPACE lmtemp TABLESPACE GROUP group2;
Changing Members of a Tablespace Group
--------------------------------------
You can add a tablespace to an existing tablespace group by specifying the existing
group name in the TABLESPACE GROUP clause of the CREATE TEMPORARY
TABLESPACE or ALTER TABLESPACE statement.
The following statement adds a tablespace to an existing group. It creates and adds
tablespace lmtemp3 to group1, so that group1 contains tablespaces lmtemp2 and
lmtemp3.
CREATE TEMPORARY TABLESPACE lmtemp3 TEMPFILE '/u02/oracle/data/lmtemp301.dbf'
SIZE 25M
TABLESPACE GROUP group1;
The following statement also adds a tablespace to an existing group, but in this case
because tablespace lmtemp2 already belongs to group1, it is in effect moved from
group1 to group2:
ALTER TABLESPACE lmtemp2 TABLESPACE GROUP group2;
Now group2 contains both lmtemp and lmtemp2, while group1 consists of only
lmtemp3.
You can remove a tablespace from a group as shown in the following statement:
ALTER TABLESPACE lmtemp3 TABLESPACE GROUP '';
Tablespace lmtemp3 no longer belongs to any group. Further, since there are no longer
any members of group1, this results in the implicit deletion of group1.
Assigning a Tablespace Group as the Default Temporary Tablespace
----------------------------------------------------------------
ALTER DATABASE sample DEFAULT TEMPORARY TABLESPACE group2;
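To confirm the assignment, the group membership and the database default can be checked from the data dictionary (a quick sanity check; the view and property names below are standard):

```sql
-- List each tablespace group and its member temporary tablespaces
SELECT group_name, tablespace_name FROM DBA_TABLESPACE_GROUPS;

-- Confirm the database default temporary tablespace is now the group
SELECT property_value FROM DATABASE_PROPERTIES
 WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
```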
ORACLE DBA TIPS (PART-1)
--------------------------------
1.To dynamically change the default tablespace type after database creation, use the SET
DEFAULT TABLESPACE clause of the ALTER DATABASE statement:
ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE;
2.You can determine the current default tablespace type for the database by querying the
DATABASE_PROPERTIES data dictionary view as follows:
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'DEFAULT_TBS_TYPE';
3.To view the time zone names in the file being used by your database, use the following
query:
SELECT * FROM V$TIMEZONE_NAMES;
4.You can cancel FORCE LOGGING mode using the following SQL statement:
ALTER DATABASE NO FORCE LOGGING;
5.The V$SGA_TARGET_ADVICE view provides information that helps you decide on a
value for SGA_TARGET.
6.The fixed views V$SGA_DYNAMIC_COMPONENTS and V$SGAINFO display the current
actual size of each SGA component.
7.Checking Your Current Release Number
SELECT * FROM PRODUCT_COMPONENT_VERSION;
SELECT * FROM v$VERSION;
8.Bigfile tablespaces can contain only one file, but that file can have up to 4G blocks. The maximum number of
datafiles in an Oracle Database is limited (usually to 64K files).
9.Specifying a Flash Recovery Area with the following initialization parameters:
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
In a RAC environment, the settings for these two parameters must be the same on all
instances.
10.DB_BLOCK_SIZE Initialization Parameter
You cannot change the block size after database creation except by re-creating the
database.
11.Nonstandard Block Sizes
Tablespaces of nonstandard block sizes can be created using the CREATE
TABLESPACE statement and specifying the BLOCKSIZE clause. These nonstandard
block sizes can have any of the following power-of-two values: 2K, 4K, 8K, 16K or 32K.
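Before creating such a tablespace, a buffer cache for that block size must already be configured via the matching DB_nK_CACHE_SIZE parameter. A sketch, assuming an 8K standard block size (the datafile path and sizes are illustrative):

```sql
-- Allocate a buffer cache for 16K blocks (required before using BLOCKSIZE 16K)
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 32M;

-- Create the nonstandard-block-size tablespace (path is illustrative)
CREATE TABLESPACE ts_16k
  DATAFILE '/u02/oracle/data/ts_16k01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```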
12.All SGA components allocate and deallocate space in units of granules. Oracle Database tracks SGA memory use
in internal numbers of granules for each SGA component.
13.Viewing Information about the SGA:
v$SGA
v$SGAINFO
v$SGASTAT
v$SGA_DYNAMIC_COMPONENTS
v$SGA_DYNAMIC_FREE_MEMORY
v$SGA_RESIZE_OPS
v$SGA_CURRENT_RESIZE_OPS
v$SGA_TARGET_ADVICE
14.An optional COMMENT clause lets you associate a text string with the parameter
update. When you specify SCOPE as SPFILE or BOTH, the comment is written to the
server parameter file.
example:
ALTER SYSTEM
SET LOG_ARCHIVE_DEST_4='LOCATION=/u02/oracle/rbdb1/ MANDATORY REOPEN=2'
COMMENT='Add new destination on Nov 29'
SCOPE=SPFILE;
15.Viewing Parameter Settings
show parameters, v$parameter, v$parameter2, v$spparameter.
16.You can find service information in the following service-specific views:
DBA_SERVICES
ALL_SERVICES or V$SERVICES
V$ACTIVE_SERVICES
V$SERVICE_STATS
V$SERVICE_EVENTS
V$SERVICE_WAIT_CLASSES
V$SERV_MOD_ACT_STATS
V$SERVICE_METRICS
V$SERVICE_METRICS_HISTORY
The following additional views also contain some information about services:
V$SESSION
V$ACTIVE_SESSION_HISTORY
DBA_RSRC_GROUP_MAPPINGS
DBA_SCHEDULER_JOB_CLASSES
DBA_THRESHOLDS
17.Viewing Information About the Database
DATABASE_PROPERTIES
GLOBAL_NAME
V$DATABASE
18.Starting an Instance, Mounting a Database, and Starting Complete Media Recovery
If you know that media recovery is required, you can start an instance, mount a
database to the instance, and have the recovery process automatically start by using
the STARTUP command with the RECOVER clause:
STARTUP OPEN RECOVER
If you attempt to perform recovery when no recovery is required, Oracle Database
issues an error message.
19.Placing a Database into a Quiesced State
To place a database into a quiesced state, issue the following statement:
ALTER SYSTEM QUIESCE RESTRICTED;
Non-DBA active sessions will continue until they become inactive.
20. You can determine the sessions that are blocking the quiesce operation by querying the V$BLOCKING_QUIESCE
view:
select bl.sid, se.username, se.osuser, se.type, se.program
from v$blocking_quiesce bl, v$session se
where bl.sid = se.sid;
21.You cannot perform a cold backup when the database is in
the quiesced state, because Oracle Database background processes
may still perform updates for internal purposes even while the
database is quiesced. In addition, the file headers of online datafiles
continue to appear to be accessible. They do not look the same as if
a clean shutdown had been performed. However, you can still take
online backups while the database is in a quiesced state.
22.Restoring the System to Normal Operation
The following statement restores the database to normal operation:
ALTER SYSTEM UNQUIESCE;
23.Viewing the Quiesce State of an Instance
You can query the ACTIVE_STATE column of the V$INSTANCE view to see the current
state of an instance. The column has one of these values:
NORMAL: Normal unquiesced state.
QUIESCING: Being quiesced, but some non-DBA sessions are still active.
QUIESCED: Quiesced; no non-DBA sessions are active or allowed.
24.Suspending and Resuming a Database
The ALTER SYSTEM SUSPEND statement halts all input and output (I/O) to datafiles (file header and file data) and
control files. The suspended state lets you back up a database without I/O interference. When the database is
suspended all preexisting I/O operations are allowed to complete and any new database accesses are placed in a
queued state.
The following statements illustrate ALTER SYSTEM SUSPEND/RESUME usage. The
V$INSTANCE view is queried to confirm database status.
SQL> ALTER SYSTEM SUSPEND;
System altered.
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
-----------------
SUSPENDED
SQL> ALTER SYSTEM RESUME;
System altered.
SQL> SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
-----------------
ACTIVE
25.The DB_WRITER_PROCESSES initialization parameter specifies the number of DBWn processes.
Oracle Database allows a maximum of 20 database writer processes
(DBW0-DBW9 and DBWa-DBWj).
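The parameter is static, so changing it takes effect only after a restart. A minimal sketch (the value 4 is arbitrary, chosen for illustration):

```sql
-- DB_WRITER_PROCESSES is static: set it in the spfile, then bounce the instance
ALTER SYSTEM SET DB_WRITER_PROCESSES = 4 SCOPE = SPFILE;
```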
ORA-32004: obsolete and/or deprecated parameter(s) specified
Cause
One or more obsolete and/or deprecated parameters were specified in the SPFILE or the PFILE on the server side.
Action
See the alert log for a list of parameters that are obsolete or deprecated. Remove them from the SPFILE or the
server-side PFILE.
So somebody, somewhere has put obsolete and/or deprecated parameter(s) in my initDB.ora file. To find out which
one, you can issue the following statement from SQL*Plus to find the sinner.
SQL> select name, isspecified from v$obsolete_parameter where isspecified='TRUE';
Or if you are the one who has made the changes to initDB.ora, you might know which one. In my case somebody
had been messing around with the parameter log_archive_start.
In order to remove this, you should create a pfile from the spfile, delete the offending parameter from the pfile, and then re-create the spfile from the edited pfile. That's the way to do it.
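The round trip might look like this (the pfile path is illustrative; edit the file by hand in between the two statements):

```sql
-- Dump the current spfile settings to a text pfile
CREATE PFILE = '/tmp/initDB.ora' FROM SPFILE;

-- ...remove the obsolete parameter (e.g. log_archive_start) from /tmp/initDB.ora...

-- Rebuild the spfile from the cleaned-up pfile
CREATE SPFILE FROM PFILE = '/tmp/initDB.ora';
```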
Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 88081764 bytes
Database Buffers 75497472 bytes
Redo Buffers 2945024 bytes
Database mounted.
Database opened.
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL>
BDUMP, UDUMP, ALERT LOG FILES IN ORACLE 11G
The 11g New Features Guide notes important OFA changes, namely the removal of $ORACLE_HOME as an anchor
for diagnostic and alert files:
"The database installation process has been redesigned to be based on the ORACLE_BASE environment variable.
Until now, setting this variable has been optional and the only required variable has been ORACLE_HOME.
With this feature, ORACLE_BASE is the only required input, and the ORACLE_HOME setting will be derived from
ORACLE_BASE."
Let's take a look at changes to the Oracle11g OFA standard.
Enter new admin subdirectories
New in Oracle 11g we see the new ADR (Automatic Diagnostic Repository) and Incident Packaging System, all
designed to allow quick access to alert and diagnostic information.
The new $ADR_HOME directory is located by default at $ORACLE_BASE/diag, with the directories for each instance
at $ORACLE_BASE/diag/$ORACLE_SID, at the same level as the traditional bdump, udump and cdump directories,
and the initialization parameters background_dump_dest and user_dump_dest are deprecated in 11g.
You can use the new initialization parameter diagnostic_dest to specify an alternative location for the diag directory
contents.
In 11g, each $ORACLE_BASE/diag/$ORACLE_SID directory may contain these new directories:
* alert - A new alert directory for the plain text and XML versions of the alert log.
* incident - A new directory for the incident packaging software.
* incpkg - A directory for packaging an incident into a bundle.
* trace - A replacement for the ancient background dump (bdump) and user dump (udump) destinations.
* cdump - The old core dump directory retains its 10g name.
Let's see how the 11g alert log has changed.
Alert log changes in 11g
Oracle now writes two alert logs, the traditional alert log in plain text plus a new XML formatted alert.log which is
named as log.xml.
"Prior to Oracle 11g, the alert log resided in $ORACLE_HOME/admin/$ORACLE_SID/bdump directory, but it now
resides in the $ORACLE_BASE/diag/$ORACLE_SID directory".
Fortunately, you can reset it to the 10g and earlier location by pointing the diagnostic_dest parameter at the old
BDUMP location.
But best of all, you no longer require server access to see your alert log since it is now accessible via standard SQL
using the new v$diag_info view:
select name, value from v$diag_info;
For complete details, see MetaLink Note:438148.1 - "Finding alert.log file in 11g".
ENABLE ARCHIVELOG AND FLASHBACK IN RAC DATABASE
Step by step process of putting a RAC database in archive log mode and then enabling the flashback Database
option.
Enabling archive log in RAC Database:
A database must be in archivelog mode before enabling flashback.
In this example database name is test and instances name are test1 and test2.
step 1:
creating recovery_file_dest in asm disk
SQL> alter system set db_recovery_file_dest_size=200m sid='*';
System altered.
SQL> alter system set db_recovery_file_dest='+DATA' sid='*';
step 2:
set the LOG_ARCHIVE_DEST_1 parameter. Since these parameters will be identical for all nodes, we will use
sid='*'. However, you may need to modify this for your situation if the directories are different on each node.
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' sid='*';
System altered.
step 3:
set LOG_ARCHIVE_START to TRUE for all instances to enable automatic archiving.
SQL> alter system set log_archive_start=true scope=spfile sid='*';
System altered.
Note that we illustrate the command for backward compatibility purposes, but in oracle database 10g onwards, the
parameter is actually deprecated. Automatic archiving will be enabled by default whenever an oracle database is
placed in archivelog mode.
step 4:
Set CLUSTER_DATABASE to FALSE for the local instance, which you will then mount to put the database into
archivelog mode. By having CLUSTER_DATABASE=FALSE, the subsequent shutdown and startup mount will actually
do a Mount Exclusive by default, which is necessary to put the database in archivelog mode, and also to enable the
flashback database feature:
SQL> alter system set cluster_database=false scope=spfile sid='test1';
System altered.
step 5:
Shut down all instances. Ensure that all instances are shut down cleanly:
SQL> shutdown immediate
step 6:
Mount the database from instance test1 (where CLUSTER_DATABASE was set to FALSE) and then put the database
into archivelog mode:
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
step 7:
Verify with ARCHIVE LOG LIST that archiving is now enabled:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 13
Next log sequence to archive 15
Current log sequence 15
step 8
Confirm the location of the RECOVERY_FILE_DEST via a SHOW PARAMETER.
SQL> show parameter recovery_file
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +DATA
db_recovery_file_dest_size           big integer 200M
Step 9:
Once the database is in archivelog mode, you can enable flashback while the database is still mounted in Exclusive
mode (CLUSTER_DATABASE=FALSE).
SQL> alter database flashback on;
Database altered.
Step 10:
Confirm that Flashback is enabled and verify the retention target:
SQL> select flashback_on,current_scn from v$database;
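One step this walkthrough leaves implicit: once flashback is on, CLUSTER_DATABASE should be set back to TRUE and all instances restarted, otherwise the database stays in single-instance mode. A sketch, assuming the same database name (test) used above:

```sql
-- Restore the cluster parameter for all instances, then restart cleanly
ALTER SYSTEM SET CLUSTER_DATABASE = TRUE SCOPE = SPFILE SID = '*';
SHUTDOWN IMMEDIATE
-- then, from the OS on any node: srvctl start database -d test
```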
Convert single instance to RAC instance Database
converting a single instance database to rac instance database:
Oracle provides following methods to convert a single instance database to RAC:
Grid Control
DBCA
Manual
RCONFIG(from 10gR2)
here is an example of converting a single instance ASM-file database to a RAC database using rconfig.
For converting a single instance database on a normal file system, first convert
the non-ASM files to ASM files using the steps shown in the link, and then follow
the steps below.
Following illustrates how to convert a single instance database to RAC using the RCONFIG tool:
The Convert verify option in the ConvertToRAC.xml file has three options:
Convert verify="YES": rconfig performs checks to ensure that the prerequisites for single-instance to RAC
conversion have been met before it starts conversion
Convert verify="NO": rconfig does not perform prerequisite checks, and starts conversion
Convert verify="ONLY" : rconfig only performs prerequisite checks; it does not start conversion after completing
prerequisite checks
modify the convertdb.xml file according to your environment. Following is the example:
sample ConvertToRAC.xml file edit as follows
here my database name to convert is "test"
-------------------------------------------------------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted.
         Allowable values are: YES|NO|ONLY -->
    <n:Convert verify="YES">
      <!-- Specify current OracleHome of non-rac database for SourceDBHome (your source database home) -->
      <n:SourceDBHome>/u01/app/oracle/product/10g/db_1</n:SourceDBHome>
      <!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome
           (your target database home) -->
      <n:TargetDBHome>/u01/app/oracle/product/10g/db_1</n:TargetDBHome>
      <!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform
           conversion (SID = your database name) -->
      <n:SourceDBInfo SID="test">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <!-- ASMInfo element is required only if the current non-rac database uses ASM Storage
           (SID = your ASM instance name, with its password) -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:ASMInfo>
      <!-- Specify the list of nodes that should have rac instances running. LocalNode should be the first
           node in this nodelist (rac1, rac2 = your hostnames) -->
      <n:NodeList>
        <n:Node name="rac1"/>
        <n:Node name="rac2"/>
      </n:NodeList>
      <!-- Specify prefix for rac instances. It can be same as the instance name for non-rac database or
           different. The instance number will be attached to this prefix. -->
      <n:InstancePrefix>test</n:InstancePrefix>
      <!-- Specify port for the listener to be configured for rac database. If port="", a listener existing
           on localhost will be used for rac database. The listener will be extended to all nodes in the
           nodelist. -->
      <n:Listener port="1551"/>
      <!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac
           database should have same storage type. -->
      <n:SharedStorage type="ASM">
        <!-- Specify Database Area Location to be configured for rac database. If this field is left empty,
             current storage will be used for rac database. For CFS, this field will have directory path. -->
        <n:TargetDatabaseArea></n:TargetDatabaseArea>
        <!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty,
             current recovery area of non-rac database will be configured for rac database. If current
             database is not using recovery area, the resulting rac database will not have a recovery area. -->
        <n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>
Once you modify the convertdb.xml file according to your environment, use the following command to run the tool:
go to $ORACLE_HOME/bin and run
./rconfig /u01/convertdb.xml
finally, change sid in /etc/oratab as test1 in rac1 machine and test2 in rac2 machine
thats it.
then check
srvctl config database -d test
srvctl status database -d test
crs_stat -t
hope it will help you.
RAC FILE SYSTEM OPTIONS ( BASIC CONCEPT BEFORE LEARNING RAC)
it is important to know the RAC filesystem options.
RAC Filesystem Options
Submitted by Natalka Roshak on orafaq website.
DBAs wanting to create a 10g Real Applications Cluster face many configuration decisions. One of the more
potentially confusing decisions involves the choice of filesystems. Gone are the days when DBAs simply had to
choose between "raw" and "cooked". DBAs setting up a 10g RAC can still choose raw devices, but they also have
several filesystem options, and these options vary considerably from platform to platform. Further, some storage
options cannot be used for all the files in the RAC setup. This article gives an overview of the RAC storage options
available.
RAC Review
Let's begin by reviewing the structure of a Real Applications Cluster. Physically, a RAC consists of several nodes
(servers), connected to each other by a private interconnect. The database files are kept on a shared storage
subsystem, where they're accessible to all nodes. And each node has a public network connection.
In terms of software and configuration, the RAC has three basic components: cluster software and/or Cluster Ready
Services, database software, and a method of managing the shared storage subsystem.
The cluster software can be vendor-supplied or Oracle-supplied, depending on platform. Cluster Ready Services, or
CRS, is a new feature in 10g. Where vendor clusterware is used, CRS interacts with the vendor clusterware to
coordinate cluster membership information; without vendor clusterware, CRS, which is also known as Oracle OSD
Clusterware, provides complete cluster management.
The database software is Oracle 10g with the RAC option, of course.
Finally, the shared storage subsystem can be managed by one of the following options: raw devices; Automatic
Storage Management (ASM); Vendor-supplied cluster file system (CFS), Oracle Cluster File System (OCFS), or
vendor-supplied logical volume manager (LVM); or Networked File System (NFS) on a certified Network Attached
Storage (NAS) device.
Storage Options
Let me clarify the foregoing alphabet soup with a table:
Table 1. Storage options for the shared storage subsystem.
Option Description
------ -----------------------------------------------------
Raw    Raw devices, no filesystem
ASM    Automatic Storage Management
CFS    Cluster File System
OCFS   Oracle Cluster File System
LVM    Logical Volume Manager
NFS    Network File System (must be on certified NAS device)
Before I delve into each of these storage options, a word about file types. A regular single-instance database has
three basic types of files: database software and dump files; datafiles, spfile, control files and log files, often
referred to as "database files"; and it may have recovery files, if using RMAN. A RAC database has an additional
type of file referred to as "CRS files". These consist of the Oracle Cluster Registry (OCR) and the voting disk.
Not all of these files have to be on the shared storage subsystem. The database files and CRS files must be
accessible to all instances, so must be on the shared storage subsystem. The database software can be on the
shared subsystem and shared between nodes; or each node can have its own ORACLE_HOME. The flash recovery
area must be shared by all instances, if used.
Some storage options can't handle all of these file types. To take an obvious example, the database software and
dump files can't be stored on raw devices. This isn't important for the dump files, but it does mean that choosing
raw devices precludes having a shared ORACLE_HOME on the shared storage device.
And to further complicate the picture, no OS platform is certified for all of the shared storage options. For example,
only Linux and SPARC Solaris are supported with NFS, and the NFS must be on a certified NAS device. The
following table spells out which platforms and file types can use each storage option.
Table 2.
Platforms and file types able to use each storage option
Storage option       Platforms                            File types supported    File types not supported
-------------------- ------------------------------------ ----------------------- -----------------------------
Raw                  All platforms                        Database, CRS           Software/Dump files, Recovery
ASM                  All platforms                        Database, Recovery      CRS, Software/Dump
Certified Vendor CFS AIX, HP Tru64 UNIX, SPARC Solaris    All                     None
LVM                  HP-UX, HP Tru64 UNIX, SPARC Solaris  All                     None
OCFS                 Windows, Linux                       Database, CRS, Recovery Software/Dump files
NFS                  Linux, SPARC Solaris                 All                     None
(Note: Mike Ault and Madhu Tumma have summarized the storage choices by platform in more detail in this
excerpt from their recent book, Oracle 10g Grid Computing with RAC, which I used as one source for this table.)
Now that we have an idea of where we can use these storage options, let's examine each option in a little more
detail. We'll tackle them in order of Oracle's recommendation, starting with Oracle's least preferred, raw devices,
and finishing up with Oracle's top recommendation, ASM.
Raw devices
Raw devices need little explanation. As with single-instance Oracle, each tablespace requires a partition. You will
also need to store your software and dump files elsewhere.
Pros: You won't need to install any vendor or Oracle-supplied clusterware or additional drivers.
Cons: You won't be able to have a shared oracle home, and if you want to configure a flash recovery area, you'll
need to choose another option for it. Manageablility is an issue. Further, raw devices are a terrible choice if you
expect to resize or add tablespaces frequently, as this involves resizing or adding a partition.
NFS
NFS also requires little explanation. It must be used with a certified NAS device; Oracle has certified a number of
NAS filers with its products, including products from EMC, HP, NetApp and others. NFS on NAS can be a cost-effective alternative to a SAN for Linux and Solaris, especially if no SAN hardware is already installed.
Pros: Ease of use and relatively low cost.
Cons: Not suitable for all deployments. Analysts recommend SANs over NAS for large-scale transaction-intensive
applications, although there's disagreement on how big is too big for NAS.
Vendor CFS and LVMs
If you're considering a vendor CFS or LVM, you'll need to check the 10g Real Application Clusters Installation Guide
for your platform and the Certify pages on MetaLink. A discussion of all the certified cluster file systems is beyond
the scope of this article. Pros and cons depend on the specific solution, but some general observations can be
made:
Pros: You can store all types of files associated with the instance on the CFS / logical volumes.
Cons: Depends on CFS / LVM. And you won't be enjoying the manageability advantage of ASM.
OCFS
OCFS is the Oracle-supplied CFS for Linux and Windows. This is the only CFS that can be used with these
platforms. The current version of OCFS was designed specifically to store RAC files, and is not a full-featured CFS.
You can store database, CRS and recovery files on it, but it doesn't fully support generic filesystem operations.
Thus, for example, you cannot install a shared ORACLE_HOME on an OCFS device.
The next version of OCFS, OCFS2, is currently out in beta version and will support generic filesystem operations,
including a shared ORACLE_HOME.
Pros: Provides a CFS option for Linux and Windows.
Cons: Cannot store regular filesystem files such as Oracle software. Easier to manage than raw devices, but not as
manageable as NFS or ASM.
ASM
Oracle recommends ASM for 10g RAC deployments, although CRS files cannot be stored on ASM. In fact, RAC
installations using Oracle Database Standard Edition must use ASM.
ASM is a little bit like a logical volume manager and provides many of the benefits of LVMs. But it also provides
benefits LVMs don't: file-level striping/mirroring, and ease of manageability. Instead of running LVM software, you
run an ASM instance, a new type of "instance" that largely consists of processes and memory and stores its
information in the ASM disks it's managing.
Pros: File-level striping and mirroring; ease of manageability through Oracle syntax and OEM.
Cons: ASM files can only be managed through an Oracle application such as RMAN. This can be a weakness if you
prefer third-party backup software or simple backup scripts. Cannot store CRS files or database software.
Convert RAC instance to SINGLE instance DATABASE
converting RAC instance to SINGLE instance database
---------------------------------------------------
In this article, see how a RAC database is converted into a single-instance database.
step 1: stop instance 2 from any node
step 2: change the parameter cluster_database to false
step 3: [optional]
remove the instance information from the clusterware
[root@rac1 bin]# ./srvctl stop instance -i test2 -d test
[root@rac1 bin]# ./srvctl remove instance -i test2 -d test
Remove instance test2 from the database test? (y/[n]) y
[root@rac1 bin]#
[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [oracle] ? test1
The Oracle base for ORACLE_HOME=/u01/new/oracle/product/11.1.0/db_1 is /u01/new/oracle
[oracle@rac1 ~]$ sqlplus '/as sysdba'
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Dec 9 11:10:25 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> startup
ORA-01081: cannot start already-running ORACLE - shut it down first
SQL> show parameter cluster_database
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
SQL>
SQL> alter system set cluster_database=false scope=spfile;
System altered.
SQL> shutdown immediate
SQL> startup
Total System Global Area 481267712 bytes
Fixed Size 1300716 bytes
Variable Size 167773972 bytes
Database Buffers 306184192 bytes
Redo Buffers 6008832 bytes
Database mounted.
Database opened.
SQL> show parameter cluster_database
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
SQL>
removing the database information from clusterware
--------------------------------------------------
[root@rac1 bin]# ./srvctl status database -d test
Instance test1 is not running on node rac1
[root@rac1 bin]# ./srvctl status database -d test
Instance test1 is not running on node rac1
[root@rac1 bin]# ./srvctl stop instance -i test1 -d test
[root@rac1 bin]# ./srvctl remove instance -i test1 -d test
Remove instance test1 from the database test? (y/[n]) y
[root@rac1 bin]# ./srvctl stop database -d test
[root@rac1 bin]#
migrate from database file system to ASM
To migrate the database files from a regular file system
to ASM disk, the steps are as follows:
1.configure flash recovery area.
2.Migrate datafiles to ASM.
3.Control file to ASM.
4.Create Temporary tablespace.
5.Migrate Redo logfiles
6.Migrate spfile to ASM.
step 1:Configure flash recovery area.
SQL> connect sys/sys@prod1 as sysdba
Connected.
SQL> alter database disable block change tracking;
Database altered.
SQL> alter system set db_recovery_file_dest_size=500m;
System altered.
SQL> alter system set db_recovery_file_dest='+RECOVERYDEST';
System altered.
step 2 and 3: Migrate data files and control file
to ASM.
use RMAN to migrate the data files to ASM disk groups.
All data files will be migrated to the newly created disk group, DATA
SQL> alter system set db_create_file_dest='+DATA';
System altered.
SQL> alter system set control_files='+DATA/ctf1.dbf' scope=spfile;
System altered.
SQL> shutdown immediate
[oracle@rac1 bin]$ ./rman target /
RMAN> startup nomount
Oracle instance started
RMAN> restore controlfile from '/u01/new/oracle/oradata/mydb/control01.ctl';
Starting restore at 08-DEC-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=146 device type=DISK
channel ORA_DISK_1: copied control file copy
output file name=+DATA/ctf1.dbf
Finished restore at 08-DEC-09
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
RMAN> backup as copy database format '+DATA';
Starting backup at 08-DEC-09
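After the copy completes, the database still has to be pointed at the ASM copies, recovered, and opened; that follow-up is not shown in the transcript above, but a sketch of the usual RMAN sequence is:

```sql
-- Make the ASM image copies the live datafiles
SWITCH DATABASE TO COPY;
-- Apply any redo generated while the copy was running, then open
RECOVER DATABASE;
ALTER DATABASE OPEN;
```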
If an additional control file is required for redundancy,
you can create it in ASM as you would on any other filesystem.
SQL> connect sys/sys@prod1 as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
SQL> alter database backup controlfile to '+DATA/cf2.dbf';
Database altered.
SQL> alter system set control_files='+DATA/ctf1.dbf','+DATA/cf2.dbf' scope=spfile;
System altered.
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
SQL> startup
SQL> select name from v$controlfile;
NAME
---------------------------------------
+DATA/ctf1.dbf
+DATA/cf2.dbf
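Steps 4 and 5 (temporary tablespace and redo logs) are listed above but not shown. With DB_CREATE_FILE_DEST already pointing at +DATA, OMF will place the new files in ASM automatically; a sketch (group numbers and sizes are illustrative, and the old groups can only be dropped once they are INACTIVE):

```sql
-- Step 4: add a tempfile in ASM; OMF creates it under +DATA
ALTER TABLESPACE temp ADD TEMPFILE SIZE 100M;

-- Step 5: add new redo log groups in ASM, then retire the old ones
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 50M;
-- switch until the file-system groups show INACTIVE in v$log, then drop them
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```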
step 6:Migrate spfile to ASM:
Create a copy of the SPFILE in the ASM disk group.
In this example, the SPFILE for the migrated database will be stored as +DISK/spfile.
If the database is using an SPFILE already, then run these commands:
run {
BACKUP AS BACKUPSET SPFILE;
RESTORE SPFILE TO "+DISK/spfile";
}
If you are not using an SPFILE, then use CREATE SPFILE
from SQL*Plus to create the new SPFILE in ASM.
For example, if your parameter file is called /private/init.ora,
use the following command:
SQL> create spfile='+DISK/spfile' from pfile='/private/init.ora';
After successfully migrating all the data files
over to ASM, the old data files are no longer
needed and can be removed. Your single-instance
database is now running on ASM!
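Before deleting the old files at the OS level, it is worth confirming that every file the instance uses now lives in ASM (a sanity check against the standard v$ views):

```sql
-- Each of these should report only ASM paths (+DATA..., +RECOVERYDEST...)
SELECT name   FROM v$datafile;
SELECT name   FROM v$tempfile;
SELECT member FROM v$logfile;
SELECT name   FROM v$controlfile;
```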