Oracle Questions

Published on June 2016 | Categories: Documents | Downloads: 41 | Comments: 0 | Views: 769
4.ORACLE DATABASE ARCHITECTURE-OVERVIEW

As an Oracle DBA, you must clearly understand the concepts of the Oracle architecture. It is the essential foundation you need before you begin managing a database. In this article I will try to share my knowledge about it; I hope you find it useful.
What is an Oracle Database?
Basically, an Oracle database has two main components: the instance and the database itself. An instance consists of memory structures and background processes, whereas the database refers to the disk resources. Figure 1 shows the relationship.

Figure 1. Two main components of Oracle database
Instance
As covered above, the memory structures and background processes constitute an instance. The memory structure itself consists of the System Global Area (SGA), the Program Global Area (PGA), and an optional area, the Software Code Area. The mandatory background processes are Database Writer (DBWn), Log Writer (LGWR), Checkpoint (CKPT), System Monitor (SMON), and Process Monitor (PMON); optional background processes include Archiver (ARCn), Recoverer (RECO), and others. Figure 2 illustrates the relationships between these components in an instance.

Figure 2. The instance components

System Global Area

The SGA is the primary memory structure. When Oracle DBAs talk about memory, they usually mean the SGA. This area is broken into several parts: the Buffer Cache, the Shared Pool, the Redo Log Buffer, the Large Pool, and the Java Pool.
Buffer Cache
The buffer cache stores copies of the data blocks retrieved from the datafiles. That is, when a user retrieves data from the database, the data is stored in the buffer cache. Its size can be set via the DB_CACHE_SIZE parameter in the init.ora initialization parameter file.
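As an illustration (a sketch only; the size shown is a hypothetical value, and SCOPE=BOTH assumes the instance uses an SPFILE), the current buffer cache size can be checked and resized dynamically:

```sql
-- Show the current buffer cache size
SHOW PARAMETER db_cache_size

-- Resize the buffer cache without restarting the instance
ALTER SYSTEM SET db_cache_size = 256M SCOPE=BOTH;
```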
Shared Pool
The shared pool is broken into two smaller memory areas: the Library Cache and the Dictionary Cache. The library cache stores information about the most commonly used SQL and PL/SQL statements and is managed by a least recently used (LRU) algorithm; it also enables the sharing of those statements among users. The dictionary cache, on the other hand, stores information about object definitions in the database, such as columns, tables, indexes, users, privileges, and so on.
The shared pool size can be set via the SHARED_POOL_SIZE parameter in the init.ora initialization parameter file.
Redo Log Buffer
Each DML statement (INSERT, UPDATE, or DELETE) executed by a user generates a redo entry. What is a redo entry? It is a record of all the data changes made by users. Redo entries are stored in the redo log buffer before they are written to the redo log files. To set the size of the redo log buffer, use the LOG_BUFFER parameter in the init.ora initialization parameter file.
Large Pool
The large pool is an optional area of memory in the SGA. It relieves the burden placed on the shared pool and is also used for I/O processes. The large pool size can be set via the LARGE_POOL_SIZE parameter in the init.ora initialization parameter file.
Java Pool
As its name suggests, the Java pool services the parsing of Java commands. Its size can be set via the JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.
Program Global Area
Although the results of SQL statement parsing are stored in the library cache, the values of bind variables are stored in the PGA. Why? Because they must be private, not shared among users. The PGA is also used as a sort area.
Software Code Area
The software code area is the location in memory where the Oracle application software resides.
Oracle Background Processes
Oracle background processes are the processes behind the scenes that work together with the memory structures.
DBWn
The database writer (DBWn) process writes data from the buffer cache to the datafiles. Historically, the database writer was named DBWR, but since later Oracle versions allow more than one database writer, the name was changed to DBWn, where n is a number from 0 to 9.
LGWR
The log writer (LGWR) process is similar to DBWn: it writes the redo entries from the redo log buffer to the redo log files.


CKPT
Checkpoint (CKPT) is a process that signals DBWn to write the data in the buffer cache to the datafiles. It also updates the datafile and control file headers when a log file switch occurs.
SMON
The System Monitor (SMON) process recovers the system after a crash or instance failure by applying the entries in the redo log files to the datafiles.
PMON
The Process Monitor (PMON) process cleans up after failed processes by rolling back their transactions and releasing other resources.
Database
The database refers to the disk resources and is broken into two main structures: logical structures and physical structures.
Logical Structures
The Oracle database is divided into smaller logical units in order to manage, store, and retrieve data efficiently. The logical units are the tablespace, segment, extent, and data block. Figure 3 illustrates the relationships between these units.

Figure 3. The relationships between the Oracle logical structures
Tablespace
A tablespace is a logical grouping of database objects. A database must have one or more tablespaces. In Figure 3, we have three tablespaces: the SYSTEM tablespace, Tablespace 1, and Tablespace 2. A tablespace is composed of one or more datafiles.
Segment
A tablespace is further broken into segments. A segment stores objects of the same type. That is, every table in the database is stored in its own segment (a data segment), and every index in the database is stored in its own segment (an index segment). The other segment types are the temporary segment and the rollback segment.
Extent
A segment is further broken into extents. An extent consists of one or more data blocks. When a database object grows, a new extent is allocated to it. Unlike a tablespace or a segment, an extent cannot be named.


Data Block
A data block is the smallest unit of storage in an Oracle database. The data block size is a fixed number of bytes for a tablespace, and every block in that tablespace has the same size.
Physical Structures
The physical structures are the structures of an Oracle database (in this case, the disk files) that are not directly manipulated by users. They consist of the datafiles, redo log files, and control files.
Datafiles
A datafile is a file that corresponds to a tablespace. One datafile can belong to only one tablespace, but one tablespace can have more than one datafile.
Redo Log Files
Redo log files store the redo entries generated by DML statements. They can be used for recovery processes.
Control Files
Control files store information about the physical structure of the database, such as the datafile sizes and locations, the redo log file locations, and so on.
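The physical structures described above can be listed from the data dictionary and the dynamic performance views, for example (a sketch; run as a privileged user):

```sql
-- Datafiles and their tablespaces
SELECT file_name, tablespace_name, bytes FROM dba_data_files;

-- Redo log file members
SELECT group#, member FROM v$logfile;

-- Control file locations
SELECT name FROM v$controlfile;
```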

11.Backup and Recovery Interview Questions
Some common backup and recovery interview questions for an Oracle Database Administrator. These questions apply to both senior and junior Oracle DBAs. I have compiled them based on the feedback I received from many candidates who have attended interviews at various MNCs.
1. Which types of backups you can take in Oracle?
2. A database is running in NOARCHIVELOG mode; which types of backups can you take?
3. Can you take partial backups if the Database is running in NOARCHIVELOG mode?
4. Can you take online backups if the database is running in NOARCHIVELOG mode?
5. How do you bring the database in ARCHIVELOG mode from NOARCHIVELOG mode?
6. You cannot shut down the database even for a few minutes; in which mode should you run the database?
7. Where should you place Archive logfiles, in the same disk where DB is or another disk?
8. Can you take an online backup of a control file? If yes, how?
9. What is a Logical Backup?
10. Should you take the backup of Logfiles if the database is running in ARCHIVELOG mode?
11. Why do you take tablespaces in Backup mode?
12. What is the advantage of RMAN utility?
13. How does RMAN improve backup time?
14. Can you take Offline backups using RMAN?
15. How do you see information about backups in RMAN?
16. What is a Recovery Catalog?
17. Should you place Recovery Catalog in the Same DB?
18. Can you use RMAN without Recovery catalog?
19. Can you take Image Backups using RMAN?
20. Can you use Backupsets created by RMAN with any other utility?


21. Where does RMAN keep information about backups if you are using RMAN without a catalog?
22. You have taken a manual backup of a datafile using the OS. How will RMAN know about it?
23. You want to retain only the last 3 backups of datafiles. How do you do this in RMAN?
24. Which is more efficient: incremental backups using RMAN or incremental export?
25. Can you start and shut down the DB using RMAN?
26. How do you recover from the loss of a datafile if the DB is running in NOARCHIVELOG mode?
27. You lose one datafile and it does not contain important objects. The important objects are in other datafiles, which
are intact. How do you proceed in this situation?
28. You lost some datafiles, you don't have any full backup, and the database was running in NOARCHIVELOG mode.
What can you do now?
29. How do you recover from the loss of a datafile if the DB is running in ARCHIVELOG mode?
30. You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is 1 week old and a
partial backup of this datafile that is just 1 day old. From which backup should you restore this file?
31. You lose the controlfile. How do you recover from this?
32. The current logfile gets damaged. What can you do now?
33. What is a Complete Recovery?
34. What is cancel-based, time-based, and change-based recovery?
35. A user has accidentally dropped a table and you realize this after two days. Can you recover this table if the DB is
running in ARCHIVELOG mode?
36. Do you have to restore datafiles manually from backups if you are doing recovery using RMAN?
37. A database has been running in ARCHIVELOG mode for the last month. A datafile was added to the database last week. Many
objects were created in this datafile. After one week, the datafile gets damaged before you can take any backup. Can you
recover this datafile when you don't have any backups of it?
38. How do you recover from the loss of a controlfile if you have backup of controlfile?
39. Only some blocks are damaged in a datafile. Can you just recover these blocks if you are using RMAN?
40. Some datafiles were on a secondary disk, that disk has become damaged, and it will take some days to get a
new disk. How will you recover from this situation?
41. Have you faced any emergency situations? Tell us how you resolved them.
42. At one time you lost the parameter file accidentally and you don't have any backup. How will you recreate a new parameter
file with the parameters set to their previous values?
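As a reference for question 5 above, switching a database from NOARCHIVELOG to ARCHIVELOG mode follows this general pattern (a sketch; a clean shutdown is required, and the database must be mounted but not open):

```sql
-- Shut down cleanly, then mount (do not open) the database
SHUTDOWN IMMEDIATE
STARTUP MOUNT

-- Enable archiving and reopen
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Verify the new mode
ARCHIVE LOG LIST
```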
58.USEFUL SQL COMMANDS FOR DBA'S
SQL QUERIES TO EXTRACT INFORMATION
1.List Tablespace Fragmentation Information.
SELECT tablespace_name,COUNT(*) AS fragments,
SUM(bytes) AS total,
MAX(bytes) AS largest
FROM dba_free_space
GROUP BY tablespace_name;
2.List Free and used space in database.


SELECT sum(bytes)/1024 "free space in KB"
FROM dba_free_space;
SELECT sum(bytes)/1024 "used space in KB"
FROM dba_segments;
3.List names and default storage parameters for tablespaces.

SELECT TABLESPACE_NAME, INITIAL_EXTENT, NEXT_EXTENT, MAX_EXTENTS, PCT_INCREASE, MIN_EXTLEN
FROM DBA_TABLESPACES;
4. Which datafiles belong to which tablespace.
SELECT FILE_NAME,TABLESPACE_NAME,BYTES,AUTOEXTENSIBLE,
MAXBYTES,INCREMENT_BY
FROM DBA_DATA_FILES;
5. Check the current number of extents and blocks allocated to a segment
SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS;
6. Extent Information
SELECT segment_name, extent_id, blocks, bytes
FROM dba_extents
WHERE segment_name = '&TNAME' ;
7. List segments with fewer than 5 extents remaining
SELECT segment_name,segment_type,max_extents, extents
FROM dba_segments
WHERE extents+5 > max_extents
AND segment_type<>'CACHE';
8. List segments reaching extent limits
SELECT s.segment_name,s.segment_type,s.tablespace_name,s.next_extent
FROM dba_segments s
WHERE NOT EXISTS (SELECT 1
FROM dba_free_space f
WHERE s.tablespace_name=f.tablespace_name
HAVING max(f.bytes) > s.next_extent);
9. List table blocks, empty blocks, extent count, and chained block count
SELECT blocks as BLOCKS_USED, empty_blocks
FROM dba_tables
WHERE table_name='&TNAME';
SELECT chain_cnt AS CHAINED_BLOCKS
FROM dba_tables
WHERE table_name='&TNAME';
SELECT COUNT(*) AS EXTENT_COUNT
FROM dba_extents
WHERE segment_name='&TNAME';


10. Information about all rollback segments in the database

SELECT SEGMENT_NAME,TABLESPACE_NAME,OWNER,STATUS
FROM DBA_ROLLBACK_SEGS;
/* General Rollback Segment Information */
SELECT t1.name , t2.extents, t2.rssize, t2.optsize, t2.hwmsize, t2.xacts, t2.status
FROM v$rollname t1, v$rollstat t2
WHERE t2.usn = t1.usn ;
/* Rollback Segment Information - Active Sessions */
select t2.username, t1.xidusn, t1.ubafil, t1.ubablk, t2.used_ublk
from v$session t2, v$transaction t1
where t2.saddr = t1.ses_addr ;

11. Statistics of the rollback segments currently used by instance
SELECT T1.NAME , T2.EXTENTS, T2.RSSIZE, T2.OPTSIZE, T2.HWMSIZE,
T2.XACTS, T2.STATUS
FROM V$ROLLNAME T1, V$ROLLSTAT T2
WHERE T1.USN = T2.USN AND
T1.NAME LIKE '%RBS%';
12. Active sorts in instance
SELECT T1.USERNAME, T2.TABLESPACE, T2.CONTENTS, T2.EXTENTS, T2.BLOCKS
FROM V$SESSION T1, V$SORT_USAGE T2
WHERE T1.SADDR = T2.SESSION_ADDR ;
13. Index & constraint information
SELECT index_name,table_name,uniqueness
FROM dba_indexes
WHERE index_name in
(SELECT constraint_name
FROM dba_constraints
WHERE table_name = '&TNAME'
AND constraint_type in ('P','U')) ;
14. List tables and synonyms
set pagesize 0;
select 'TABLE:',table_name,'current' from user_tables
union
select 'SYNONYM:',synonym_name,table_owner from user_synonyms
order by 1,2 ;

15. Constraint columns

SELECT constraint_name, table_name, column_name
FROM dba_cons_columns
WHERE table_name = '&TNAME'
ORDER BY table_name, constraint_name, position ;
16. Tuning: library cache
Glossary: pins = number of times an item in the library cache was executed
reloads = number of library cache misses on execution
Goal: keep the ratio (reloads/pins) low, well under 1%
Tuning parameter: adjust SHARED_POOL_SIZE in the initxx.ora file, increasing by small increments
SELECT SUM(PINS) EXECS,
SUM(RELOADS) MISSES,
SUM(RELOADS)/SUM(PINS) HITRATIO
FROM V$LIBRARYCACHE ;
17. Tuning: data dictionary cache

Glossary:
gets = number of requests for the item
getmisses = number of requests for items in the cache that missed
Goal:
keep the ratio (getmisses/gets) low
Tuning parameter:
adjust SHARED_POOL_SIZE in the initxx.ora file, increasing by small increments
SELECT SUM(GETS) HITS,
SUM(GETMISSES) LIBMISS,
SUM(GETMISSES)/SUM(GETS) RCRATIO
FROM V$ROWCACHE ;
18. Tuning: buffer cache
Calculation:
buffer cache hit ratio = 1 - (phy reads/(db_block_gets + consistent_gets))
Goal:
get hit ratio in the range 85 - 90%
Tuning parm:
adjust DB_BLOCK_BUFFERS in the initxx.ora file, increasing by small increments
SELECT NAME, VALUE
FROM V$SYSSTAT WHERE NAME IN
('db block gets','consistent gets','physical reads');


19. Tuning: sorts
Goal:
Increase number of memory sorts vs disk sorts
Tuning parm:

adjust SORT_AREA_SIZE in the initxx.ora file, increasing by small increments

SELECT NAME, VALUE
FROM V$SYSSTAT
WHERE NAME LIKE 'sorts%';
20. Tuning: physical file placement
Informational in checking relative usages of the physical data files.

SELECT NAME, PHYRDS,PHYWRTS
FROM V$DATAFILE DF, V$FILESTAT FS
WHERE DF.FILE#=FS.FILE# ;
21. Tuning: rollback segments
Goal:
Try to avoid increasing 'undo header' counts
Tuning method:
Create more rollback segments, try to reduce counts

SELECT CLASS, COUNT
FROM V$WAITSTAT
WHERE CLASS LIKE '%undo%' ;
22. Archive Log Mode Status
/* Status of Archive Log Subsystem */
ARCHIVE LOG LIST
/* log mode of databases */
SELECT name, log_mode FROM v$database;
/* log mode of instance */
SELECT archiver FROM v$instance;
23. List log file information
These queries list the status / locations of the redo log files.

select group#, member, status from v$logfile ;
select group#,thread#,archived,status from v$log ;
24. A Simple Monitoring Tool
This tool loops a specified number of times, displaying memory
usage along with user process counts for a specific username.


--=================================================
--- proc_ora_monitor
--- parm1: username to count
-- parm2: number of loops, 5 sec duration
-----=================================================
set serveroutput on ;
create or replace procedure
proc_ora_monitor ( user1 in varchar, reps1 in integer ) is
i number ;
usercount1 number ;
memory1 number ;
date1 varchar(20) ;
msg varchar(99) ;
begin
i := 0 ;
while ( i < reps1 ) loop
date1 := to_char(SYSDATE, 'HH:MI:SS PM') ;
msg := 'Time: ' || date1 ;
select count(1)
into usercount1
from sys.v_$session
where username = user1 ;
msg := msg || ', ' || user1 || ': ' || usercount1 ;
select round(sum(bytes)/1024/1024, 2)
into memory1
from sys.v_$sgastat
where pool = 'shared pool'
and name = 'free memory' ;
msg := msg || ', free mb = ' || memory1 ;
select round(sum(bytes)/1024/1024, 2)
into memory1
from sys.v_$sgastat
where pool = 'shared pool'
and name = 'processes' ;
msg := msg || ', processes mb = ' || memory1 ;
dbms_output.put_line(msg) ;
dbms_lock.sleep(5) ;
i := i + 1 ;
end loop ;
end;
/
show errors ;
execute proc_ora_monitor('SILVERUSER',2) ;
exit
25. List Space Allocated by Table

set pagesize 500
set linesize 77


For partitioned tables, you may see more than one tablespace
assigned to the table name. Note that phantom usage will appear
if the recycle bin (10g and later) is not cleared.

column segment_name format a35
select segment_name, tablespace_name, sum(bytes/(1024*1024)) "MB"
from dba_segments
where owner = 'HOLTDW' and segment_name not like 'BIN$%'
group by segment_name, tablespace_name
order by segment_name, tablespace_name ;
26. Tablespace types, and availability of data files
SELECT TABLESPACE_NAME, CONTENTS, STATUS
FROM DBA_TABLESPACES;
27. List sessions with active transactions
SELECT s.sid, s.serial#
FROM v$session s
WHERE s.saddr in
(SELECT t.ses_addr
FROM V$transaction t, dba_rollback_segs r
WHERE t.xidusn=r.segment_id
AND r.tablespace_name='RBS');

28. Tuning: dynamic extension
SELECT NAME, VALUE
FROM V$SYSSTAT
WHERE NAME = 'recursive calls' ;
29. Check the free extents in each tablespace
SELECT TABLESPACE_NAME, COUNT(*), MAX(BLOCKS), SUM(BLOCKS)
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME ;
30. Explain Plan: syntax
Below is sample syntax for explain plan ( getting output from the optimizer )

delete from plan_table
where statement_id = '9999';
commit;
COL operation FORMAT A30
COL options FORMAT A15
COL object_name FORMAT A20
/* ------ Your SQL here ------*/
EXPLAIN PLAN set statement_id = '9999' for
select count(1) from asia_monthly_pricing_data where order_id > 5000
/
/*----------------------------*/
select operation, options, object_name
from plan_table
where statement_id = '9999'
start with id = 0
connect by prior id=parent_id and prior statement_id = statement_id;


exit
/

6.DBA-BASIC QUESTIONS

1. What is the System Global Area?
The System Global Area (SGA) is a structure created in memory when the Oracle instance is started. It consists of individual memory
structures called the Shared Pool, Large Pool, Java Pool, Database Buffer Cache, Redo Log buffer and Streams Pool.

2. What does the COMPATIBLE parameter determine?
It determines the compatibility of the database with a certain version of Oracle. If you set the compatibility of a database to, say, 10.1, the
database features that will be enabled and available will be those as of Oracle version 10.1.

3. What does the library cache contain?
The library cache is a part of the shared pool. When users issue statements in the database, a parse tree and an execution plan are created. The
execution plan determines the steps that will be followed to obtain the requested data. The library cache stores recently executed SQL
statements (the SQL text), their parse code, and their execution plans. This is done so that when similar statements are repeated, Oracle already
has the parse code and execution plan available, which speeds up statement execution.

4. What does a user need to be able to create objects such as tables, procedures, sequences,
etc. in the database?
A user must be granted the required system privileges by the DBA in order to create objects in the database. Without the privilege a user
will be displayed an "insufficient privileges" error.

5. What is a role? Explain a benefit of creating a role.
A role is an object that may be created to simplify privilege management. Roles contain privileges. Initially when created, a role has a
name and does not contain any privileges. You can add privileges to a role by using the GRANT privilege TO rolename; command. If
many users require the same set of privileges, instead of issuing GRANT statements to grant the privileges to the users, you can grant the
privileges to the role, and then grant the role to the user. On similar lines, if you wish to revoke a privilege from a group of users, revoking
it from the role will in a single command revoke it from all users who possessed that role.
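The pattern described above can be sketched as follows (the role and user names here are hypothetical examples):

```sql
-- Create a role and load it with privileges once
CREATE ROLE app_developer;
GRANT CREATE SESSION, CREATE TABLE, CREATE PROCEDURE TO app_developer;

-- Grant the whole set to a user in one statement
GRANT app_developer TO scott;

-- Revoking a privilege from the role revokes it from every grantee of the role
REVOKE CREATE PROCEDURE FROM app_developer;
```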

6. Name some networking files that may be found on the client-side.
Networking files found on the client side include tnsnames.ora and sqlnet.ora.

7. For client-server connectivity to work, which process must be started on the server side for
a remote client connection such as " connect scott/tiger@sales " to work.
The process that must be started on the server side to listen for incoming client connection requests is the listener process. The listener
should be started and listening on behalf of the relevant databases.

8. When creating a user what does the DEFAULT TABLESPACE clause indicate?
The default tablespace for a user is the tablespace in which all objects created by the user will reside. That is, when a user creates, say, a
table without explicitly mentioning the tablespace in which it must be created, the table will automatically be placed in the default tablespace
assigned to the user.
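For instance (a sketch with hypothetical user, password, and tablespace names):

```sql
-- Objects created by this user default to the USERS tablespace
CREATE USER appuser IDENTIFIED BY secret
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO appuser;
```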


9. In Oracle 10g, what is a BIGFILE tablespace?
The BIGFILE tablespace feature was introduced in Oracle 10g. A BIGFILE tablespace can at a physical level be associated with only a
single datafile. This datafile can be very large depending on the block size. The advantage being the number of datafiles that have to be
managed would be fewer.
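A minimal sketch (the path, name, and size are hypothetical):

```sql
-- A BIGFILE tablespace has exactly one, potentially very large, datafile
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '/u01/oradata/orcl/big_data01.dbf' SIZE 10G
  AUTOEXTEND ON;
```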

10. What command can you issue to permanently delete the data from a table, without
having the ability to roll back the action?
The command that permanently deletes the data in a table without giving you the ability to roll back is the TRUNCATE TABLE command. This is a
Data Definition Language command that cannot be undone.

11. In Oracle 10g, what happens when a table is dropped?
When a table is dropped in Oracle 10g it is transferred to a recycle bin. As long as it is in the recycle bin you can restore the table by
issuing the FLASHBACK TABLE tablename TO BEFORE DROP command.
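This behaviour can be sketched as follows (the table name is hypothetical; it assumes a 10g or later database with the recycle bin enabled):

```sql
DROP TABLE orders_archive;

-- The dropped table is visible in the recycle bin
SELECT object_name, original_name FROM recyclebin;

-- Restore it as long as it has not been purged
FLASHBACK TABLE orders_archive TO BEFORE DROP;
```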

12.Why is it necessary and important to analyze tables? How has analyzing become simpler
in Oracle 10g?
Analyzing is the process of generating statistics on the objects of the database. Statistics help the optimizer determine optimal execution
plans. In Oracle 10g, statistics gathering is performed automatically by the Scheduler: a scheduled job runs every
night and during the weekends.

13. Explain briefly the logical structure of a database.
The logical structure of a database is a structure defined only within the Oracle database. This structure is not visible at an operating
system level. The logical structure indicates that a database is made up of tablespaces. A tablespace is a means of separating the different
types of data in a database. For example, a database may contain permanent, temporary, undo, LOB data, etc. To logically separate these
types of data you may create a tablespace. Tablespaces are made up of segments. Segments are objects of the database. For example, a table is a
data segment, an index is an index segment and so on. Segments are made up of extents. An extent is a unit of space allocation. Space is
allocated to segments in the form of extents. Extents in turn are made up of Oracle blocks. Oracle blocks are the smallest unit of
input/output in a database.

14. What is an Oracle block?
An Oracle block is the smallest unit of read/write or I/O in a database. Oracle block size is defined by the DB_BLOCK_SIZE
initialization parameter. This parameter is constant and cannot be modified once set without re-creating the database. The Oracle block
size should be a multiple of the operating system block size. Every block has a header containing identifying information. The data in a
block grows in a bottom-up manner.
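The block size in effect for a database can be checked with a quick query (a sketch; SHOW PARAMETER db_block_size in SQL*Plus works as well):

```sql
SELECT value FROM v$parameter WHERE name = 'db_block_size';
```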

15. What is the shared memory management feature in Oracle 10g?
Oracle shared memory management allows Oracle to dynamically size many of the memory structures in the System Global Area
(SGA). Such memory structures are called auto-tuned; they are the Shared Pool, Database Buffer Cache, Large Pool, Streams Pool and Java
Pool. The sizes of these memory structures may vary during database operation, depending on the need at a given moment. If, for example,
users are performing actions that require more large pool space, Oracle will dynamically increase the size of the large pool by shrinking
another auto-tuned memory structure that is not in need of the space. This feature simplifies memory management, in that static sizes
that could cause insufficient-memory errors or over-allocation are avoided.


16. Explain archiving. Why is it important in a production environment?
Archiving is the process of transferring the contents of the redo log file into an offline file called the "archive log file". This is done before
the contents of the redo log file can be overwritten. Archiving can be performed automatically by the ARCH background process or can
be done manually by the DBA. Archiving is mandatory in a production environment, because changes made to the database need to be
available in the event of media recovery. Without these changes complete recovery will not be possible.

17.What is a segment?
A segment is an object of the database. A segment can be a data segment, undo segment, temporary segment, LOB
segment, etc. Segments are created in a tablespace and are part of a user's schema.

18.What is meant by a tablespace with a non-standard block size? Name one purpose for
this functionality.
A tablespace whose block size is different from the default blocksize for the database (defined by the DB_BLOCK_SIZE initialization
parameter) is called a tablespace of non-standard block size. To be able to create such a tablespace, you must configure a corresponding
buffer pool to hold the blocks that are read from the objects of this tablespace. The tablespace is created with a non-standard block size
using the BLOCKSIZE keyword. The feature was introduced to facilitate the transportable tablespace features that allows the movement
of tablespaces across databases.
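A minimal sketch of the two steps described above (the names, sizes, and path are hypothetical, and this assumes a database with an 8K standard block size):

```sql
-- A buffer cache for 16K blocks must exist first
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Then a tablespace with the non-standard block size can be created
CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/orcl/ts_16k01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```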

19. Briefly explain the ASM feature in Oracle 10g.
ASM stands for Automatic Storage Management. It is a feature introduced in Oracle 10g to manage the physical files of the database more
efficiently. Prior to Oracle 10g, files created on a filesystem required manual administration, such as looking out for hotspot files and
moving files around on the disks to avoid I/O contention. ASM removes many of these menial tasks by allowing Oracle to automatically
manage the physical files on disk. The DBA is responsible for grouping a number of disks together (called a diskgroup). These disks
are then utilized by the ASM feature, which ensures striping and mirroring. The disks that belong to a diskgroup in ASM are called ASM
disks, and the files created in ASM are called ASM files.

20. What is the Automatic Workload Repository (AWR)?
The Oracle database constantly monitors and gathers statistical data about the current and previous state of the database. This
statistical data can be in the form of raw statistics, metrics, SQL statistics, and active session history. Statistical data gathered in
memory is periodically transferred by a background process called MMON to repository tables on disk. These repository tables that hold
statistical information are collectively called the Automatic Workload Repository. The contents of the repository are used by the
components of the "Common Manageability Infrastructure", namely the Automatic Database Diagnostic Monitor (ADDM), Server-Generated
Alerts, the Advisory Framework, and so on.
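For reference, a manual AWR snapshot can be taken and recent snapshots listed like this (a sketch; assumes appropriate privileges and licensing):

```sql
-- Force an AWR snapshot outside the normal schedule
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- List recent snapshots
SELECT snap_id, begin_interval_time
FROM dba_hist_snapshot
ORDER BY snap_id;
```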

60.TABLESPACES and DATAFILES-INTERVIEW ORIENTED PRACTICE QUESTIONS
1.How will you create a locally managed temporary tablespace?
2.How will you create a dictionary managed temporary tablespace?
3.How will you find the current users who are using temporary tablespace segments?
4.Is it possible for multiple users/transactions to share a single temporary segment?
5.Is it possible for multiple users/transactions to share a single temporary extent?
6.How will you find the datafiles of a temporary tablespace?
7.All locally managed temporary tablespaces are of uniform size. True or false?
8.Is it possible to make a temporary tablespace AUTOALLOCATE?
9.Write the command to take a tablespace online/offline.
10.How will you drop a temporary datafile?
11.How will you drop a permanent datafile? Give two methods.
12.How will you convert an existing dictionary managed permanent tablespace to a temporary tablespace?
13.How will you allocate a non-standard block size for a table in a tablespace? And what is its prerequisite?


14.What are the views to check the free space in the database?
15.How will you check the redo generated for a temporary tablespace?
16.How will you take the system and temporary tablespaces offline?
17.Is media recovery required if a tablespace is taken offline immediate?
18.How will you convert a tablespace to read only?
19.If you have given the command to make a tablespace offline normal but it is not happening, and it is in transactional read-only mode,
how will you find which transactions are preventing the conversion?
20.How will you drop a tablespace?
21.If you drop a tablespace containing 4 datafiles, how many datafiles will be dropped at a time by giving a single drop tablespace
command?
22.If the database is not in OMF, how will you drop all the datafiles of a tablespace along with dropping the tablespace itself?
23.If the database is not in OMF, how will you drop all the datafiles of a tablespace without dropping the tablespace itself? (Hint: see if there is
any relationship between tablespace dropping and OMF; can a tablespace exist without any datafile?)
24.How will you convert a dictionary managed tablespace to locally managed?
25.How will you convert a locally managed tablespace to dictionary managed? What are the limitations?
26.If the system tablespace is locally managed, can any user tablespace be dictionary managed?
27.If you are given a database with a locally managed system tablespace, is it certain that there is no dictionary managed tablespace in the
database?
28.Is it possible to create a dictionary managed tablespace in 10g?
29.If a database is created without a default temporary tablespace, which tablespace is used for sorting by a user who is not
allocated any temporary tablespace?
30.Which parameter defines the max number of datafiles in a database?
31.Can a single datafile be allocated to two tablespaces? Why?
32.How will you check if a datafile is autoextensible?
33.If you find that a certain tablespace is 90% full, what are the options (write the commands) you have to deal with it?
34.What is the relation between the db_files and maxdatafiles parameters?
35.Write the command to make all datafiles of a tablespace offline without making the tablespace itself offline.
36.Is it possible to make undo tablespace offline?
37.In 10g,How to allocate more than one temporary tablespace as default temporary tablespace to a single user?
38.While creating locally manmaged tablespace,what will happen if you give storage parameter as - minextent=initial=next and
pctincrease=0?
39.While creating locally manmaged tablespace,what will happen if you give storage parameter as- minextent=initial=next and
pctincrease=2?
40.Is it possible to make tempfiles as read only?
41.What is the relationship between initial extents, next extent,initial extent and pctincrease?
42.Is there any relationship between pctfree,pctused and pctincrease?
43.how will you find tha system wide
1)default permanent tablespace
2)default temporary tablespace
3) Database time zone (write two methods)

44.If you make a tablespace offline immediate and that particular redo log get corrupted, will it affect the procedure to make that
tablespace online?
45.Is it possible to make system and temporary tablespace with non-standard blocksize?
46.How will you list all the tablespaces and their status in a database?
47.If you are given a database,how will you know whether it is locally managed or dictionary managed?
48.If you are given a databse,how will you know how many datafiles each tablespace contain?
49.How will you know which temporaray tablepsace is allocated to which user?
50.Write two parameters you have to give to make OMF dadafiles and logfiles in a database?
51.What is the common column between dba_tablespaces and dba_datafiles?
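Many of the "how will you find/list" questions above come down to querying the data dictionary. A hedged sketch of a few such queries (column lists and exact answers may vary by version):

```sql
-- Q46: list all tablespaces and their status
SELECT tablespace_name, status, contents, extent_management
  FROM dba_tablespaces;

-- Q48: how many datafiles each tablespace contains
SELECT tablespace_name, COUNT(*) AS datafile_count
  FROM dba_data_files
 GROUP BY tablespace_name;

-- Q49: which temporary tablespace is allocated to which user
SELECT username, temporary_tablespace FROM dba_users;

-- Q32: check whether a datafile is autoextensible
SELECT file_name, autoextensible FROM dba_data_files;
```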

62.ORACLE 10g DATABASE-Performance Tuning FAQ

Why and when should one tune?
One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable
and allows the database to be monitored and adjusted to increase its performance.
One should do performance tuning for the following reasons:

- The speed of computing might be wasting valuable human time (users waiting for response);
- Enable your system to keep up with the speed at which business is conducted; and
- Optimize hardware usage to save money (companies are spending millions on hardware).

Although this site is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.

Where should the tuning effort be directed?
Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For
example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.

- Database Design (if it's not too late):
Poor system performance usually results from a poor database design. One should generally normalize to 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.

- Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.

- Memory Tuning:
Properly size your database buffers (shared pool, buffer cache, log buffer, etc.) by looking at your wait events, buffer hit ratios, system swapping and paging, etc. You may also want to pin large objects into memory to prevent frequent reloads.

- Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.

- Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate them where possible.

- Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific OS.


What tools/utilities does Oracle provide to assist with performance tuning?
Oracle provides the following tools/utilities to assist with performance monitoring and tuning:

- ADDM (Automated Database Diagnostics Monitor), introduced in Oracle 10g
- TKPROF
- Statspack
- Oracle Enterprise Manager - Tuning Pack (cost option)
- The old UTLBSTAT.SQL and UTLESTAT.SQL - begin and end stats monitoring

When is cost based optimization triggered?
It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement
does not have statistics, and optimizer dynamic sampling isn't performed, Oracle has to revert to rule-based optimization for
that statement. So you really want all tables to have statistics right away; it won't help much to just have the larger tables analyzed.
Generally, the CBO can change the execution plan when you:

- Change statistics of objects by doing an ANALYZE;
- Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
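Both statistics interfaces can be sketched as follows (the table name is hypothetical; ANALYZE is what the text above mentions, while DBMS_STATS is the interface Oracle recommends in 9i/10g):

```sql
-- Classic interface, as mentioned above
ANALYZE TABLE emp COMPUTE STATISTICS;

-- Recommended interface in 9i/10g
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'EMP');
```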

How can one optimize %XYZ% queries?
It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the
table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the
entire table.
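A hedged sketch of this approach (the table and index names are hypothetical; INDEX_FFS forces a fast full scan of the index):

```sql
-- Resolve the %XYZ% predicate by reading the index, not the table
SELECT /*+ INDEX_FFS(e emp_ename_idx) */ ename
  FROM emp e
 WHERE ename LIKE '%SON%';
```

This only helps when every column the query needs is present in the index, so the table itself never has to be visited.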

Where can one find I/O statistics per table?
The STATSPACK and UTLESTAT reports show I/O per tablespace. However, they do not show which tables in the tablespace have the
most I/O operations.
The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After
executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.
For more details, look at the header comments in the catio.sql script.

My query was fine last week and now it is slow. Why?
The likely cause of this is that the execution plan has changed. Generate a current explain plan of the offending query and compare it
to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:

- Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using RBO and now CBO?)
- Has OPTIMIZER_MODE been changed in INIT.ORA?
- Has the DEGREE of parallelism been defined/changed on any table?
- Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
- Have the statistics changed?
- Has the SPFILE/INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
- Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
- Have any other INIT.ORA parameters been changed?

What do you think the plan should be? Run the query with hints to see if this produces the required performance.
It can also happen because of a very high high-water mark, typically when a table was big but now contains only a couple of records.
Oracle still needs to scan through all the blocks to see if they contain data.

Does Oracle use my index or not?
One can use the index monitoring feature to check if indexes are used by an application or not. When the MONITORING USAGE property
is set for an index, one can query the v$object_usage view to see if the index is being used. Here is an example:
SQL> CREATE TABLE t1 (c1 NUMBER);
Table created.
SQL> CREATE INDEX t1_idx ON t1(c1);
Index created.
SQL> ALTER INDEX t1_idx MONITORING USAGE;
Index altered.
SQL>
SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;

TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES NO

SQL> SELECT * FROM t1 WHERE c1 = 1;
no rows selected

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;

TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES YES

To reset the values in the v$object_usage view, disable index monitoring and re-enable it:
ALTER INDEX indexname NOMONITORING USAGE;
ALTER INDEX indexname MONITORING USAGE;

Why is Oracle not using the damn index?
This problem normally only arises when the query plan is being generated by the Cost Based Optimizer (CBO). The usual cause is
that the CBO calculates that executing a full table scan would be faster than accessing the table via the index. Fundamental things
that can be checked are:

- USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
- USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, making the index less desirable.
- USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
- Decrease the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make the cost of a FULL TABLE SCAN cheaper.

Remember that you MUST supply the leading column of an index for the index to be used (unless you use a FAST FULL SCAN or SKIP
SCANNING).
There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If,
after checking the above, you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the
query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the
statistics. Compare this to the explain plan when not using an index.
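An index hint can be sketched like this (the table and index names are hypothetical):

```sql
-- Ask the CBO to use a specific index, then compare the resulting plan
-- and statistics against the unhinted run of the same query.
SELECT /*+ INDEX(e emp_ename_idx) */ empno, ename
  FROM emp e
 WHERE ename LIKE 'S%';
```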

When should one rebuild an index?
You can run the ANALYZE INDEX ... VALIDATE STRUCTURE command on the affected indexes - each invocation of this command creates
a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the
view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of DEL_LF_ROWS to
LF_ROWS.
For example, you may decide that an index should be rebuilt if more than 20% of its rows are deleted:
select del_lf_rows * 100 / decode(lf_rows,0,1,lf_rows) from index_stats
where name = 'INDEX_NAME';
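The "copy the view into a local table after each ANALYZE" step can be sketched as follows (the index name and history table are hypothetical):

```sql
-- First run: validate, then snapshot INDEX_STATS with a timestamp
ANALYZE INDEX scott.emp_idx VALIDATE STRUCTURE;
CREATE TABLE index_stats_history AS
  SELECT SYSDATE AS snap_time, s.* FROM index_stats s;

-- Subsequent runs: each ANALYZE overwrites INDEX_STATS, so append first
ANALYZE INDEX scott.emp_idx VALIDATE STRUCTURE;
INSERT INTO index_stats_history
  SELECT SYSDATE, s.* FROM index_stats s;
```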

How does one tune Oracle wait event XYZ?
Here are some of the wait events from the V$SESSION_WAIT and V$SYSTEM_EVENT views:

- db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
- buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i); analyze contention from SYS.V$BH.
- log buffer space: Increase the LOG_BUFFER parameter or move log files to faster disks.
- log file sync: If this event is in the top 5, you are committing too often (talk to your developers).
- log file parallel write: Deals with flushing the redo log buffer out to disk. Your disks may be too slow or you have an I/O bottleneck.
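To see which events dominate on a given system, a hedged starting point is a query like the following (event names and which events count as "idle" vary by version, so the exclusion list is illustrative only):

```sql
-- System-wide wait events, worst first, excluding a few common idle events
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event NOT IN ('SQL*Net message from client', 'rdbms ipc message',
                     'pmon timer', 'smon timer')
 ORDER BY time_waited DESC;
```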

What is the difference between db file sequential and scattered reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in
hundredths of a second for Oracle 8i releases and below, and in thousandths of a second (milliseconds) for Oracle 9i and above. Most people confuse these events with
each other as they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache.

db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single
block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers
(P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read:
Similar to db file sequential reads, except that the session is reading multiple data blocks and scatters them into different discontinuous
buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. Rarely, data from full table scans could be
fitted into a contiguous buffer area; those waits would then show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:
prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';

How does one tune the Redo Log Buffer?
The size of the redo log buffer is determined by the LOG_BUFFER parameter in your SPFILE/INIT.ORA file. The default setting is
normally 512 KB or (128 KB * CPU_COUNT), whichever is greater. This is a static parameter and its size cannot be modified after
instance startup.
SQL> show parameters log_buffer

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
log_buffer                           integer     262144


When a transaction is committed, info in the redo log buffer is written to a Redo Log File. In addition to this, the following conditions will
trigger LGWR to write the contents of the log buffer to disk:


- Whenever the log buffer is MIN(1/3 full, 1 MB) full; or
- Every 3 seconds; or
- When a DBWn process writes modified buffers to disk (checkpoint).
Larger LOG_BUFFER values reduce log file I/O, but may increase the time OLTP users have to wait for write operations to complete. In
general, values between the default and 1 to 3MB are optimal. However, you may want to make it bigger to accommodate bulk data
loading, or to accommodate a system with fast CPUs and slow disks. Nevertheless, if you set this parameter to a value beyond 10M, you
should think twice about what you are doing.
SQL> SELECT name, value
  2    FROM SYS.v_$sysstat
  3   WHERE name IN ('redo buffer allocation retries',
  4                  'redo log space wait time');

NAME                                 VALUE
------------------------------------ -----
redo buffer allocation retries           3
redo log space wait time                 0

Statistic "REDO BUFFER ALLOCATION RETRIES" shows the number of times a user process waited for space in the redo log buffer.
This value is cumulative, so monitor it over a period of time while your application is running. If this value is continuously increasing,
consider increasing your LOG_BUFFER (but only if you do not see checkpointing and archiving problems).
"REDO LOG SPACE WAIT TIME" shows cumulative time (in 10s of milliseconds) waited by all processes waiting for space in the log
buffer. If this value is low, your log buffer size is most likely adequate.
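A hedged way to put the retries statistic in context is to express it as a percentage of total redo entries; a commonly quoted rule of thumb (not an Oracle-documented threshold) is to keep this well under 1%:

```sql
-- Retries as a percentage of redo entries; GREATEST() guards against
-- division by zero on a freshly started instance
SELECT r.value AS retries, e.value AS entries,
       ROUND(r.value * 100 / GREATEST(e.value, 1), 4) AS retry_pct
  FROM v$sysstat r, v$sysstat e
 WHERE r.name = 'redo buffer allocation retries'
   AND e.name = 'redo entries';
```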

63.STEPS TO MIGRATION ASM-NON ASM INSTANCE VICE VERSA
Steps To Migrate Database From Non-ASM to ASM And Vice-Versa
This article describes the steps to migrate a database from Non-ASM to ASM and vice-versa.
Step 1: Edit your init.ora to point the control file location to ASM
E.g.: if your disk group name is '+ASM_Disk_group':
control_files='+ASM_Disk_group'
Step 2: Startup the database in nomount state
SQL> Startup nomount
Step 3: From RMAN session copy the control file from old location to new location
RMAN> RESTORE CONTROLFILE FROM '/u01/TST/control01.ctl';

Here /u01/TST/control01.ctl is the old location of the control file.

Step 4: From a SQL session, mount the database
SQL> ALTER DATABASE MOUNT;

Step 5: Using RMAN, copy the datafiles from non-ASM to ASM
RMAN> BACKUP AS COPY DATABASE FORMAT '+ASM_Disk_group';
Step 6: Using RMAN, rename the datafiles using the following command
RMAN> SWITCH DATABASE TO COPY;
Step 7: Open the database with RESETLOGS
SQL> ALTER DATABASE OPEN RESETLOGS;
Step 8: Do the following maintenance
SQL> ALTER DATABASE DROP LOGFILE '';
SQL> ALTER DATABASE ADD LOGFILE '+ASM_Disk_group';
SQL> ALTER DATABASE SWITCH LOGFILE;
SQL> ALTER DATABASE DROP LOGFILE '';
SQL> ALTER DATABASE ADD LOGFILE '+ASM_Disk_group';
... repeat for *all* online redo log members.
Steps of migration from ASM to non-ASM
1. Start your database with ASM.
2. Create pfile from spfile.
3. Edit pfile to reflect controlfile name in file system location.
4. SQL> Startup nomount.
5. Use RMAN to copy the control file from ASM to NON-ASM.
RMAN> RESTORE CONTROLFILE FROM '<file>';
6. SQL> alter database mount;
7. Use RMAN to copy the database from ASM to NON-ASM.
RMAN> BACKUP AS COPY DATABASE format '/u01/roy1out/%U';
allocate channel c1 type disk; allocate channel c2 type disk; allocate channel c3 type disk;
copy datafile '+DATABASE_DG/roy/datafile/system.256.1' to '/u01/roy1out/system.256.1';
copy datafile '+DATABASE_DG/roy/datafile/undotbs1.258.1' to '/u01/roy1out/undotbs1.258.1';
copy datafile '+DATABASE_DG/roy/datafile/sysaux.257.1' to '/u01/roy1out/sysaux.257.1';
copy datafile '+DATABASE_DG/roy/datafile/users.259.1' to '/u01/roy1out/users.259.1';
copy datafile '+DATABASE_DG/roy/example01.dbf' to '/u01/roy1out/example01.dbf';
copy datafile '+DATABASE_DG/roy/datafile/undotbs2.265.1' to '/u01/roy1out/undotbs2.265.1';
copy datafile '+DATABASE_DG/roy/datafile/asm_ts.269.1' to '/u01/roy1out/asm_ts.269.1';

8. From RMAN:
RMAN> SWITCH DATABASE TO COPY;

9. Recreate the redo logs as before (see "Step 8: Do the following maintenance" above).

64.COMMON USED UNIX COMMANDS FOR DBA'S
Common UNIX Commands Available on Most UNIX Platforms
======================================================
1. man - manual pages. The man command displays information from the reference manuals. It displays complete manual pages that you
select by name, or one-line summaries selected either by keyword (-k) or by the name of an associated file (-f). If no manual page is
located, man prints an error message.
2. passwd - change login password and password attributes
3. date - The date utility writes the date and time to standard output or attempts to set the system date and time. By default, the current
date and time will be written.
4. who - The who utility can list the user's name, terminal line, login time, elapsed time since activity occurred on the line, and the process ID of the command interpreter (shell) for each current UNIX system user.
5. whoami - lists who you are (HP, AIX, ...); use "who am i" on Sun.
6. cal - The cal utility writes a Gregorian calendar to standard output. If the year operand is specified, a calendar for that year is written. If
no operands are specified, a calendar for the current month is written.
7. pwd - pwd writes an absolute path name of the current working directory to standard output.
8. cd - The cd utility will change the working directory of the current shell execution environment.
9. ls - For each file that is a directory, ls lists the contents of the directory; for each file that is an ordinary file, ls repeats its name and any
other information requested.
10. more - The more utility is a filter that displays the contents of a text file on the terminal, one screenful at a time. It normally pauses
after each screenful.
11. cat - cat reads each file in sequence and writes it on the standard output.
12. mkdir - The mkdir command creates the named directories in mode 777 (possibly altered by the file mode creation mask, umask(1)).


13. mv - In the first synopsis form, the mv utility moves the file named by the source operand to the destination specified by the
"target_file". Source and "target_file" may not have the same name.
14. cp - The cp utility will copy the contents of source_file to the destination path named by "target_file".
15. rm - The rm utility removes the directory entry specified by each file argument.
16. rmdir - The rmdir utility removes empty directories.
17. chmod - change the permission of a file.
Ex: chmod +x tempfile (add execute permission)
chmod u+x tempfile (add execute for user only)
chmod 6755 oracle (set the setuid bit on)
18. grep - The grep utility searches files for a pattern and prints all lines that contain that pattern. It uses a compact nondeterministic algorithm.
19. find - search for files.
Ex: find . -name sqlplus -print (find the full pathname of sqlplus starting from the current directory)
find . -name '*sql*' -print (find the full pathname of file where 'sql' is in its name)
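The grep and find utilities above are often combined; for instance, a DBA scanning a tree of log files for ORA- errors. The directory and file names below are hypothetical, created only so the pipeline has something to act on:

```shell
# Scratch directory with a sample "alert log" (hypothetical paths)
mkdir -p /tmp/demo_logs/sub
printf 'ORA-00600: internal error\nall fine here\n' > /tmp/demo_logs/sub/alert_demo.log
printf 'nothing to see\n' > /tmp/demo_logs/sub/other.txt
# Find every *.log file under the tree and list those containing ORA- errors
find /tmp/demo_logs -name '*.log' -exec grep -l 'ORA-' {} \;
```

Only the matching log file is printed; the .txt file is never examined because the -name filter excludes it.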
20. wc - The wc utility reads one or more input files and, by default, writes the number of newline characters, words and
bytes contained in each input file to the standard output.
21. ps - The ps command prints information about active processes.Without options, ps prints information about processes associated with
the controlling terminal.
22. kill - send a signal to terminate a process.
Ex: kill -9 pid (sends SIGKILL, which cannot be caught or ignored)
23. id - If no user operand is provided, the id utility will write the user and group IDs and the corresponding user and group
names of the invoking process to standard output:
$ id
uid=1017(oracle7) gid=101(dba)
24. df - The df command displays the amount of disk space occupied by mounted or unmounted file systems, directories, or mounted
resources, the amount of used and available space, and how much of the file system's total capacity has been used.
$ df
/          (/dev/md/dsk/d0    ): 1816668 blocks 1683304 files
/proc      (/proc             ):       0 blocks    3721 files
/u02       (/dev/dsk/c1t2d0s0 ): 2981898 blocks 4108008 files
/u01       (/dev/dsk/c1t1d0s0 ): 3207990 blocks 4099598 files
/dev/fd    (fd                ):       0 blocks       0 files

25. du - The du utility writes to standard output the size of the file space allocated to, and the size of the file space allocated to each
subdirectory of, the file hierarchy rooted in each of the specified files.
$ du
214 ./oravw/install
216 ./oravw
26. lpr - The lpr utility submits print requests to a destination. lpr prints files and associated information, collectively called a print request.
27. uname - The uname utility prints information about the current system on the standard output. When options are specified, symbols
representing one or more system characteristics will be written to the standard output.
Ex. uname -a (list all info) SunOS supsunm3 5.6 Generic_105181-08 sun4u sparc SUNW,Ultra-5_10
28. nm - The nm utility displays the symbol table of each ELF object file that is specified by file.
29. ar - The ar utility maintains groups of files combined into a single archive file. Its main use is to create and update library files.
/usr/ccs/bin/ar -d [ -Vv ] archive file...
/usr/ccs/bin/ar -m [ -abiVv ] [ posname ] archive file...
/usr/ccs/bin/ar -p [ -sVv ] archive [file...]
/usr/ccs/bin/ar -q [ -cVv ] archive file...
/usr/ccs/bin/ar -r [ -abciuVv ] [ posname ] archive file...
/usr/ccs/bin/ar -t [ -sVv ] archive [file...]
/usr/ccs/bin/ar -x [ -CsTVv ] archive [file...]
30. ipcs - The utility ipcs prints information about active interprocess communication facilities. The information that is displayed is
controlled by the options supplied.
-m Print information about active shared memory segments.
-q Print information about active message queues.
-s Print information about active semaphores.
31. ipcrm - ipcrm removes one or more messages, semaphores or shared memory identifiers.
-m shmid Remove the shared memory identifier shmid from the system. The shared memory segment and data structure associated with it
are destroyed after the last detach.


-q msqid Remove the message queue identifier msqid from the system and destroy the message queue and data structure associated with
it.
-s semid Remove the semaphore identifier semid from the system and destroy the set of semaphores and data structure associated with it.
32. chown - The chown utility will set the user ID of the file named by each file to the user ID specified by owner and, optionally,
will set the group ID to that specified by group.
33. chgrp - The chgrp utility will set the group ID of the file named by each file operand to the group ID specified by the group
operand.
34. newgrp - The newgrp command logs a user into a new group by changing a user's real and effective group ID. The user remains
logged in and the current directory is unchanged.
Ex: newgrp dba (switch group to dba)
35. file - The file utility performs a series of tests on each file supplied by file and, optionally, on each file listed in ffile in an attempt to
classify it.
36. ln - the ln utility creates a new directory entry (link) for the file specified by source_file at the destination path specified by target. If
target is not specified, the link is made in the current directory.
37. su - su allows one to become another user without logging off. The default user name is root (super user).
38. dd - dd copies the specified input file to the specified output with possible conversions. The standard input and output are used by
default.
Ex: dd if=myfile of=newfile conv=ucase (converts to uppercase)
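The same conversion works on a pipe, which makes it easy to see in isolation (2>/dev/null hides dd's transfer statistics):

```shell
# conv=ucase uppercases the data flowing through dd
printf 'hello\n' | dd conv=ucase 2>/dev/null
```

This prints HELLO.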
39. diff - The diff utility will compare the contents of file1 and
file2 and write to standard output a list of changes necessary to convert file1 into file2.
40. umask - The umask utility sets the file mode creation mask of the current shell execution environment to the value specified by the
mask operand. This mask affects the initial value of the file permission bits of subsequently created files. If umask is called in a subshell
or separate utility execution environment, such as one of the following:
(umask 002)
nohup umask ...
find . -exec umask ...
it does not affect the file mode creation mask of the caller's environment.
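The effect of the mask on new files can be sketched as follows (the file path is a throwaway name; a mask of 022 clears the group/other write bits, so 666 & ~022 = 644, i.e. rw-r--r--):

```shell
# Set the mask, create a fresh file, and show its permission bits
umask 022
rm -f /tmp/demo_umask_file
touch /tmp/demo_umask_file
ls -l /tmp/demo_umask_file | cut -c1-10
```

On a typical Linux system this prints -rw-r--r--.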
41. stty - The stty command sets certain terminal I/O options for the device that is the current standard input; without arguments, it reports
the settings of certain options.
Ex:
$ stty
speed 38400 baud; -parity
brkint -inpck icrnl -ixany imaxbel onlcr tab3
echo echoe echok echoctl echoke iexten
swtch = ;

42. tty - The tty utility writes to the standard output the name of the terminal that is open as standard input. The name that is used is
equivalent to the string that would be returned by the ttyname(3C) function.
Ex: tty
/dev/pts/0
43. cpio - The cpio command copies files into and out from a cpio archive. The cpio archive may span multiple volumes. The
-i, -o, and -p options select the action to be performed.
Ex: cpio -icBdvmu < /dev/rmt0
44. tar - The tar command archives and extracts files to and from a single file called a tarfile. A tarfile is usually a magnetic tape, but it can
be any file. tar's actions are controlled by the key argument. The key is a string of characters containing exactly one function letter (c, r, t ,
u, or x) and zero or more function modifiers (letters or digits), depending on the function letter used.
Ex: tar xvt /dev/rmt0
45. telnet - telnet communicates with another host using the TELNET protocol. If telnet is invoked without arguments, it enters command
mode, indicated by its prompt telnet>. In this mode, it accepts and executes its associated commands.
46. rlogin - rlogin establishes a remote login session from your terminal to the remote machine named hostname.
47. echo - The echo utility writes its arguments, separated by BLANKs and terminated by a NEWLINE, to the standard output. If there
are no arguments, only the NEWLINE character will be written.
48. ulimit - (att) - The ulimit utility sets or reports the file-size writing limit imposed on files written by the shell and its child processes
(files of any size may be read). Only a process with appropriate privileges can increase the limit.
49. vmstat - vmstat delves into the system and reports certain statistics kept about process, virtual memory, disk, trap and CPU activity.
NOTE: vmstat statistics are only supported for certain devices.
50. make - this is a command generator. All executables used in ORACLE are generated from makefiles. The make utility executes a list
of shell commands associated with each target, typically to create or update a file of the same name. makefile contains entries that
describe how to bring a target up to date with respect to those on which it depends, which are called dependencies. Since each dependency
is a target, it may have dependencies of its own.
51. env - The env utility obtains the current environment, modifies it according to its arguments, then invokes the utility named by the
utility operand with the modified environment.
53. logname - The logname utility will write the user's login name to standard output.
54. swap -l swap provides a method of adding, deleting, and monitoring the system swap areas used by the memory manager. Use the
commands listed below for the associated operating system:
SUN Solaris: # /usr/sbin/swap -l
HP 9000 Series HP-UX: # /etc/swapinfo
IBM RS/6000 AIX: % /etc/lsps -a
Digital UNIX: % /usr/sbin/swapon -s

55. whatis (/usr/ucb/whatis) commands -> looks up one or more commands in the on-line man pages, and display a brief description. i.e
$ whatis man
man man (1) - find and display reference manual pages
man man (5) - macros to format Reference Manual pages
56. bfs [option] file -> big file scanner. Reads a large file, using ed-like syntax. Files can be up to 1024 KB.
57. hostname - The hostname command prints the name of the current host, as given before the login prompt. The superuser can set the
hostname by giving an argument.
58. hostid - The hostid command prints the identifier of the current host in hexadecimal. This numeric value is likely to differ when hostid
is run on a different machine.
59. nohup - The nohup utility invokes the named command with the arguments supplied. When the command is invoked, nohup arranges
for the SIGHUP signal to be ignored by the process. The nohup utility can be used when it is known that command will take a long time
to run and the user wants to logout of the terminal; when a shell exits, the system sends its children SIGHUP signals, which by default
cause them to be killed. All stopped, running, and background jobs will ignore SIGHUP and continue running, if their invocation is
preceded by the nohup command or if the process programmatically has chosen to ignore SIGHUP.
60. pg - The pg command is a filter that allows the examination of files one screenful at a time on a CRT. If the user types a
RETURN, another page is displayed; other commands are also available.
61. printenv - printenv prints out the values of the variables in the environment. If a variable is specified, only its value is printed.
62. rwho - The rwho command produces output similar to who(1), but for all machines on your network. If no report has been received
from a machine for 5 minutes, rwho assumes the machine is down, and does not report users last known to be logged into that machine.
63 sed - The sed utility is a stream editor that reads one or more text files, makes editing changes according to a script of editing
commands, and writes the results to standard output
-e script script is an edit command for sed. See USAGE below for more information on the format of script. If there is just one -e option
and no -f options, the flag -e may be omitted.
-f script_file Take the script from script_file.
script_file consists of editing commands, one per line.
-n Suppress the default output.
64. talk - The talk utility is a two-way, screen-oriented communication program. When first invoked, talk sends a message similar to:
Message from TalkDaemon@her_machine at time...
talk: connection requested by your_address
talk: respond with: talk your_address
At this point, the recipient of the message can reply by typing "talk your_address".
65. uptime - The uptime command prints the current time, the length of time the system has been up, and the average number of jobs in
the run queue over the last 1, 5 and 15 minutes.

example% uptime
10:47am up 27 day(s), 50 mins, 1 user, load average: 0.18, 0.26, 00

66. vi - vi (visual) is a display-oriented text editor based on an underlying line editor ex. It is possible to use the command mode of ex
from within vi and to use the command mode of vi from within ex.
67. which - which takes a list of names and looks for the files which would be executed had these names been given as commands. Each
argument is expanded if it is aliased, and searched for along the user's path. Both aliases and path are taken from the user's .cshrc file.
Ex: $ which svrmgrl
/u01/app/oracle/product/7.3/bin/svrmgrl
68. shutdown - shutdown is executed by the super-user to change the state of the machine. In most cases, it is used to change from the
multi-user state (state 2) to another state. By default, shutdown brings the system to a state where only the console has access to the
operating system. This state is called single-user.
69. reboot - reboot restarts the kernel. The kernel is loaded into memory by the PROM monitor, which transfers control to the loaded
kernel. Although reboot can be run by the super-user at any time, shutdown(1M) is normally used first to warn all users logged in of the
impending loss of service.
70. init is a general process spawner. Its primary role is to create processes from information stored in the file /etc/inittab.

7.BACKUP-SCENARIOS

Case Studies for Oracle Backup and Recovery
Scenario : The scenario presents the kinds of backups taken at the site, their frequency and other
background information, including version of the database.
Problem: This section describes the kind of failure that occurred or the situation the DBA is facing while
operating the database.
Solution : This section gives all the possible alternatives to recover the database for a specified failure.
CASE 1 :
Scenario : John uses an Oracle Database to maintain the inventory of his grocery store. Once every week,
he runs a batch job to insert, update and delete data in his database. He uses a stand-alone UNIX machine
running Oracle 8.1.7. John starts the database up in the morning at 8 A.M., shuts it down at 5 P.M., and
operates the database all day in NOARCHIVELOG mode. He takes an offline backup (cold backup) of the
database once a week, every Sunday, by copying all the datafiles, log files and control files to tape.
Problem : On a Wednesday morning, John realized that he had lost a datafile that contained all the user data.
He tried to start up the database using the STARTUP OPEN command and got the following error:
ORA-01157 :cannot identify data file 4 - file not found
ORA-01110: data file 4: '/home/oracle/orahome1/oradata/ora1/users01.dbf'
He realized that he had accidentally deleted one of the data files while trying to free some space on the
disk. How would he resolve this problem? How much data would he lose?
Write down the steps you would perform to recover your database.
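One realistic approach can be sketched as follows. Since the database runs in NOARCHIVELOG mode there is no archived redo to roll forward with, so this is a sketch of a full restore, not a definitive procedure; John would lose everything entered since Sunday's cold backup:

```sql
-- NOARCHIVELOG mode: individual datafile recovery is not possible,
-- so the ENTIRE cold backup must be restored as a consistent set.
SHUTDOWN ABORT
-- At the OS level, restore ALL datafiles, log files and control files
-- from Sunday's tape backup, then:
STARTUP OPEN
-- All changes made Monday through Wednesday are lost.
```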

CASE 2 : Dropping datafiles in NOARCHIVELOG mode
Scenario : Same as in Case 1.

Problem : The disk crashed and one of the datafiles was lost. In this case, the data file belonged to a
temporary tablespace.
Solution : The TEMPORARY tablespace is used by Oracle to do the intermediate work while executing
certain commands, that require sorting of data. If no permanent objects are stored in this datafile, it is okay
to drop the datafile and start up the database. To drop the datafile, you need to use
ALTER DATABASE DATAFILE 'datafilename' OFFLINE DROP;
Note that after opening the database, the tablespace is online but the data file is offline. Any other data files
that belong to this tablespace are online and can be used. Oracle recommends re-creating the tablespace.
Simulation :
SQL> connect internal
SQL> startup mount;
SQL> archive log list;
SQL> alter database noarchivelog;
SQL> alter database open;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> shutdown abort;
SQL> host rm /home/oracle/orahome1/oradata/ora1/temp.dbf (deleting a file to simulate a loss)
SQL>startup mount;
SQL> alter database open;
ORA-01157 :cannot identify data file 6 : file not found
ORA-01110 :datafile 6 : '/home/oracle/orahome1/oradata/ora1/temp.dbf'
SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/temp.dbf' offline;
ORA-01145: offline immediate disallowed unless media recovery is enabled (this option can be used only
in ARCHIVELOG mode)
SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/temp.dbf' offline drop;
SQL> alter database open;

SQL> drop tablespace temp including contents;
SQL> create tablespace temp datafile '/home/oracle/orahome1/oradata/ora1/temp.dbf' size 1M;

CASE 3 :

Scenario : Moe is the DBA of a real-time call-tracking system. He uses Oracle 8.1.7 on a VAX/VMS system
and takes an online backup of the database every night.
(Which mode of the database would he be running in ?) The total database size is 50GB and the real-time
call tracking system is a heavy OLTP system primarily with the maximum activity between 9 a.m and 9
p.m., everyday. At 9 p.m. a batch job runs a command procedure that puts the tablespaces in hot backup
mode, takes the backup of all the data files to tape at the operating system level, and then issues the alter
tablespace end backup command.
Problem: One afternoon a disk crashed, losing the SYSTEM tablespace residing on the disk. As this
happened at the peak processing time, Moe had to keep the down time to a minimum and open the database
as soon as possible. He wanted to start the database first and then restore the datafile that was lost, so he
took the SYSTEM data file offline. When he tried to open the database he got the following error:
ORA-01147 : SYSTEM tablespace file 1 is offline
How would you solve this problem?
Solution : The only solution here is to restore the SYSTEM datafile from the night's online backup, and then
perform database recovery. Note that if a disk crash has damaged several data files then all the damaged
data files need to be restored from the online backup. The database needs to be mounted and the RECOVER
DATABASE command issued before it could be opened.
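One way the recovery might look, sketched in SQL*Plus (file numbers and the restore step are assumptions for illustration; the SYSTEM datafile cannot remain offline, so the offline attempt must be undone first):

```sql
-- Restore the lost SYSTEM datafile from last night's online backup
-- at the OS level, then:
STARTUP MOUNT
ALTER DATABASE DATAFILE 1 ONLINE;   -- undo the earlier offline attempt
RECOVER DATABASE;                   -- applies redo generated since the backup
ALTER DATABASE OPEN;
```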
CASE 4 :
Scenario : Use the same scenario for Case 3 again.
Problem : Let's assume that instead of a system data file, a non-system data file is lost due to the disk crash.
We also assume that this data file doesn't contain any active rollback segments.
Simulation:
SQL> connect internal
SQL> create table case4 ( c1 number) tablespace users;
SQL> insert into case4 values (3);
SQL> insert into case4 values(3);
SQL> commit;
SQL> alter system switch logfile;
SQL> host rm /home/oracle/orahome1/oradata/ora1/users01.dbf
SQL> shutdown abort;
Identify three ways in which you could solve this problem.

How do you determine when to use data file recovery versus tablespace recovery?

Solution : When a non-system data file is lost, there are three methods by which the data file can be
recovered.
a) The RECOVER DATABASE command can be used. This requires the database to be mounted, but not
open, which means offline recovery needs to be performed.
SQL> host cp /home/oracle/backup/users01.dbf /home/oracle/orahome1/oradata/ora1/users01.dbf
SQL>startup open
ORA-01113 file 4 needs media recovery
SQL> startup mount;
SQL> recover database;
SQL> alter database open;
SQL> select * from case4;
b) The second method is to use the RECOVER DATAFILE command.
Here the datafile needs to be offline but the database can be open or mounted.
SQL> host cp /home/oracle/backup/users01.dbf /home/oracle/orahome1/oradata/ora1/users01.dbf
SQL>startup open
ORA-01113 file 4 needs media recovery
SQL> startup mount;
SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/users01.dbf' offline;
SQL> alter database open;
SQL> recover datafile '/home/oracle/orahome1/oradata/ora1/users01.dbf' ;
SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/users01.dbf' online;
SQL> select * from case4;
c) The third method is to use the RECOVER TABLESPACE command which requires the tablespace to be
offline and the database to be open.
SQL> host cp /home/oracle/backup/users01.dbf /home/oracle/orahome1/oradata/ora1/users01.dbf
SQL>startup open
ORA-01113 file 4 needs media recovery

SQL> startup mount;
SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/users01.dbf' offline;
SQL> alter database open;

SQL> alter tablespace users offline;
SQL> recover tablespace users ;
SQL> select * from case4 ;
(error will be generated)
SQL> alter tablespace users online;
SQL> select * from case4;
(Note: while doing tablespace recovery all the datafiles belonging to the tablespace should be offline).
CASE 5
Scenario : Anita is a DBA in a banking firm. She administers an Oracle 8.1.7 database on a Unix Server.
She stores all the user data in the USERS tablespace, index data in the INDEXES tablespace, and all the
rollback segments in the RBS tablespace. In addition, she has other tablespaces to store data for various
banking applications. Since the database operates 24*7, she has an automated procedure to take online
backups every night. In addition, she takes an export once a month of all the important tables in the
database.
Problem: On Monday morning, due to a media failure, all the data files that belong to the rollback segment
tablespace RBS were lost. It was the beginning of the week and a lot of applications needed to be run against
the database, so she decided to do an online recovery. Once she took the datafile offline and opened the
database, she tried to select from a user table and got the following error:
ORA-00376 : file 2 cannot be read at this time
(File 2 happens to be one of the datafiles that belong to the rollback segment tablespace.)
Simulation:
SQL> connect internal;
SQL> create table case5
(c1 number) tablespace users;
SQL> select * from case5;
SQL> commit;
SQL> set transaction use rollback segment r01;
SQL> insert into case5 values (5);

SQL> shutdown abort;
SQL> host rm /home/oracle/orahome1/oradata/ora1/rbs01.dbf
How would you perform a recovery ?

Solution for case 5:
An important step you need to perform is to modify the initialization file and comment out the
ROLLBACK_SEGMENTS parameter. If this is not done, Oracle will not be able to find the rollback
segments and will not be able to open the database.
SQL> startup mount;
SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/rbs01.dbf' offline;
SQL> alter database open;
SQL>select * from case5;
(error indicating file cannot be read at this time)
SQL> select segment_name, status from dba_rollback_segs;
(some of the rollback segments will indicate they need recovery)
SQL> host cp /home/oracle/backup/rbs01.dbf /home/oracle/orahome1/oradata/ora1/rbs01.dbf
SQL>recover tablespace rbs;
SQL>alter tablespace rbs online;
SQL>select * from case5 ;
No rows selected
SQL>select segment_name, status from dba_rollback_segs;
(the segments continue to show that they need recovery. To take care of this issue:)
SQL> alter rollback segment r01 online;
SQL> alter rollback segment r02 online ;
(Bring online all rollback segments that indicate that they need recovery).
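The init.ora change referred to at the start of this solution might look like the fragment below (the segment names are taken from the simulation; a real parameter file would list all of the site's segments):

```
# init.ora -- comment out while the RBS datafile is unavailable,
# otherwise Oracle fails to acquire these segments at open time:
# rollback_segments = (r01, r02)
```

Once the tablespace is recovered and the segments are back online, the parameter can be restored.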
CASE 6 :
Scenario : Sara works in a software company as a DBA to administer a small development database on a
UNIX machine. She created a 500MB database. She decided to mirror the control files but not the online
redo logs, so created the database with three log groups with one member each. Her backup strategy
includes taking online backups twice a week and a full database export once a week.


Problem: A power surge caused the database to crash and also caused a media failure, losing all the online
log files. All the data files and the current control files are intact. Although the data files are OK, after the
crash they cannot be used because instance recovery cannot be performed (since all the log files are lost). If
any of the unarchived log files are lost, crash recovery cannot be performed and instead media recovery
needs to be performed.
Simulation:
SQL> shutdown abort;
SQL> host rm /home/oracle/orahome1/oradata/ora1/*.log
Solution to Case 6 :
You would need to perform a cancel-based incomplete recovery. Restore all the datafiles:
$cp /home/oracle/orahome1/backup/*.dbf /home/oracle/orahome1/oradata/ora1/
SQL>connect internal
SQL> startup mount;
SQL>recover database until cancel;
(Cancel the recovery when oracle asks to apply the redo logs that are lost)
SQL>alter database open resetlogs;
SQL>shutdown
SQL>exit
$ls
(The online redo logs will be automatically created by Oracle as part of the database open. This is required
for normal operation of the database).
CASE 7
Scenario : Kevin is one of the DBAs of a Fortune 500 Financial Company, and maintains one of the
company's most crucial databases. A UNIX machine is used to store a 500 gigabyte database using Oracle
8.1.6. The database operates 24*7 with 200 to 250 concurrent users on the system at any one time. There are
250 tablespaces and the backup procedure involves keeping the tablespaces in hot backup mode and taking
an online backup. Each log file is 10MB. Between issuing the BEGIN BACKUP and END BACKUP oracle
generates about 50 archive log files.
Problem : On Friday afternoon, while taking hot backups, the machine crashed, bringing the database down.
As this is a mission-critical system, Kevin needed to bring the database up as fast as possible. Once the
machine was booted, he tried to start the database and Oracle asked for media recovery starting from log
sequence number 2300. The current online log file has a sequence number 2335, which means 35 log files
needed to be applied before the database could be opened.
Simulation:
SQL>connect internal

SQL>startup
SQL>archive log list

(Database is in archivelog mode, with automatic archiving enabled)
SQL>alter tablespace test begin backup;
SQL>host cp /home/oracle/orahome1/oradata/ora1/test1.dbf /home/oracle/hbackup/test1.dbf
SQL> create table case7 (c1 number) tablespace test;
SQL> insert into case7 values (7);
SQL>commit;
SQL> alter system switch logfile;
SQL>shutdown abort;
SQL> startup mount;
SQL> alter database open;
{Error : indicating file 5 needs recovery }
Solution : How would you solve this problem ?
Solution to Case 7
SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/test1.dbf' end backup;
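As a sketch, rather than guessing file numbers, Kevin could first have listed every datafile still stuck in hot-backup mode; V$BACKUP reports ACTIVE for files between BEGIN BACKUP and END BACKUP (the path below is assumed from the simulation):

```sql
-- Which files are still in hot-backup mode after the crash?
SELECT f.name
FROM   v$backup b, v$datafile f
WHERE  b.file# = f.file#
AND    b.status = 'ACTIVE';

-- End backup mode for each file reported, then open:
ALTER DATABASE DATAFILE '/home/oracle/orahome1/oradata/ora1/test1.dbf' END BACKUP;
ALTER DATABASE OPEN;
```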
CASE 8
Scenario : Jane uses an Oracle 8i database for windows on her PC for her home business. She maintains a
small 20MB database and takes regular cold backups. Her backup procedure involves shutting down the
database and copying the data files, log files and control file to floppy disks. She maintains only one copy
of the control file and doesn't mirror the control file because she thinks mirroring the control file doesn't
make sense since she has only one hard disk.
Problem : Jane accidentally deleted her control file. Since she didn't have a copy of the control file, she
copied the backup control file and tried to start up the database. While opening the database, Oracle
complained that an old control file was being used.
Simulation :
SQL> connect internal
SQL> startup open;
SQL> select name, status, enabled from v$datafile;
Name                           Status     Enabled
------------------------------ ---------- ----------
/home/oracle…/system01.dbf     SYSTEM     READ WRITE
/home/oracle…/rbs01.dbf        ONLINE     READ WRITE
/home/oracle…/tools01.dbf      ONLINE     READ WRITE
/home/oracle…/users01.dbf      ONLINE     READ WRITE
/home/oracle…/test1.dbf        ONLINE     READ ONLY
/home/oracle…/temp.dbf         ONLINE     READ WRITE
SQL> create table case8 (c1 number) tablespace users;
SQL> insert into case8 values (8);
SQL> commit;
SQL>alter system switch logfile;

SQL>alter system switch logfile;
SQL>alter system switch logfile;
SQL>shutdown abort;

Solution : How would you recover your database ?
Solution to Case 8
SQL> host cp /home/oracle/hbackup/control01.ctl /home/oracle/orahome1/oradata/ora1/control01.ctl
SQL> startup mount;
SQL> alter database open;
{Error : indicating that an old control file is being used }
SQL> recover database;
{Error : indicating correct syntax must be used }
SQL> recover database using backup controlfile;
ORA-00283 : Recovery session cancelled due to errors
ORA-01233 : file 5 is read only - cannot recover using backup control file
ORA-01110 : data file 5 : '/home/oracle/orahome1/oradata/ora1/test1.dbf'
SQL>alter database datafile '/home/oracle/orahome1/oradata/ora1/test1.dbf' offline;
SQL>recover database using backup controlfile;
SQL>alter database open;
ORA-01589 : must use RESETLOGS or NORESETLOGS option for database open
SQL>alter database open resetlogs;
SQL>select * from case8;
C1
----------
6
CASE 9
Scenario : Matt, the DBA of a financial firm administers a 100GB database on an IBM mainframe running
Oracle 8, release 8.1.7. Matt operates the database in ARCHIVELOG mode. Every night, the system


manager takes an OS backup of the system. As part of this backup, all Oracle database files are copied from
DASD to tape. The Oracle database is shutdown before the backups are taken. Matt takes a full database
export every three months and incremental exports once a month.
Problem : One day, while doing space management, Matt added a small datafile to a tablespace, then
decided that he really needed more space. He didn't want to add another datafile, but instead decided to
replace the smaller datafile with a new, bigger datafile. Since a datafile cannot be dropped, he merely took
the new datafile offline and added a larger datafile to the same tablespace. He deleted the datafile at the OS
level, assuming Oracle would never need the file since he hadn't added any data to it, and also because it
was offline. Shortly after he started running an application, he got the error:
ORA-00376 : file 6 cannot be read at this time (file 6 is the same datafile that he had taken offline and
deleted earlier)
Solution : Identify three methods in which you could recover from this problem. What would have been the
most appropriate solution to the problem Matt had?
Solution to Case 9
When you take a datafile offline and open the database, you can apply one of the following three methods.
a) Restore the datafile that was taken offline from a backup and do a data file recovery.
b) If no backups exist, create a datafile using the 'alter database create datafile' command and then
recover it. In this method, you would require all the log files that were generated since the time the datafile
was created.
c) Rebuild the tablespace. (This method involves dropping the tablespace to which the offline datafile
belongs and re-creating it.) Write down the steps you would issue in all three cases. The appropriate solution
to the problem Matt faced is:
SQL> connect internal;
SQL>startup mount;
SQL>archive log list;
SQL>alter database open;
SQL>alter tablespace users add datafile '/home/oracle/orahome1/oradata/ora1/users02.dbf' size 40k;
SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/users02.dbf' resize 1M;
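The three recovery methods listed above can be sketched as follows. These are illustrative outlines, not verified transcripts; the paths are assumed from the scenario:

```sql
-- (a) Restore the offline file from a backup, then recover just that file
-- $ cp /backup/users02.dbf /home/oracle/orahome1/oradata/ora1/users02.dbf
RECOVER DATAFILE '/home/oracle/orahome1/oradata/ora1/users02.dbf';
ALTER DATABASE DATAFILE '/home/oracle/orahome1/oradata/ora1/users02.dbf' ONLINE;

-- (b) No backup: re-create an empty file, then apply all redo generated
--     since the file was added to the tablespace
ALTER DATABASE CREATE DATAFILE '/home/oracle/orahome1/oradata/ora1/users02.dbf';
RECOVER DATAFILE '/home/oracle/orahome1/oradata/ora1/users02.dbf';
ALTER DATABASE DATAFILE '/home/oracle/orahome1/oradata/ora1/users02.dbf' ONLINE;

-- (c) Rebuild: drop and re-create the whole tablespace, then re-load
--     its objects from an export or other source
DROP TABLESPACE users INCLUDING CONTENTS;
CREATE TABLESPACE users
  DATAFILE '/home/oracle/orahome1/oradata/ora1/users01.dbf' SIZE 40M;
```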
CASE 10
Scenario :Nancy administers a large database of 150 GB at a factory. She uses Oracle 8.1.7 on a Unix server
and takes weekly offline backups of the database. She triple mirrors her disk drives, and once a week she
shuts the database down, unlinks one of the mirrors and starts up the database. At this point the database is
double mirrored. She then uses tape drives to copy the database files onto the tape. She also keeps a copy of
the database on a separate set of disk drives. Once the copying is done, she connects the third mirror to the
double mirror. Nancy runs the database in ARCHIVELOG mode. Every day, about 100 archive log files are
generated. An automated process copies the archived log files to tape at regular intervals, and one week's
worth of archived log files are kept online on disk. The control files and online log files are multiplexed.
Problem : On Sunday, an offline backup of the database was taken.


Nancy observed that the current log sequence number was 100. On Thursday morning, one of the tablespaces
(TS1) was taken offline; the current log sequence number at that time was 450. On Thursday
afternoon, due to a disk controller problem, some of the data files were lost. The current log sequence
number at the time of the failure was 500. Nancy decided to delete all the data files, restore the data files
from the offline backup from Sunday and roll forward. She restored all the data files from the cold backup
and used the current control file to do the database recovery. Nancy issued the recover database command
and applied around 400 archived log files. Since all the archived log
files were in the archive destination, Nancy issued the auto command
and Oracle automatically applied all 400 archived log files. The
recovery took about 13 hours and Nancy could finally bring the database to normal operation. Once the
database was open, she decided to bring tablespace TS1 online. Oracle asked for recovery for all the data
files that belong to the tablespace TS1. Nancy expected Oracle to ask for recovery starting at log sequence
number 450, since that's when the tablespace was taken offline. However when she issued the recover
tablespace command, she realized that Oracle asked for recovery starting from log sequence number 100,
all the way from when the backup was taken.
Simulation:
SQL>connect internal
SQL>startup open;
SQL>archive log list;
Database log mode              ARCHIVELOG
Automatic archival             ENABLED
Archival destination           /home/oracle/orahome1/archives
Oldest online log sequence     60
Next log sequence to archive   62
Current log sequence           62

SQL>alter system switch logfile;
SQL>alter system switch logfile;
SQL>alter tablespace users offline;
SQL>alter system switch logfile;
SQL>archive log list;
Database log mode              ARCHIVELOG
Automatic archival             ENABLED
Archival destination           /home/oracle/orahome1/archives
Oldest online log sequence     63
Next log sequence to archive   65
Current log sequence           65

SQL>shutdown abort
SQL>host rm /home/oracle/orahome1/oradata/ora1/*.dbf (Simulates a loss of all datafiles)
SQL>host cp /home/oracle/orahome1/backup/*.dbf /home/oracle/orahome1/oradata/ora1
(Restoring all the datafiles)
Solution : How would you recover from the above problem.
Solution to CASE 10
We present two recovery methods. The first method is the recovery procedure used by Nancy in this
example. The second method is a better way of doing recovery, and is recommended by Oracle since the log
file(s) need to be applied only once.
Method 1
SQL>startup mount;
SQL> recover database;
(Applying the logs)
SQL>alter database open;
SQL>alter tablespace users online;
ORA-01113 : file 4 needs media recovery
ORA-01110 : data file 4 : '/home/oracle/orahome1/oradata/ora1/users01.dbf'
SQL> recover tablespace users; (Reapplying all the logfiles)
SQL>alter tablespace users online;
Method 2
SQL> startup mount;
SQL>select * from v$datafile;
SQL> select name, status, enabled from v$datafile;
Name                           Status     Enabled
------------------------------ ---------- ----------
/home/oracle…/system01.dbf     SYSTEM     READ WRITE
/home/oracle…/rbs01.dbf        ONLINE     READ WRITE
/home/oracle…/tools01.dbf      ONLINE     READ WRITE
/home/oracle…/users01.dbf      OFFLINE    DISABLED
/home/oracle…/test1.dbf        ONLINE     READ ONLY
/home/oracle…/temp.dbf         ONLINE     READ WRITE

SQL> alter database datafile '/home/oracle/orahome1/oradata/ora1/users01.dbf' online;
SQL>recover database;
SQL> select name, status, enabled from v$datafile;

Name                           Status     Enabled
------------------------------ ---------- ----------
/home/oracle…/system01.dbf     SYSTEM     READ WRITE
/home/oracle…/rbs01.dbf        ONLINE     READ WRITE
/home/oracle…/tools01.dbf      ONLINE     READ WRITE
/home/oracle…/users01.dbf      ONLINE     DISABLED
/home/oracle…/test1.dbf        ONLINE     READ ONLY
/home/oracle…/temp.dbf         ONLINE     READ WRITE
(Datafile is online)
SQL> alter database open;
SQL>select tablespace_name, status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
RBS                            ONLINE
TOOLS                          ONLINE
USERS                          OFFLINE
TEST                           ONLINE
TEMP                           ONLINE
SQL>create table case10 (c1 number) tablespace users;
ORA-01542 : tablespace users is offline, cannot allocate space in it.
SQL> alter tablespace users online;
SQL>create table case10 (c1 number) tablespace users;
Table created.

70.FREE DBA&APPSDBA PRACTICE INTERVIEW QUESTIONS
THESE ARE THE ANSWERS FOR POST NO: 67. CHECK IT OUT
1)What are the prerequisites for connecting to a database
> 1) Oracle Net Services should be available on both the server and the client.
2) The listener should be up and running (in the case of a remote connection).
[The Oracle listener starts up a dedicated server process and passes the server's protocol address to the client; using that address the client
connects to the server. Once the connection is established, the listener's involvement is over.]
***********************************************************************
[AND]
1) Check whether the database server software is installed on the server.
2) The client software should be installed on the client machine.
3) Check whether the database and the client are on the same network (with the help of ping).
4) Ensure that the Oracle listener is up and running.
5) Connect to the server using the server's protocol address.
2) Create a User "TESTAPPS" identified by "TESTAPPS"
> create user TESTAPPS identified by TESTAPPS;
3) Connect to DB using TESTAPPS from DB Node and MT Node
> first grant the necessary privileges to the user....
grant connect, resource to TESTAPPS;
4)How do you identify remote connections on a DB Server
> ps -ef | grep -i local [where LOCAL=NO, it is a remote connection... at the OS level]
5)How do you identify local connections on a DB Server
> ps -ef | grep -i local [where LOCAL=YES, it is a local connection... at the OS level]
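The pattern behind these two answers can be shown on simulated ps output (the process lines below are fabricated for illustration; on a real server the LOCAL= string appears in each dedicated server process's command line):

```shell
# Fake 'ps -ef' lines standing in for Oracle dedicated server processes
cat <<'EOF' > ps.txt
oracle  4501   1  0 10:01 ?  00:00:00 oracleORA1 (LOCAL=NO)
oracle  4503   1  0 10:02 ?  00:00:00 oracleORA1 (LOCAL=YES)
EOF

grep 'LOCAL=NO'  ps.txt    # remote connections (came in via the listener)
grep 'LOCAL=YES' ps.txt    # local connections (bequeath, no listener)
```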
6)Can you connect remotely as a user on DB Server. If so, how?
> username/password@connect_string [with the help of a connect string]
7) Do you need access to the DB Server to connect to a system schema?
> NO, just by knowing the username & password you can connect from the client...
8)What is the difference between "SYS" & "SYSTEM" Schema
> SYS is the super-most user...
SYS has the additional roles SYSDBA and SYSOPER
SYS can perform the startup and shutdown operations
> The SYSTEM schema owns certain additional data dictionary tables..
SYSTEM cannot use the startup and shutdown options....
9)What are the roles/priviliges for a "SYS" Schema
>***ROLES***[select granted_role from dba_role_privs where grantee='SYS']
IMP_FULL_DATABASE, DELETE_CATALOG_ROLE, RECOVERY_CATALOG_OWNER, DBA, EXP_FULL_DATABASE,
HS_ADMIN_ROLE, AQ_ADMINISTRATOR_ROLE, OEM_MONITOR, RESOURCE, EXECUTE_CATALOG_ROLE,
LOGSTDBY_ADMINISTRATOR, AQ_USER_ROLE,
SCHEDULER_ADMIN, CONNECT, SELECT_CATALOG_ROLE, GATHER_SYSTEM_STATISTICS,
OEM_ADVISOR
***PRIVILEGES****[select privilege from dba_sys_privs where grantee='SYS';]
CREATE ANY RULE
CREATE ANY EVALUATION CONTEXT
MANAGE ANY QUEUE
EXECUTE ANY PROCEDURE
ALTER ANY RULE
CREATE RULE SET
EXECUTE ANY EVALUATION CONTEXT
INSERT ANY TABLE
SELECT ANY TABLE
LOCK ANY TABLE
UPDATE ANY TABLE
DROP ANY RULE SET
ENQUEUE ANY QUEUE
EXECUTE ANY TYPE
CREATE RULE
ALTER ANY EVALUATION CONTEXT
CREATE EVALUATION CONTEXT
ANALYZE ANY
EXECUTE ANY RULE
DROP ANY EVALUATION CONTEXT


EXECUTE ANY RULE SET
ALTER ANY RULE SET
DEQUEUE ANY QUEUE
DELETE ANY TABLE
DROP ANY RULE
CREATE ANY RULE SET
SELECT ANY SEQUENCE

10)What are the role/privileges for a SYSTEM Schema
> **ROLES**
[select granted_role from dba_role_privs where grantee='SYSTEM';]
AQ_ADMINISTRATOR_ROLE
DBA
>**PRIVILEGES***
[select privilege from dba_sys_privs where grantee='SYSTEM';]
GLOBAL QUERY REWRITE
CREATE MATERIALIZED VIEW
CREATE TABLE
UNLIMITED TABLESPACE
SELECT ANY TABLE
11)What is the difference between SYSDBA & DBA
> SYSDBA has startup and shutdown options
> DBA has no startup and shutdown options
12)What is the difference between X$ , V$ ,V_$,GV$
> X$ are permanent (internal) views
> GV$ views are used in RAC environments....
> V$ and V_$ are the temporary views which exist only at run time....
13)How do you verify whether your DB is a single node or Multinode
> show parameter cluster;
If it shows FALSE, it means single node.
14)From MT connect to db using "connect / as sysdba"
> "connect / as sysdba" cannot be used to connect to the database from the MT...
or
you can connect to the DB from the MT by creating a password file
15)Is a Listener required to be up and running for a Local Connection
> NO
16)Is a Listener required to be up and running for a remote Connection
> YES
17)How do you verify the Background processes running from the Database
> desc v$bgprocess
select * from v$bgprocess;
18)How do you verify whether a init.ora parameter is modifiable or not.
> desc v$parameter
select name,value,ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE from v$parameter;
19)What are the various ways to modify an init.ora parameter
> Two ways ... static & dynamic
static... editing the text in the init.ora file......
dynamic... alter system set <parameter>=<value> scope=both (or) scope=spfile (or) scope=memory;

20)Why is init.ora file required
> For starting the instance...
defining the parameter values...[memory structures]
defining the control files location....
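A sketch of the static and dynamic approaches from question 19 (the parameter names are illustrative, and not every parameter supports every scope):

```sql
-- Static: edit init.ora in a text editor, e.g. change
--   processes = 150    to    processes = 300
-- then restart the instance for the change to take effect.

-- Dynamic (requires an spfile for SCOPE=SPFILE/BOTH):
ALTER SYSTEM SET db_cache_size = 64M SCOPE=BOTH;    -- running instance + spfile
ALTER SYSTEM SET processes = 300 SCOPE=SPFILE;      -- takes effect at next startup
```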

21)Why is a DB required to be in archive Log
> To recover the database
22)List the total No. of objects available in an apps database with respect to a owner,object type,status
23)When an DB is being started where is the information being recorded
> Alert logfile.....
24)What is the information that is being recorded at the time of db Start
....
25)What is the difference between instance and Database
> INSTANCE is a group of memory structures and background processes; it is volatile memory.
> DATABASE is the physical storage... the collection of control, redo log, and data files.....
26)How is an instance created
> whenever you issue the startup command....
the server process reads the init.ora file...
and internally reads the SGA size to be allocated, the memory structure values, and the parameter values... using these parameters the instance is created
27)What are the files essential to start an instance
> init.ora file.... and internally its parameters...
> if it is a remote connection, it needs the init.ora file & a password file..
28)While the instance is being created can users connect to Database.
> NO, normal users cannot connect to the database.. only the sys user can connect
> normal users have no privileges... to connect to the database in the nomount & mount stages....
29)Startup an instance. Connect as user Testapps. Verify the data dictionary table dba_data_files. What are the data dictionary objects that
can be viewed
> normal users cannot connect to database...
30)After completing step 31, exit out of sql session, connect as "apps"
31)When the instance is created how many Unix processes are created and how do you view them
> startup nomount
ps -ux....
or alert log file...
PMON started with pid=2, OS id=4482
PSP0 started with pid=3, OS id=4484
MMAN started with pid=4, OS id=4486
DBW0 started with pid=5, OS id=4488
LGWR started with pid=6, OS id=4490
CKPT started with pid=7, OS id=4492
SMON started with pid=8, OS id=4494
RECO started with pid=9, OS id=4496
MMON started with pid=10, OS id=4498
MMNL started with pid=11, OS id=4500
ten background processes and one server and one client process... total 12..
32)When the database is mounted how many Unix processes are created and how do you view them
> ps -ux or alert logfile...


> In the mount stage (alter database mount)
no extra processes are created....

33)How do you mount a database after an instance is created. What are the messages recorded while changing to mount stage
> alter database mount
> Setting recovery target incarnation to 1
> Successful mount of redo thread 1, with mount id 4186671158
> Database mounted in Exclusive Mode
> Completed: alter database mount
34) What are the data dictionary objects that can be viewed in a mount stage.
35)How do you open a database after an instance is mounted. What are the messages recorded while changing to open stage
> alter database open
> opening redolog files.
> Successful open of redo files.
> MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
> SMON: enabling cache recovery
> Successfully onlined Undo Tablespace 1
> SMON: enabling tx recovery
> Database Characterset is US7ASCII
> Completed: alter database open
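The stage an instance is in can be verified at any point; a small sketch using v$instance (the STATUS column reports STARTED, MOUNTED, or OPEN):

```sql
-- NOMOUNT reports STARTED, mount reports MOUNTED, open reports OPEN.
SELECT status FROM v$instance;

-- Walking through the stages explicitly instead of a full STARTUP:
STARTUP NOMOUNT
ALTER DATABASE MOUNT;
ALTER DATABASE OPEN;
```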
****************************************************************************
Unanswered questions: 13, 22, 29, 30, 34

78.FREE ONLINE LATEST OCA DUMPS 1Z0-042 QUESTIONS 11 TO 30
QUESTION NO: 11
You work as a database administrator for Certkiller.com. You find that users with the DBA role are using more CPU resources than are
allocated in their profiles. Which action would you take to ensure that resource limits are imposed on these users?
A.Assign the DEFAULT profile to the users
B.Set the RESOURCE_LIMIT parameter to TRUE in the parameter file
C.Create a new profile with CPU restrictions and assign it to the users
D.Specify the users as members of the DEFAULT_CONSUMER_GROUP
E.Revoke the DBA role and grant CONNECT and RESOURCE role to the users
Answer: B
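The chain behind answer B can be sketched as follows; the profile name, limit values, and user are illustrative, not from the question:

```sql
-- Profile limits are only enforced while RESOURCE_LIMIT is TRUE.
ALTER SYSTEM SET resource_limit = TRUE;

-- A CPU-restricted profile (values are hypothetical examples).
CREATE PROFILE cpu_capped LIMIT
  cpu_per_session 10000    -- hundredths of a second per session
  cpu_per_call    1000;    -- hundredths of a second per call

-- Attach the profile to a user.
ALTER USER scott PROFILE cpu_capped;
```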
QUESTION NO:12
You work as a database administrator for Certkiller.com. In your Oracle Database 10g installation you have set ORACLE_BASE to
/u01/app/oracle. Which objective will be achieved by this setting?
A.The Oracle kernel will be placed in this location.
B.The Oracle software will be placed in this location.

C.The server parameter file (SPFILE) will be placed in this location.
D.The database files will be placed in this location, if not specified explicitly.
E.The location will be considered for the base of Oracle Managed Files (OMF).
F.The location will be considered for the base of Optimal Flexible Architecture (OFA).
Answer: F
QUESTION NO:13
The operating system file oratab on the Linux platform gets updated whenever you create a new database on the same host machine. What
kind of information is stored in this file?
A.Oracle SIDs only
B.Oracle homes only
C.Oracle install timestamp
D.Oracle inventory pointer files
E.Oracle database creation timestamp
F.Oracle SIDs and Oracle homes only
G.Oracle SIDs, Oracle homes and a flag for auto startup
Answer: G
QUESTION NO:14
You work as a database administrator for Certkiller.com. Your database is configured for automatic undo management.
UNDO_RETENTION is set to 3 hours. You want to flash back a table that was created last year. How far back can the flashback query
go?
A.3 hours
B.6 months
C.until last year
D.until last commit
E.until the point when the undo tablespace was refreshed
F.until the database is shut down and the memory erased
Answer: A
QUESTION NO:15
In your Certkiller.com production database, you find that the database users are able to create and read files with unstructured data,
available in any location on the host machine, from an application. You want to restrict the database users to accessing files in a specific
location on the host machine. What could you do to achieve this?

A.Modify the value for the UTL_FILE_DIR parameter in the parameter file
B.Grant read and write privilege on the operating system path to the database users
C.Modify the value for the LDAP_DIRECTORY_ACCESS parameter in the parameter file
D.Modify the value for the PLSQL_NATIVE_LIBRARY_DIR parameter in the parameter file
E.Create a directory object referring to the operating system path, and grant read and write privilege on the directory object to the database users
Answer: A
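The two file-access approaches named in the options can be sketched side by side; the path and grantee below are hypothetical:

```sql
-- Option A: restrict UTL_FILE to one OS path. UTL_FILE_DIR is not
-- dynamically modifiable in 10g, so the change needs a restart.
ALTER SYSTEM SET utl_file_dir = '/u01/app/loadfiles' SCOPE = SPFILE;

-- Option E: a directory object with explicit grants.
CREATE DIRECTORY load_dir AS '/u01/app/loadfiles';
GRANT READ, WRITE ON DIRECTORY load_dir TO hr;
```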
QUESTION NO:16
Your boss at Certkiller.com wants you to clarify Oracle 10g behavior. Which statement about the Shared Server configuration is valid?
A.Program Global Area (PGA) is stored in the Shared pool.
B.User session data and Cursor state are stored in the Large pool, and Stack space is stored in the Shared pool.
C.User session data is stored in the Shared pool, and Stack space and Cursor state are stored in the Large pool.
D.User session data and Cursor state are stored in the Large pool, and Stack space is stored outside the System Global Area (SGA).
E.User session data and Cursor state are stored outside the System Global Area (SGA), and Stack space is stored inside the SGA.
Answer: D
QUESTION NO:17
You work as a database administrator for Certkiller.com. On a Monday morning, you find the database instance aborted. After inspecting
the alert log file, you execute the STARTUP command in SQL*Plus to bring the instance up. Which statement is true?
A.PMON coordinates media recovery.
B.SMON coordinates instance recovery.
C.PMON coordinates instance recovery.
D.Undo Advisor would roll back all uncommitted transactions.

E.SQL*Plus reports an error with the message asking you to perform instance recovery.
Answer: B
QUESTION NO:18
In your Certkiller.com database server the parameter PLSQL_CODE_TYPE has been set to NATIVE. Which objective would be achieved by this setting?
A.The source PL/SQL code will be stored in native machine code.
B.The source PL/SQL code will be stored in interpreted byte code.
C.The compiled PL/SQL code will be stored in native machine code.
D.The compiled PL/SQL code will be stored in interpreted byte code.
Answer: C
PLSQL_CODE_TYPE specifies the compilation mode for PL/SQL library units. Values:
* INTERPRETED: PL/SQL library units will be compiled to PL/SQL bytecode format. Such modules are executed by the PL/SQL interpreter engine.
* NATIVE: PL/SQL library units (with the possible exception of top-level anonymous PL/SQL blocks) will be compiled to native (machine) code. Such modules will be executed natively without incurring any interpreter overhead.
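The parameter can be inspected and switched at the session level; a brief sketch (the parameter name is real, the session below is illustrative):

```sql
-- Inspect the current compilation mode (INTERPRETED by default in 10g).
SHOW PARAMETER plsql_code_type

-- Switch this session to native compilation for subsequent compiles.
ALTER SESSION SET plsql_code_type = NATIVE;
```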
QUESTION NO:19
Exhibit:
You work as a database administrator for Certkiller .com. You have started the
database instance and you want to manage your database remotely with Enterprise
Manager through a Web browser.
Which two URLs would you use to access the Database Control? (Choose two.)
A.http://162.67.17.123:5500/em
B.http://www.162.67.17.123:5500/em
C.http://fubar.europe.Certkiller.com:5500/em
D.http://Certkiller13.162.67.17.123:5500/em
E.http://Certkiller13.fubar.europe.Certkiller.com:5500
F.http://www.Certkiller13.fubar.europe.Certkiller.com:5500/em
G.http://Certkiller13.fubar.europe.Certkiller.com:5500/em
Answer: A, C
QUESTION NO:20

Page

52

You work as a database administrator for Certkiller.com. While loading data into the Certkiller STAFF table using Oracle Enterprise
Manager 10g Database Control, you find the status of the job as failed. On further investigation, you find the following error message in
the output log: ORA-01653: unable to extend table HR.Certkiller STAFF by 8 in tablespace USERS
Which task would you perform to load the data successfully without affecting the users who are accessing the table?
A.Restart the database instance and run the job
B.Truncate the Certkiller STAFF table and run the job
C.Delete all rows from the Certkiller STAFF table and run the job
D.Increase the size of the USERS tablespace and run the job
E.Increase the size of the database default permanent tablespace and run the job
Answer: D
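Answer D amounts to giving the USERS tablespace more room while it stays online; the file names and sizes below are hypothetical:

```sql
-- Grow an existing datafile of the USERS tablespace...
ALTER DATABASE DATAFILE '/u01/oradata/orcl/users01.dbf' RESIZE 500M;

-- ...or add a second datafile instead.
ALTER TABLESPACE users
  ADD DATAFILE '/u01/oradata/orcl/users02.dbf' SIZE 200M;
```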
QUESTION NO:21
Exhibit
Which statement regarding the dept and emp tables is true?
A.When you delete a row from the emp table, you would receive a constraint violation error.
B.When you delete a row from the dept table, you would receive a constraint violation error.
C.When you delete a row from the emp table, automatically the corresponding rows are deleted from the dept table.
D.When you delete a row from the dept table, automatically the corresponding rows are deleted from the emp table.
E.When you delete a row from the dept table, automatically the corresponding rows are updated with null values in the emp table.
F.When you delete a row from the emp table, automatically the corresponding rows are updated with null values in the dept table.
Answer: D
QUESTION NO:22
You work as a database administrator for Certkiller.com. Users in the Certkiller.com PROD database complain about the slow response
of transactions. While investigating the reason, you find that the transactions are waiting for the undo segments to be available, and undo
retention has been set to zero.
What would you do to overcome this problem?
A.Increase the undo retention
B.Create more undo segments
C.Create another undo tablespace
D.Increase the size of the undo tablespace
Answer: D
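Answer D in practice means enlarging the undo tablespace's storage; a sketch with an illustrative datafile path and size:

```sql
-- Check the current undo configuration (management mode, tablespace,
-- retention) before changing anything.
SHOW PARAMETER undo

-- Enlarge the undo tablespace's datafile.
ALTER DATABASE DATAFILE '/u01/oradata/orcl/undotbs01.dbf' RESIZE 1G;
```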

QUESTION NO:23

You are working on a test database where instance recovery takes a considerable amount of time. How can you reduce the recovery time?
Choose two.
A.By multiplexing the control files
B.By multiplexing the redo log files
C.By decreasing the size of redo log files
D.By configuring mean time to recover (MTTR) to a lower value
E.By setting the UNDO_RETENTION parameter to a higher value
Answer: C, D
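Answer D is set with a single parameter; a sketch (the 60-second target is an illustrative choice, not from the question):

```sql
-- A lower MTTR target makes checkpointing more aggressive, which
-- shortens instance recovery (value is in seconds).
ALTER SYSTEM SET fast_start_mttr_target = 60;

-- The advisory view estimates the I/O cost of different targets.
SELECT * FROM v$mttr_target_advice;
```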
QUESTION NO:24
Exhibit #1
Exhibit #2, command
Exhibit #3, error
You work as a database administrator for Certkiller.com. You have created a database link, devdb.uk.Certkiller.com, between the
databases PRODDB and DEVDB. You want to import schema objects of the HR user using Oracle Data Pump from the development
database, DEVDB, to the production database, PRODDB. View Exhibit #1 to see the source and target databases. You execute the code in
Exhibit #2. The code fails and produces the error displayed in Exhibit #3. What would you do to overcome the error?
A.Remove the dumpfile option in the command
B.Remove the flashback_time option in the command
C.Add the user, SYSTEM, to the schemas option in the command
D.Add the network_link=devdb.uk.Certkiller.com option in the command
E.Remove the schemas option and add the network_link=devdb.uk.Certkiller.com option in the command
F.Remove the dumpfile option and add the network_link=devdb.uk.Certkiller.com option in the command
Answer: F
QUESTION NO:25
You work as a database administrator for Certkiller.com. The database is open. A media failure has occurred, resulting in the loss of all the
control files in your database. Which statement regarding the database instance is true in this scenario?
A.The instance would hang.
B.The instance needs to be shut down.
C.The instance would be in the open state.
D.The instance would abort in such cases.

E.The instance would be in the open and invalid state.
F.The instance would be in the open state, but all the background processes would be restarted.
Answer: D

QUESTION NO:26
You work as a database administrator for Certkiller.com. In a production environment, users complain about the slow response time when
accessing the database. You have not optimized the memory usage of the Oracle instance, and you suspect the problem to be with the
memory.
To which type of object would you refer to determine the cause of the slow response?
A.The trace file
B.The fixed views
C.The data dictionary views
D.The operating system log files
E.The dynamic performance views.
Answer: E
QUESTION NO:27
You are working on the Certkiller database.
What is the default name of the alert log file in this database?
A.alert_Certkiller.log
B.alertlog_Certkiller.log
C.alert_log_Certkiller.log
D.Certkiller_alert_log.log
E.log_alert_Certkiller.log
F.trace_alert_Certkiller.log
Answer: A
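In 10g the alert_<SID>.log file lives in the background dump destination, which can be looked up from the instance itself:

```sql
-- Directory that holds alert_<SID>.log in Oracle 10g.
SELECT value
FROM   v$parameter
WHERE  name = 'background_dump_dest';
```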
QUESTION NO:28
You work as a database administrator for Certkiller.com. You have set the retention period for Automatic Workload Repository (AWR) statistics to
four days and the collection interval to 15 minutes. You want to view the statistics collected and stored in the AWR snapshots. Which two methods
would you use to view the AWR statistics? Choose two.
A.use enterprise manager
B.use DBMS_SQL package
C.use DBMS_AWR package
D.use PRVT_WORKLOAD package
E.query the AWR snapshot repository objects
F.use DBMS_WORKLOAD_REPOSITORY package
Answer: A, F
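The settings described in the question map onto the DBMS_WORKLOAD_REPOSITORY package; a sketch (retention and interval are given in minutes, so four days is 5760):

```sql
-- Set AWR retention to 4 days and the collection interval to 15 minutes.
BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    retention => 5760,   -- minutes: four days
    interval  => 15);    -- minutes
END;
/

-- View the collected snapshots through the repository views.
SELECT snap_id, begin_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```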

QUESTION NO:29
You work as a database administrator for Certkiller.com. As a result of performance analysis, you created an index on
the prod_name column of the Certkiller prod table, which contains about ten thousand rows. Later, you updated a product name in the table.
How does this change affect the index?
A.A leaf will be marked as invalid.
B.An update in a leaf row takes place.
C.The index will be updated automatically at commit.
D.A leaf row in the index will be deleted and inserted.
E.The index becomes invalid when you make any updates
Answer: D
QUESTION NO:30
Two database users, Jack and Bill, are accessing the Certkiller STAFF table of the Certkiller DB database. When Jack modifies a value in
the table, the new value is invisible to Bill. Why is the modified value invisible to Bill?
A.The modified data are not available on disk.
B.The modified data have been flushed out from memory.
C.The modified rows of the Certkiller STAFF table have been locked.
D.Jack has not committed the changes after modifying the value.
E.Both users are accessing the database from two different machines.
Answer: D
