Trace PL/SQL performance with DBMS_PROFILER
Since Oracle 8i we can trace PL/SQL much as we can trace SQL with TKPROF. With DBMS_PROFILER you can
measure the execution time of a PL/SQL program unit.

DBMS_PROFILER gives insight into the following statistics:
- The number of times a piece of PL/SQL code was executed
- The total time spent on a piece of PL/SQL code, including SQL statements
- The minimum and maximum time spent on a piece of PL/SQL code
- Which code was executed during profiling


Example code
This is an example of how to use DBMS_PROFILER:

Prerequisites.
The DBMS_PROFILER package is not automatically created on installation of the database. Before you can
use it, you should run the following scripts: as the SYS user, run the
$ORACLE_HOME/rdbms/admin/profload.sql script. As the user that uses the profiler (or as SYS with grants),
run: $ORACLE_HOME/rdbms/admin/proftab.sql
Procedures
The DBMS_PROFILER package has the following subprograms:
Start_Profiler: begin data collection
Stop_Profiler: stop data collection. Data is not automatically stored when the user disconnects.
Flush_Data: flush the data collected in the user session. Can be called at points in a run to get incremental data.
Pause_Profiler: pause user data collection
Resume_Profiler: resume data collection
Get_Version (proc): gets the version of this API
Internal_Version_Check: verify that the DBMS_PROFILER version works with this database version
Tables
The profiler information is stored in the following tables:
plsql_profiler_runs - information on profiler runs
plsql_profiler_units - information on each library unit profiled
plsql_profiler_data - profiler data for each library unit profiled
Run profile session
set serveroutput on

declare
  l_status binary_integer;   -- start_profiler returns a status code, not the run id
begin
  l_status := dbms_profiler.start_profiler(
      to_char(sysdate,'DD-MM-YYYY HH24:MI:SS'));

  -- ... call the PL/SQL to be profiled here ...

  /* Clear data from memory and store it in the profiler tables. */
  dbms_profiler.flush_data;
  dbms_profiler.stop_profiler;
end;
/
Report on profile session
In Oracle 8i, Oracle supplied a script, ${ORACLE_HOME}/plsql/demo/profrep.sql, to report on the profiling
results.

In Oracle 10g a script ${ORACLE_HOME}/plsql/demo/profsum.sql is provided.

-- show procedures
SELECT substr(u.unit_type,1,30), substr(u.unit_name,1,30)
,      ROUND(d.total_time/1000000000,2) total_seconds  -- total_time is stored in nanoseconds
,      d.total_occur
,      d.min_time, d.max_time
FROM   plsql_profiler_units u,
       plsql_profiler_data d
WHERE  u.runid = &1
AND    u.unit_owner <> 'SYS'
AND    d.runid = u.runid
AND    d.unit_number = u.unit_number
AND    ROUND(d.total_time/1000000000,2) > 0.00
ORDER BY
       d.total_time DESC;



-- Top 10 slow statements
SELECT * FROM (
  select trim(decode(pu.unit_type,'PACKAGE SPEC','PACKAGE',pu.unit_type)||
         ' '||trim(pu.unit_owner)||'.'||trim(pu.unit_name))||
         ' (line '||pd.line#||')' object_name
  ,      pd.total_occur
  ,      pd.total_time
  ,      pd.min_time
  ,      pd.max_time
  ,      src.text
  from   plsql_profiler_units pu
  ,      plsql_profiler_data pd
  ,      all_source src
  where  pu.unit_owner = user
  and    pu.runid = &1
  and    pu.runid = pd.runid
  and    pu.unit_number = pd.unit_number
  and    src.owner = pu.unit_owner
  and    src.type = pu.unit_type
  and    src.name = pu.unit_name
  and    src.line = pd.line#
  order by pd.total_time desc
) WHERE rownum <= 10;

=============
Trace with TKProf
PARAMETERS
You need two database parameters to trace sessions: TIMED_STATISTICS and USER_DUMP_DEST.

TIMED_STATISTICS should be TRUE for timing statistics to be collected.
It is also possible to set this in a session:
SQL> ALTER SESSION SET TIMED_STATISTICS=TRUE;

USER_DUMP_DEST points to the directory on the server where the trace files are written.
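
For example, a quick way to check where the trace files will be written (a hedged query, for releases where USER_DUMP_DEST is still the relevant parameter):

SQL> SELECT value FROM v$parameter WHERE name = 'user_dump_dest';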


Enable trace
You can enable tracing in the following ways:
SQL*Plus:
SQL> alter session set sql_trace = true;


PL/SQL:
dbms_session.set_sql_trace(TRUE);

DBA
SQL> execute sys.dbms_system.set_sql_trace_in_session(sid,serial#,TRUE);
with: sid and serial# from the query:
Select username, sid, serial#, machine from v$session;

Oracle forms:
start forms with f45run32.exe statistics=yes
or make a PRE-FORM trigger with the statement:
forms_ddl('alter session set sql_trace = true');

Oracle reports:
BEFORE-REPORT trigger with statement:
srw.do_sql('alter session set sql_trace = true');

PRO*C
EXEC SQL ALTER SESSION SET SQL_TRACE = TRUE;
Use TKPROF
To make a trace file readable, you need TKPROF. Use the following command on the server:
TKPROF tracefile exportfile [explain=username/password] [table=…] [print=…] [insert=…]
[sys=…] [record=…] [sort=…]

Example:
tkprof ora_12345.trc output.txt explain=scott/tiger

The options between brackets are optional. Their meaning is:
explain=username/password : show an execution plan.
table=schema.tablename : use this table for the explain plan.
print=integer : restrict the number of SQL statements shown.
insert=filename : create a SQL script that stores the trace statistics in the database.
sys=no : don't show statements that are executed under the SYS schema. Most of the time
these are recursive SQL statements that are less interesting.
aggregate=no : don't aggregate SQL statements that are executed more than once.
sort=option : sort the SQL statements in descending order of the specified resource, for
example sort=prsela,exeela,fchela (elapsed parse, execute and fetch time).
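
For example, a hedged invocation (the trace file name is illustrative) that hides recursive SYS statements and sorts by elapsed execute and fetch time:

tkprof ora_12345.trc output.txt explain=scott/tiger sys=no sort=exeela,fchela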

How to use the DBMS_PROFILER package?
DBMS_PROFILER is a PL/SQL code tuning tool. It allows you to check the run-time behavior
of your PL/SQL code and helps you identify where the performance issue is. The output of
the DBMS_PROFILER package is easy to read, as it gives the execution time for each line of
code, and from there you can easily identify the bottleneck.

Profiling is for the developer to understand where the PL/SQL code is spending the most time, so they
can detect and optimize it. DBMS_PROFILER is to PL/SQL what tkprof and Explain Plan are to SQL.

The DBMS_PROFILER package has the following subprograms:
FLUSH_DATA Function and Procedure: Flushes profiler data collected in the user's session.
GET_VERSION Procedure: Gets the version of this API.
INTERNAL_VERSION_CHECK Function: Verifies that this version of the DBMS_PROFILER package can
work with the implementation in the database.
PAUSE_PROFILER Function and Procedure: Pauses profiler data collection.
RESUME_PROFILER Function and Procedure: Resumes profiler data collection.
START_PROFILER Functions and Procedures: Starts profiler data collection in the user's session.
STOP_PROFILER Function and Procedure: Stops profiler data collection in the user's session.
To use the DBMS_PROFILER package we need to do certain settings in our Oracle environment.
There are two files that create the environment for DBMS_PROFILER:
proftab.sql : This file creates three tables (PLSQL_PROFILER_RUNS, PLSQL_PROFILER_UNITS,
PLSQL_PROFILER_DATA) and a sequence, and must be executed first by the Oracle user from which
the profiling is to be done. This needs to be executed before profload.sql.
profload.sql : This file creates the specification and the body of the DBMS_PROFILER package. It needs
to be executed as the SYS user only. Some public synonyms also need to be created on the tables for
the other Oracle user (from which the profiling is to be done).
Following are the steps to do the setup and run DBMS_PROFILER (using a UNIX
platform):

(We will consider an example, Oracle user "TRACETEST" needs profiling to be done on a sample
package "TEST_PROFILER")

Step 1: Go to the admin directory under ORACLE_HOME/rdbms using the command "cd
$ORACLE_HOME/rdbms/admin".

Step 2: Connect as tracetest (please make sure that the user has the CREATE SEQUENCE, CREATE
TABLE and CREATE PROCEDURE privileges) using the following commands:
sqlplus /nolog
connect tracetest/tracetest

Step 3: Run proftab.sql file using the command
@$ORACLE_HOME/rdbms/admin/proftab.sql

Step 4: Grant some privileges to PUBLIC for the tables created using the following commands:
GRANT SELECT ON plsql_profiler_runnumber TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_data TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_units TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_runs TO PUBLIC;

Step 5: Create a test_profiler procedure:
CREATE OR REPLACE PROCEDURE test_profiler AS
l_dummy NUMBER;
BEGIN
FOR i IN 1 .. 50 LOOP
SELECT l_dummy + 1
INTO l_dummy
FROM dual;
END LOOP;
END;
/

Step 6: Connect as SYSDBA to run the profload.sql file, using the command "connect / as sysdba".

Step 7: Run profload.sql using the command
@$ORACLE_HOME/rdbms/admin/profload.sql

Step 8: Create PUBLIC SYNONYM using the following commands for "tracetest user":
CREATE PUBLIC SYNONYM plsql_profiler_runs FOR tracetest.plsql_profiler_runs;
CREATE PUBLIC SYNONYM plsql_profiler_units FOR tracetest.plsql_profiler_units;
CREATE PUBLIC SYNONYM plsql_profiler_data FOR tracetest.plsql_profiler_data;
CREATE PUBLIC SYNONYM plsql_profiler_runnumber FOR tracetest.plsql_profiler_runnumber;

Now, we are all set to run the DBMS_PROFILER package to check the runtime execution of the
procedure TEST_PROFILER.

Step 9: To run the profiler:

DECLARE
l_result BINARY_INTEGER;
BEGIN
l_result := DBMS_PROFILER.start_profiler(run_comment => 'test_profiler_execution: ' || SYSDATE);
test_profiler;
l_result := DBMS_PROFILER.stop_profiler;
END;
/

Step 10: Query to see which runs happened (RUNID is the unique identifier associated with each run):

SET LINESIZE 200
SET TRIMOUT ON

COLUMN runid FORMAT 99999
COLUMN run_comment FORMAT A50
SELECT runid,
run_date,
run_comment,
run_total_time
FROM plsql_profiler_runs
ORDER BY runid;



Step 11: Query to see the details of the runid you got from the above query:

COLUMN runid FORMAT 99999
COLUMN unit_number FORMAT 99999
COLUMN unit_type FORMAT A20
COLUMN unit_owner FORMAT A20

SELECT u.runid,
u.unit_number,
u.unit_type,
u.unit_owner,
u.unit_name,
d.line#,
d.total_occur,
d.total_time,
d.min_time,
d.max_time
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid AND u.unit_number = d.unit_number
WHERE u.runid = 1
ORDER BY u.unit_number, d.line#;





Here we can see that line no. 5 executed fewer times than line no. 4 but took much more time.

Step 12: To check the line numbers of the source code:

SELECT line||' : ' ||text
FROM all_source
WHERE owner = 'TRACETEST'
AND type = 'PROCEDURE'
AND name = 'TEST_PROFILER';

LINE||' : '||TEXT
---------------------------------------------------
1 : CREATE OR REPLACE PROCEDURE test_profiler AS
2 : l_dummy NUMBER;
3 : BEGIN
4 : FOR i IN 1 .. 50 LOOP
5 : SELECT l_dummy + 1
6 : INTO l_dummy
7 : FROM dual;
8 : END LOOP;
9 : END;
http://gautampartap.blogspot.in/2010/04/how-to-use-dbmsprofiler-package.html

We can easily identify that the loop didn't take much time in execution, while the SQL query itself
took more time to execute. So, we can conclude that SQL_TRACE or other facilities can be used to tune
the SQL further.

PS: The syntax for the other DBMS_PROFILER subprograms is:

DBMS_PROFILER.FLUSH_DATA;
DBMS_PROFILER.PAUSE_PROFILER;
DBMS_PROFILER.RESUME_PROFILER;
DBMS_PROFILER.INTERNAL_VERSION_CHECK
RETURN BINARY_INTEGER;
DBMS_PROFILER.GET_VERSION (
major OUT BINARY_INTEGER,
minor OUT BINARY_INTEGER);
What is High Water Mark in Oracle database?
This topic comes under the segment management concept, which describes how to manage the
storage of data in segments effectively and how to deal with wasted space. Since the Oracle 10g
release this has become very easy and somewhat automated.

The High Water Mark applies to the segments, or we can say the DB blocks (at granule level), attached
to a database table. It indicates the highest level up to which space in the blocks has been occupied by
the table data.

This can be illustrated with a simple example from our day-to-day life.

You might have seen a glass half filled with milk. The level the milk reaches in the glass is
the high water mark. Even if you pour some milk out of the glass, the mark will still be there. It
indicates that at one time the milk was filled up to that level.

Similarly, in an Oracle database the high water mark is the level that indicates the last block that ever held data.

Let's say that when a table gets created, a set of DB blocks gets associated with it. Since there is no
data in the table yet, the high water mark is set at the first block. When data is populated into the
table, the HWM is moved to the last DB block in which data is stored. When a few rows get
deleted from the table, the HWM still remains at the last level, because the HWM is not reset
after the deletion of rows. This may result in two major problems:

1) During a full table scan, Oracle always scans the segment up to the level of the
HWM. If we don't have data in those blocks, the time spent scanning those blocks is wasted.

2) Direct-path inserts (the APPEND hint or SQL*Loader direct path) bypass the freelists and always
store the data in free blocks above the HWM. Therefore the empty blocks below the HWM
are wasted.
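
A rough way to see how much space sits below the HWM is to compare the used and never-used blocks after gathering statistics with the old ANALYZE command (a sketch; t1 is an illustrative table name, and EMPTY_BLOCKS is only populated by ANALYZE, not by DBMS_STATS):

ANALYZE TABLE t1 COMPUTE STATISTICS;

SELECT blocks        -- blocks below the HWM
,      empty_blocks  -- allocated blocks above the HWM, never used
FROM   user_tables
WHERE  table_name = 'T1';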

How to reset the High Water Mark?

If we execute the TRUNCATE TABLE command, the HWM gets reset automatically.

In Oracle 9i and below, you can use the "ALTER TABLE ... MOVE ..." command to reset the HWM and use
the empty blocks effectively. In the Oracle 10g release this has become more effective: you can now
"shrink" tables, segments and indexes to reclaim the space wasted below a HWM that was never reset,
and make it available to the database for other use. Your tablespace must have ASSM (Automatic
Segment Space Management) enabled before you shrink them. We will look into both options one by one.

In Oracle 9i and below, the space is reclaimed with the "ALTER TABLE ... MOVE ..." command in the
following way:

Syntax:
Alter table <table_name> move storage (<storage_clause>) tablespace <tablespace_name>;

Alter table t1 move storage (initial 10K next 10K) tablespace new_tablespace;

In case you don't want to move the table to a different tablespace, you can use the command as:

Syntax:
Alter table <table_name> move storage (<storage_clause>);

Alter table t1 move storage (initial 10K next 10K);

There are certain restrictions on doing so, which are as follows:

1) The table should not have LOB, LONG or LONG RAW columns.

2) An entire partitioned table cannot be moved; each partition/sub-partition has to be moved
separately.

3) Indexes associated with the table will become invalid after the move, so rebuild them using "alter
index ... rebuild" (see the sketch after this list).

4) This cannot be done online; the table is unavailable for DML while it is being moved.
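
A hedged sketch of the move-and-rebuild sequence (table and index names are illustrative):

ALTER TABLE t1 MOVE;
-- Indexes are UNUSABLE after the move and must be rebuilt:
SELECT index_name, status FROM user_indexes WHERE table_name = 'T1';
ALTER INDEX t1_pk REBUILD;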

In Oracle 10g release 1, a new feature was added to this process, called segment shrinking. Segment
shrinking is allowed only on segments that use Automatic Segment Space Management (ASSM).
This means the tablespace should have the ASSM feature enabled. Following are the steps to reset
the HWM (a combined sketch follows at the end of this section):

1) Before we start shrinking segments, we need to tell Oracle that the rowids of the rows may change,
by issuing the following command:

Alter table <table_name> enable row movement;

2) Now we are ready to shrink the segment using:

Alter table <table_name> shrink space; (if you want to shrink space for the table only)

Alter table <table_name> shrink space cascade; (if you want to shrink space for all dependent objects as well)

In this case:
1) We don’t need to do it offline.
2) Indexes will not become invalid, so there is no need to re-create them.
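
A hedged end-to-end sketch (the tablespace and table names are illustrative); the first query confirms that the tablespace uses ASSM before shrinking:

SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  tablespace_name = 'USERS';   -- should report AUTO

ALTER TABLE t1 ENABLE ROW MOVEMENT;
ALTER TABLE t1 SHRINK SPACE CASCADE;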
How does the MERGE statement work in Oracle?
MERGE was introduced in Oracle 9i. The MERGE statement is used when the user
wants to update and insert values into a table based on values selected from another table.
This statement allows the user to combine the two operations into one statement instead of going for
multiple insert/update statements.

MERGE is a deterministic statement: a row which is updated cannot be updated again in the same
MERGE statement. The prerequisite for this is that the user should have the INSERT and UPDATE
privileges on the target table/view and the SELECT privilege on the source table.

INTO Clause:
The INTO clause names the target table the user wants to update or insert into.

USING Clause:
The USING clause specifies the source of the data which will be updated or inserted into the target
table. The source can be a table, view, or the result of a sub-query.

ON Clause:
The ON clause specifies the condition based on which the MERGE statement either
updates or inserts records in the target table. For each row in the target table for which the search
condition is true, the row is updated with the corresponding source table data.
For source rows for which the condition is not true for any target row, the data is inserted into the
target table.

WHEN MATCHED | NOT MATCHED:
These clauses tell Oracle how to respond to the results of the join condition in the ON clause. Oracle
performs an update on the target table if the condition of the ON clause is true. When the update clause
is executed on the target table, all update triggers defined on that table are also fired.

Points to ponder while updating through a view:
You cannot specify DEFAULT when updating a view.
You cannot update a column that is referenced in the ON condition clause.
Oracle performs an insert into the target table if the condition of the ON clause is false. When the insert
clause is executed on the target table, all insert triggers defined on that table are also fired.

The following example will make it clearer:

We have an employee table with few records:

Select * from emp;
Emp_id Salary
******* ******
1 100
2 200
3 300
4 400
5 500

We will create our new target table and populate it with the emp table data.

Create table t1(emp_id number, salary number);

Insert into t1 select * from emp;

Commit;

Let's assume that after some time a few more records are added to the emp table.

Select * from emp;

Emp_id Salary
****** ******
1 100
2 200
3 300
4 400
5 500
6 600
7 700
8 800

Before Oracle 9i, if we needed to update the salary of the existing employees with a 20% hike and also
bring the newly joined employees from the emp table into our target table t1, we needed two
statements (note the multiplier 1.20 for a 20% hike):

Update
(Select a.emp_id empl_id, a.salary sal from t1 a, emp b where a.emp_id = b.emp_id) x
set sal = sal*1.20;

Insert into t1 (emp_id, salary)
select emp_id, salary from emp where emp_id not in (select emp_id from t1);

Using the MERGE statement, this can be done in a single statement, as follows:

MERGE INTO t1 a
USING emp b
ON (a.emp_id = b.emp_id)
WHEN MATCHED THEN UPDATE SET a.salary = a.salary*1.20
WHEN NOT MATCHED THEN INSERT (a.emp_id,a.salary)
VALUES(b.emp_id,b.salary);

In addition to this, in Oracle 10g we can also use an optional DELETE clause in the MERGE statement.
But there are the following constraints:
The DELETE clause cannot be used independently in a MERGE statement. It has to be embedded in the
UPDATE clause.
The DELETE clause works only on the rows which are filtered based on the join condition mentioned in
the ON clause.
The DELETE clause affects only those rows which are updated by the MERGE statement.
For example, if the user wants to delete the records where the salary is still less than 2000 after the update:

MERGE INTO t1 a
USING emp b
ON (a.emp_id = b.emp_id)
WHEN MATCHED THEN
UPDATE SET a.salary = b.salary*1.20
DELETE WHERE (a.salary < 2000)
WHEN NOT MATCHED THEN INSERT (a.emp_id,a.salary)
VALUES(b.emp_id,b.salary);

In this, the UPDATE clause will first update the matched rows with a 20% increase in salary, and then
the updated employees with a salary of less than 2000 will be deleted from the target table.
Difference Between Invoker Rights and Definer Rights
Invoker Rights Vs Definer Rights

In an Oracle database, stored PL/SQL procedures execute by default with the owner's privileges. This
means that these subprograms operate on the schema in which they were created.

Let me explain this concept clearly using a scenario:

Let's say there are two schemas, schema1 and schema2. Schema1 has a procedure proc1 and a table
emp_tab, and Schema2 has a table emp_tab only.

The structure of the proc1 procedure in Schema1 is as follows:

Create or replace procedure proc1 (p_empid number, p_ename varchar2, p_salary number)
as
begin
insert into emp_tab (eid, emp_name,emp_sal)
values (p_empid,p_ename,p_salary);
commit;
end;
/

Now, user Schema1 grants EXECUTE permission on this procedure to user Schema2. If user Schema2
wants to insert data into his own emp_tab table using this procedure, he cannot, because when user
Schema2 executes the procedure, the procedure runs with the privileges granted to user
Schema1.

Even though user Schema2 has the same table in his schema and does not have permissions on the
emp_tab table in Schema1, the procedure will still insert the new values into the Schema1 table instead
of his own schema's table.
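
A hedged sketch of the scenario (user names as above; the insert lands in SCHEMA1's table because of definer rights):

-- As SCHEMA1:
GRANT EXECUTE ON proc1 TO schema2;

-- As SCHEMA2:
BEGIN
   schema1.proc1(10, 'KING', 5000);   -- inserts into SCHEMA1.EMP_TAB, not SCHEMA2.EMP_TAB
END;
/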

Before the Oracle 8i release, there were two ways to resolve this problem:

First, copy the procedure into Schema2 as well, which leads to code replication and hampers
maintenance.

Second, use schema references to the objects used in the procedure (under schema1), like insert into
schema2.emp_tab, which hampers code portability. To soften this you can pass the
schema name as a parameter to the procedure and build the SQL dynamically.

To overcome this problem, the AUTHID CURRENT_USER clause was introduced in Oracle 8i and higher
releases. This invoker-rights clause makes the procedure execute with the privileges of the current user.

The syntax of the procedure in Schema1 will be like this:


Create or replace procedure proc1
(p_empid number, p_ename varchar2, p_salary number) AUTHID CURRENT_USER
as
begin
insert into emp_tab (eid, emp_name,emp_sal)
values (p_empid,p_ename,p_salary);
commit;
end;

Now, if user Schema2 executes the procedure residing under the Schema1 user, the procedure will
update Schema2's emp_tab table only. If the table does not exist in the Schema2 schema, it will throw
an error. By default, Oracle assumes AUTHID DEFINER (definer rights) if you don't use the clause.
Difference Between Number Datatypes
The number data types in an Oracle database are used to store numeric values/data.

There are the BINARY_INTEGER, NUMBER and PLS_INTEGER data types, which have small differences
from a PL/SQL code performance point of view.

Let's have a look at all three and find out which one is better to use and why.

NUMBER Data type: This is the most common data type used to store numeric data (fixed-point and
floating-point). Its magnitude range is 1E-130 .. 10E125. Oracle throws an error if the value goes above
or below the specified range.

The syntax is NUMBER(precision, scale).

Precision: the total number of digits.
Scale: the number of digits after the decimal point.

e.g. If you want to store the value 1234.56, you need to specify NUMBER(6,2).

BINARY_INTEGER Data type: This data type is used to store signed integers. Its magnitude range is
-2**31 .. 2**31. BINARY_INTEGER values require less storage space than NUMBER values. It uses
library arithmetic, hence BINARY_INTEGER operations are slower than PLS_INTEGER operations. If a
BINARY_INTEGER calculation overflows, no error/exception is raised.

PLS_INTEGER Data type: This data type is also used to store signed integers. Its magnitude range is
the same as BINARY_INTEGER's. If a PLS_INTEGER calculation overflows, an exception is raised.
PLS_INTEGER uses machine arithmetic, hence its operations are faster than BINARY_INTEGER's.

NOTE: In new applications, always try to use PLS_INTEGER, as it is faster.
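
A hedged micro-benchmark to see the difference on your own system (the loop count is illustrative; DBMS_UTILITY.GET_TIME returns hundredths of a second):

SET SERVEROUTPUT ON
DECLARE
   n NUMBER      := 0;
   p PLS_INTEGER := 0;
   t PLS_INTEGER;
BEGIN
   t := DBMS_UTILITY.GET_TIME;
   FOR i IN 1 .. 10000000 LOOP
      n := n + 1;   -- NUMBER: library arithmetic
   END LOOP;
   DBMS_OUTPUT.PUT_LINE('NUMBER      : ' || (DBMS_UTILITY.GET_TIME - t) || ' cs');

   t := DBMS_UTILITY.GET_TIME;
   FOR i IN 1 .. 10000000 LOOP
      p := p + 1;   -- PLS_INTEGER: machine arithmetic
   END LOOP;
   DBMS_OUTPUT.PUT_LINE('PLS_INTEGER : ' || (DBMS_UTILITY.GET_TIME - t) || ' cs');
END;
/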
Oracle's DBMS_PROFILER: PL/SQL Performance Tuning
By Amar Kumar Padhi
An application can always be fine-tuned for better performance with the use of better
alternatives or with the new features introduced with every release of Oracle.
Simply inspecting the code can bring out the bottlenecks eating up your processing time.
Using explain plan to fine tune the SQL statements resolves issues most of the time.
However, sometimes it may not be that simple. It is baffling when all the SQL statements
are well tuned but the routine still takes noticeable time to execute.
DBMS_PROFILER Package
Oracle 8i provides a new tool called the PL/SQL Profiler. This is a powerful tool to analyze a
program unit's execution and determine its runtime behavior. The results generated can
then be evaluated to find the hot areas in the code. This tool helps us identify
performance bottlenecks, as well as where excess execution time is being spent in the code.
The time spent in executing an SQL statement is also generated. This process is
implemented with DBMS_PROFILER package.
The possible profiler statistics that are generated:
1. Total number of times each line was executed.
2. Time spent executing each line. This includes SQL statements.
3. Minimum and maximum duration spent on a specific line of code.
4. Code that is actually being executed for a given scenario.
DBMS_PROFILER.START_PROFILER
The DBMS_PROFILER.START_PROFILER call tells Oracle to start the monitoring process. An
identifier needs to be provided with each run; it is used later to retrieve the statistics.
e.g.:
l_runstatus := dbms_profiler.start_profiler('am' ||
to_char(sysdate));
DBMS_PROFILER.STOP_PROFILER
The DBMS_PROFILER.STOP_PROFILER tells Oracle to stop the monitoring.
e.g.:
l_runstatus := dbms_profiler.stop_profiler;
DBMS_PROFILER.FLUSH_DATA
The data collected for an execution is held in the memory. Calling the
DBMS_PROFILER.FLUSH_DATA routine tells Oracle to save this data in profiler tables and
clear the memory.
e.g.:
l_runstatus := dbms_profiler.flush_data;
The above functions return the following status codes:
0 : successful completion
1 : incorrect parameters passed (error_parm)
2 : data flush operation failed (error_io)
-1 : mismatch between package and database implementation (error_version)
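
A defensive way to use these codes is to check the status after each call (a sketch; the error number and message are illustrative):

DECLARE
   l_status BINARY_INTEGER;
BEGIN
   l_status := dbms_profiler.start_profiler('run ' || TO_CHAR(SYSDATE));
   IF l_status <> 0 THEN
      RAISE_APPLICATION_ERROR(-20001,
         'Profiler did not start, status ' || l_status);
   END IF;
END;
/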
Oracle's DBMS_PROFILER: PL/SQL Performance Tuning - Page 2
By Amar Kumar Padhi
EXAMPLE on using DBMS_PROFILER
This is a simple example that I am providing just as a reference on how to use the Profiler. I
will run profiler and debug the following routine for performance. Customized scripts that
are used in the example can be found at the end of this article.
1. Creating my procedure.
E.g.:
create or replace procedure am_perf_chk (pi_seq in number,
pio_status in out nocopy varchar2) is
l_dat date := sysdate;
begin
if trunc(l_dat) = '21-sep-02' and pi_seq = 1 then
pio_status := 'OK';
else
pio_status := 'Invalid tape loaded';
end if;
exception
when others then
pio_status := 'Error in am_perf_chek';
end;
2. Calling the routine with profiler.
The above routine will be placed and called in the call_profiler.sql (script details given
below). The pi_seq value is passed as 2.
SQL> @d:\am\call_profiler.sql
Profiler started
Invalid tape loaded
PL/SQL procedure successfully completed.
Profiler stopped
Profiler flushed
runid:8
3. Evaluating the execution time.
The evalute_profiler_results.sql is called to get the time statistics.
SQL> @d:\am\evaluate_profiler_results.sql
Enter value for runid: 8
Enter value for name: am_perf_chk
Enter value for owner: scott
Line Occur Msec Text
---------- ---------- ---------- -------------------------------------------------------------------
1 procedure am_perf_chk (pi_seq in number,
2 pio_status in out nocopy varchar2) is
3 2 43.05965 l_dat date := sysdate;
4 begin
5 1 86.35732 if trunc(l_dat) = '21-sep-02' and pi_seq = 1 then
6 0 0 pio_status := 'OK';
7 else
8 1 8.416151 pio_status := 'Invalid tape loaded';
9 end if;
10 exception
11 when others then
12 0 0 pio_status := 'Error in am_perf_chek';
13 1 2.410361 end;
13 rows selected.

Code% coverage
--------------
66.6666667
As you can see, line 5 shows an execution time of 86 msec, which can be improved on. The if
statement is altered (if pi_seq = 1 and trunc(l_dat) = '21-sep-02' then) and the above
process is repeated. The following is the new result:
Line Occur Msec Text
---------- ---------- ---------- -------------------------------------------------------------------
1 procedure am_perf_chk (pi_seq in number,
2 pio_status in out nocopy varchar2) is
3 2 17.978816 l_dat date := sysdate;
4 begin
5 1 8.419503 if pi_seq = 1 and trunc(l_dat) = '21-sep-02' then
6 0 0 pio_status := 'OK';
7 else
8 1 7.512684 pio_status := 'Invalid tape loaded';
9 end if;
10 exception
11 when others then
12 0 0 pio_status := 'Error in am_perf_chek';
13 1 .731657 end;
13 rows selected.

Code% coverage
--------------
66.6666667
As you can see, line 5 execution time is reduced from 86 msec to 8 msec for the tested
scenario. The excess time was taken by the trunc() built-in. Shifting it to the right of the AND
prevents its execution when the first condition is false, since PL/SQL short-circuits such conditions.
This is a small example and you will be thrown more challenges when debugging bigger routines.

The profiler result also shows how much of the code was covered during execution. This
would give us an idea of the extent of the code that was performance monitored. The idea is
to try out various scenarios for executing the code and check on the profiler results to find
out if any PL/SQL performance issues are encountered.

Logical analysis can be carried out if a particular piece of code is executed for a given
scenario, when it should not be executing at all.
Oracle's DBMS_PROFILER: PL/SQL Performance Tuning - Page 3
By Amar Kumar Padhi
Creation of the environment
The DBMS_PROFILER package is not automatically created during default installation or
creation of the database. Ask the DBA to create the package using the profload.sql script.
Create tables for storing statistics either in one central user or in each individual user,
using proftab.sql. If tables are created in one central user, like SYS, then grant DML
privileges to all other users. Create public synonym on the tables with the same name.
The tables created are:
PLSQL_PROFILER_RUNS: Run-specific information for the PL/SQL profiler
PLSQL_PROFILER_UNITS: Information about each library unit in a run
PLSQL_PROFILER_DATA: Accumulated data from all profiler runs.
A sequence PLSQL_PROFILER_RUNNUMBER provides the run id.
Running and Interpreting Profiler Data
Oracle provides three tables where statistics are populated for a run id. There are many
third party tools available to provide customized reports based on this data. Oracle
provides profrep.sql and profsum.sql to evaluate data (present in
<oracle_home>\plsql\demo\). Below I have provided two simple scripts used in the
examples above, to check instantly on a program unit's execution time. The execution time is
stored in milliseconds.
-----------------------------------------------------------
Script: call_profiler.sql
-----------------------------------------------------------
set head off
set pages 0
select decode(dbms_profiler.start_profiler, '0', 'Profiler started', 'Profiler error')
from dual;

--< place your routine in the below block >--
declare
l_status varchar2(200);
begin
am_perf_chk(2, l_status);
dbms_output.put_line(l_status);
end;
/

select decode(dbms_profiler.stop_profiler, '0', 'Profiler stopped', 'Profiler error')
from dual;
select decode(dbms_profiler.flush_data, '0', 'Profiler flushed', 'Profiler error')
from dual;
select 'runid:' || plsql_profiler_runnumber.currval
from dual;
set head on
set pages 200

-----------------------------------------------------------
Script: evaluate_profiler_results.sql
-----------------------------------------------------------
undef runid
undef owner
undef name
set verify off
select s.line "Line", p.total_occur "Occur", p.total_time "Msec", s.text "Text"
from all_source s, (select u.unit_owner, u.unit_name, u.unit_type, d.line#,
d.total_occur, d.total_time/1000000 total_time
from plsql_profiler_data d, plsql_profiler_units u
where u.runid = &&runid
and u.runid = d.runid
and u.unit_number = d.unit_number) p
where s.owner = p.unit_owner (+)
and s.name = p.unit_name (+)
and s.type = p.unit_type (+)
and s.line = p.line# (+)
and s.name = upper('&&name')
and s.owner = upper('&&owner')
order by s.line;
select exec.cnt/total.cnt * 100 "Code% coverage"
from (select count(1) cnt
from plsql_profiler_data d, plsql_profiler_units u
where d.runid = &&runid
and u.runid = d.runid
and u.unit_number = d.unit_number
and u.unit_name = upper('&&name')
and u.unit_owner = upper('&&owner')) total,
(select count(1) cnt
from plsql_profiler_data d, plsql_profiler_units u
where d.runid = &&runid
and u.runid = d.runid
and u.unit_number = d.unit_number
and u.unit_name = upper('&&name')
and u.unit_owner = upper('&&owner')
and d.total_occur > 0) exec;
undef runid
undef owner
undef name

Conclusion
DBMS_PROFILER is a very powerful tool and the first of its kind to identify performance
issues on the PL/SQL front. This utility can be best used in the development stages to fine
tune code based on various applicable scenarios. It can also be used to fine tune routines
that are already in production and are taking noticeable time to execute. Overall, this utility
gives statistics for each line of code that will help us in evaluating and tuning at a finer
level. Just as SQL statements are checked for performance, PL/SQL code should not be
ignored but should be tuned for optimal results as well.
Monitoring Index Usage in Oracle Databases
By David Fitzjarrell
With the plethora of database-centric applications available today, and with the performance
problems they can generate, it can be a worthy effort to determine which vendor-created
indexes are and are not being used. This is especially helpful if you're working closely with
the application vendor to improve their product.
Of course one way to do this is to set event 10046 at level 8 or 12 and let the trace files fly
so they can be analyzed later for which indexes are being used by the application queries.
And that could be a long and tedious process. One would think there is a better way to
accomplish this.
There is.
Oh, I suppose you'd like to know this better way ... it's really rather simple:
Let Oracle do the work for you.
So let's see how we tell Oracle to do this task for us so our job is much easier.
Oracle has provided a mechanism (since at least Oracle 8.1.6) to monitor an index for
usage using
alter index <index_name> monitoring usage;
The results of that monitoring are found in the V$OBJECT_USAGE view, in a column,
strangely enough, named USED. This isn't a long, boring thesis on how, when, where, who
and why the index in question was used, only that it either is or is not used. The 'window'
spans the time period starting with the execution of the above-listed command and ends
when the following is issued:
alter index <index_name> nomonitoring usage;
The data remains in the V$OBJECT_USAGE view until another monitoring 'window' is started
at which point it is replaced.
So, let's see an example of how this works. We'll use the EMP table from the SCOTT/TIGER
demonstration schema:

SQL>
SQL> --
SQL> -- Create an index on the EMPNO column
SQL> -- of the EMP table
SQL> --
SQL> create index emp_eno_idx
2 on emp(empno);

Index created.

SQL>
SQL> --
SQL> -- Let's monitor the index to see if
SQL> -- it's being used
SQL> --
SQL> alter index emp_eno_idx monitoring usage;

Index altered.

SQL>
SQL> --
SQL> -- Now, let's run some queries
SQL> --
SQL> -- First, let's get everything from the
SQL> -- EMP table
SQL> --
SQL> select * from emp;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 09-DEC-82 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 12-JAN-83 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
14 rows selected.

SQL>
SQL> --
SQL> -- Obviously the index hasn't yet been
SQL> -- used
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME TABLE_NAME USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX EMP NO
1 row selected.

SQL>
SQL> --
SQL> -- So let's run a qualified query and
SQL> -- see if things change
SQL> --
SQL> -- Since the DEPTNO column isn't indexed
SQL> -- the monitored index still shouldn't be
SQL> -- used
SQL> --
SQL> select * from emp where deptno = 30;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7900 JAMES CLERK 7698 03-DEC-81 950 30
6 rows selected.

SQL>
SQL> --
SQL> -- And we see it isn't
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME TABLE_NAME USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX EMP NO
1 row selected.

SQL>
SQL> --
SQL> -- Yet another qualified query, this time
SQL> -- using the indexed column
SQL> --
SQL> select * from emp where empno < 7400;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
1 row selected.

SQL> --
SQL> -- We see the index is now being used, or at
SQL> -- least it was for that last query
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME TABLE_NAME USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX EMP YES
1 row selected.

SQL>
SQL> --
SQL> -- We'll try another query using that column
SQL> --
SQL> -- Let's set autotrace on to see if the index
SQL> -- is being used in this example
SQL> --
SQL> set autotrace on
SQL> select * From emp where empno is null;
no rows selected
Execution Plan
----------------------------------------------------------
Plan hash value: 3712041407
---------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
---------------------------------------------------------------------------
0 SELECT STATEMENT 1 87 0 (0)
* 1 FILTER
2 TABLE ACCESS FULL EMP 14 1218 3 (0) 00:00:01
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(NULL IS NOT NULL)
Note
-----
- dynamic sampling used for this statement

Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
353 bytes sent via SQL*Net to client
239 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed

SQL> set autotrace off
SQL>
SQL> --
SQL> -- Since the index has been marked as used
SQL> -- it remains in the USED state for this
SQL> -- monitoring window even though the last
SQL> -- query didn't use the index at all
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME TABLE_NAME USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX EMP YES
1 row selected.

SQL>
SQL> --
SQL> -- Turn off the usage monitoring
SQL> --
SQL> alter index emp_eno_idx nomonitoring usage;
Index altered.
SQL>
SQL> --
SQL> -- And the last generated data remains
SQL> --
SQL> select index_name, table_name, used from v$object_usage;

INDEX_NAME TABLE_NAME USE
------------------------------ ------------------------------ ---
EMP_ENO_IDX EMP YES
1 row selected.
SQL>
Letting Oracle monitor index usage is much easier than traipsing through piles of event
10046 trace files looking for index scans. I'm happy they've provided such a tool. But you
may run across an index which is used but won't be marked as such in V$OBJECT_USAGE
(this is a rare occurrence but it can happen). How can this be? Oracle can use the statistics
from the index in determining the best query plan, and when those statistics are gone (as
when the index has been dropped) performance can suffer; the optimizer generates a
decision tree when each query is hard parsed, and missing index statistics may direct the
optimizer down a path it might not have taken when the statistics existed.
Oracle, in one respect, is correct in that the index in question hasn't been read, but it did
use the statistics to perform path elimination. So before heading straight for the 'drop index'
command, it would be prudent to verify the index in question really isn't being used in any
way -- this is why we have test systems, correct? Dropping the index on a test database,
then verifying that no performance degradation occurs is, in my mind, a good idea. If --
after the tests indicate an index may truly be unused -- performance problems arise
because that index is missing, it can be recreated to restore the application to its original
lustre.
Some suggest that simply setting an index to UNUSABLE would provide the same conditions
as dropping it, but disabling an index in that fashion doesn't remove the statistics generated
on that index. If a query or process is using those statistics but not actually accessing
the index, the same conditions don't exist, and one could be lulled into a false sense of security
that the index in question is truly unused. Yes, actual access to the index is not allowed, but
since the index wasn't being read to begin with (only the statistics were used by the CBO for
cost calculations), I can't accept that the same run-time conditions exist. Eventually the
statistics will be outdated and no longer used, but it could take a week, a month or
longer for this to occur (depending upon system activity). For those DBAs in a hurry (and,
face it, sometimes management IS in a hurry for results), setting an index to UNUSABLE
may not be a valid course of action to discover whether it's actually used or not.
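
If you do experiment with UNUSABLE, a hedged way to see the author's point is to check that the optimizer statistics survive the disabling (index name from the example above):

ALTER INDEX emp_eno_idx UNUSABLE;

-- STATUS changes but the gathered statistics remain:
SELECT index_name, status, num_rows, last_analyzed
FROM   user_indexes
WHERE  index_name = 'EMP_ENO_IDX';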
Of course database administration cannot be ruled by rash acts, and relying upon a sole
source of information (such as V$OBJECT_USAGE) can result in problems down the line. So,
careful attention to detail is necessary, especially when managing the existence (or not) of
an index [or table, or view, or ...]. I like to follow a few simple rules:
1. Test, test, test.
2. Keep testing.
3. Never do anything you can't undo.
Keeping to that methodology usually ensures I'm not in trouble later. And it keeps the end
users happier.
I like happy end users.

David Fitzjarrell has more than 20 years of administration experience with various releases
of the Oracle DBMS. He has installed the Oracle software on many platforms, including
UNIX, Windows and Linux, and monitored and tuned performance in those environments.
He is knowledgeable in the traditional tools for performance tuning – the Oracle Wait
Interface, Statspack, event 10046 and 10053 traces, tkprof, explain plan and autotrace –
and has used these to great advantage at the U.S. Postal Service, American Airlines/SABRE,
ConocoPhilips and SiriusXM Radio, among others, to increase throughput and improve the
quality of the production system. He has also set up scripts to regularly monitor available
space and set thresholds to notify DBAs of impending space shortages before they affect the
production environment. These scripts generate data which can also be used to trend database
growth over time, aiding in capacity planning.
He has used RMAN, Streams, RAC and Data Guard in Oracle installations to ensure full
recoverability and failover capabilities as well as high availability, and has configured a
'cascading' set of DR databases using the primary DR databases as the source, managing
the archivelog transfers manually and monitoring, through scripts, the health of these
secondary DR databases. He has also used ASM, ASMM and ASSM to improve performance
and manage storage and shared memory.
Top 5 Query Tactics Questions for the PL/SQL Developer Job Interview
By James Koopmann
James Koopmann shares five common issues that could get you into trouble when writing
PL/SQL (and SQL), and how you might answer those questions within the confines of a
PL/SQL job interview.
An interviewer should see your attention to detail and desire to improve the environment,
even if they, themselves, have tendencies to crank out code without regard to standards.
It is amazing to me that many writers of PL/SQL never give much thought as to how they
access (query) data from within the database. For this reason, an overwhelming phrase that
rings from many DBAs goes something like this: "All applications would be perfect if they
didn't access my data" or "My database wouldn't have any performance problems if we just
eliminated the applications." Either way, we all understand that applications are a necessity.
However, it is not necessarily true that applications cause or should cause database
performance issues. This article looks at some of the more common issues when writing
PL/SQL (and SQL), in the confines of a PL/SQL job interview, that could get you into trouble,
and how you might answer those questions.
1. How do you go about tuning your PL/SQL code?
This really hits at the core of this article. We must all understand, and relate this to our
interviewer that we know that it is the SQL that will always cause the most difficulty,
performance wise, integrity wise, bug wise, within our PL/SQL code. We can always talk
about EXPLAIN plan usage, TKPROF, gathering runtime statistics, index optimization, and
the list goes on, but let me suggest another tactic here that might get you noticed. Try
working in the fact that you understand that data can change drastically within an
organization and a static application (PL/SQL code) often does not cut it. What is needed,
and what you will bring to the table is an ability to place an abstraction layer, using views,
functions, triggers, procedures, etc. that maintains the integrity of the PL/SQL logic but
allows for simplified maintenance to the data the PL/SQL code requires.
As a very simplistic example, imagine you needed to select a number of employees within
your PL/SQL code. A very simple solution would be to SELECT all the employees directly
from the EMP table. However, let's say we acquired another company and wanted this code
to work with two different EMP tables. The old code would have to be modified, possibly to
perform a join. The better solution, one not affecting the code, would be to always use a
view and then modify the view when the new company is acquired. A little abstraction
goes a long way when requirements change, as the sketch below shows.
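
A minimal sketch of that abstraction (table, view and column names are illustrative):

-- The PL/SQL code selects from this view, never from EMP directly:
CREATE OR REPLACE VIEW all_employees AS
SELECT empno, ename FROM emp;

-- After the acquisition, only the view changes; the PL/SQL does not:
CREATE OR REPLACE VIEW all_employees AS
SELECT empno, ename FROM emp
UNION ALL
SELECT empno, ename FROM acquired_emp;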
2. How might you get around hard coding the elements in a fetch cursor?
I'd have to say that this is one of the most common forms of hard coding, other than actual
values/IDs being used in a SQL statement. Practitioners will often use the %TYPE notation
for individual variables within the declaration section, which is fine and well, but seem to
lose sight of %ROWTYPE. When fetching a cursor INTO variables, those variables are
often strung out in a list such as: FETCH emp_cur INTO vempId, vempFname, vempLname;
clearly requiring the addition of another variable in the declaration section and at the end of
the INTO clause whenever a column is added. What should happen here is to use
%ROWTYPE and just issue something like: FETCH emp_cur INTO empRec;
removing all hard coding in the body of the PL/SQL code.
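
A hedged sketch of the %ROWTYPE style (cursor, table and column names are illustrative):

DECLARE
   CURSOR emp_cur IS
      SELECT emp_id, emp_fname, emp_lname FROM emp;
   emp_rec emp_cur%ROWTYPE;   -- one record instead of one variable per column
BEGIN
   OPEN emp_cur;
   LOOP
      FETCH emp_cur INTO emp_rec;
      EXIT WHEN emp_cur%NOTFOUND;
      -- reference fields as emp_rec.emp_id, emp_rec.emp_fname, ...
   END LOOP;
   CLOSE emp_cur;
END;
/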
3. How do you get around repeating SQL code?
The answer seems simple, and many would agree that repeating code is an accident waiting
to happen, increasing the probability of changing all but one code segment and creating
a very difficult bug to find. For straight code or SQL statements, we should ensure
we never perform the same function in two different places in our code. Instead, we
should hide the SQL behind subprograms and then call those subprograms repeatedly. Not
only will this make your code more efficient and maintainable, but these subprograms can be
called by other applications, creating a much more flexible environment.
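
For example, a lookup that would otherwise be repeated all over the code base can live in one function (a sketch; table and column names are illustrative):

CREATE OR REPLACE FUNCTION get_emp_name (p_empno IN emp.empno%TYPE)
   RETURN emp.ename%TYPE
IS
   l_name emp.ename%TYPE;
BEGIN
   SELECT ename INTO l_name FROM emp WHERE empno = p_empno;
   RETURN l_name;
END;
/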
4. How many COMMIT statements do you put in your code?
This is somewhat of a tricky question and I hope you are following the general theme of this
article, that of making your PL/SQL code flexible and more importantly conveying to your
interviewer that you have this mindset. However, the real answer here is that you should
really have no COMMIT statements within your application code. The better way is to call a
procedure to do the commit for you. I can see a lot of funny faces while you are reading this
but the example I draw upon is very simple. Just ask yourself how many times you've
commented out the commit statement for testing purposes. It is our duty to make our
applications as flexible as possible and with hard coded commit points in our applications,
we are telling ourselves we know exactly how the application will run, when we need to
commit, and that it will never change. I have all too often had to modify the commit frequency
within an application, so I hold to this rule very strictly.
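
A hedged sketch of such a commit procedure (the name and control flag are illustrative); the commit behavior can then be changed, or stubbed out for testing, in exactly one place:

CREATE OR REPLACE PROCEDURE do_commit (p_enabled IN BOOLEAN DEFAULT TRUE)
IS
BEGIN
   IF p_enabled THEN
      COMMIT;
   END IF;   -- pass FALSE (or edit this one body) while testing
END;
/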
5. What are the four dynamic SQL methods?
This is the first distinction you should make when analyzing the type of dynamic SQL you
should be implementing. Understand what these are and how you might code them; hedged
sketches follow the list below. You should note that as the method number increases, so does
the complexity or generality of the type of statement.
1: non-query without host variables; use EXECUTE IMMEDIATE
2: non-query with known number of input host variables; use EXECUTE IMMEDIATE with
USING
3: query with known number of select-list items and input host variables; use EXECUTE
IMMEDIATE with USING and INTO for single row but EXECUTE IMMEDIATE with USING and
BULK COLLECT INTO or OPEN FOR with dynamic string for multi-row
4: query with unknown number of select-list items or input host variables; use DBMS_SQL
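
Hedged sketches of methods 1 through 3 (table, column and variable names are illustrative; method 4 requires the full DBMS_SQL API and is omitted):

DECLARE
   l_deptno NUMBER := 30;
   l_empno  NUMBER := 7369;
   l_ename  VARCHAR2(30);
   TYPE name_tab IS TABLE OF VARCHAR2(30);
   l_names  name_tab;
BEGIN
   -- Method 1: non-query, no host variables
   EXECUTE IMMEDIATE 'TRUNCATE TABLE work_tab';

   -- Method 2: non-query with known input host variables
   EXECUTE IMMEDIATE 'DELETE FROM emp WHERE deptno = :d' USING l_deptno;

   -- Method 3: single-row query
   EXECUTE IMMEDIATE 'SELECT ename FROM emp WHERE empno = :e'
      INTO l_ename USING l_empno;

   -- Method 3: multi-row query
   EXECUTE IMMEDIATE 'SELECT ename FROM emp WHERE deptno = :d'
      BULK COLLECT INTO l_names USING l_deptno;
END;
/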
Writing PL/SQL code is easy to some extent. We can easily drop in SQL code around some
logic and we will have an application that will more than likely satisfy the requirements we
have before us. The problem with this is that our query tactics within that code can very
easily fail if we are unaware of some common pitfalls or coding practices. Take these five
questions, which are just the tip of the iceberg, and think about making your PL/SQL code
more general and dynamic. An interviewer should see your attention to detail and desire to
improve the environment, even if they have tendencies to crank out code without regard to
standards.
Script Language and Platform: Oracle
Purpose: To read raw session trace file with option to format trace file with TKPROF
and read it in SQLPLUS Session.
Author: Neaman Ahmed


#!/bin/sh
#################################################################
#################################################################
## Copyright (c)2004 Kalson Systems ##
## Author :Neaman Ahmed ##
## ScriptName:trace ##
## Purpose: To read raw session trace file with option to ##
## format trace file with TKPROF and read it in SQLPLUS Session##
## You must set TRACE_DIR and ORACLE_SID in ##
## this script Put trace in < $HOME/bin > make it executable ##
## and run it from sqlplus session ##
## Example: SQL> ! trace ##
## Critics, Comment and suggestion are welcome [email protected]##
## Note:You Must Run this script from your sqlplus session ##
#################################################################
#################################################################
clear
# Find the PID of the calling sqlplus session, then the PID of its Oracle
# server process: the server PID is part of the trace file name.
n1=`ps |grep sqlplus|awk '{print $1}' `
n2=`ps -ef|grep $n1|grep oracle"$ORACLE_SID"|awk '{print $2}'`
# Adjust TRACE_DIR to your USER_DUMP_DEST location.
TRACE_DIR=/u02/app/oracle/admin/$ORACLE_SID/udump

echo " 1.Do you want to view raw trace file"
echo " 2.Do you want to format trace file and view it"
read choice
if [ "$choice" = "1" ];
then cat $TRACE_DIR/"$ORACLE_SID"_ora_$n2.trc|more
else
if [ "$choice" = "2" ];
then
tkprof $TRACE_DIR/"$ORACLE_SID"_ora_$n2.trc $TRACE_DIR/trace_session_$n2
clear
echo "Your TKPROF formated file name is $TRACE_DIR/trace_session_$n2.prf "
echo " Do you want to read it now y/n "
read ans
if [ "$ans" = "y" ];
then
cat $TRACE_DIR/trace_session_$n2.prf|more
else
if [ "$ans" = "n" ];
then
echo "Happy Hunting Perfomance Problems"
fi
fi
fi
fi


Setting Up Oracle Database's Oracle Text
By Steve Callan
Steve Callan walks through some Oracle Text setup steps and working examples of different
index types used by Oracle Text.
Oracle Text - Expanding Your String Searching Capabilities in Oracle Database discussed
some of Oracle Text's features and functionality. In this article, Steve Callan goes through
some setup steps and worked examples.
First of all, where do you get Oracle Text? In at least 10g and above, it is installed by
default. Where can you see if feature X is installed or not? One place is within
DBA_REGISTRY, query on comp_name and status.
SQL> col comp_name for a40
SQL> select comp_name, status from dba_registry;
COMP_NAME STATUS
---------------------------------------- -------
Oracle Database Catalog Views VALID
Oracle Database Packages and Types VALID
Oracle Workspace Manager VALID
JServer JAVA Virtual Machine VALID
Oracle XDK VALID
Oracle Database Java Packages VALID
Oracle Expression Filter VALID
Oracle Data Mining VALID
Oracle Text VALID
Oracle XML Database VALID
Oracle Rules Manager VALID
Oracle interMedia VALID
OLAP Analytic Workspace VALID
Oracle OLAP API VALID
OLAP Catalog VALID
Spatial VALID
Oracle Enterprise Manager VALID
Since Text is a feature, and as we all know, upgrades can be cause for concern in terms of
extra steps having to be performed, you'll be happy to note that the upgrade of Oracle Text
from one release to another comes along as part of the overall upgrade process. In other
words, you don't have to do anything.
Second, what does it take for a regular user to be able to use Oracle Text? Not a whole lot.
Create a table and insert data as needed, create the appropriate Oracle Text index, and
execute a query against the table.
SQL> conn scott/tiger
Connected.
SQL> create table docs (id number primary key, text varchar2(80));
Table created.
SQL> insert into docs values (1, 'first document');
1 row created.
SQL> insert into docs values (2, 'second document');
1 row created.
SQL> commit;
Commit complete.
SQL> create index doc_index on docs(text)
2 indextype is ctxsys.context;
Index created.
SQL> select id, text
2 from docs
3 where contains(text, 'first') > 0;
ID TEXT
---------- ------------------------------
1 first document
In the above example, only two elements are different from what you would expect to see
in a regular query. The first is the CREATE INDEX statement. Note the syntax where
INDEXTYPE is of CTXSYS.CONTEXT. As mentioned in the introductory article, one of the four
index types in Oracle Text is context-based. The second is the syntax in the SELECT
statement. Other than those two new aspects, using Oracle Text, albeit in a simple
example, is pretty easy to do.
Who is the CTXSYS user? Using Toad for its handy schema browser interface, we can see
the following information with respect to what constitutes this schema (ignoring tables and
indexes for the time being).
What becomes clear about CTXSYS is that its main driver (what makes it tick) is the CTXAPP
role, and looking further into what the role has for grants is a collection of EXECUTE
privileges on several packages.
For the simple query I used earlier, the number of steps shown in a formatted trace file
(using TKPROF) is pretty amazing. The actual query appears as the 6th statement or
procedure call (one of them is dynamic sampling, since the table has not been analyzed yet).
Recall what was mentioned earlier about the indexes in Text being domain indexes? We
can see this as a fact via the execution plan.
select id, text from docs
where contains(text,'first')>0

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.03 0 145 0 0
Execute 1 0.01 0.01 0 0 0 0
Fetch 2 0.00 0.00 0 2 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.04 0.05 0 147 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 54
Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY INDEX ROWID DOCS (cr=16 pr=0 pw=0 time=70341 us)
1 DOMAIN INDEX DOC_INDEX (cr=15 pr=0 pw=0 time=70375 us)
When examining the SQL being executed behind the scenes (specifically, DML), you'll see
references to four different items (tables) ending with a distinctive "dollar letter" identifier.
These suffixes are:
- $I
- $K
- $R
- $N
The full names are prefixed with DR$, followed by the index name and then the table
identifier (one of the four suffixes). A white paper on OTN about how Text processes DML
explains them in more detail.
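For the DOC_INDEX created earlier, the support tables are visible in the index owner's
schema; a minimal sketch, assuming the default DR$&lt;index_name&gt;$&lt;suffix&gt; naming:
SQL> select table_name
2 from user_tables
3 where table_name like 'DR$DOC_INDEX%';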
What this all leads up to is that to use Text, a user needs to have (aside from other normal
privileges) been granted the CTXAPP role, some additional EXECUTE grants on other
CTXSYS packages, and sufficient storage space for table-like indexes.
A set of grants in the documentation is shown below.
GRANT EXECUTE ON CTXSYS.CTX_CLS TO scott;
GRANT EXECUTE ON CTXSYS.CTX_DDL TO scott;
GRANT EXECUTE ON CTXSYS.CTX_DOC TO scott;
GRANT EXECUTE ON CTXSYS.CTX_OUTPUT TO scott;
GRANT EXECUTE ON CTXSYS.CTX_QUERY TO scott;
GRANT EXECUTE ON CTXSYS.CTX_REPORT TO scott;
GRANT EXECUTE ON CTXSYS.CTX_THES TO scott;
GRANT EXECUTE ON CTXSYS.CTX_ULEXER TO scott;
The user using Text also needs to be aware of how Text-related indexes are maintained.
Going back to the simple DOCS table, let's insert another record and then query for it.
SQL> insert into docs values (3, 'third document');
1 row created.
SQL> commit;
Commit complete.
SQL> select id, text from docs
2 where contains(text,'third')>0;
no rows selected
You can clearly find this record via regular SQL, but the Oracle Text query fails to return the
record. Why is that? Again, this type of index requires synchronization after DML, and the
user definitely needs EXECUTE privileges on the CTX_DDL package to make that happen.
So, let's sync the index and re-try the query.
SQL> exec ctx_ddl.sync_index('doc_index');
PL/SQL procedure successfully completed.
SQL> select id, text from docs
2 where contains(text,'third')>0;
ID TEXT
---------- ------------------------------
3 third document
Depending on your application, you can begin to see how critical this requirement is.
Suppose you were managing a no-fly list that is updated once every 24 hours. A new person
of interest is inserted into the no-fly table, but until the index is synchronized, there
is a window during which this person may be allowed to board an aircraft unless some other
step is taken (e.g., manually checking a web page or some other external source of
information), and a failure there is not good either.
In the index sync statement, I took advantage of a default parameter (several, actually).
The specification of the SYNC_INDEX procedure in CTX_DDL is shown below.
PROCEDURE sync_index(
idx_name in varchar2 default NULL,
memory in varchar2 default NULL,
part_name in varchar2 default NULL,
parallel_degree in number default 1
);
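All parameters other than the index name default to NULL (or 1, for the parallel degree),
so named notation is a convenient way to override just one of them; a minimal sketch,
where the '50M' memory value is purely illustrative:
SQL> exec ctx_ddl.sync_index(idx_name => 'doc_index', memory => '50M');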
Specifying the index name is sufficient, but what comes into play with real-world-sized
data sets is how much memory you can afford to devote to the operation. What does the
SYNC_INDEX procedure do, exactly? We don't know, because Oracle wrapped the package body.
Since no value was given for the memory parameter, does that mean no memory was used? The
answer is no, and the way to see the default value is to query a CTXSYS view.
The path to the description of Text packages is a redirection from the PL/SQL Packages and
Types Reference guide to the Oracle Text Reference guide. The description of CTX_DDL
says that the memory parameter uses the system value for DEFAULT_INDEX_MEMORY.
Don't go looking for this via SHOW PARAMETER as "system" does not refer to the instance
initialization parameters. Instead, query the CTX_PARAMETERS table, and in my case, the
value for this parameter is 12582912 bytes, or about 12MB. The max setting is 1GB.
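A minimal sketch of that lookup (CTX_PARAMETERS is a view owned by CTXSYS; the LIKE
filter also picks up the related MAX_INDEX_MEMORY setting):
SQL> select par_name, par_value
2 from ctx_parameters
3 where par_name like '%INDEX_MEMORY';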
As another example of using Oracle Text, this time with a different index type, we can
use the catalog search example shown in the Oracle Text Application Developer's Guide.
The example is based on using more than one column for an index and, in this case,
supports searching for a string and sorting at the same time. The steps are to create a
table, populate it, create a sub-index (the additional column we'll be using in addition
to the main index), create the catalog index, and then query the table.
Create table and populate it
CREATE TABLE auction(
item_id NUMBER,
title VARCHAR2(100),
category_id NUMBER,
price NUMBER,
bid_close DATE);
INSERT INTO AUCTION VALUES
(1, 'NIKON CAMERA', 1, 400, '24-OCT-2002');
INSERT INTO AUCTION VALUES
(2, 'OLYMPUS CAMERA', 1, 300, '25-OCT-2002');
INSERT INTO AUCTION VALUES
(3, 'PENTAX CAMERA', 1, 200, '26-OCT-2002');
INSERT INTO AUCTION VALUES
(4, 'CANON CAMERA', 1, 250, '27-OCT-2002');
commit;
Create the sub-index
EXEC CTX_DDL.CREATE_INDEX_SET('auction_iset');
EXEC CTX_DDL.ADD_INDEX('auction_iset','price');
Create the catalog index
CREATE INDEX auction_titlex ON AUCTION(title)
INDEXTYPE IS CTXSYS.CTXCAT
PARAMETERS ('index set auction_iset');
And now we're ready to query.
SQL> COLUMN title FORMAT a40;
SQL> SELECT title, price FROM auction
2 WHERE CATSEARCH(title, 'CAMERA', 'order by price')> 0;
TITLE PRICE
---------------------------------------- ----------
PENTAX CAMERA 200
CANON CAMERA 250
OLYMPUS CAMERA 300
NIKON CAMERA 400
Overall, pretty easy, except the documentation has a couple of typographical Easter eggs
that make some statements fail. The first CTX_DDL call should be CREATE_INDEX_SET, not
CREATE.INDEXT_SET. The "sub-index A" comment at the end of an EXEC statement does
not work well either, so just omit it as I did above.
Finally, what happens when the base table is updated (or, more generally, when DML is
applied against it)? With the catalog index, the index is synchronized for you. Given
that DML is taking place on the base table and an index is being updated, is there an
implicit commit taking place? The answer is no. You can test this by running the query
for the two new records in your session, and then running the same query in another
session. So, one other potential gotcha in the documentation is the absence of a COMMIT statement.
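A minimal sketch of that test, reusing the AUCTION table from above (the new row's values
are made up for illustration):
-- Session 1: the CTXCAT index is maintained as part of the DML itself
INSERT INTO auction VALUES (5, 'KODAK CAMERA', 1, 150, '28-OCT-2002');
-- Session 1 sees the new row immediately, with no call to CTX_DDL.SYNC_INDEX:
SELECT title, price FROM auction
WHERE CATSEARCH(title, 'KODAK', NULL) > 0;
-- A second session will not see the row until session 1 issues:
COMMIT;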
In Closing
So far, we've seen two working examples of different index types used by Oracle Text. The
simple things are simple to do; and likewise, the harder things are harder too. Next time,
we'll take a look at some of the more complex features and usage of this tool.
Additional Resources
How to identify PL/SQL performance bottlenecks using DBMS_PROFILER
Posted by Ramasundaram Perumal at 8:39 pm, Jun 15, 2009
The DBMS_PROFILER package provides an interface to profile existing PL/SQL applications and
identify performance bottlenecks. You can then collect and persistently store the PL/SQL profiler
data.
This package enables the collection of profiler (performance) data for performance improvement or
for determining code coverage for PL/SQL applications. Application developers can use code
coverage data to focus their incremental testing efforts.
With this interface, you can generate profiling information for all named library units that are
executed in a session. The profiler gathers information at the PL/SQL virtual machine level. This
information includes the total number of times each line has been executed, the total amount of time
that has been spent executing that line, and the minimum and maximum times that have been spent
on a particular execution of that line.
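As a side note on code coverage, executable lines that were never reached show up in the
profiler data with TOTAL_OCCUR = 0; a minimal sketch of a coverage-gap query (the runid
of 1 is just an example):
SQL> SELECT u.unit_name, d.line#, s.text
2 FROM plsql_profiler_units u, plsql_profiler_data d, user_source s
3 WHERE u.runid = 1
4 AND d.runid = u.runid
5 AND d.unit_number = u.unit_number
6 AND d.total_occur = 0
7 AND s.name = u.unit_name
8 AND s.type = u.unit_type
9 AND s.line = d.line#;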
You can create the profiler tables by running proftab.sql, which is located in
the $ORACLE_HOME/rdbms/admin directory.
Let us say that, within a large loop, we decided to insert data into a table
using INSERT…SELECT…CONNECT BY or INSERT…SELECT…FROM all_objects, and we would like
to find out which one is more efficient. The example is just to illustrate the usage
of DBMS_PROFILER in its simplest form.
- Create a test table t
SQL> CREATE TABLE t (f VARCHAR2(10), n NUMBER(10));

Table created.
- Create two test procedures, proc1 and proc2
SQL> CREATE OR REPLACE PROCEDURE proc1
2 AS
3 BEGIN
4 DBMS_PROFILER.start_profiler('proc1');
5 DBMS_OUTPUT.put_line('Starting CONNECT BY Insert');
6 FOR i IN 1..1000
7 LOOP
8 INSERT INTO t (f, n) SELECT 'proc1', rownum FROM dual CONNECT BY level <= 100;
9 COMMIT;
10 END LOOP;
11 DBMS_OUTPUT.put_line('Finished CONNECT BY Insert');
12 DBMS_PROFILER.stop_profiler;
13 END;
14 /
Procedure created.
SQL> CREATE OR REPLACE PROCEDURE proc2
2 AS
3 BEGIN
4 DBMS_PROFILER.start_profiler('proc2');
5 DBMS_OUTPUT.put_line('Starting ALL_OBJECTS Insert');
6 FOR i IN 1..1000
7 LOOP
8 INSERT INTO t (f, n) SELECT 'proc2', rownum FROM all_objects WHERE rownum <= 100;
9 COMMIT;
10 END LOOP;
11 DBMS_OUTPUT.put_line('Finished ALL_OBJECTS Insert');
12 DBMS_PROFILER.stop_profiler;
13 END;
14 /
Procedure created.
- Now, let us execute procedures proc1 and proc2
SQL> SET SERVEROUTPUT ON
SQL> EXECUTE proc1;
Starting CONNECT BY Insert
Finished CONNECT BY Insert
PL/SQL procedure successfully completed.
SQL> EXECUTE proc2;
Starting ALL_OBJECTS Insert
Finished ALL_OBJECTS Insert
PL/SQL procedure successfully completed.
- Examine the profiler data by running the query below:
SQL> BREAK ON runid ON run_owner ON run_comment ON run_secs
SQL> SELECT a.runid, a.run_owner, a.run_comment,
2 a.run_total_time / 1000000000 run_secs, c.total_occur,
3 c.total_time / 1000000000 line_total_secs, c.line#, u.text
4 FROM plsql_profiler_runs a,
5 plsql_profiler_units b,
6 plsql_profiler_data c,
7 user_source u
8 WHERE a.runid = b.runid
9 AND b.runid = c.runid
10 AND b.unit_number = c.unit_number
11 AND b.unit_name = u.name
12 AND c.line# = u.line
13 /
RUNID RUN_OWNER RUN_COMMEN RUN_SECS TOTAL_OCCUR LINE_TOTAL_SECS LINE# TEXT
----- --------- ---------- ---------- ----------- --------------- ----- -------------------------------------------------------------
1 PERUMAL proc1 15.17891 0 .00000 1 PROCEDURE proc1
0 .00000 4 DBMS_PROFILER.start_profiler('proc1');
1 .00037 5 DBMS_OUTPUT.put_line('Starting CONNECT BY Insert');
1001 .00984 6 FOR i IN 1..1000
1000 13.93305 8 INSERT INTO t (f, n) SELECT 'proc1', rownum FROM dual
CONNECT BY level <= 100;
1000 1.06284 9 COMMIT;
1 .00003 11 DBMS_OUTPUT.put_line('Finished CONNECT BY Insert');
1 .00002 12 DBMS_PROFILER.stop_profiler;
0 .00000 13 END;
2 PERUMAL proc2 121.93940 0 .00000 1 PROCEDURE proc2
0 .00000 4 DBMS_PROFILER.start_profiler('proc2');
1 .00031 5 DBMS_OUTPUT.put_line('Starting ALL_OBJECTS Insert');
1001 .01023 6 FOR i IN 1..1000
1000 119.87532 8 INSERT INTO t (f, n) SELECT 'proc2', rownum FROM all_objects
WHERE rownum <= 100;
1000 1.98504 9 COMMIT;
1 .00003 11 DBMS_OUTPUT.put_line('Finished ALL_OBJECTS Insert');
1 .00002 12 DBMS_PROFILER.stop_profiler;
0 .00000 13 END;
The INSERT statement in proc1 took 13.93305 seconds for 1000 executions, while the one in
proc2 took 119.87532 seconds for 1000 executions. Imagine a PL/SQL procedure or function
with thousands of lines of code; with this, you can pinpoint the section of code that
deserves your tuning attention. Refer to the documentation, "Oracle Database PL/SQL
Packages and Types Reference 11g Release 1 (11.1)", for more details on the profiler.
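Profiler runs accumulate in these tables across sessions, and DBMS_PROFILER itself does
not appear to offer a purge routine, so old runs can be removed directly; a minimal
sketch (the runid of 1 is just an example; delete the child rows first):
SQL> DELETE FROM plsql_profiler_data WHERE runid = 1;
SQL> DELETE FROM plsql_profiler_units WHERE runid = 1;
SQL> DELETE FROM plsql_profiler_runs WHERE runid = 1;
SQL> COMMIT;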
You can also download profiler.sql, which reports the PL/SQL profiler data generated by
DBMS_PROFILER in HTML format; refer to Metalink Document ID 243755.1, "Implementing and
Using the PL/SQL Profiler", for details.