Oracle Illustrated


February 21, 2010  7:33 AM

Migrating from 9i to 11g – Hash full / outer joins

Lakshmi Venkatesh Profile: Lakshmi Venkatesh

Hash outer joins

This is a new capability of the optimizer: using a hash algorithm to execute outer joins. Oracle suggests that this new hash outer join can result in a 50% reduction in LIO (logical I/O). A full outer join returns all matching rows plus the non-matching rows from both tables in question.

select
empname,
empno
from
emp full outer join dept using (deptno)
order by 1;

Execution Plan
——————————————————————————————–
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
——————————————————————————————–
| 0 | SELECT STATEMENT | | 40541 | 1187K| | 480 (1)| 00:00:06 |
| 1 | SORT ORDER BY | | 40541 | 1187K| 3192K| 480 (1)| 00:00:06 |
| 2 | VIEW | VW_FOJ_0 | 40541 | 1187K| | 145 (1)| 00:00:02 |
|* 3 | HASH JOIN FULL OUTER | | 40541 | 1464K| | 145 (1)| 00:00:02 |
| 4 | INDEX FAST FULL SCAN| DEPT_IDX | 5 | 20 | | 2 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | EMP | 40541 | 1306K| | 143 (1)| 00:00:02 |
——————————————————————————————–

Notice the HASH JOIN FULL OUTER step in the execution plan. Keep in mind that there is no hint to force the optimizer to use, or not to use, the HASH OUTER JOIN method.

Hash Full Joins

With Oracle 11g comes a new access method called HASH JOIN FULL OUTER, which produces about 50% fewer consistent gets.

The optimizer uses hash joins for processing an outer join if the data volume is high enough to make the hash join method efficient, or if it is not possible to drive from the outer table to the inner table. The order of tables is determined by cost. The outer table, including preserved rows, may be used to build the hash table, or it may be used to probe it.
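The plan below was captured without its statement; a query of roughly this shape would produce it (the aggregate and the (+) side are inferred from the HASH GROUP BY step and the predicate section, so treat this as a sketch):

-- Hypothetical query: emp rows preserved, dept rows optional, result aggregated.
select e.deptno, count(*)
from   emp e, dept d
where  e.deptno = d.deptno (+)
group  by e.deptno;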

——————————————————————————————-
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
——————————————————————————————-
| 0 | SELECT STATEMENT | | 40192 | 1256K| | 500 (1)| 00:00:06 |
| 1 | HASH GROUP BY | | 40192 | 1256K| 3184K| 500 (1)| 00:00:06 |
|* 2 | HASH JOIN RIGHT OUTER| | 40541 | 1266K| | 144 (1)| 00:00:02 |
| 3 | INDEX FULL SCAN | DEPT_IDX | 5 | 20 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | EMP | 40541 | 1108K| | 143 (1)| 00:00:02 |
——————————————————————————————-

Predicate Information (identified by operation id):
—————————————————
2 – access("E"."DEPTNO"="D"."DEPTNO"(+))

Statistics
———————————————————-
1 recursive calls
0 db block gets
522 consistent gets
0 physical reads
0 redo size
1503026 bytes sent via SQL*Net to client
54474 bytes received via SQL*Net from client
2704 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
40541 rows processed

This could be particularly useful when we want to view differences between two tables.
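For instance, a sketch of such a difference check (emp_backup is an illustrative copy of emp, not a table from this post):

-- Rows that exist on only one side come back with NULLs on the other side.
select a.empno old_empno, b.empno new_empno
from   emp_backup a full outer join emp b on (a.empno = b.empno)
where  a.empno is null
or     b.empno is null;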

February 21, 2010  7:29 AM

Migrating from 9i to 11g – Fast dual – 10g


Fast dual is available from Oracle 10g. FAST DUAL performs 0 consistent gets, which means it does not perform any I/O. We use the DUAL table for dummy selects or calculations, and each such query used to do logical I/O; with this new feature many of those logical I/Os are eliminated. In Oracle 9i, every query against DUAL performed a FULL table scan and did logical I/O every time.

Oracle 9i
Execution Plan
———————————————————-
Plan hash value: 272002086

————————————————————————–
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
————————————————————————–
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
————————————————————————–

Statistics
———————————————————-
24 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
414 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Second time :

Statistics
———————————————————-
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
414 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Oracle 11g
SQL> select 1 from dual;

Elapsed: 00:00:00.04

Execution Plan
———————————————————-
Plan hash value: 1388734953

—————————————————————–
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
—————————————————————–
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
—————————————————————–

Statistics
———————————————————-
1 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
411 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Really nice feature – especially when we use dual in too many places !!


February 21, 2010  7:26 AM

Migrating from 9i to 11g – Flashback


This is one of my favorite topics !!

Flashback feature – available since Oracle 9i. There are quite a few new features in Oracle 10g.

The new and improved flashback technique performs recovery at a faster pace. Flashback features offer the capability to query past versions of schema objects, query historical data, perform change analysis, or perform self-service repair to recover from logical corruptions, all while the database is online.

Advantages:

- 24X7 database availability
- Saves time

Includes

Flashback Database

Flash Recovery Area
Enable Flashback Logging through Enterprise Manager or by issuing SQL command

It is faster than traditional recovery. The time taken to restore the database is usually based on the number of transactions that need to be recovered rather than on the size of the database. The older recovery methods use REDO LOG files to recover the database. Flashback Database introduces a new type of log called the FLASHBACK DATABASE LOG.

How does it work?

The Oracle database periodically logs before-images of blocks into the flashback database logs. These block images are used to quickly restore the database during the flashback phase.

This is DBA activity -

1. The database should be in archive mode -

Executing the ARCHIVE LOG LIST command at the SQL*Plus prompt tells us whether the database is in archive mode or not:

ARCHIVE LOG LIST;

Database log mode No Archive Mode
Automatic archival Disabled
Archive destination F:\log\archive
Oldest online log sequence 1
Current log sequence 2

If it is in "No Archive Mode" then the following needs to be done to put it in archive mode:

a. Shut down the database:

SQL> SHUTDOWN IMMEDIATE;

b. Open init.ora file and set following parameters:

log_archive_dest_1='LOCATION=F:\log\archive'
log_archive_dest_2='LOCATION=F:\log\archive1' /* Optional – only if there is a second archive destination */
log_archive_format='%t_%s.ARCH'

c. Start database in mount exclusive mode

STARTUP MOUNT EXCLUSIVE PFILE=init.ora;

d. Start the database in ARCHIVELOG mode as follows:

ALTER DATABASE ARCHIVELOG;

e. Open the database

ALTER DATABASE OPEN;

2. Assign flashback recovery log file path, size and log retention values in init.ora file:

DB_RECOVERY_FILE_DEST='F:\log\archive\flasharea'

Set the following :
DB_RECOVERY_FILE_DEST_SIZE
DB_FLASHBACK_RETENTION_TARGET

3. Only SYSDBA can do this – open the DB in MOUNT EXCLUSIVE mode:

SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE FLASHBACK ON;

4. Check whether flashback is enabled:
select log_mode, flashback_on from v$database;
LOG_MODE FLASHBACK ON
———— ——————-
ARCHIVELOG YES

To disable flashback database – ALTER DATABASE FLASHBACK OFF

Now, how to flashback the database

FLASHBACK DATABASE TO TIMESTAMP (SYSDATE - 1/24);   -- for example, one hour back

We can also use SCN number to flashback the database – FLASHBACK DATABASE TO SCN
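A sketch of the SCN variant – note the current SCN before the damage happens (DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER returns it; the SCN value here is illustrative):

select dbms_flashback.get_system_change_number from dual;   -- say it returns 1234567

-- ... later, after a logical corruption ...
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE;
FLASHBACK DATABASE TO SCN 1234567;
ALTER DATABASE OPEN RESETLOGS;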

Oracle 9i
Not possible – only traditional recovery is possible.

Oracle 11g

Flashback Database (10g)
Database needs to be in Archive mode.

Recovery area param needs to be changed – DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE

select flashback_on from v$database;
NO

To enable it the following needs to be done.

STARTUP MOUNT EXCLUSIVE;
ALTER DATABASE FLASHBACK ON;

Set up DB_FLASHBACK_RETENTION_TARGET

FLASHBACK DATABASE TO TIME = TO_DATE('02/13/10 12:00:00','MM/DD/YY HH:MI:SS');

– Purely DBA related

****************************************************************************

Flashback Drop

Recycle Bin – automatically enabled with Oracle Database 10g

To check the details (SHOW PARAMETER recycle):

NAME TYPE VALUE
———————————— ———– ——————
buffer_pool_recycle string
db_recycle_cache_size big integer 0
recyclebin string on

Sometimes we accidentally drop objects in the database. Flashback Drop provides an option to recover dropped objects such as tables, triggers, indexes and constraints. When a table is dropped it is not actually dropped – it is just renamed, and the renamed table is listed in the recycle bin. We can either drop the table permanently using PURGE or recover it using flashback. To drop a table without making it part of the recycle bin, we can issue the DROP TABLE ... PURGE command. USER_RECYCLEBIN / DBA_RECYCLEBIN contain the list of dropped objects.

I really wasted a lot of time working on FLASHBACK DROP as the SYSTEM user – it doesn't work under the SYSTEM / SYSDBA user.

Flashback Drop (10g) – does not work from the SYSDBA user

DROP TABLE test_readonly;

SHOW recyclebin

– Note: when a table is dropped it is just renamed; it is not actually dropped while the RECYCLE mode is ON, so the space it occupies remains allocated.

SQL> select object_name, original_name, operation from user_recyclebin;

OBJECT_NAME ORIGINAL_NAME OPERATION
—————————— ——————————– ———
BIN$uCJ5kSosTM+s7chj6cJeGQ==$0 TEST_READONLY DROP
BIN$wl9CUjW7RsapzkrsoHDm+w==$0 TEST_READONLY DROP

Note: TEST_READONLY was dropped, recreated then dropped again.

To retrieve the table that was dropped, issue the following command:

SQL> flashback table "BIN$uCJ5kSosTM+s7chj6cJeGQ==$0" to before drop;

Flashback complete.

Elapsed: 00:00:02.71
SQL> select * from test_Readonly;

A
———-
1

Elapsed: 00:00:00.06
SQL>

Let's try to retrieve the other table also – it should obviously result in an error.

SQL> select object_name, original_name, operation from user_recyclebin;

OBJECT_NAME ORIGINAL_NAME OPERATION
—————————— ——————————– ———
BIN$wl9CUjW7RsapzkrsoHDm+w==$0 TEST_READONLY DROP

Elapsed: 00:00:00.00
SQL> flashback table "BIN$wl9CUjW7RsapzkrsoHDm+w==$0" to before drop;
flashback table "BIN$wl9CUjW7RsapzkrsoHDm+w==$0" to before drop
*
ERROR at line 1:
ORA-38312: original name is used by an existing object

Elapsed: 00:00:00.07
SQL>


To drop a table without storing it in recyclebin -

SQL> DROP TABLE test_Readonly purge;

Table dropped.

Elapsed: 00:00:00.40
SQL> select * from user_recyclebin;

no rows selected

Elapsed: 00:00:00.00

To purge the table that was dropped

PURGE TABLE "BIN$wl9CUjW7RsapzkrsoHDm+w==$0";

Table purged.

Elapsed: 00:00:02.92

To purge the recyclebin

PURGE recyclebin;

Recyclebin purged.

****************************************************************************

UNDO Tablespace dependent Flashback Technologies

The following are possible with UNDO tablespace dependent flashback technology; each is described below.

Flashback Query
Flashback Table
Flashback Versions Query
Flashback Transactions Query

Flashback Table (10g)

When there is an application error or human error, we may prefer to flash back one or more tables to a specific point in time. Flashback Table enables us to do this. To perform a table flashback, row movement must be enabled on the table. Also, the FLASHBACK ANY TABLE privilege should be granted to the user performing the operation (GRANT FLASHBACK ANY TABLE TO <user>).

ALTER TABLE EMP ENABLE ROW MOVEMENT

ALTER TABLE DEPT ENABLE ROW MOVEMENT

FLASHBACK TABLE emp, dept TO TIMESTAMP
to_timestamp ('12-Feb-2010 16:00:00','dd-mon-yyyy hh24:mi:ss');

Flashback complete

Flashback query (10g)

Enables reading a table's data as it was at a point in the past.
Enables quickly comparing current vs past data – specifically useful for large tables. Sometimes we may need to compare two large tables; this option enables comparing the data without creating a new backup table.
Enables recovering deleted or changed data.

select a.ename, a.sal new_sal, b.sal old_sal,
(b.sal - a.sal) sal_diff
from emp a, emp AS OF TIMESTAMP
to_timestamp ('12-Feb-2010 16:00:00','dd-mon-yyyy hh24:mi:ss') b
where a.empno = b.empno;

– Same as in 9i
Flashback Version query (10g)

Flashback Version Query enables row-level flashback.

A version is transaction based:

Commit – creates a version
Rollback – does not create a version

Flashback Version Query shows all the versions of data between two SCNs or timestamps. It depends on UNDO (UNDO_RETENTION).

Query the table with the new SQL operator VERSIONS BETWEEN in the FROM clause, and optionally include the new pseudo-columns:

VERSIONS_STARTTIME / VERSIONS_STARTSCN – starting timestamp or SCN when the row version was created
VERSIONS_ENDTIME / VERSIONS_ENDSCN – ending timestamp or SCN when the row version expired
VERSIONS_OPERATION – type of operation that created the row version ((I)nsert, (U)pdate or (D)elete)
VERSIONS_XID – unique transaction identifier for the row version

insert into emp select rownum*5, ename, job, mgr, hiredate, sal, comm, deptno from emp

select versions_starttime, versions_endtime, versions_xid,
DECODE(versions_operation,'I','INSERT','U','UPDATE','D','DELETE','ORIGINAL') as operation,
empno, deptno
from emp versions between
timestamp to_timestamp('17-Feb-2010 21:00:00','dd-mon-yyyy hh24:mi:ss')
and to_timestamp('17-Feb-2010 21:00:00','dd-mon-yyyy hh24:mi:ss')
order by VERSIONS_STARTTIME
/

ORIGINAL 7369 20
ORIGINAL 7499 30
ORIGINAL 7521 30
ORIGINAL 7566 20
ORIGINAL 7654 30
ORIGINAL 7698 30
ORIGINAL 7934 10
ORIGINAL 7788 20
ORIGINAL 7839 10
ORIGINAL 7844 30
ORIGINAL 7876 20
ORIGINAL 7900 30
ORIGINAL 7902 20
ORIGINAL 7782 10

14 rows selected.

Note: We cannot use temporary tables / external tables with versioning queries.

Flashback Transaction query (10g)

Sometimes we realize that the data in a table has been inappropriately changed. To get this rectified, we can use multiple flashback queries and flashback version queries to view row data at specific points in time, along with its changes.

delete from emp where empno in (select rownum*5 from emp);

14 rows deleted.

SELECT xid, operation, table_name, undo_sql FROM
flashback_transaction_query
WHERE xid IN (SELECT versions_xid FROM emp
VERSIONS BETWEEN TIMESTAMP
to_timestamp('17-Feb-2010 21:00:00','dd-mon-yyyy hh24:mi:ss')
and to_timestamp('17-Feb-2010 21:00:00','dd-mon-yyyy hh24:mi:ss')
WHERE versions_xid IS NOT NULL)
ORDER BY commit_timestamp
/


February 21, 2010  7:18 AM

Migrating from 9i to 11g – External table


In Oracle 9i we can only read from external tables – from Oracle 10g we can also write to an external table !! i.e., both LOADING and UNLOADING are possible via external tables.
Let's take a look at a quick example of the unloading part:
create or replace directory ext_dir as 'F:\app\Luxananda\oradata\lux';

CREATE TABLE external_test
ORGANIZATION EXTERNAL
(TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY ext_dir LOCATION ('DEPT.DAT'))
reject limit unlimited
AS select * from dept;

As we have not specified a log file, one got created automatically:

EXTERNAL_TEST_2672_4200.log

SQL> select * from external_test ;

DEPTNO DNAME LOC
———- ———- ———
380 ACCOUNTING SINGAPORE
630 IT SINGAPORE
120 RESEARCH SINGAPORE
320 OPERATIONS SINGAPORE
550 SALES SINGAPORE

What do the DEPT.DAT contents look like? It's in XML format !!

Now, let's query: SELECT dbms_metadata.get_ddl('TABLE','EXTERNAL_TEST') FROM dual; – it returns the table creation DDL.

DBMS_METADATA.GET_DDL('TABLE','EXTERNAL_TEST')
CREATE TABLE "SYSTEM"."EXTERNAL_TEST"
( "DEPTNO" NUMBER,
"DNAME" VARCHAR2(10),
"LOC" CHAR(9)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY "EXT_DIR"
LOCATION
( 'DEPT.DAT'
)
) REJECT LIMIT UNLIMITED

Now, if we create another external table in some other environment and point it at the same file, we can read the data from that file.
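A sketch of that reading side – the column list must be spelled out because there is no source table to copy it from (the table name is illustrative; the directory and file are the ones created above):

CREATE TABLE external_test_read
( deptno NUMBER,
  dname  VARCHAR2(10),
  loc    CHAR(9)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY ext_dir
  LOCATION ('DEPT.DAT')
);

select * from external_test_read;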


February 21, 2010  7:17 AM

Migrating from 9i to 11g – DML Error logging


When we bulk process records – especially when we need to perform DML operations based on global temporary tables – it becomes really difficult to trap errors unless we use bulk exception handling. Error handling becomes a separate piece of coding by itself !!

Create table emp1 (empno number, still_employed varchar2(1));

create unique index emp1_idx on emp1 (empno);

Oracle 9i

declare
TYPE empno_tab_t IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
emp_tab empno_tab_t;
bulk_errors EXCEPTION;
PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
begin
-- ... populate emp_tab ...
forall j in emp_tab.first .. emp_tab.last save exceptions
INSERT INTO emp1 VALUES (emp_tab(j), 'Y');
exception when bulk_errors then
for j in 1 .. sql%bulk_exceptions.Count
loop
dbms_output.put_line ('Error -' ||
To_Char(sql%bulk_exceptions(j).error_index) || ': ' ||
Sqlerrm(-SQL%bulk_exceptions(j).error_code));
end loop;
end;
/

Oracle 11g
exec dbms_errlog.CREATE_ERROR_LOG ('EMP1','ERR_EMP1')

INSERT

insert into emp1 values (1, 'Y');
insert into emp1 values (2, 'N');
insert into emp1 values (3, 'Y');

insert into emp1 SELECT rownum, 'N' from dual connect by level <= 20
log errors into err_emp1
reject limit 50;
17 rows created.

SQL> insert into emp1 SELECT rownum, 'N' from dual connect by level <= 20
2 log errors into err_emp1
3 reject limit 10;
insert into emp1 SELECT rownum, 'N' from dual connect by level <= 20
*
ERROR at line 1:
ORA-00001: unique constraint (SYSTEM.EMP1_IDX) violated

SELECT * FROM ERR_EMP1

Output-

1 "ORA-00001: unique constraint (SYSTEM.EMP1_IDX) violated" I 1 N

UPDATE

SQL> desc emp1
Name Null? Type
—————————————– ——– ——————-

EMPNO NUMBER
STILL_EMPLOYED VARCHAR2(1)

UPDATE emp1
SET still_employed = decode(still_employed,'Y','YES','N') where
empno in (1,6)
log errors into err_emp1
reject limit 2;
1 row updated.

UPDATE emp1
SET still_employed = 'YES' where
empno between 1 and 20
log errors into err_emp1
reject limit 2;
ERROR at line 2:
ORA-12899: value too large for column "SYSTEM"."EMP1"."STILL_EMPLOYED" (actual: 3, maximum: 1)

DELETE

DELETE FROM emp1
where
empno between 1 and 20
log errors into err_emp1
reject limit 2;
(Mainly useful for referential integrity errors)

Also possible with Merge
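A sketch of the MERGE variant against the same emp1 / err_emp1 pair (here 'YES' is deliberately too wide for the one-character column, so the failing rows land in the error table instead of aborting the whole statement):

MERGE INTO emp1 t
USING (select rownum empno, 'YES' still_employed
       from dual connect by level <= 20) s
ON (t.empno = s.empno)
WHEN MATCHED THEN
  UPDATE SET t.still_employed = s.still_employed
WHEN NOT MATCHED THEN
  INSERT (empno, still_employed) VALUES (s.empno, s.still_employed)
LOG ERRORS INTO err_emp1 REJECT LIMIT 50;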

The coding effort becomes really, really small !!

As per Oracle documentation -

Error Logging Restrictions and Caveats-

- Oracle Database logs the following errors during DML operations:
- Column values that are too large
- Constraint violations (NOT NULL, unique, referential, and check constraints)
- Errors raised during trigger execution
- Errors resulting from type conversion between a column in a subquery and the corresponding column of the table
- Partition mapping errors
- Certain MERGE operation errors (ORA-30926: Unable to get a stable set of rows for MERGE operation.)


February 21, 2010  7:11 AM

Migrating from 9i to 11g – Dynamic SQL for PLSQL – Functional completeness


Let's look at the functional completeness of dynamic SQL for PL/SQL in this article.

It allows us to write dynamic SQL statements larger than 32KB: the DBMS_SQL.PARSE procedure is now overloaded to accept a CLOB. Also, a REF cursor can now be converted into a DBMS_SQL cursor and vice versa.
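A minimal sketch of the CLOB overload (the statement here is tiny; the point is only that the parse call now accepts a CLOB, so a generated statement may exceed 32KB):

DECLARE
  big_sql CLOB;
  cur     NUMBER;
  rc      NUMBER;
BEGIN
  big_sql := 'begin null; end;';   -- imagine a generated block far beyond 32KB
  cur := dbms_sql.open_cursor;
  dbms_sql.parse(cur, big_sql, dbms_sql.native);   -- CLOB overload, new in 11g
  rc := dbms_sql.execute(cur);
  dbms_sql.close_cursor(cur);
END;
/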

DBMS_SQL – used to execute dynamic SQL statements that have unknown in / out variables – similar to Method 4 in Pro*C. When we do not know at compile time which columns a select statement will return, or their data types, DBMS_SQL is the best way to go.

Native dynamic SQL – available since Oracle 8i, this enables us to perform dynamic SQL with less code; the variables must be known at compile time. We can use the cursor attributes %ISOPEN, %FOUND, %NOTFOUND and %ROWCOUNT.

Oracle 11g
CREATE OR REPLACE PROCEDURE native_dyn_sql
(in_source_code IN CLOB) AS
BEGIN
EXECUTE IMMEDIATE in_source_code;
dbms_output.put_line('The value of source code is ' || in_source_code);
END native_dyn_sql;
/

exec native_dyn_sql ('begin dbms_output.put_line(''hello how are you !!''); end;')

SQL> exec native_dyn_sql ('begin dbms_output.put_line(''hello how are you !!''); end;')

hello how are you !!
The value of source code is begin dbms_output.put_line('hello how are you !!');
end;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.06
SQL>

Amazing !!

Now it is possible to switch between DBMS_SQL and native dynamic SQL – rocking !! This was not possible prior to Oracle 11g. It can be achieved with DBMS_SQL.TO_REFCURSOR and DBMS_SQL.TO_CURSOR_NUMBER.

Oracle 11g
Create or replace procedure convert_native_dbms
AS
TYPE native_cursor IS REF CURSOR;
native_cursor_tab native_cursor;
TYPE empno_list IS TABLE OF NUMBER;
empno_tab empno_list;
cursor_handle NUMBER;
cursor_return NUMBER;
BEGIN
cursor_handle := dbms_sql.open_cursor;
dbms_sql.parse(cursor_handle, 'select empno from emp where rownum < 10', dbms_sql.native);
cursor_return := dbms_sql.execute(cursor_handle);
native_cursor_tab := DBMS_SQL.TO_REFCURSOR(cursor_handle);
FETCH native_cursor_tab BULK COLLECT INTO empno_tab;
FOR i IN 1 .. empno_tab.COUNT LOOP
dbms_output.put_line('The value of empno_Tab is - ' || empno_tab(i));
END LOOP;
CLOSE native_cursor_tab;
END;
/

SQL> exec convert_native_dbms;
The value of empno_Tab is – 1
The value of empno_Tab is – 2
The value of empno_Tab is – 3
The value of empno_Tab is – 4
The value of empno_Tab is – 5
The value of empno_Tab is – 6
The value of empno_Tab is – 7
The value of empno_Tab is – 8
The value of empno_Tab is – 9

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.42

Also, it is equally possible to transform REF CURSOR into DBMS_SQL cursor – this can be achieved using DBMS_SQL.TO_CURSOR_NUMBER.

Oracle 11g
Create or replace procedure convert_dbms_native( in_cursor IN VARCHAR2)
AS
TYPE native_cursor IS REF CURSOR;
native_cursor_tab native_cursor;
TYPE empno_list IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
empno_tab empno_list;
cursor_handle NUMBER;
cursor_return NUMBER;
v_emp_no NUMBER;
v_num_columns number;
v_describe dbms_sql.desc_tab;
BEGIN
OPEN native_cursor_tab FOR in_cursor;
cursor_handle := DBMS_SQL.TO_CURSOR_NUMBER(native_cursor_tab);
dbms_sql.describe_columns(cursor_handle, v_num_columns, v_describe);

FOR i in 1 .. v_num_columns LOOP
if v_describe(i).col_type = 2 THEN   /* col_type 2 = NUMBER */
dbms_sql.define_column(cursor_handle, i, v_emp_no);
END IF;
END LOOP;

WHILE DBMS_SQL.FETCH_ROWS(cursor_handle) > 0 LOOP
FOR i in 1 .. v_num_columns LOOP
if v_describe(i).col_type = 2 then
dbms_sql.column_value (cursor_handle, i, v_emp_no);
dbms_output.put_line('The value of empno is - ' || v_emp_no);
end if;
END LOOP;
END LOOP;
dbms_sql.close_cursor(cursor_handle);
END;
/

exec convert_dbms_native('select empno from emp where rownum <= 10')


February 21, 2010  7:04 AM

Migrating from 9i to 11g – Commit_write parameter


Feature available from Oracle 10g.

COMMIT_WRITE = '{IMMEDIATE | BATCH},{WAIT | NOWAIT}'
COMMIT_WAIT = { NOWAIT | WAIT | FORCE_WAIT }
COMMIT_POINT_STRENGTH – values range from 0 to 255
COMMIT_LOGGING = { IMMEDIATE | BATCH }

Examples:

Oracle 11g
COMMIT_WRITE – All to do with writing to REDO logs

COMMIT WRITE WAIT

Doesn't return until the redo information is written to the online redo log.

CREATE TABLE TEST_COMMIT
(a number,
b varchar2(100))
/

set timing on
declare
j number := 1;
BEGIN
FOR i IN 1 .. 50000 loop
insert into test_commit values (j, 'COMMIT WRITE WAIT');
j := j + 1;
end loop;
COMMIT WRITE WAIT;
END;
/

Elapsed: 00:00:25.90

COMMIT WRITE NOWAIT

Returns before the redo information is written to the online redo log.

truncate table test_commit;

set timing on
declare
j number := 1;
BEGIN
FOR i IN 1 .. 50000 loop
insert into test_commit values (j, 'COMMIT WRITE NOWAIT');
j := j+1;
END LOOP;
COMMIT WRITE NOWAIT;
END;
/

Elapsed: 00:00:08.76

COMMIT WRITE BATCH

Redo information writes are deferred.

truncate table test_commit;

set timing on
declare
j number := 1;
BEGIN
FOR i IN 1 .. 50000 loop
insert into test_commit values (j, 'COMMIT WRITE BATCH');
j := j+1;
END LOOP;
COMMIT WRITE BATCH;
END;
/

Elapsed: 00:00:05.45

COMMIT WRITE IMMEDIATE

Redo information is written immediately to the logs.

truncate table test_commit;

set timing on
declare
j number := 1;
BEGIN
FOR i IN 1 .. 50000 loop
insert into test_commit values (j, 'COMMIT WRITE IMMEDIATE');
j := j+1;
END LOOP;
COMMIT WRITE IMMEDIATE;
END;
/

Elapsed: 00:00:04.48

It can be set at SESSION / SYSTEM level.
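For example (using the value pairs listed at the top of this post):

ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';
-- or system-wide:
ALTER SYSTEM SET COMMIT_WRITE = 'IMMEDIATE,WAIT';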


February 21, 2010  7:02 AM

Migrating from 9i to 11g – Compression


Note: I have installed 11g on my laptop and tested the scripts from there – so the timing specified here might vary.

Compression has been available in Oracle for some time now. With Oracle 9i we can perform basic COMPRESSION, but with DML operations the compression goes for a toss: unless we perform direct-path inserts, compression does not work as expected. In Oracle 11g, COMPRESS FOR ALL OPERATIONS is available, which keeps the data compressed through DML operations as well – even with a 10x data load !! This minimizes the overhead of using compression for OLTP tables. For partitioned tables, compression can be controlled at the partition level, so the same table can have partitions compressed at very different levels. If compression is specified both at table level and at partition level, the partition-level setting overrides the table-level one – this allows reasonably fine-grained control in compressing and managing tables.

Advanced Compression requires an additional license in 11g.

Here the block is compressed, not the row. This does not reduce the speed of DMLs, because compression does not happen when a row is inserted into the table. It actually happens as a triggered, batch-mode event: rows are inserted uncompressed (the normal way), and after a certain number of rows have been inserted / updated in a block, compression kicks in on the inserted / updated rows in that block.

Important note: Compression is CPU intensive. So when the compression is enabled the CPU resource usage will be high.

How does compression work internally?

Oracle finds the repeating values in the compressed table and puts them near the header of the block, in a "symbol table". Each repeated value in a column is assigned a symbol that replaces the actual value in the rows, and the symbol is smaller than the original data. The more repeating data there is, the more compact the symbol table. Net net, one of the key factors driving this compression is the amount of repeating data in the table.

Compressed table access via dblinks

It is said that compressed data takes less time to fetch across dblinks / the network.

Note : The following is run from my PC

Oracle 9i (normal compression): create statement – 3072; insert 10 times of load – 81920
Oracle 11g (advanced compression): create statement – 3072; insert 10 times of load – 49152
(sizes in KB)

Testing -

Oracle 9i

Table creation – DIRECT PATH COMPRESSION Vs NORMAL Vs OLTP COMPRESSION

DIRECT PATH COMPRESSION

CREATE TABLE obj_compress
COMPRESS
AS SELECT * FROM all_objects
/

Timing – Elapsed: 00:00:10.73

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_COMPRESS'

3072

Uncompressed table

CREATE TABLE obj_uncompress
AS SELECT * FROM all_objects
/

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_UNCOMPRESS'

8192

Timing – Elapsed: 00:00:09.79

OLTP Compression

CREATE TABLE obj_compress
COMPRESS FOR ALL OPERATIONS
AS SELECT * FROM all_objects
/

Timing – Elapsed: 00:00:08.57

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_COMPRESS'

3072

Insertion based on select – Single load – DIRECT PATH COMPRESSION Vs NORMAL Vs OLTP COMPRESSION

DIRECT PATH COMPRESSION

Single Load

INSERT INTO obj_compress
SELECT * from all_objects

Timing – Elapsed: 00:00:08.43

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_COMPRESS'

10240

Normal

Single Load

INSERT INTO obj_uncompress
SELECT * from all_objects

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_UNCOMPRESS'

16384
OLTP Compression

Single Load

INSERT INTO obj_compress
SELECT * from all_objects

Timing – Elapsed: 00:00:13.29

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_COMPRESS'

6144

Compression ratio for Single load: 1.6 : 2.7 : 1 (compressed vs OLTP compressed vs uncompressed ratio)

Insertion based on select 10 times load – DIRECT PATH COMPRESSION Vs NORMAL Vs OLTP COMPRESSION

DIRECT PATH COMPRESSION

INSERT INTO obj_compress
SELECT a.* from all_objects a, (select * from dual connect by level <= 10) b

Timing : Elapsed: 00:00:18.37

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_COMPRESS'

81920

Normal

INSERT INTO obj_uncompress
SELECT a.* from all_objects a, (select * from dual connect by level <= 10) b

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_UNCOMPRESS'

97280

Timing – Elapsed: 00:00:20.39

OLTP Compression

INSERT INTO obj_compress
SELECT a.* from all_objects a, (select * from dual connect by level <= 10) b
Timing – Elapsed: 00:02:11.39

select sum(bytes)/1024 from user_segments where segment_name = 'OBJ_COMPRESS'

49152

Compression ratio for 10 times load: 1.18 : 1.97 : 1 (compressed vs OLTP compressed vs uncompressed ratio)

Note: OLTP UPDATE / DELETE / MERGE of 500 rows took too much time – I suspect mainly because I installed 11g on my laptop, which has limited RAM and CPU. Test it in an environment with proper resources.

For a one-time load the compression ratio was close to 2.7 : 1, whereas with a 10x load the ratio was 1.97 : 1. Based on the above example, at least 50-60% compression can be achieved. (The best-case figures are based on direct-path inserts; in real scenarios we perform inserts quite differently.)

Notice that inserting into a COMPRESS FOR ALL OPERATIONS table takes a little more time than inserting into a direct-path-only compressed table. This is due to the extra CPU work. Given the advantages of compression, the slight overhead on DML operations is acceptable !!

Benefits -
1. Reduction of disk space
2. Extra savings on I/O and cache efficiency – Oracle operates directly on the compressed data without incurring the overhead of uncompressing it first
3. Full table scans, wherever required, also become more efficient
4. Fair reduction in consistent gets, as fewer blocks are used to store the data

Net net – Oracle 11g’s OLTP compression is rocking !!


February 21, 2010  6:47 AM

Migrating from 9i to 11g – Compile time warnings


Compile-time warnings are available from Oracle 10g. To avoid problems at run time and to keep the code clean, we can turn on compile-time warning checks.

Syntax -
PLSQL_WARNINGS = 'value_clause' [, 'value_clause' ] ...
value_clause ::=
{ ENABLE | DISABLE | ERROR }:
{ ALL
| SEVERE
| INFORMATIONAL
| PERFORMANCE
| { integer
| (integer [, integer ] ...) } }

If a category is set to ERROR, the warning is treated as an error and the unit does not compile successfully. If it is set to ENABLE, the unit compiles, but with compilation warnings.
To re-compile an existing procedure:

ALTER PROCEDURE <procedure_name> COMPILE PLSQL_WARNINGS='ENABLE:ALL';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='ENABLE:ALL';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='DISABLE:ALL';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='ENABLE:PERFORMANCE';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='DISABLE:PERFORMANCE';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='ERROR:ALL';
ALTER SESSION / SYSTEM SET PLSQL_WARNINGS='ENABLE:SEVERE', 'DISABLE:PERFORMANCE';

ALTER SESSION SET PLSQL_WARNINGS='ERROR:ALL';

CREATE OR REPLACE PROCEDURE DUMMY
AS
v_count varchar2(10);
BEGIN
v_count := 5;
END;
/
Warning: Procedure created with compilation errors.
OUTPUT

SELECT STATUS FROM ALL_OBJECTS WHERE OBJECT_NAME = 'DUMMY'

INVALID

LINE/COL ERROR
——– —————————————————————–
5/5 PLW-07206: analysis suggests that the assignment to ‘V_COUNT’ may
be unnecessary

ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';

SP2-0804: Procedure created with compilation warnings

OUTPUT

SELECT STATUS FROM ALL_OBJECTS WHERE OBJECT_NAME = 'DUMMY'

VALID
ALTER SESSION SET PLSQL_WARNINGS='DISABLE:ALL';

CREATE OR REPLACE PROCEDURE DUMMY
AS
v_count varchar2(10);
BEGIN
v_count := 5;
END;
/

ALTER PROCEDURE dummy COMPILE PLSQL_WARNINGS='ENABLE:ALL'

SELECT STATUS FROM ALL_OBJECTS WHERE OBJECT_NAME = 'DUMMY'

VALID


February 21, 2010  6:45 AM

Migrating from 9i to 11g – CONTINUE


Feature available from Oracle 11g

This is used to control program flow within a loop. With earlier versions of Oracle we either needed to use an IF condition or EXIT from the loop. Now CONTINUE / CONTINUE WHEN can be used.

How is it currently done?

Oracle 9i
SET SERVEROUTPUT ON SIZE 100000
DECLARE
v_item_num NUMBER := 0;
BEGIN
WHILE ( v_item_num < 200)
LOOP
LOOP
v_item_num := v_item_num + 5;
IF MOD(v_item_num,25) = 0 THEN
EXIT;
END IF;
END LOOP;
Dbms_output.put_line('Item number = ' || v_item_num);
END LOOP;
END;
/

OUTPUT

Item number = 25
Item number = 50
Item number = 75
Item number = 100
Item number = 125
Item number = 150
Item number = 175
Item number = 200

PL/SQL procedure successfully completed.

Oracle 11g
CONTINUE WHEN
SET SERVEROUTPUT ON SIZE 100000
DECLARE
v_item_num NUMBER := 0;
BEGIN
WHILE ( v_item_num < 200)
LOOP
v_item_num := v_item_num + 5;
CONTINUE WHEN MOD(v_item_num,25) <> 0;
Dbms_output.put_line('Item number = ' || v_item_num);
END LOOP;
END;
/


OUTPUT

Item number = 25
Item number = 50
Item number = 75
Item number = 100
Item number = 125
Item number = 150
Item number = 175
Item number = 200

PL/SQL procedure successfully completed.

CONTINUE
SET SERVEROUTPUT ON SIZE 100000
DECLARE
v_item_num NUMBER := 0;
BEGIN
WHILE ( v_item_num < 200)
LOOP
v_item_num := v_item_num + 5;
IF MOD(v_item_num,25) <> 0 THEN
CONTINUE;
END IF;
Dbms_output.put_line('Item number = ' || v_item_num);
END LOOP;
END;
/
OUTPUT
Item number = 25
Item number = 50
Item number = 75
Item number = 100
Item number = 125
Item number = 150
Item number = 175
Item number = 200

PL/SQL procedure successfully completed.

