Tuesday, June 21, 2011

CLOB-related DataPump issue

Yesterday, while building a demo instance for the upcoming KScope'11, I hit a strange Oracle bug. Here is the story:
- a single schema is transferred from 10.2.0.5 to 11.2.0.1 (both on 32-bit Windows) via DataPump
- the data import fails with a fatal error (in the middle of loading the data)
- I took the import log, found the exact table it failed on (by looking at the last successful load), and tried to import just that one table. It also failed with the same fatal error:

Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job "SCOTT"."SYS_IMPORT_TABLE_01" stopped due to fatal error at 10:38:53
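
For reference, the single-table retry was just a plain DataPump import limited to one table - roughly like this (the directory object, dump file, and table names here are illustrative, not the real ones from the demo build):

impdp scott/tiger directory=DP_DIR dumpfile=demo_schema.dmp tables=PROBLEM_TABLE logfile=imp_one_table.log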

- in the alert.log I found a few more details:

ORA-07445: exception encountered: core dump [kcblnb_dscn()+38] [ACCESS_VIOLATION] [ADDR:0x8] [PC:0xDD94FE] [UNABLE_TO_READ] []

Metalink pretty quickly gave me an answer: Doc ID 1313537.1:
- Using DataPump to import a table that has mixed CACHE/NOCACHE LOB columns results in an ORA-7445.
- Eventually there will be a patch (11.2.0.3), but for now the workaround is to use the regular EXP/IMP utilities.
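
By the way, to check in advance whether a schema contains tables that could trigger the bug, you can look for tables whose LOB columns have different CACHE settings. A sketch (adjust the owner filter to your schema):

select owner, table_name
from dba_lobs
where owner = 'SCOTT'
group by owner, table_name
having count(distinct cache) > 1;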

I tested the workaround - it does the job. So, beware!
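
The workaround itself is just the classic pair of utilities run against the problematic table (connection strings and file names are illustrative):

exp scott/tiger@db10g tables=PROBLEM_TABLE file=problem_table.dmp log=exp_table.log
imp scott/tiger@db11g tables=PROBLEM_TABLE file=problem_table.dmp log=imp_table.log ignore=y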

P.s. Since I have been talking for years about manually adjusting CLOB storage parameters, I decided it was my duty to write this post! As usual, advanced features cause side effects...

P.p.s. Forgot to mention - just FYI about the old EXP/IMP utilities: you cannot export data from a 10g database using the 11g client utilities. The export must be done using the native 10g utilities - otherwise a pretty strange error is thrown:


EXP-00008: ORACLE error 904 encountered
ORA-00904: "MAXSIZE": invalid identifier
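
In other words, run the export with the exp binary from the source server's own 10g Oracle home (the path is illustrative):

C:\oracle\product\10.2.0\db_1\BIN\exp scott/tiger file=scott_schema.dmp log=scott_exp.log owner=SCOTT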

Wednesday, June 15, 2011

LONG to CLOB

Yes, I know the LONG datatype was already considered "dead" as of Oracle 8i. But I still get cases where people ask about converting it to CLOB. So I finally decided to put together a "memo" on this topic.

Small remark: in this blog post I explicitly ignore the 11g SecureFiles LOB implementation, since the chances that people still using LONG datatypes will jump straight to the most recent feature are not really high.
--------------------

From the database point of view it is very simple to change LONG to CLOB, since ALTER TABLE ... MODIFY supports such a change explicitly. But since CLOB columns have many more parameters to configure, the conversion is a good opportunity to set them up appropriately.

The most critical ones:
* Tablespace - CLOBs can use a different tablespace than the table itself, which can be useful for data storage management.
* Retention - configures the LOB column to store old versions of LOB data for a period of time, rather than using a percentage of the tablespace (PCTVERSION, which is the default for columns converted from LONG). The RETENTION parameter is designed to be used with the Undo features of the database, such as Flashback Versions Query, and is recommended for all databases using Automatic Undo Management.
* Caching - defines whether DML operations against LOB columns use the buffer cache or direct I/O. There are three possible settings - CACHE / CACHE READS / NOCACHE (the default). The CACHE option improves performance when reading and writing LOB data; however, it can prematurely age other non-LOB blocks out of the buffer cache.

Overall, the migration takes the form of running the following statement for each LONG column (as you can see, the column name is not changed - only its datatype and extra storage options):

alter table <TableName> modify <ColumnName> CLOB
lob(<ColumnName>) store as 
(cache 
 tablespace <TablespaceName>
 retention
 )
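
For example, for a hypothetical table DOCUMENTS with a LONG column BODY and a dedicated tablespace LOB_DATA, the statement becomes:

alter table documents modify body CLOB
lob(body) store as
(cache
 tablespace lob_data
 retention
 );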

From the application point of view, the following issues/remarks should be noted:
* There are no known areas where the LONG datatype does anything better (performance or functionality) than CLOB.
* There is no easy technique to modify CLOB column back to LONG without using some kind of staging solution.
* PL/SQL supports implicit conversion of CLOB columns to LONG variables as long as the length of the retrieved data fits the LONG restrictions. The following example reads, changes, and writes the data:

declare
    v_long long;
begin
    -- restrict the SELECT to the same row that is updated below
    select clob_column
    into v_long
    from t_table
    where pk_id = 1;

    v_long := v_long||'ABCDEFG';

    update t_table
    set clob_column = v_long
    where pk_id = 1;
end;

It is recommended to do a full code review and remove variables of the LONG type, especially in performance-critical modules, because the DBMS_LOB APIs are much more efficient than, for example, string concatenation.
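
For illustration, the read/change/write example above can be rewritten with DBMS_LOB so the new data is appended directly to the stored LOB instead of copying the whole value through a PL/SQL variable (same hypothetical table/column names):

declare
    v_clob clob;
begin
    -- lock the row and get a writable LOB locator
    select clob_column
    into v_clob
    from t_table
    where pk_id = 1
    for update;

    -- append 7 characters in place, without rewriting the whole value
    dbms_lob.writeappend(v_clob, 7, 'ABCDEFG');
    commit;
end;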
--------------------

Hope it helps!

Monday, June 6, 2011

Oracle OpenWorld 2011 => Oracle Mix

As many of you have noticed, at the end of last week Oracle sent out accept/reject letters for OpenWorld 2011. Mixed feelings... On one hand, I already got in with one of the "performance" talks that we often do together with Dr. Paul Dorsey (he covers the "why"; I cover the "how"). On the other hand, the abstract I really hoped for didn't get through.

But this year there is an alternative, "Oracle Mix", where people can re-submit their abstracts to a "popular vote". What surprises me is that the majority of Mix topics are either about development or about core DBA issues (manageability/scalability/availability). Middleware/SOA/other products are well behind. Two possible conclusions - either DBA/Dev specialists are more active in sharing their ideas, or the balance of presentations at OpenWorld is slightly off. Opinions?

Anyway, if anybody would like to vote for my additional abstract via Oracle Mix, here is a link.

It is the same talk I will be doing at KScope'11, "Managing Unstructured Data: LOBs, BasicFiles, SecureFiles" (Tue June 28, 3 p.m.). Please help increase the amount of real development content at OpenWorld 2011.

P.s. A bit more advertising - Yalim's architectural talk on how to appropriately use PL/SQL in the 21st century is strongly recommended to everybody: