UTL_FILE.fGetAttr

DECLARE
  lexists      BOOLEAN;
  lfile_len    NUMBER;
  lblocksize   NUMBER;
BEGIN
  UTL_FILE.fgetattr(
    location    => 'a',
    filename    => 'b',
    exists      => lexists,
    file_length => lfile_len,
    blocksize   => lblocksize);
END;

I was trying to use this procedure in a 9i database and kept getting:

PLS-00103: Encountered the symbol ">"...

– complaining about line 8 (the “exists” parameter). If I removed the parameter names, it worked fine. Something was wrong with that “exists” parameter name.

In the 9i and 10g documentation:

UTL_FILE.FGETATTR(
    location    IN VARCHAR2,
    filename    IN VARCHAR2,
    exists      OUT BOOLEAN,
    file_length OUT NUMBER,
    blocksize   OUT NUMBER);

In the 11g documentation:

UTL_FILE.FGETATTR(
    location    IN VARCHAR2,
    filename    IN VARCHAR2,
    fexists     OUT BOOLEAN,
    file_length OUT NUMBER,
    blocksize   OUT BINARY_INTEGER);

Ah – the parameter was actually called “fexists”. Ok. Try again:

PLS-00306: wrong number or types of arguments in call to 'FGETATTR'

Aaarrgh. Time for more googling.
According to psoug:

utl_file.fgetattr(
    location    IN  VARCHAR2,
    filename    IN  VARCHAR2,
    fexists     OUT BOOLEAN,
    file_length OUT NUMBER,
    block_size  OUT BINARY_INTEGER);

Thank goodness I’ve got access to more than just the Oracle docs!
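
For the record, here’s a sketch of the named-parameter call that should compile against that signature (the directory name and file name below are just placeholders):

DECLARE
  lexists      BOOLEAN;
  lfile_len    NUMBER;
  lblocksize   BINARY_INTEGER;
BEGIN
  UTL_FILE.fgetattr(
    location    => 'MY_DIR',      -- placeholder directory
    filename    => 'myfile.txt',  -- placeholder file name
    fexists     => lexists,
    file_length => lfile_len,
    block_size  => lblocksize);
  IF lexists THEN
    dbms_output.put_line('length=' || lfile_len || ' blocksize=' || lblocksize);
  END IF;
END;
/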


The exception we never knew we needed

If, by some great random cosmic chance, you are a reader of this blog, but not of Tom Kyte’s, then you would have missed this post:

NO_DATA_NEEDED – something I learned recently

It appears to have been documented in the 9i documentation, complete with spelling error:

ORA-06548, 00000, "no more rows needed"
Cause:   The caller of a pipelined function does not
         need more rows to be produced by the pipelined
         function.
Action:  Catch the NO_DATA_NEEDED exception is an
         exception handling block.

Mind you, it’s not all that obvious: if the pipelined function doesn’t handle the exception, nothing goes wrong – the exception never propagates up to the calling SQL statement – so it’s not clear when you would ever actually see ORA-06548 raised.
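
To see where it could matter, here’s a minimal sketch (my own example, not from Tom’s post) of a pipelined function that traps NO_DATA_NEEDED so it can do cleanup when the caller stops fetching early:

create or replace function gen_numbers (p_count in number)
  return sys.odcinumberlist pipelined
is
begin
  for i in 1 .. p_count loop
    pipe row (i);
  end loop;
  return;
exception
  when no_data_needed then
    -- raised inside PIPE ROW when the caller below stops after 5 rows;
    -- do any cleanup (close files, free resources) here
    return;
end gen_numbers;
/

-- the caller only wants 5 rows, so the function receives NO_DATA_NEEDED
select * from table(gen_numbers(1000)) where rownum <= 5;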

EDIT:

Looks like ORA-06548 can appear in the error stack.


Please, hard-code your literals

As Feuerstein says,


We all know that hard-coding is a bad thing in software. But most developers think of hard-coding simply as typing a literal value into your program whenever you need it.

So what’s wrong with doing this? Nothing – as long as the value is never going to change. But what’s the chance of that happening? In fact, what’s the chance of anything staying the same (never changing) in our application requirements and resulting code?

Almost nil.

I agree with most of what Steven says in this post, except for the “Almost nil” part. In my experience, developers who have slavishly converted all literal values in their applications to constants have made life much more difficult for subsequent maintenance and performance tuning.

I would contend that literal values should be hard-coded throughout the code when those values will not change, because they cannot change. The main example of this kind of value is the hidden magical ID number, which crops up especially often in database designs featuring any kind of EAV pattern. These designs usually feature some kind of metadata table, e.g.:

PROPERTYTYPES (
  propertyid,
  code,
  description,
  datatype, length, etc.)

and sprinkled throughout the codebase, like sand in your toddler’s nappy after a trip to the beach, are statements like this:

SELECT value INTO custname
FROM propertyvalues
WHERE entityid = p1
AND propertyid = cNAME;

(where cNAME is a constant that happens to be set to 30456 or something like that)

It gets worse when they need more than one property about an entity:

SELECT a.value, b.value, c.value
INTO custname, custphone, custaddress
FROM propertyvalues a, propertyvalues b, propertyvalues c
WHERE a.entityid = p1 AND b.entityid = p1 and c.entityid = p1
AND a.propertyid = cNAME
AND b.propertyid = cPHONE
AND c.propertyid = cADDRESS;

Someone might say, “that’s good, isn’t it? The hardcoded literal values have been stored once and once only in the constant declarations, and they can be used everywhere. If we want to change a property ID we can change it in one place and everything still works.”

Wrong – for two reasons.

Firstly, it’s not just one place – if you change the property ID, you also have to change the data in every table that points to that property. You also have to change code in the frontend UI that can’t read those constants, or in that external process that inexplicably was written with its own logic around those particular ID values (and search-and-replace won’t work, because they’ve written some braindead code like this: if propertyid > 30453 & propertyid < 30458 { ... }). The larger, more complex and widespread the codebase, the more certain it is that those IDs are simply never going to change.

Secondly, WHY? Why would you ever want to change these IDs? Much preferable to hard-code those ID values everywhere. If you stick to using constants where it makes sense (like the “maximum salary” that Steven had in his excellent example), then your hard-coded literals will tell future developers one important and life-preserving message:

“Do not change this code.”



“This Procedure Never Raises Exceptions”

It’s a really bad thing to do, but that’s ok because “we put comments in that say it’s bad”.

PROCEDURE insert_stats IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO stats_table ...
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    NULL; --yeah this is a bad thing to do, bla bla bla...
END insert_stats;

The idea is that we want to gather some stats about user behaviour, but we are not allowed to interrupt the user’s important work with any unexpected error that might be raised due to the gathering of those stats.

This post is not about why that’s a bad thing – others have made very good points about this practice elsewhere.

What I want to write about is the exception handler here – does it really protect the caller from exceptions raised by this procedure? The answer is, no. Why?

SQL> CREATE PROCEDURE test_handler AS
  PRAGMA AUTONOMOUS_TRANSACTION;
  n NUMBER;
BEGIN
  INSERT INTO stats_table (id) VALUES (0);
  dbms_output.put_line('inserted=' || SQL%ROWCOUNT);
  n := 1 / 0; -- fail...
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line('handled: ' || SUBSTR(SQLERRM,1,4000));
    --"silly thing to do but at least we're safe, right?"
END test_handler;
/

Procedure created.

SQL> BEGIN test_handler; END;
/
inserted=1
handled: ORA-01476: divisor is equal to zero
BEGIN test_handler; END;
*
ERROR at line 1:
ORA-06519: active autonomous transaction detected and rolled back

The divide-by-zero is just there to simulate an exception being raised after the insert succeeds, but before (or during) the COMMIT. Once the autonomous transaction has started, the procedure will raise ORA-06519 on exit unless a COMMIT or ROLLBACK has completed it. Lesson to learn? An autonomous transaction left uncommitted raises ORA-06519 to the caller when the procedure exits – after the exception section has already run – so the procedure’s own “WHEN OTHERS” cannot catch it. To get around this we could put a ROLLBACK in the exception handler.
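
For completeness, here’s a sketch of that fix (reusing the illustrative stats_table from the test case): rolling back in the handler completes the autonomous transaction, so the caller no longer gets ORA-06519.

CREATE OR REPLACE PROCEDURE insert_stats AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO stats_table (id) VALUES (0);
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK; -- complete the autonomous transaction before returning
END insert_stats;
/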


TOO_MANY_ROWS side effect

I used to assume that whenever a TOO_MANY_ROWS exception is raised, the target bind variables would be left untouched. Until today I’d never written any code that relied on the bind variables being in any particular state after a TOO_MANY_ROWS exception, so I was surprised by what I found.

For example, given the code below, I would expect the dbms_output to indicate that v is null:

CREATE PROCEDURE proc (v OUT NUMBER) IS
BEGIN
   SELECT 1 INTO v FROM all_objects;
EXCEPTION
   WHEN TOO_MANY_ROWS THEN
      dbms_output.put_line
         ('TOO MANY ROWS: v='
          || v);
END;
/

DECLARE
   v NUMBER;
BEGIN
   proc(v);
   dbms_output.put_line('AFTER: v=' || v);
END;
/

TOO MANY ROWS: v=1
AFTER: v=1

What appears to happen is that the out bind variables will be assigned values from the first row returned from the query; then when a second row is found, the TOO_MANY_ROWS exception is raised.

According to the documentation (emphasis added):

“By default, a SELECT INTO statement must return only one row. Otherwise, PL/SQL raises the predefined exception TOO_MANY_ROWS and the values of the variables in the INTO clause are undefined. Make sure your WHERE clause is specific enough to only match one row.”

(Oracle Database PL/SQL User’s Guide and Reference (10gR2): SELECT INTO Statement)

So it appears my original stance (don’t assume anything about the variables’ state after TOO_MANY_ROWS is raised) was correct. Lesson learned: beware of performing a SELECT INTO directly on the OUT parameters of your procedure!
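
One way to heed that lesson is to select into a local variable and only copy it to the OUT parameter once the SELECT INTO has succeeded; a minimal sketch:

CREATE OR REPLACE PROCEDURE proc (v OUT NUMBER) IS
   l_v NUMBER;
BEGIN
   SELECT 1 INTO l_v FROM all_objects;
   v := l_v; -- only reached if exactly one row was returned
EXCEPTION
   WHEN TOO_MANY_ROWS THEN
      dbms_output.put_line('TOO MANY ROWS: v=' || v); -- v is still null here
END;
/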


Un-riching Rich Text Format

Let’s just call it Legacy because I’m not going to say what the source is. It is a single-user desktop application that after a little investigation (i.e. searching the online forum for the app) was found to be storing its data in tables readable by MS Access. I wanted to get at this data, analyze it, maybe do some smart things to it, and then present it via Apex.

Step 1: Get the data into Oracle.

Simple matter of exporting from MS Access via ODBC. At least, it was simple once I replaced the Oracle ODBC drivers with the latest download from OTN. Before that I was getting a number of annoying TNS errors.

Step 2: Transform the data.

Most of the tables are easy-to-understand normalized relational tables. One of them, however, has a column that came through as a CLOB containing strange values like this:

{\rtf1\ansi\deff0\deftab254
{\fonttbl{\f0\fnil\fcharset0 Arial;}
{\f1\fnil\fcharset0 Verdana;}}{\colortbl\red0\green0\blue0;\red255\green0\blue0;\red0\green128\blue0;\red0\green0\blue255;\red255\green255\blue0;\red255\green0\blue255;\red128\green0\blue128;\red128\green0\blue0;\red0\green255\blue0;\red0\green255\blue255;\red0\green128\blue128;\red0\green0\blue128;\red255\green255\blue255;\red192\green192\blue192;\red128\green128\blue128;\red255\green255\blue255;}
\paperw12240\paperh15840\margl1880\margr1880\margt1440\margb1440
{\*\pnseclvl1\pnucrm\pnstart1\pnhang\pnindent720
{\pntxtb}{\pntxta{.}}}
...

Now, based on my knowledge of the application, I knew that this column was used to store small pieces of text (typically 8 to 30 short lines) with some amount of formatting (e.g. fonts, alignment, etc.). Again the online forum came in useful, in that a side comment from one of the developers (regarding a small bug, er, undocumented anti-feature) revealed that they stored the formatted text as RTF – Rich Text Format. I should have known from the opening 6 bytes of the data.

Somewhere in these oceans of RTF codes swam the plain text I craved. So I googled RTF, skimmed this old RTF specification, and ended up with the admittedly poorly-performing PL/SQL below, which, for the 651 rows in this table (each with an RTF document averaging about 3KB), works just well enough for my purposes. As it turned out, the only RTF codes I was interested in were \fcharset and \*, both of which I used to ignore bits of text I didn’t want in my output. Oh, and \par, which denotes the end of a paragraph. I can run this script once a month on the freshly exported data and apply the full weight of Oracle’s analytic capabilities to it.

The code below exemplifies the use of a pipelined function. This is not a good idea, by the way, if you want to use it in regular queries, e.g. a view. In my case, however, I only wanted to call this from within PL/SQL, and then only once a month. Because of the way parameters work with functions like this, I had to call it with dynamic SQL (execute immediate).

That was kind of fun, but I’d rather not have to deal with RTF ever again, thank you.

create or replace package myutil_rtf is
  type t_v4000_table is table of varchar2(4000);
  function extract_text (p_recid in number)
  return t_v4000_table pipelined deterministic;
end;
/

create or replace package body myutil_rtf is
  function extract_text (p_recid in number)
    return t_v4000_table pipelined deterministic is
    
    l_ch varchar2(1);
    l_ctrl varchar2(4000);
    l_line varchar2(4000);
    l_rtf clob;
    
    --don't output any text between
    --\fcharset and closing }
    l_fcharset boolean := false;
    
    --increments for each enclosed pair of { }
    --within a discard section
    l_discard number;
    
  begin
    
    select rtf_clob into l_rtf
    from rtf_table where recid = p_recid;
    
    for i in 1..dbms_lob.getlength(l_rtf) loop
    
      l_ch := substr(l_rtf,i,1);
      
      if l_ch = '}' then
      
        if l_fcharset then
          --closing } found; re-enable output
          l_fcharset := false;
          l_line := null;
        end if;
        
        if l_discard > 0 then
          l_discard := l_discard - 1;
          if l_discard = 0 then
            l_discard := null;
          end if;
        end if;
        
      elsif l_ch = '{' then
      
        if l_discard is not null then
          l_discard := l_discard + 1;
        end if;
      
      elsif l_ch = '\' then
      
        --controls start with a backslash
        l_ctrl := '\';
      
      elsif l_ctrl is not null then
      
        --controls are always ended by some
        --non-alphanumeric character
        if instr('abcdefghijklmnopqrstuvwxyz'
        || '0123456789',lower(l_ch)) > 0 then
          l_ctrl := l_ctrl || lower(l_ch);
        else
          if l_ctrl = '\par' then
            pipe row (l_line);
            l_line := null;
          elsif substr(l_ctrl,1,9) = '\fcharset' then
            l_fcharset := true;
          elsif l_ctrl || l_ch = '\*' then
            --{\* ... } means you can ignore
            --anything between the { }
            if l_discard is null then
              l_discard := 1;
            end if;
          end if;
          l_ctrl := null;
        end if;
      
      elsif l_ch not in (chr(10), chr(13), '{')
        and not l_fcharset and l_discard is null then
      
        l_line := l_line || l_ch;
      
      end if;
    
    end loop;
    
    if l_line is not null and not l_fcharset then
      pipe row (l_line);
    end if;
    
    return;
  end extract_text;
end myutil_rtf;
/

To extract the text from the table with recid=1:

select column_value line_of_text
      ,rownum line_number
from table(myutil_rtf.extract_text(1));
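
And a sketch of the dynamic-SQL call from PL/SQL mentioned earlier (again using the sample recid of 1):

declare
  l_lines myutil_rtf.t_v4000_table;
begin
  execute immediate
    'select column_value from table(myutil_rtf.extract_text(:recid))'
    bulk collect into l_lines
    using 1;
  for i in 1 .. l_lines.count loop
    dbms_output.put_line(l_lines(i));
  end loop;
end;
/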

Replace the default XDB welcome page

I was annoyed that to get to my main PL/SQL page I have to type in a full URL like this:

http://host:7777/mydad/home

(e.g. “mydad” could be “apex” for Oracle Apex)

If I was using Apache HTTP Server I’d do something like what this article suggests. But I’m using Oracle’s Embedded PL/SQL Gateway.

A. I got rid of the “:7777” by changing the HTTP port to 80, e.g.:

SQL> exec dbms_xdb.sethttpport(80);

Now, I can get to it without the port number:

http://host/mydad/home

B. Now I want to remove the need to remember to type “home”. To do this, I just tell the DAD what the default page is:

SQL> exec dbms_epg.set_dad_attribute('MYDAD','default-page','home');

Now, the url is a bit simpler:

http://host/mydad

The URL is now rewritten automatically to point to “mydad/home”.

C. Finally, I want to remove the need to specify the DAD. To do this is a little more complicated. I’ll create an XDB resource that will override the default XDB navigator that comes up.

    1. Log into Enterprise Manager

    2. Open the “Administration” tab and select “Resources” under “XML Database”

    3. Click “Create” and set the fields as follows:
      Owner = SYS
      Name = index.html
      Location = /
      Type = XML Database Resource File
      Source = Specify the file contents
      Contents =
      <html><head><meta http-equiv="REFRESH" content="0; URL=http://host/mydad"></head><body><a href="http://host/mydad">Home</a></body></html>

    4. Click “Ok”

(you’ll need to change “host” and “mydad” to appropriate values in the sample Contents above)

Now, the url is simply:

http://host

This causes it to load the index.html page from the XML database, which redirects to the DAD, the default page for which is “home”.
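
As an aside, step C can also be scripted instead of clicking through Enterprise Manager; here’s a sketch using DBMS_XDB.createResource (again, change “host” and “mydad” to suit):

DECLARE
  l_created BOOLEAN;
BEGIN
  l_created := DBMS_XDB.createResource(
    abspath => '/index.html',
    data    => '<html><head><meta http-equiv="REFRESH" content="0; URL=http://host/mydad"></head>'
            || '<body><a href="http://host/mydad">Home</a></body></html>');
  COMMIT;
END;
/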


ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define

I came across an inexplicable error when bulk collecting into a PL/SQL table with the NOT NULL constraint the other day. What was confusing was that the code had been passing tests for quite some time.

In the end the only thing that had changed was that a VARCHAR2 which should have been non-null happened to be NULL for one particular row in the table.

Thanks to Connor for the simple test case, listed below.

If you know what might be the cause or reason behind this error, and why it doesn’t occur for dates, I’d be interested.

This was reproduced on Oracle 10.2.0.1.0.

SQL> declare
      type t is table of number not null index by pls_integer;
      r t;
     begin
      select case when rownum < 20 then rownum else null end
      bulk collect into r from all_Objects
      where rownum <= 20;
     end;
     /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
ORA-06512: at line 5

SQL> declare
      type t is table of varchar2(80) not null index by pls_integer;
      r t;
     begin
      select case when rownum < 20 then rownum else null end
      bulk collect into r from all_Objects
      where rownum <= 20;
     end;
     /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
ORA-06512: at line 5

SQL> declare
      type t is table of date not null index by pls_integer;
      r t;
     begin
      select case when rownum < 20 then sysdate else null end
      bulk collect into r from all_Objects
      where rownum <= 20;
     end;
     /
PL/SQL procedure successfully completed.
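
For what it’s worth, the obvious workaround (my own sketch, not an explanation of the behaviour) is to drop the NOT NULL constraint from the collection type, or to guarantee that the selected expression can never be null:

declare
  type t is table of varchar2(80) index by pls_integer; -- no NOT NULL constraint
  r t;
begin
  -- with the constraint removed, the null in the 20th row is accepted
  select case when rownum < 20 then rownum else null end
  bulk collect into r from all_objects
  where rownum <= 20;
end;
/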

Table Types Supplied by Oracle

This is a list of all the table types I’ve found in Oracle-supplied packages, e.g. OWA_UTIL in 10g supplies the type:
TYPE datetype IS TABLE OF varchar2(10) INDEX BY binary_integer;

I find them handy for quick one-off scripts.
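
For example, a trivial sketch using OWA_UTIL.ident_arr (an index-by table of VARCHAR2(30)), which saves declaring a type just for a throwaway block:

DECLARE
  l_names owa_util.ident_arr;
BEGIN
  l_names(1) := 'SCOTT';
  l_names(2) := 'HR';
  FOR i IN 1 .. l_names.COUNT LOOP
    dbms_output.put_line(l_names(i));
  END LOOP;
END;
/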

Oracle_supplied_table_types