Single-Point-of-Definition by Example

Steven Feuerstein lists seven excellent “Golden Rules” in his presentation (via Eddie Awad). One of them is “Don’t repeat anything. Aim for a Single Point of Definition for every aspect of your application – formulas, business rules, magic values, SQL statements”, and he gives the following code as exhibit A:

(source)

I’m guessing in his presentation he spoke about various things that could be done to improve this code, but they’re not in the PDF; so I’d like to give it a go myself and see how much we can improve the maintainability of this code by reducing hard-coding.

1. Type Declarations

Instead of declaring parameters and variables as NUMBER, VARCHAR2 etc., these should use the %TYPE attribute so that they stay automatically synchronized with the datatypes of the table columns they represent:

PROCEDURE process_employee
(department_id_in IN employees.department_id%TYPE)
IS
   l_id     employees.employee_id%TYPE;
   l_salary employees.salary%TYPE;
...

l_name, however, is not based on any table column we know of at this point; so there is no %TYPE we can use for it. But bear with me, we’ll fix this later.

2. Magic Values

This one’s a no-brainer: that “10000000” is obviously a magic value that some bean-counter decided was the correct threshold for the CEO’s salary. Whatever.

You might define this as a constant in a global package specification, e.g.

CREATE PACKAGE employee_constant AS
   ceo_salary_threshold CONSTANT employees.salary%TYPE := 10000000;
END employee_constant;

Personally, I’d suspect that the business will review and revise this number from time to time, to keep up with inflation; so we might end up needing a database table to store the current threshold, plus a date range for which the threshold applies. I’d then add an interface on top of this table so that queries and procedures don’t need to know how to get the current threshold. We can retrofit this later by changing ceo_salary_threshold into a function instead of a constant. That’s a bit beyond the scope of this exercise, however.
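
As a rough sketch of where that might head (the salary_thresholds table and its columns are entirely hypothetical), the package could expose a function with the same name:

CREATE PACKAGE employee_constant AS
   FUNCTION ceo_salary_threshold RETURN employees.salary%TYPE;
END employee_constant;
/
CREATE PACKAGE BODY employee_constant AS
   FUNCTION ceo_salary_threshold RETURN employees.salary%TYPE IS
      l_threshold employees.salary%TYPE;
   BEGIN
      -- salary_thresholds is a hypothetical table holding the current
      -- threshold and the date range for which it applies
      SELECT threshold
        INTO l_threshold
        FROM salary_thresholds
       WHERE SYSDATE BETWEEN effective_from
                         AND NVL(effective_to, SYSDATE);
      RETURN l_threshold;
   END ceo_salary_threshold;
END employee_constant;
/

Since PL/SQL lets you call a parameterless function without parentheses, code that refers to employee_constant.ceo_salary_threshold wouldn’t even need to change.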

3. Formatting Rules

The rule about formatting an employee name as “LAST,FIRST” is duplicated in a comment and in the SELECT statement; and chances are it will be required elsewhere in the application as well. My preferred method for creating a SPOD for this sort of business rule used to be to move the implementation into a view, e.g.:

CREATE VIEW formatted_employees AS
SELECT employees.*,
       employees.last_name || ',' || employees.first_name
       AS full_name
FROM   employees;
COMMENT ON COLUMN formatted_employees.full_name
IS 'Full name: LAST COMMA FIRST (ReqDoc 123.A.47)';

This view is what I like to call a “formatting” view: it is only allowed to query one table, it contains no WHERE, GROUP BY or HAVING clauses, and it selects all the columns from the table. The view can be used almost anywhere the table may be used. It adds additional columns that format the data in various ways. If need be, we can even add INSTEAD OF triggers to handle inserts/updates on the derived columns – if the business rules make the conversion from derived-to-underlying-column well defined.
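
For example, an INSTEAD OF UPDATE trigger on the view might look something like this sketch – assuming the rule is reversible by splitting at the first comma, and ignoring the other columns for brevity:

CREATE TRIGGER formatted_employees_upd
INSTEAD OF UPDATE ON formatted_employees
FOR EACH ROW
BEGIN
   -- derive the underlying columns from the formatted value
   UPDATE employees
      SET last_name  = SUBSTR(:NEW.full_name, 1,
                              INSTR(:NEW.full_name, ',') - 1),
          first_name = SUBSTR(:NEW.full_name,
                              INSTR(:NEW.full_name, ',') + 1)
    WHERE employee_id = :OLD.employee_id;
END;
/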

So, now we can redefine the cursor as:

CURSOR emps_in_dept_cur
IS
   SELECT employee_id, salary, full_name AS lname
     FROM formatted_employees
    WHERE department_id = department_id_in;

Notice that I don’t call the column “last_comma_first” or anything like that – that would again be hard-coding the business rule, which would then be replicated throughout the application. In Oracle 11g, however, I think it might be better to create virtual columns on the table instead:

ALTER TABLE employees ADD (
full_name VARCHAR2(100)
GENERATED ALWAYS AS (last_name || ',' || first_name) VIRTUAL
);
COMMENT ON COLUMN employees.full_name
IS 'Full name: LAST COMMA FIRST (ReqDoc 123.A.47)';
CURSOR emps_in_dept_cur
IS
   SELECT employee_id, salary, full_name AS lname
     FROM employees
    WHERE department_id = department_id_in;

The virtual column can have its own stats, and even an index if needed for querying.
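
For example (the index name is just illustrative):

CREATE INDEX employees_full_name_ix ON employees (full_name);

BEGIN
   -- gather statistics for the virtual column as well
   DBMS_STATS.gather_table_stats(
      ownname    => USER,
      tabname    => 'EMPLOYEES',
      method_opt => 'FOR COLUMNS full_name SIZE AUTO');
END;
/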

Another option would be to create a function that does this formatting:

CREATE FUNCTION employee_full_name
   (last_name  IN employees.last_name%TYPE,
    first_name IN employees.first_name%TYPE)
RETURN VARCHAR2 DETERMINISTIC IS
--Full name: LAST COMMA FIRST (ReqDoc 123.A.47)
BEGIN
   RETURN last_name || ',' || first_name;
END employee_full_name;

We could call this function from the procedure or the view, but if we’re on 11g there’s no reason we can’t create a virtual column on it:

ALTER TABLE employees ADD (
full_name VARCHAR2(100)
GENERATED ALWAYS
AS (employee_full_name(last_name,first_name)) VIRTUAL
);

Another advantage to using the view or a virtual column is that we can now remove the “VARCHAR2 (100)” from the variable declaration, e.g.:

l_name   employees.full_name%TYPE;

4. Cursor Parameter

The cursor refers directly to the parameter to the procedure, which is a no-no – this couples the cursor too much with the procedure, i.e. we can’t re-use it elsewhere unless we always define a variable “department_id_in”. Instead, we should use a cursor parameter:

CURSOR emps_in_dept_cur
   (department_id_in IN employees.department_id%TYPE)
IS
   SELECT employee_id, salary, full_name AS lname
     FROM employees
    WHERE department_id = emps_in_dept_cur.department_id_in;

The addition of the context “emps_in_dept_cur.” is not strictly necessary, but it is good practice to define the scope of all variables so that unrelated changes (e.g. the addition of a column called “department_id_in”) don’t change the code.
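
Opening the cursor then passes the value explicitly – and any other variable or expression can be used elsewhere (l_other_dept_id here is just a hypothetical local variable):

OPEN emps_in_dept_cur (department_id_in);
-- or, somewhere else entirely:
OPEN emps_in_dept_cur (l_other_dept_id);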

5. Cursor Row Type

What if we need to add 10 more columns to the cursor? At the moment we’re adding one more variable for each column of the cursor, and specifying it three times (variable declaration, cursor SELECT clause, and the FETCH INTO). We can reduce this to just once by declaring a record based on the cursor’s %ROWTYPE instead:

PROCEDURE process_employee
   (department_id_in IN employees.department_id%TYPE)
IS
   CURSOR emps_in_dept_cur
      (department_id_in IN employees.department_id%TYPE)
   IS
      SELECT employee_id, salary, full_name lname
        FROM employees
       WHERE department_id = emps_in_dept_cur.department_id_in;
   emp emps_in_dept_cur%ROWTYPE;
BEGIN
   OPEN emps_in_dept_cur (department_id_in);
   LOOP
      FETCH emps_in_dept_cur
      INTO emp;
...

6. Don’t COMMIT

Procedures should rarely COMMIT (there are very few exceptions to this rule, e.g. procedures declared as autonomous transactions). Transactional control should be left to the calling process – this process might need to be done along with a number of other changes elsewhere, and we would want to either COMMIT or ROLLBACK all the changes together as one transaction. What if the next procedure raised an error and we had to rollback? Our system would be left in an inconsistent state.
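
In other words, the transaction boundary belongs in the caller – something along these lines, where update_department_budget is a hypothetical second procedure that must succeed or fail together with process_employee:

BEGIN
   process_employee (department_id_in => 10);
   update_department_budget (department_id_in => 10);
   COMMIT;  -- the caller decides when the unit of work is complete
EXCEPTION
   WHEN OTHERS THEN
      ROLLBACK;  -- undo both procedures' changes together
      RAISE;
END;
/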

7. Error Package

That RAISE_APPLICATION_ERROR hard-codes an error code and an error message. What if we type the error number wrong somewhere? If the calling process handles ORA-20907 in some fashion, but we mistype it as -20908 in one procedure, the calling process will not handle it.
We could declare an exception instead, e.g. in a global package specification:

CREATE PACKAGE employee_exception AS
invalid_dept_id EXCEPTION;
PRAGMA EXCEPTION_INIT (invalid_dept_id, -20907);
END employee_exception;

Now, our exception handler can raise just the one exception:

EXCEPTION WHEN NO_DATA_FOUND THEN
   RAISE employee_exception.invalid_dept_id;

However, we’ve now lost the error message. It would be better to create an error-handling package instead:

CREATE PACKAGE employee_error AS
 invalid_error_no CONSTANT NUMBER := -20000;
 invalid_error_no_exception EXCEPTION;
 PRAGMA EXCEPTION_INIT (invalid_error_no_exception, -20000);
 invalid_dept_id CONSTANT NUMBER := -20907;
 invalid_dept_id_exception EXCEPTION;
 PRAGMA EXCEPTION_INIT (invalid_dept_id_exception, -20907);
 PROCEDURE raise_exception (error_no IN NUMBER);
END employee_error;

CREATE PACKAGE BODY employee_error AS
 PROCEDURE raise_exception (error_no IN NUMBER) IS
 BEGIN
 CASE error_no
 WHEN invalid_dept_id
 THEN RAISE_APPLICATION_ERROR(invalid_dept_id, 'Invalid department ID');
 ELSE RAISE_APPLICATION_ERROR(invalid_error_no, 'Bug: invalid error number');
 END CASE;
 END raise_exception;
END employee_error;

EDIT: PRAGMA EXCEPTION_INIT only accepts literal numbers for its second parameter (or else you get PLS-00702 at compile time) – fixed accordingly

Now, our exception handler is nicely modular:

EXCEPTION WHEN NO_DATA_FOUND THEN
employee_error.raise_exception(employee_error.invalid_dept_id);

So now, our code looks like this:

PROCEDURE process_employee (department_id_in IN employees.department_id%TYPE)
IS
   CURSOR emps_in_dept_cur (department_id_in IN employees.department_id%TYPE)
   IS
      SELECT employee_id, salary, full_name lname
        FROM employees
       WHERE department_id = emps_in_dept_cur.department_id_in;
   emp emps_in_dept_cur%ROWTYPE;
BEGIN
   OPEN emps_in_dept_cur (department_id_in);
   LOOP
      FETCH emps_in_dept_cur INTO emp;
      EXIT WHEN emps_in_dept_cur%NOTFOUND;

      IF emp.salary > employee_constant.ceo_salary_threshold THEN
         adjust_comp_for_ceo (emp.salary);
      ELSE
         analyze_compensation
            (emp.employee_id, emp.salary,
             employee_constant.ceo_salary_threshold);
      END IF;
   END LOOP;
   CLOSE emps_in_dept_cur;
EXCEPTION WHEN NO_DATA_FOUND THEN
   employee_error.raise_exception(employee_error.invalid_dept_id);
END;

One final change that one might make here is to move the SQL query right out of the procedure and use a ref cursor instead, supplied by a central “employee_cursor” package.
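
A sketch of what that package might look like (the names are illustrative, not a prescription):

CREATE PACKAGE employee_cursor AS
   TYPE emp_curtype IS REF CURSOR RETURN employees%ROWTYPE;
   FUNCTION emps_in_dept
      (department_id_in IN employees.department_id%TYPE)
   RETURN emp_curtype;
END employee_cursor;
/
CREATE PACKAGE BODY employee_cursor AS
   FUNCTION emps_in_dept
      (department_id_in IN employees.department_id%TYPE)
   RETURN emp_curtype IS
      rc emp_curtype;
   BEGIN
      OPEN rc FOR
         SELECT *
           FROM employees
          WHERE department_id = department_id_in;
      RETURN rc;
   END emps_in_dept;
END employee_cursor;
/

The procedure would then fetch from employee_cursor.emps_in_dept(department_id_in) into an employees%ROWTYPE record, and the SQL itself lives in exactly one place.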

There are probably plenty of other changes we could make to improve the maintainability of this code further.
What do you think?


Another way to learn something new every day

My morning routine for the last few weeks has had a small addition – just a few minutes to log on to The PL/SQL Challenge at 8am (WST) and answer a quiz question.

This is a great way to test the depth and breadth of your knowledge of Oracle PL/SQL. Whether you’re a beginner in the field or have years of experience, I’m certain you’ll occasionally learn something new – plus, it’s completely free!

Some of the questions are fairly general, others are trivial; while they’re never “trick” questions, they can still trip you up if you’re not observant!

I think I have an unfair advantage, being in this timezone – early morning is when I am most alert and ready to tackle any challenge.

A couple of things I’ve learned recently:

  • UTL_FILE.putf can be used if you want to use C-style “printf” syntax, e.g. '\n'
  • SIGNTYPE is a standard subtype that can only take values of NULL, -1, 0 or 1

(I’ve learned a few other things as well, but I’ve forgotten what they were :|)
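
Both are easy to try out; a quick sketch (the MY_DIR directory object is hypothetical and would need to be created and granted first):

DECLARE
   f UTL_FILE.FILE_TYPE;
   s SIGNTYPE;   -- may only hold NULL, -1, 0 or 1
BEGIN
   s := SIGN(-42);   -- fine: -1
   -- s := 2;        -- would raise ORA-06502 at runtime

   f := UTL_FILE.FOPEN('MY_DIR', 'demo.txt', 'w');
   -- PUTF takes a printf-style format: %s for arguments, \n for newlines
   UTL_FILE.PUTF(f, 'sign is %s\nsecond line\n', s);
   UTL_FILE.FCLOSE(f);
END;
/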


UTL_FILE.fGetAttr

DECLARE
  lexists      BOOLEAN;
  lfile_len    NUMBER;
  lblocksize   NUMBER;
BEGIN
  UTL_FILE.fgetattr(
    location    => 'a',
    filename    => 'b',
    exists      => lexists,
    file_length => lfile_len,
    blocksize   => lblocksize);
END;

I was trying to use this procedure in a 9i database and kept getting:

PLS-00103: Encountered the symbol ">"...

– complaining about line 8 (the “exists” parameter). If I removed the parameter names, it worked fine. Something was wrong with that “exists” parameter name.
In the 9i and 10g documentation:

UTL_FILE.FGETATTR(
    location    IN VARCHAR2,
    filename    IN VARCHAR2,
    exists      OUT BOOLEAN,
    file_length OUT NUMBER,
    blocksize   OUT NUMBER);

In the 11g documentation:

UTL_FILE.FGETATTR(
    location    IN VARCHAR2,
    filename    IN VARCHAR2,
    fexists     OUT BOOLEAN,
    file_length OUT NUMBER,
    blocksize   OUT BINARY_INTEGER);

Ah – the parameter was actually called “fexists”. Ok. Try again:

PLS-00306: wrong number or types of arguments in call to 'FGETATTR'

Aaarrgh. Time for more googling.
According to psoug:

utl_file.fgetattr(
    location    IN  VARCHAR2,
    filename    IN  VARCHAR2,
    fexists     OUT BOOLEAN,
    file_length OUT NUMBER,
    block_size  OUT BINARY_INTEGER);

Thank goodness I’ve got access to more than just the Oracle docs!
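
Going by that last signature, the named-parameter call would presumably need to look like this:

DECLARE
  lexists      BOOLEAN;
  lfile_len    NUMBER;
  lblocksize   BINARY_INTEGER;
BEGIN
  UTL_FILE.fgetattr(
    location    => 'a',
    filename    => 'b',
    fexists     => lexists,
    file_length => lfile_len,
    block_size  => lblocksize);
END;
/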


The exception we never knew we needed

If, by some great random cosmic chance, you are a reader of this blog, but not of Tom Kyte‘s, then you would have missed this post:

NO_DATA_NEEDED – something I learned recently

It appears to have been documented in the 9i documentation, complete with a spelling error:

ORA-06548, 00000, "no more rows needed"
Cause:   The caller of a pipelined function does not
         need more rows to be produced by the pipelined
         function.
Action:  Catch the NO_DATA_NEEDED exception is an
         exception handling block.

Mind you, it’s not all that obvious: if the pipelined function does not handle the exception, nothing goes wrong – the exception is never seen by the calling SQL statement. So it’s not at all obvious when ORA-06548 would ever actually be raised.
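
To make it concrete, here’s a minimal sketch of a pipelined function that traps it (the type and function names are made up):

CREATE TYPE num_tab AS TABLE OF NUMBER;
/
CREATE FUNCTION gen_rows RETURN num_tab PIPELINED IS
BEGIN
   FOR i IN 1 .. 1000000 LOOP
      PIPE ROW (i);
   END LOOP;
   RETURN;
EXCEPTION
   WHEN NO_DATA_NEEDED THEN
      -- the caller has stopped fetching; clean up (close files, etc.) here
      NULL;
END gen_rows;
/
-- the caller stops after 5 rows, so NO_DATA_NEEDED is raised inside gen_rows
SELECT * FROM TABLE(gen_rows) WHERE ROWNUM <= 5;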

EDIT:

Looks like ORA-06548 can appear in the error stack.


Please, hard-code your literals

As Feuerstein says,


We all know that hard-coding is a bad thing in software. But most developers think of hard-coding simply as typing a literal value into your program whenever you need it.

So what’s wrong with doing this? Nothing – as long as the value is never going to change. But what’s the chance of that happening? In fact, what’s the chance of anything staying the same (never changing) in our application requirements and resulting code?

Almost nil.

I agree with most of what Steven says in this post, except for the “Almost nil” part. In my experience, developers who have slavishly converted all literal values in their applications to constants have made life much more difficult for subsequent maintenance and performance tuning.

I would contend that literal values should be hard-coded throughout the code when those values will not change, because they cannot change. The main example of this kind of value is the hidden magical ID number, which crops up especially often in database designs featuring any kind of EAV pattern. These designs usually feature some kind of metadata table, e.g.:

PROPERTYTYPES (
  propertyid,
  code,
  description,
  datatype, length, etc.)

and sprinkled throughout the codebase, like sand in your toddler’s nappy after a trip to the beach, are statements like this:

SELECT value INTO custname
FROM propertyvalues
WHERE entityid = p1
AND propertyid = cNAME;

(where cNAME is a constant that happens to be set to 30456 or something like that)

It gets worse when they need more than one property about an entity:

SELECT a.value, b.value, c.value
INTO custname, custphone, custaddress
FROM propertyvalues a, propertyvalues b, propertyvalues c
WHERE a.entityid = p1 AND b.entityid = p1 and c.entityid = p1
AND a.propertyid = cNAME
AND b.propertyid = cPHONE
AND c.propertyid = cADDRESS;

Someone might say, “that’s good, isn’t it? The hardcoded literal values have been stored once and once only in the constant declarations, and they can be used everywhere. If we want to change a property ID we can change it in one place and everything still works.”

Wrong – for two reasons.

Firstly, it’s not just one place – if you change the property ID, you also have to change the data in all the tables that point to that property. You also have to change code in that frontend UI that can’t read those constants, or in that external process that inexplicably was written with its own logic around those particular ID values (and search-and-replace won’t work because they’ve written some braindead code like this: if propertyid > 30453 & propertyid < 30458 { ... }). The larger, more complex and widespread the codebase, the more certain you can be that those IDs are simply never going to change.

Secondly, WHY? Why would you ever want to change these IDs? It’s much preferable to hard-code those ID values everywhere. If you stick to using constants where it makes sense (like the “maximum salary” that Steven had in his excellent example), then your hard-coded literals will tell future developers one important and life-preserving message:

“Do not change this code.”



“This Procedure Never Raises Exceptions”

It’s a really bad thing to do, but that’s ok because “we put comments in that say it’s bad”.

PROCEDURE insert_stats IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO stats_table ...
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    NULL; --yeah this is a bad thing to do, bla bla bla...
END insert_stats;

The idea is that we want to gather some stats about user behaviour, but we are not allowed to interrupt the user’s important work with any unexpected error that might be raised due to the gathering of those stats.

This post is not about why that’s a bad thing – others have made very good points about this practice elsewhere.

What I want to write about is the exception handler here – does it really protect the caller from exceptions raised by this procedure? The answer is, no. Why?

SQL> CREATE PROCEDURE test_handler AS
  PRAGMA AUTONOMOUS_TRANSACTION;
  n NUMBER;
BEGIN
  INSERT INTO stats_table (id) VALUES (0);
  dbms_output.put_line('inserted=' || SQL%ROWCOUNT);
  n := 1 / 0; -- fail...
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line('handled: ' || SUBSTR(SQLERRM,1,4000));
    --"silly thing to do but at least we're safe, right?"
END test_handler;
/

Procedure created.

SQL> BEGIN test_handler; END;
/
inserted=1
handled: ORA-01476: divisor is equal to zero
BEGIN test_handler; END;
*
ERROR at line 1:
ORA-06519: active autonomous transaction detected and rolled back

The divide-by-zero is just there to simulate an exception that is raised after the insert succeeds but before (or during) the COMMIT. Once the autonomous transaction has started, the procedure must end with a successful COMMIT or ROLLBACK; otherwise ORA-06519 is raised – and it is raised after the exception section has already run, so the WHEN OTHERS handler never gets a chance at it. Lesson to learn? An autonomous transaction left uncommitted will raise ORA-06519 to the caller, and it will not be caught by “WHEN OTHERS” inside the procedure. To get around this we could put a ROLLBACK in the exception handler.
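
So the original insert_stats would at least need something like this (still swallowing the error, which is a separate argument):

PROCEDURE insert_stats IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO stats_table ...
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;  -- end the autonomous transaction so ORA-06519 cannot be raised
END insert_stats;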


TOO_MANY_ROWS side effect

I used to assume that whenever a TOO_MANY_ROWS exception is raised, the target bind variables would be left untouched. Until today I’d never written any code that relies on the bind variables being in any particular state when a TMR exception is raised, so I was surprised.

For example, given the code below, I would expect the dbms_output to indicate that v is null:

CREATE PROCEDURE proc (v OUT NUMBER) IS
BEGIN
   SELECT 1 INTO v FROM all_objects;
EXCEPTION
   WHEN TOO_MANY_ROWS THEN
      dbms_output.put_line
         ('TOO MANY ROWS: v='
          || v);
END;
/
 

DECLARE
   v NUMBER;
BEGIN
   proc(v);
   dbms_output.put_line('AFTER: v=' || v);
END;
/

TOO MANY ROWS: v=1
AFTER: v=1

What appears to happen is that the out bind variables will be assigned values from the first row returned from the query; then when a second row is found, the TOO_MANY_ROWS exception is raised.

According to the documentation (emphasis added):

“By default, a SELECT INTO statement must return only one row. Otherwise, PL/SQL raises the predefined exception TOO_MANY_ROWS and the values of the variables in the INTO clause are undefined. Make sure your WHERE clause is specific enough to only match one row.”

(Oracle Database PL/SQL User’s Guide and Reference (10gR2): SELECT INTO Statement)

So it appears my original stance (don’t assume anything about the variables’ state after TOO_MANY_ROWS is raised) was correct. Lesson learned: beware of performing a SELECT INTO directly on the OUT parameters of your procedure!
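
One defensive variation, if you want to guarantee the OUT parameter really is left untouched, is to select into a local variable first – a minimal sketch based on the example above:

CREATE PROCEDURE proc (v OUT NUMBER) IS
   l_v NUMBER;   -- local work variable
BEGIN
   SELECT 1 INTO l_v FROM all_objects;
   v := l_v;     -- only reached if exactly one row was returned
EXCEPTION
   WHEN TOO_MANY_ROWS THEN
      dbms_output.put_line('TOO MANY ROWS: v=' || v);   -- v is still null here
END;
/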


Un-riching Rich Text Format

Let’s just call it Legacy because I’m not going to say what the source is. It is a single-user desktop application that after a little investigation (i.e. searching the online forum for the app) was found to be storing its data in tables readable by MS Access. I wanted to get at this data, analyze it, maybe do some smart things to it, and then present it via Apex.

Step 1: Get the data into Oracle.

Simple matter of exporting from MS Access via ODBC. At least, it was simple once I replaced the Oracle ODBC drivers with the latest download from OTN. Before that I was getting a number of annoying TNS errors.

Step 2: Transform the data.

Most of the tables are easy-to-understand normalized relational tables. One of them, however, has a column that came through as a CLOB containing strange values like this:

{\rtf1\ansi\deff0\deftab254
{\fonttbl{\f0\fnil\fcharset0 Arial;}
{\f1\fnil\fcharset0 Verdana;}}{\colortbl\red0\green0\blue0;\red255\green0\blue0;\red0\green128\blue0;\red0\green0\blue255;\red255\green255\blue0;\red255\green0\blue255;\red128\green0\blue128;\red128\green0\blue0;\red0\green255\blue0;\red0\green255\blue255;\red0\green128\blue128;\red0\green0\blue128;\red255\green255\blue255;\red192\green192\blue192;\red128\green128\blue128;\red255\green255\blue255;}
\paperw12240\paperh15840\margl1880\margr1880\margt1440\margb1440
{\*\pnseclvl1\pnucrm\pnstart1\pnhang\pnindent720
{\pntxtb}{\pntxta{.}}}
...

Now, based on my knowledge of the application, I knew that this column was used to store small pieces of text (typically 8 to 30 short lines) with some amount of formatting (e.g. fonts, alignment, etc.). Again the online forum came in useful, in that a side comment from one of the developers (regarding a small bug – sorry, undocumented anti-feature) revealed that they stored the formatted text as RTF – Rich Text Format. I should have known from the opening six bytes in the data.

Somewhere in these oceans of RTF codes swam the plain text I craved. So I googled RTF, skimmed this old RTF specification, and ended up with the admittedly poorly-performing PL/SQL below, which, for the 651 rows in this table (each with an RTF payload averaging 3KB), works just well enough for my purposes. As it turned out, the only RTF codes I was interested in were \fcharset and \*, both of which I used to ignore bits of text I didn’t want in my output – oh, and \par, which denotes the end of a paragraph. I can run this script once a month on the freshly exported data and apply the full weight of Oracle’s analytic capabilities to it.

The code below exemplifies the use of a pipelined function. This is not a good idea, by the way, if you want to use it in regular queries, e.g. a view. In my case, however, I only wanted to call this from within PL/SQL, and then only once a month. Because of the way parameters work with functions like this, I had to call it with dynamic SQL (execute immediate).

That was kind of fun, but I’d rather not have to deal with RTF ever again, thank you.

create or replace package myutil_rtf is
  type t_v4000_table is table of varchar2(4000);
  function extract_text (p_recid in number)
  return t_v4000_table pipelined deterministic;
end;
/

create or replace package body myutil_rtf is
  function extract_text (p_recid in number)
    return t_v4000_table pipelined deterministic is
    
    l_ch varchar2(1);
    l_ctrl varchar2(4000);
    l_line varchar2(4000);
    l_rtf clob;
    
    --don't output any text between
    --\fcharset and closing }
    l_fcharset boolean := false;
    
    --increments for each enclosed pair of { }
    --within a discard section
    l_discard number;
    
  begin
    
    select rtf_clob into l_rtf
    from rtf_table where recid = p_recid;
    
    for i in 1..dbms_lob.getlength(l_rtf) loop
    
      l_ch := substr(l_rtf,i,1);
      
      if l_ch = '}' then
      
        if l_fcharset then
          --closing } found; re-enable output
          l_fcharset := false;
          l_line := null;
        end if;
        
        if l_discard > 0 then
          l_discard := l_discard - 1;
          if l_discard = 0 then
            l_discard := null;
          end if;
        end if;
        
      elsif l_ch = '{' then
      
        if l_discard is not null then
          l_discard := l_discard + 1;
        end if;
      
      elsif l_ch = '\' then
      
        --controls start with a backslash
        l_ctrl := '\';
      
      elsif l_ctrl is not null then
      
        --controls are always ended by some
        --non-alphanumeric character
        if instr('abcdefghijklmnopqrstuvwxyz'
        || '0123456789',lower(l_ch)) > 0 then
          l_ctrl := l_ctrl || lower(l_ch);
        else
          if l_ctrl = '\par' then
            pipe row (l_line);
            l_line := null;
          elsif substr(l_ctrl,1,9) = '\fcharset' then
            l_fcharset := true;
          elsif l_ctrl || l_ch = '\*' then
            --{\* ... } means you can ignore
            --anything between the { }
            if l_discard is null then
              l_discard := 1;
            end if;
          end if;
          l_ctrl := null;
        end if;
      
      elsif l_ch not in (chr(10), chr(13), '{')
        and not l_fcharset and l_discard is null then
      
        l_line := l_line || l_ch;
      
      end if;
    
    end loop;
    
    if l_line is not null and not l_fcharset then
      pipe row (l_line);
    end if;
    
    return;
  end extract_text;
end myutil_rtf;
/

To extract the text from the table with recid=1:

select column_value line_of_text
      ,rownum line_number
from table(myutil_rtf.extract_text(1));

ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define

I came across an inexplicable error the other day when bulk collecting into a PL/SQL table declared with a NOT NULL constraint. What was confusing was that the code had been passing its tests for quite some time.

In the end the only thing that had changed was that a VARCHAR2 which should have been non-null happened to be NULL for one particular row in the table.

Thanks to Connor for the simple test case, listed below.

If you know what might be the cause or reason behind this error, and why it doesn’t occur for dates, I’d be interested.

This was reproduced on Oracle 10.2.0.1.0.

SQL> declare
      type t is table of number not null index by pls_integer;
      r t;
     begin
      select case when rownum < 20 then rownum else null end
      bulk collect into r from all_Objects
      where rownum <= 20;
     end;
     /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
ORA-06512: at line 5

SQL> declare
      type t is table of varchar2(80) not null index by pls_integer;
      r t;
     begin
      select case when rownum < 20 then rownum else null end
      bulk collect into r from all_Objects
      where rownum <= 20;
     end;
     /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
ORA-06512: at line 5

SQL> declare
      type t is table of date not null index by pls_integer;
      r t;
     begin
      select case when rownum < 20 then sysdate else null end
      bulk collect into r from all_Objects
      where rownum <= 20;
     end;
     /
PL/SQL procedure successfully completed.