Single-Point-of-Definition by Example

Steven Feuerstein lists seven excellent “Golden Rules” in his presentation (via Eddie Awad) and says: “Don’t repeat anything. Aim for a Single Point of Definition for every aspect of your application – formulas, business rules, magic values, SQL statements.” He gives the following code as exhibit A:

(source)

I’m guessing in his presentation he spoke about various things that could be done to improve this code, but they’re not in the PDF; so I’d like to give it a go myself and see how much we can improve the maintainability of this code by reducing hard-coding.

1. Type Declarations

Instead of declaring parameters and variables as NUMBER, VARCHAR2, etc., these should use the %TYPE attribute so that they are automatically synchronized with the datatype of the table columns they represent:

PROCEDURE process_employee
(department_id_in IN employees.department_id%TYPE)
IS
   l_id     employees.employee_id%TYPE;
   l_salary employees.salary%TYPE;
...

l_name, however, is not based on any table column we know of at this point; so there is no %TYPE we can use for it. But bear with me, we’ll fix this later.

2. Magic Values

This one’s a no-brainer: that “10000000” is obviously a magic value that some bean-counter decided was the correct threshold for the CEO’s salary. Whatever.

You might define this as a constant in a global package specification, e.g.:

CREATE PACKAGE employee_constant AS
ceo_salary_threshold CONSTANT employees.salary%TYPE := 10000000;
END employee_constant;

Personally, I’d suspect that the business will review and revise this number from time to time, to keep up with inflation; so we might end up needing a database table to store the current threshold, plus a date range for which the threshold applies. I’d then add an interface on top of this table so that queries and procedures don’t need to know how to get the current threshold. We can retrofit this later by changing ceo_salary_threshold into a function instead of a constant. That’s a bit beyond the scope of this exercise, however.
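
For illustration, a minimal sketch of that future state might look like this – the threshold table, its columns and the effective-dating scheme are all assumptions, not part of the original example. Since a parameterless packaged function can be called without parentheses, code that already references employee_constant.ceo_salary_threshold would not need to change:

CREATE TABLE ceo_salary_thresholds
(threshold      NUMBER NOT NULL
,effective_from DATE   NOT NULL
,effective_to   DATE
);

CREATE OR REPLACE PACKAGE employee_constant AS
   FUNCTION ceo_salary_threshold RETURN employees.salary%TYPE;
END employee_constant;

CREATE OR REPLACE PACKAGE BODY employee_constant AS
   FUNCTION ceo_salary_threshold RETURN employees.salary%TYPE IS
      l_threshold employees.salary%TYPE;
   BEGIN
      -- pick the threshold in effect right now
      SELECT threshold
        INTO l_threshold
        FROM ceo_salary_thresholds
       WHERE SYSDATE >= effective_from
         AND (effective_to IS NULL OR SYSDATE < effective_to);
      RETURN l_threshold;
   END ceo_salary_threshold;
END employee_constant;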

3. Formatting Rules

The rule about formatting an employee name as “LAST,FIRST” is duplicated in a comment and in the SELECT statement; and chances are it will be required elsewhere in the application as well. My preferred method for creating a SPOD for this sort of business rule used to be to move the implementation into a view, e.g.:

CREATE VIEW formatted_employees AS
SELECT employees.*,
       employees.last_name || ',' || employees.first_name
       AS full_name
FROM   employees;
COMMENT ON COLUMN formatted_employees.full_name
IS 'Full name: LAST COMMA FIRST (ReqDoc 123.A.47)';

This view is what I like to call a “formatting” view: it is only allowed to query one table, it contains no WHERE, GROUP BY or HAVING clauses, and it selects all the columns from the table. The view can be used almost anywhere the table may be used. It adds additional columns that format the data in various ways. If need be, we can even add INSTEAD OF triggers to handle inserts/updates on the derived columns – if the business rules make the conversion from derived-to-underlying-column well defined.
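
As a rough sketch of that last point (the trigger name is made up, and it only handles the derived full_name column, assuming the LAST,FIRST rule lets us split unambiguously at the first comma):

CREATE OR REPLACE TRIGGER formatted_employees_upd
INSTEAD OF UPDATE ON formatted_employees
FOR EACH ROW
BEGIN
   -- derive the underlying columns from the formatted value
   UPDATE employees
      SET last_name  = SUBSTR(:NEW.full_name, 1, INSTR(:NEW.full_name, ',') - 1),
          first_name = SUBSTR(:NEW.full_name, INSTR(:NEW.full_name, ',') + 1)
    WHERE employee_id = :OLD.employee_id;
END;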

So, now we can redefine the cursor as:

CURSOR emps_in_dept_cur
IS
   SELECT employee_id, salary, full_name AS lname
     FROM formatted_employees
    WHERE department_id = department_id_in;

Notice that I don’t call the column “last_comma_first” or anything like that – that would again be hard-coding the business rule, which would then be replicated throughout the application. In Oracle 11g, however, I think it might be better to create virtual columns on the table instead:

ALTER TABLE employees ADD (
full_name VARCHAR2(100)
GENERATED ALWAYS AS (last_name || ',' || first_name) VIRTUAL
);
COMMENT ON COLUMN employees.full_name
IS 'Full name: LAST COMMA FIRST (ReqDoc 123.A.47)';
CURSOR emps_in_dept_cur
IS
   SELECT employee_id, salary, full_name AS lname
     FROM employees
    WHERE department_id = department_id_in;

The virtual column can have its own stats, and even an index if needed for querying.
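
For example, if we expect to search on the formatted name, something along these lines would do (the index name is just illustrative):

CREATE INDEX employees_full_name_i ON employees (full_name);

BEGIN
   DBMS_STATS.gather_table_stats(
      ownname    => USER,
      tabname    => 'EMPLOYEES',
      method_opt => 'FOR COLUMNS full_name SIZE AUTO');
END;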

Another option would be to create a function that does this formatting:

CREATE FUNCTION employee_full_name
   (last_name  IN employees.last_name%TYPE,
    first_name IN employees.first_name%TYPE)
RETURN VARCHAR2 DETERMINISTIC IS
--Full name: LAST COMMA FIRST (ReqDoc 123.A.47)
BEGIN
   RETURN last_name || ',' || first_name;
END employee_full_name;

We could call this function from the procedure or the view, but if we’re on 11g there’s no reason we can’t create a virtual column on it:

ALTER TABLE employees ADD (
full_name VARCHAR2(100)
GENERATED ALWAYS
AS (employee_full_name(last_name,first_name)) VIRTUAL
);

Another advantage to using the view or a virtual column is that we can now remove the “VARCHAR2 (100)” from the variable declaration, e.g.:

l_name   employees.full_name%TYPE;

4. Cursor Parameter

The cursor refers directly to the procedure’s parameter, which is a no-no – it couples the cursor too tightly to the procedure, i.e. we can’t re-use it elsewhere unless we always define a variable called “department_id_in”. Instead, we should use a cursor parameter:

CURSOR emps_in_dept_cur
   (department_id_in IN employees.department_id%TYPE)
IS
   SELECT employee_id, salary, full_name AS lname
     FROM employees
    WHERE department_id = emps_in_dept_cur.department_id_in;

The addition of the context “emps_in_dept_cur.” is not strictly necessary, but it is good practice to qualify the scope of variable references so that unrelated changes (e.g. the addition of a column called “department_id_in” to the table) don’t silently change the meaning of the code.

5. Cursor Row Type

What if we need to add 10 more columns to the cursor? At the moment we’re adding one more variable for each column of the cursor, and specifying it three times (variable declaration, cursor SELECT clause, and the FETCH INTO). We can reduce this to just once by declaring a cursor row type instead:

PROCEDURE process_employee
   (department_id_in IN employees.department_id%TYPE)
IS
   CURSOR emps_in_dept_cur
      (department_id_in IN employees.department_id%TYPE)
   IS
      SELECT employee_id, salary, full_name lname
        FROM employees
       WHERE department_id = emps_in_dept_cur.department_id_in;
   emp emps_in_dept_cur%ROWTYPE;
BEGIN
   OPEN emps_in_dept_cur (department_id_in);
   LOOP
      FETCH emps_in_dept_cur
      INTO emp;
...

6. Don’t COMMIT

Procedures should rarely COMMIT (there are very few exceptions to this rule, e.g. procedures declared as autonomous transactions). Transactional control should be left to the calling process – this process might need to be done along with a number of other changes elsewhere, and we would want to either COMMIT or ROLLBACK all the changes together as one transaction. What if the next procedure raised an error and we had to rollback? Our system would be left in an inconsistent state.
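
To illustrate, the transaction boundary would sit in the calling process, something like this (the second procedure is hypothetical, just to show several changes standing or falling together):

BEGIN
   process_employee (department_id_in => 10);
   process_department_budget (department_id_in => 10); -- hypothetical related change
   COMMIT;   -- commit both changes as one transaction
EXCEPTION
   WHEN OTHERS THEN
      ROLLBACK;  -- undo everything if any step fails
      RAISE;
END;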

7. Error Package

That RAISE_APPLICATION_ERROR hard-codes an error code and an error message. What if we type the error number wrong somewhere? If the calling process handles ORA-20907 in some fashion, but we mistype it as -20908 in one procedure, the calling process will not handle it.
We could declare an exception instead, e.g. in a global package specification:

CREATE PACKAGE employee_exception AS
invalid_dept_id EXCEPTION;
PRAGMA EXCEPTION_INIT (invalid_dept_id, -20907);
END employee_exception;

Now, our exception handler can raise just the one exception:

EXCEPTION WHEN NO_DATA_FOUND THEN
   RAISE employee_exception.invalid_dept_id;

However, we’ve now lost the error message. It would be better to create an error-handling package instead:

CREATE PACKAGE employee_error AS
   invalid_error_no CONSTANT NUMBER := -20000;
   invalid_error_no_exception EXCEPTION;
   PRAGMA EXCEPTION_INIT (invalid_error_no_exception, -20000);
   invalid_dept_id CONSTANT NUMBER := -20907;
   invalid_dept_id_exception EXCEPTION;
   PRAGMA EXCEPTION_INIT (invalid_dept_id_exception, -20907);
   PROCEDURE raise_exception (error_no IN NUMBER);
END employee_error;

CREATE PACKAGE BODY employee_error AS
   PROCEDURE raise_exception (error_no IN NUMBER) IS
   BEGIN
      CASE error_no
      WHEN invalid_dept_id THEN
         RAISE_APPLICATION_ERROR(invalid_dept_id, 'Invalid department ID');
      ELSE
         RAISE_APPLICATION_ERROR(invalid_error_no, 'Bug: invalid error number');
      END CASE;
   END raise_exception;
END employee_error;

EDIT: PRAGMA EXCEPTION_INIT only accepts literal numbers for its second parameter (or else you get PLS-00702 at compile time) – fixed accordingly

Now, our exception handler is nicely modular:

EXCEPTION WHEN NO_DATA_FOUND THEN
employee_error.raise_exception(employee_error.invalid_dept_id);

So now, our code looks like this:

PROCEDURE process_employee (department_id_in IN employees.department_id%TYPE)
IS
   CURSOR emps_in_dept_cur (department_id_in IN employees.department_id%TYPE)
   IS
      SELECT employee_id, salary, full_name lname
        FROM employees
       WHERE department_id = emps_in_dept_cur.department_id_in;
   emp emps_in_dept_cur%ROWTYPE;
BEGIN
   OPEN emps_in_dept_cur (department_id_in);
   LOOP
      FETCH emps_in_dept_cur
      INTO emp;

      EXIT WHEN emps_in_dept_cur%NOTFOUND;

      IF emp.salary > employee_constant.ceo_salary_threshold THEN
         adjust_comp_for_ceo (emp.salary);
      ELSE
         analyze_compensation
            (emp.employee_id, emp.salary,
             employee_constant.ceo_salary_threshold);
      END IF;
   END LOOP;
   CLOSE emps_in_dept_cur;
EXCEPTION WHEN NO_DATA_FOUND THEN
   employee_error.raise_exception(employee_error.invalid_dept_id);
END;

One final change that one might make here is to move the SQL query right out of the procedure and use a ref cursor instead, supplied by a central “employee_cursor” package.
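
A sketch of what such a package might look like – the function and type names are illustrative, and the strongly-typed ref cursor is anchored to employees%ROWTYPE (which, from 11g, includes the full_name virtual column added earlier):

CREATE PACKAGE employee_cursor AS
   TYPE emps_in_dept_curtype IS REF CURSOR RETURN employees%ROWTYPE;
   FUNCTION emps_in_dept
      (department_id_in IN employees.department_id%TYPE)
      RETURN emps_in_dept_curtype;
END employee_cursor;

CREATE PACKAGE BODY employee_cursor AS
   FUNCTION emps_in_dept
      (department_id_in IN employees.department_id%TYPE)
      RETURN emps_in_dept_curtype
   IS
      rc emps_in_dept_curtype;
   BEGIN
      OPEN rc FOR
         SELECT *
           FROM employees
          WHERE department_id = emps_in_dept.department_id_in;
      RETURN rc;
   END emps_in_dept;
END employee_cursor;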

There are probably plenty of other changes we could make to improve the maintainability of this code further.
What do you think?


ORA-01481 Invalid number format model

My code worked perfectly fine in Oracle 9i, but after our upgrade to 11g all of a sudden it was failing with ORA-01481 “Invalid number format model”.

The cause? A mixed-up number format model – I’d accidentally mixed locale-specific (“.“) and locale-safe (“G“) codes in the number format. I don’t know if this is due to a bug in 9i or an enhancement in 11g.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0
- 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application
Testing options
SQL> select to_char(12345.678,'99G990.0') from dual;
select to_char(12345.678,'99G990.0') from dual
*
ERROR at line 1:
ORA-01481: invalid number format model
SQL> select to_char(12345.678,'99,990.0') from dual;
TO_CHAR(1
---------
12,345.7
SQL> select to_char(12345.678,'99G990D0') from dual;
TO_CHAR(1
---------
12,345.7

“Action: Consult the manual…” – in retrospect, not of much help in this instance, unfortunately. However, these sorts of problems may usually be worked out by experimentation.


Weird SQL Results

Chris Taylor raised a good question.

Here is the interesting part (to me) – Sometimes it returns 1 row, sometimes it returns more than 1 row, and sometimes it returns no rows. To my mind, “trunc(dbms_random.value(1,500)” should return a random value between 1 & 500 every time and just one value.  Is this weird, or is it just me???

Basically he’s wondering why querying a table on a randomly-chosen unique identifier does not always return exactly 1 row.

At first glance it seems nonsensical and looks like a bug, but really, it’s not – in fact, Oracle is doing exactly the right thing (anecdotally, a colleague has told me that SQL Server gets this particular behaviour wrong).

A simple test case: the inner query generates exactly 100 rows, with the numbers 1 to 100. The outer query then queries this result set, with the intention of picking one of those rows only:

select * from (
  select rownum r from dual connect by level <= 100
)
where r = trunc(dbms_random.value(1,100));

However, what happens here is that the predicate (the WHERE clause) must be evaluated once for each row returned from the FROM clause; and since Oracle knows that dbms_random.value is not deterministic, it must re-evaluate it for each row separately – which means each row is compared against a different random number.

So, for example, Oracle looks at row 1, with the value 1. It generates a random number, e.g. 12, and so the predicate evaluates to FALSE, and the row is not returned. Oracle then looks at row 2, which has the value 2. It generates a random number, e.g. 2, and so the predicate evaluates to TRUE, and the row is returned. It does this for each row until the 100th is evaluated and then the query stops.

If none of the 100 random numbers happen to coincide with the values returned from the table, then the query will return no rows.

Quiz question: what is the theoretical maximum number of rows the above query could ever return?

To fix it so that Oracle only evaluates the selection criteria once, we just need to wrap it in another subquery:

select * from (
  select rownum r from dual connect by level <= 100
)
where r = (select trunc(dbms_random.value(1,100)) from dual);

A Good Bad Example

I learnt something new today (thanks to Steven) about TRIM – in the past I’ve used RTRIM and LTRIM if I only want to trim from the left or right ends of a string, but I was not aware (or had forgotten) that SQL’s TRIM function allows this to be specified.

Oracle’s documentation of TRIM, however, has a somewhat confusing example:

“This example trims leading zeros from the hire date of the employees in the hr schema:”

SELECT employee_id,
TO_CHAR(TRIM(LEADING 0 FROM hire_date))
FROM employees
WHERE department_id = 60
ORDER BY employee_id;

EMPLOYEE_ID TO_CHAR(T
----------- ---------
103 3-JAN-90
104 21-MAY-91
105 25-JUN-97
106 5-FEB-98
107 7-FEB-99

As an example of using the TRIM function, the above code is adequate; however, why does it call the TO_CHAR function at that point? After all, TO_CHAR is only useful if the datatype of its argument is not already a character string – but in this case it is, because TRIM always returns a string.

Not only does the example perform a useless datatype conversion, it involves an implicit conversion – from date to string. The expression would be much better like this:

TRIM(LEADING 0 FROM TO_CHAR(hire_date))

Better – but still not perfect. Sure, we should probably specify the date format so the code is a bit more robust (what if the session has a default format of ‘MM/DD/YY’?), but we can see yet another implicit data type conversion: “LEADING 0”. The only purpose for TRIM is to remove characters. Sure, you can put in zero if you want, but it’s only going to be converted to a character, ‘0’, anyway – so why not make it obvious to the next coder who comes along:

TRIM(LEADING '0' FROM TO_CHAR(hire_date,'DD/MM/YY'))

There, much better! I think the above expression is much clearer about what it will actually do – i.e. it converts a date to a string of characters, and then removes any leading ‘0’ characters.


Truncated Mean in Oracle

A colleague needed to get the average from a set of data, but disregarding the top 25% and the bottom 25%. We didn’t know of any builtin Oracle function that would do this, but a review of the wikipedia page for Average yielded the technical term: truncated (or trimmed) mean. So we searched the Oracle docs and Google for ways to implement this function and didn’t come up with anything very useful. There were some SQL Server scripts which would have required two or three passes over the dataset.

After browsing through the aggregate functions documentation, I hit upon the NTILE function which I’ve used before, and realised that was the answer. The NTILE function takes a set of ordered data and divides it evenly (or as evenly as possible) between a number of buckets. In our case, we wanted to discard the top 25% and bottom 25%, so we simply divide the set into 4 buckets and discard the 1st and the 4th buckets; then take the standard average of the remainder:

SELECT AVG(mystat)
FROM (
      SELECT mystat,
             NTILE(4) OVER (ORDER BY mystat) n
      FROM (SELECT mystat
            FROM mytable)
     )
WHERE n IN (2,3);

The benefit of this query is that it only does one pass over the data, and was easily modified to partition the data set into groups; a count was also added so that the average could be taken over the entire data set for any groups that had less than 4 items.

To get the truncated mean in groups, except for groups with <4 items (for which we’ll report the average over the entire group):

SELECT mycat, AVG(mystat)
FROM (
      SELECT mycat, mystat,
             NTILE(4) OVER (PARTITION BY mycat
                            ORDER BY mystat) n,
             COUNT(1) OVER (PARTITION BY mycat) c 
      FROM (SELECT mycat, mystat
            FROM mytable)
     )
WHERE n IN (2,3) OR c < 4
GROUP BY mycat
ORDER BY mycat;


A good entry-level interview question

I think this would make a reasonable question for would-be database developers:

What are the differences between the following statements?

UPDATE mytable SET mycolumn = NULL;
ALTER TABLE mytable MODIFY (mycolumn NULL);

If they can’t answer this correctly, they don’t get to stage two.


How to Change the ORDER BY When You Can’t Modify the SQL

Is it possible to change the ORDER BY on a query, if the SQL is defined in a wrapped PL/SQL package? A colleague asked me this interesting question today – and the answer was quite simply, Yes and No.

If the SQL already has an ORDER BY, then the answer is No. Fortunately, in our case, the SQL didn’t have any ORDER BY clause:

SELECT a.addressid
FROM address a, addresstype t
WHERE a.locationid = :locationid
AND a.typeid = t.typeid
AND t.typecode = 'STREET';

The plan currently being picked by the CBO (Oracle 9.2.0.8) is:

NESTED LOOPS
   TABLE ACCESS BY ROWID addresstype
      INDEX RANGE SCAN pkx_addresstype
   INDEX RANGE SCAN ix_address

For background, here are the relevant schema details:

TABLE address
(addressid NUMBER PRIMARY KEY
,locationid NUMBER NOT NULL
,typeid NUMBER NOT NULL --FOREIGN KEY to addresstype
);

INDEX ix_address (locationid, typeid, addressid);

TABLE addresstype
(typeid NUMBER PRIMARY KEY
,typecode VARCHAR2(30) NOT NULL
);

typecode is actually unique, but no constraint has been declared to the database.

Because the first step in the plan (the TABLE ACCESS on addresstype) only ever gets one row (in practice), the index range scan on ix_address causes this query to return the rows in ascending order of addressid.

The process that runs this query gets rows one at a time, does some work to “check” it against some other data, and once it finds a “match”, it stops getting more rows.

This design probably seemed like a good idea at the time, but now the business would like this process changed so that in the (rare) occurrence when more than one row would “match”, the process should favour a more recent address rather than an old one. For our purposes, “recent” may be approximated by assuming that bigger addressids are more recent than smaller addressids.

We can’t change the code without a lengthy and costly call to the application vendor; it would be better if we can change it ourselves. We cannot change the PL/SQL package in which this query is defined; so what can we do? Can we change the order in which rows are returned from this query?

The answer is yes – firstly, the query has no ORDER BY, so the order of rows is affected only by the query plan. If we change the query plan, which we can do with a stored outline, the order of rows can be changed.

So in this case, all we have to do is create a stored outline to add an INDEX_DESC hint:

SELECT /*+INDEX_DESC(a)*/ a.addressid
FROM address a, addresstype t
WHERE a.locationid = :locationid
AND a.typeid = t.typeid
AND t.typecode = 'STREET';

This causes the following plan to be used:

NESTED LOOPS
   TABLE ACCESS BY ROWID addresstype
      INDEX RANGE SCAN pkx_addresstype
   INDEX RANGE SCAN DESCENDING ix_address

With the stored outline, the query now returns the rows in descending order of addressid.
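
For completeness, here is one way the stored outline might be put in place, using the same outline-swap mechanics demonstrated in “Two questions about stored outlines” below – create an outline on the original SQL text, another on the hinted text, then exchange their hints as SYS (the outline names here are made up, and USE_STORED_OUTLINES must be enabled for the outline to take effect):

CREATE OUTLINE address_orig ON
SELECT a.addressid
FROM address a, addresstype t
WHERE a.locationid = :locationid
AND a.typeid = t.typeid
AND t.typecode = 'STREET';

CREATE OUTLINE address_desc ON
SELECT /*+INDEX_DESC(a)*/ a.addressid
FROM address a, addresstype t
WHERE a.locationid = :locationid
AND a.typeid = t.typeid
AND t.typecode = 'STREET';

-- swap the hints between the two outlines (as SYS)
UPDATE OUTLN.ol$hints
SET ol_name =
CASE ol_name
WHEN 'ADDRESS_ORIG' THEN 'ADDRESS_DESC'
WHEN 'ADDRESS_DESC' THEN 'ADDRESS_ORIG'
END
WHERE ol_name IN ('ADDRESS_ORIG','ADDRESS_DESC');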

NOTE:
If you want to use this approach, it is highly dependent on being able to find a query plan that produces the ordering that you want. This may mean creating a suitable index designed strategically for this purpose, if one doesn’t exist already. Even creating indexes won’t guarantee that you’ll be able to find a plan that gets the order that you want; generally I’d say this approach will only be feasible for relatively simple queries.


Two questions about stored outlines

Out of a discussion with a colleague at work I had two questions to answer:

  1. Can a stored outline change a query – e.g. the FROM or WHERE clauses?

  2. If a stored outline hints the query to use a function-based index, and a user doesn’t have execute privilege on the function, what happens if the user runs the query?

1. Can a stored outline change a query?

The answer was simple to find – no, stored outlines only comprise hints; if the hint doesn’t make sense for a particular query, the optimiser ignores the hint just as it does if you add the hint directly to the query. For example:


SQL> CONN SYS AS SYSDBA

SQL> ALTER SYSTEM SET USE_STORED_OUTLINES = TRUE;

SQL> GRANT CREATE ANY OUTLINE TO SCOTT;

SQL> CONN SCOTT

SQL> CREATE OUTLINE testoutline1 ON
SELECT * FROM EMP WHERE EMPNO = :1;

SQL> CREATE OUTLINE testoutline2 ON
SELECT * FROM DEPT WHERE DEPTNO = :1;

SQL> CONN SYS AS SYSDBA

-- swap the outlines
SQL> UPDATE OUTLN.ol$hints
SET ol_name =
CASE ol_name
WHEN 'TESTOUTLINE1' THEN 'TESTOUTLINE2'
WHEN 'TESTOUTLINE2' THEN 'TESTOUTLINE1'
END
WHERE ol_name IN ('TESTOUTLINE1','TESTOUTLINE2');

SQL> CONN SCOTT

SQL> SELECT * FROM EMP WHERE EMPNO = :1;

SELECT STATEMENT
TABLE ACCESS BY INDEX ROWID (EMP)
INDEX UNIQUE SCAN (PK_EMP)

2. What if a user doesn’t have execute privilege on a function used in a function-based index, but the stored outline tells them to use the index?

The scenario is where a table has an index on a column and a function. A stored outline could tell the optimiser to use this index, but a particular user doesn’t have execute privilege on the function; will the optimiser still allow the user to query the table using the index? It should, since you shouldn’t need to execute the function to do a range scan on the first column, ignoring the function-based portion of the index.


SQL> CONN SCOTT

SQL> CREATE TABLE SCOTT.testtable
(c1 NUMBER NOT NULL
,c2 NUMBER NOT NULL
);

SQL> GRANT SELECT ON SCOTT.testtable TO LARRY;

SQL> CREATE OR REPLACE
FUNCTION SCOTT.testfunction (i NUMBER)
RETURN NUMBER DETERMINISTIC IS
BEGIN
RETURN I * 3.14159;
END;

-- note: don't grant execute on testfunction to LARRY

SQL> CREATE INDEX SCOTT.testindex
ON SCOTT.testtable (c1, SCOTT.testfunction(c2));

SQL> CONN LARRY

SQL> SELECT * FROM SCOTT.testtable
WHERE c1 = TO_NUMBER(:1);

SELECT STATEMENT
TABLE ACCESS FULL (TESTTABLE)

SQL> CREATE OUTLINE testoutline3 ON
SELECT * FROM SCOTT.testtable x
WHERE c1 = TO_NUMBER(:1)
AND SCOTT.testfunction(c2) = TO_NUMBER(:2);

SQL> CREATE OUTLINE testoutline4 ON
SELECT * FROM SCOTT.testtable x
WHERE c1 = TO_NUMBER(:1);

SQL> CONN SYS

-- swap the outlines
SQL> UPDATE OUTLN.ol$hints
SET ol_name =
CASE ol_name
WHEN 'TESTOUTLINE3' THEN 'TESTOUTLINE4'
WHEN 'TESTOUTLINE4' THEN 'TESTOUTLINE3'
END
WHERE ol_name IN ('TESTOUTLINE3','TESTOUTLINE4');

SQL> CONN LARRY

SQL> SELECT * FROM SCOTT.testtable
WHERE c1 = TO_NUMBER(:1);

SELECT STATEMENT
TABLE ACCESS BY INDEX ROWID (TESTTABLE)
INDEX RANGE SCAN (TESTINDEX)

The answer is yes – just as Larry could have hinted his query to use the index, and it would have worked just as well (using the non-function-based part of the index), it works just the same when stored outlines are used instead. The use of stored outlines is compatible with function-based indexes and will not trip up queries run by users who don’t have execute privilege on the functions used by those indexes.


Create Excel file from SQL query using Python

This will create a file called “emp.xls” with a nicely formatted spreadsheet based on the query specified. The header row is based on the column names, and is frozen. The column widths are set according to the max width of the data.

import cx_Oracle
from table2Excel import write_cursor_to_excel

orcl = cx_Oracle.connect('scott/tiger')
curs = orcl.cursor()
curs.execute("""
  SELECT e.ename "Employee",
         e.job "Job",
         e.hiredate "Hire Date",
         e.sal "Salary",
         e.comm "Commission",
         d.dname "Department",
         (SELECT ename FROM emp WHERE emp.empno = e.mgr) "Manager"
  FROM   emp e, dept d
  WHERE  e.deptno = d.deptno
""")
write_cursor_to_excel(curs, 'emp.xls', 'Employees')

You can get the source for table2Excel from here [table2Excel.py.txt]. Feel free to adapt it to your needs.