Dumb triggers? Let’s make ’em a bit smarter

Some time back, Connor rightly pointed out that triggers that modify data can get in the way when you need to do out-of-the-ordinary data maintenance, e.g. when you need to fix up a row here or re-insert a row over there. You can’t just disable the trigger, because that disables it for every session – which means making your users suffer downtime.

Now, the only purposes for which I use triggers are common things like setting audit columns and incrementing a VERSION_ID column and, in certain special cases, carefully implementing cross-row constraints; I also use them to populate a journal table with all changes to the table. Mind you, features have been added and improved in the Oracle database (such as Flashback Query and Flashback Data Archive) to the point where I’m almost ready to stop doing this. However, there are still some minor use cases where having a separate “journal” table can be useful. Any argument about that assertion is ruled “out of scope” for this article! 🙂

So, assuming we’re sticking with triggers that might change data, a solution to this problem is already built in to the journal triggers and Table APIs (TAPIs) that my PL/SQL code generator creates. It allows me to disable the trigger on any table, just for my current session, without affecting any other concurrent activity – and no DDL is required.

UPDATED 16/2/2016: now uses a context variable (thanks Connor for the idea)

In the trigger I have this code:

create or replace TRIGGER EMPS$TRG 
  FOR INSERT OR UPDATE OR DELETE ON EMPS 
  COMPOUND TRIGGER 
 
  BEFORE EACH ROW IS 
  BEGIN 
    -- skip the trigger logic if it has been disabled for this session
    IF SYS_CONTEXT('SAMPLE_CTX','EMPS$TRG') IS NULL THEN 
      ...etc... 
    END IF; 
  END BEFORE EACH ROW; 
 
  AFTER EACH ROW IS 
  BEGIN 
    IF SYS_CONTEXT('SAMPLE_CTX','EMPS$TRG') IS NULL THEN 
      ...etc... 
    END IF; 
  END AFTER EACH ROW; 
 
END EMPS$TRG;

The trigger takes advantage of some extra code that is generated in the Table API:

create or replace PACKAGE EMPS$TAPI AS 
/***********************************************
 Table API for emps 
 10-FEB-2016 - Generated by SAMPLE
***********************************************/ 

... 
 
-- Use these procedures to disable and re-enable the
-- journal trigger just for this session (to disable for
-- all sessions, just disable the database trigger 
-- instead). 
PROCEDURE disable_journal_trigger; 
PROCEDURE enable_journal_trigger; 
 
END EMPS$TAPI; 

The package body code is quite simple:

create or replace PACKAGE BODY EMPS$TAPI AS 
/***********************************************
 Table API for emps 
 10-FEB-2016 - Generated by SAMPLE
***********************************************/ 
 
...

-- may be used to disable and re-enable the journal trigger for this session 
PROCEDURE disable_journal_trigger IS 
BEGIN 
  log_start('disable_journal_trigger'); 
 
  SECURITY.disable_journal_trigger('EMPS$TRG');
 
  log_end;
EXCEPTION 
  WHEN OTHERS THEN 
    UTIL.log_sqlerrm; 
    RAISE; 
END disable_journal_trigger; 
 
PROCEDURE enable_journal_trigger IS 
BEGIN 
  log_start('enable_journal_trigger'); 
 
  SECURITY.enable_journal_trigger('EMPS$TRG');
 
  log_end; 
EXCEPTION 
  WHEN OTHERS THEN 
    UTIL.log_sqlerrm; 
    RAISE; 
END enable_journal_trigger; 
 
END EMPS$TAPI;

To disable the trigger, a context variable is set with the trigger’s name; the default state for a new session (i.e. the context variable not set) means the trigger is enabled.

create or replace PACKAGE BODY SECURITY AS

...

PROCEDURE disable_journal_trigger
  (trigger_name IN VARCHAR2
  ,client_id    IN VARCHAR2 := NULL) IS
BEGIN
  -- set the context to any non-null value
  DBMS_SESSION.set_context 
    (namespace => 'SAMPLE_CTX'
    ,attribute => trigger_name
    ,value     => 'DISABLED'
    ,client_id => NVL(client_id, SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER')));
END disable_journal_trigger;

PROCEDURE enable_journal_trigger
  (trigger_name IN VARCHAR2
  ,client_id    IN VARCHAR2 := NULL) IS
BEGIN
  -- clear the context
  DBMS_SESSION.clear_context 
    (namespace => 'SAMPLE_CTX'
    ,attribute => trigger_name
    ,client_id => NVL(client_id, SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER')));
END enable_journal_trigger;

END SECURITY;
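
One piece of setup is assumed above: the SAMPLE_CTX namespace must be created and bound to the SECURITY package, so that only that package can set or clear the variable. Since the code passes a client_id, I’d expect the context to be declared ACCESSED GLOBALLY, so that the setting follows the client identifier (e.g. an APEX session) rather than one particular database connection. A minimal sketch of the DDL:

-- bind the namespace to the SECURITY package; ACCESSED GLOBALLY makes
-- the client_id parameter of DBMS_SESSION.set_context meaningful
CREATE OR REPLACE CONTEXT sample_ctx
  USING security
  ACCESSED GLOBALLY;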

So now, to run some data maintenance, I can simply call the TAPI to disable, then re-enable, the trigger:

BEGIN EMPS$TAPI.disable_journal_trigger; END;
/

... do the data maintenance...

BEGIN EMPS$TAPI.enable_journal_trigger; END;
/

Unless the data maintenance is doing something very unusual, this script should be safe to run while the system is still up and running for users.

Also, it would be a trivial exercise to write a routine which disables or enables all the journal triggers at once.
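
For example, a minimal sketch of what that might look like (the procedure name and the reliance on the generator’s “%$TRG” naming convention are my own assumptions):

-- sketch only: disable or re-enable every journal trigger for this session
CREATE OR REPLACE PROCEDURE set_all_journal_triggers
  (enable_triggers IN BOOLEAN) IS
BEGIN
  FOR r IN (SELECT trigger_name
            FROM   user_triggers
            WHERE  trigger_name LIKE '%$TRG') LOOP
    IF enable_triggers THEN
      SECURITY.enable_journal_trigger(r.trigger_name);
    ELSE
      SECURITY.disable_journal_trigger(r.trigger_name);
    END IF;
  END LOOP;
END set_all_journal_triggers;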

The point of this, of course, is that you should be able to do all this sort of thing without writing a lot of code for each table in your schema – solve it for one table, and then generate the code for all your tables.

Source code/download: http://bitbucket.org/jk64/jk64-sample-apex-tapi


Using compound triggers to boost your journal table performance

If your schemas are like those I deal with, almost every table has a doppelgänger which serves as a journal table; an “after insert, update or delete” trigger copies each and every change into the journal table. It’s a bit of a drag on performance for large updates, isn’t it?

I was reading through the docs (as one does) and noticed this bit:

Scenario: You want to record every change to hr.employees.salary in a new table, employee_salaries. A single UPDATE statement will update many rows of the table hr.employees; therefore, bulk-inserting rows into employee_salaries is more efficient than inserting them individually.

Solution: Define a compound trigger on updates of the table hr.employees, as in Example 9-3. You do not need a BEFORE STATEMENT section to initialize idx or salaries, because they are state variables, which are initialized each time the trigger fires (even when the triggering statement is interrupted and restarted).

http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/triggers.htm#CIHFHIBH

The example shows how to use a compound trigger to not only copy the records to another table, but to do so with a far more efficient bulk insert. Immediately my journal table triggers sprang to mind – would this approach give me a performance boost?

The answer is, yes.

My test cases are linked below – emp1 is a table with an ordinary set of triggers, which copies each insert/update/delete into its journal table (emp1$jn) individually for each row. emp2 is a table with a compound trigger instead, which does a bulk insert of 100 journal entries at a time.
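
For reference, the row-by-row version on emp1 looks something like this (a sketch with a cut-down column list; the full script is linked below):

CREATE OR REPLACE TRIGGER emp1$trg
  AFTER INSERT OR UPDATE OR DELETE ON emp1
  FOR EACH ROW
DECLARE
  rec emp1$jn%ROWTYPE;
BEGIN
  IF INSERTING OR UPDATING THEN
    rec.id        := :NEW.id;
    rec.name      := :NEW.name;
    rec.jn_action := CASE WHEN INSERTING THEN 'I' ELSE 'U' END;
  ELSE
    rec.id        := :OLD.id;
    rec.name      := :OLD.name;
    rec.jn_action := 'D';
  END IF;
  rec.jn_timestamp := SYSTIMESTAMP;
  -- one single-row insert per affected row – this is the overhead
  -- that the compound trigger version avoids
  INSERT INTO emp1$jn VALUES rec;
END emp1$trg;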

I ran a simple test case involving 100,000 inserts and 100,000 updates against both tables; the first time, I did emp1 first followed by emp2, and the second time, I reversed the order. From the results below you’ll see I got a consistent improvement, shaving about 4-7 seconds off about 21 seconds – an improvement of 19% to 35%. This is with the default value of 100 for the bulk operation; tweaking this might wring a bit more speed out of it (at the cost of using more memory per session).
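
Each test run boiled down to something like this (a sketch only – the exact column list is an assumption; the real script is linked below):

-- sketch of one test run against emp1 (repeated for emp2)
INSERT INTO emp1 (id, name)
SELECT ROWNUM, 'NAME' || ROWNUM
FROM   dual
CONNECT BY LEVEL <= 100000;

UPDATE emp1 SET name = name || '.';

ROLLBACK;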

Of course, this performance benefit only occurs for multi-row operations; if your application is only doing single-row inserts, updates or deletes you won’t see any difference in performance. However, I still think this method is neater (only one trigger) than the alternative, so I’d recommend it. The only reason I wouldn’t use this method is if the target might be a pre-11g database, which doesn’t support compound triggers.

Here are the test case scripts if you want to check it out for yourself:

ordinary_journal_trigger.sql
compound_journal_trigger.sql
test_journal_triggers.sql

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

insert emp1 (test run #1)
100000 rows created.
Elapsed: 00:00:21.19

update emp1 (test run #1)
100000 rows updated.
Elapsed: 00:00:21.40

insert emp2 (test run #1)
100000 rows created.
Elapsed: 00:00:16.01

update emp2 (test run #1)
100000 rows updated.
Elapsed: 00:00:13.89

Rollback complete.

insert emp2 (test run #2)
100000 rows created.
Elapsed: 00:00:15.94

update emp2 (test run #2)
100000 rows updated.
Elapsed: 00:00:16.60

insert emp1 (test run #2)
100000 rows created.
Elapsed: 00:00:21.01

update emp1 (test run #2)
100000 rows updated.
Elapsed: 00:00:20.48

Rollback complete.

And here, in all its glory, is the fabulous compound trigger:

CREATE OR REPLACE TRIGGER emp2$trg
  FOR INSERT OR UPDATE OR DELETE ON emp2
  COMPOUND TRIGGER
  
  -- buffer this many journal rows before bulk-inserting them
  FLUSH_THRESHOLD CONSTANT SIMPLE_INTEGER := 100;
  TYPE jnl_t IS TABLE OF emp2$jn%ROWTYPE
    INDEX BY SIMPLE_INTEGER;
  jnls  jnl_t;
  rec   emp2$jn%ROWTYPE;
  blank emp2$jn%ROWTYPE;
  
  -- bulk-insert the buffered journal rows, then clear the buffer
  PROCEDURE flush_array (arr IN OUT jnl_t) IS
  BEGIN
    FORALL i IN 1..arr.COUNT
      INSERT INTO emp2$jn VALUES arr(i);
    arr.DELETE;
  END flush_array;
  
  BEFORE EACH ROW IS
  BEGIN
    IF INSERTING THEN
      IF :NEW.db_created_by IS NULL THEN
        :NEW.db_created_by := NVL(v('APP_USER'), USER);
      END IF;
    ELSIF UPDATING THEN
      :NEW.db_modified_on := SYSDATE;
      :NEW.db_modified_by := NVL(v('APP_USER'), USER);
      :NEW.version_id     := :OLD.version_id + 1;
    END IF;
  END BEFORE EACH ROW;
  
  AFTER EACH ROW IS
  BEGIN
    rec := blank;
    IF INSERTING OR UPDATING THEN
      rec.id             := :NEW.id;
      rec.name           := :NEW.name;
      rec.db_created_on  := :NEW.db_created_on;
      rec.db_created_by  := :NEW.db_created_by;
      rec.db_modified_on := :NEW.db_modified_on;
      rec.db_modified_by := :NEW.db_modified_by;
      rec.version_id     := :NEW.version_id;
      IF INSERTING THEN
        rec.jn_action := 'I';
      ELSIF UPDATING THEN
        rec.jn_action := 'U';
      END IF;
    ELSIF DELETING THEN
      rec.id             := :OLD.id;
      rec.name           := :OLD.name;
      rec.db_created_on  := :OLD.db_created_on;
      rec.db_created_by  := :OLD.db_created_by;
      rec.db_modified_on := :OLD.db_modified_on;
      rec.db_modified_by := :OLD.db_modified_by;
      rec.version_id     := :OLD.version_id;
      rec.jn_action      := 'D';
    END IF;
    rec.jn_timestamp := SYSTIMESTAMP;
    -- append to the buffer; flush it once the threshold is reached
    jnls(NVL(jnls.LAST,0) + 1) := rec;
    IF jnls.COUNT >= FLUSH_THRESHOLD THEN
      flush_array(arr => jnls);
    END IF;
  END AFTER EACH ROW;
  
  AFTER STATEMENT IS
  BEGIN
    -- flush any rows remaining in the buffer
    flush_array(arr => jnls);
  END AFTER STATEMENT;
  
END emp2$trg;

Just to be clear: it’s not the compound trigger itself that improves performance – it’s the bulk insert. However, using a compound trigger made the bulk operation much simpler and neater to implement.

UPDATE 14/08/2014: I came across a bug in the trigger which caused it not to flush the array when doing a MERGE. I found I had to pass the array as a parameter internally; the version shown above includes this fix.