
Apex Dynamic Action silently fails – a story

Here is a short story about a little problem that caused me a bit of grief, but in the end had a simple cause and a simple fix.

I had a dynamic action in my Apex 4.1 app that had to run some PL/SQL – which was working fine, except the PL/SQL kept on getting longer and longer and more complex; so quite naturally I wanted it to be encapsulated in a database procedure.

I did so, but it didn’t work: the page ran without error, but it seemed like the dynamic action wasn’t firing. It was supposed to change the value of some display items on the page in response to the change of a radio button item, but now they weren’t changing! There were no debug warnings or logs to give a hint, either. I tried using Chrome’s developer tools to trace it, but that just showed me a very high-level view of what the client was doing, and didn’t report any errors or warnings.

I reverted to my original code, and it worked fine. Ok, so that means it’s probably a problem with my procedure.

I checked and rechecked my procedure. Didn’t seem to be anything wrong with it. I added a line into the procedure to raise an exception. The Apex page dutifully reported the PL/SQL error in the Ajax call – which meant that my procedure was being called successfully. Also, I included the return values in the exception message, and this proved that my procedure was correctly determining the values. They just weren’t being returned to the items on the page.

I tried raising an exception in the Apex dynamic action’s PL/SQL Code. That worked. The exception message correctly showed that the new values were being returned; yet they still weren’t being populated on the page.

I tried removing all the items from the Page Items to Return setting; then I gradually added them back in, one by one. I narrowed it down to just one item. If I included that item, none of the items were updated when the procedure returned. If I excluded that item, all the other items were correctly updated when the procedure returned. Of course, that wasn’t a solution, because there was a cascade of other dynamic actions dependent on that particular item, so it had to be updated.

After lunch and a short walk, it occurred to me: unlike the other parameters, that particular parameter was anchored to a database column defined as CHAR(1). Could that be a problem?

"change code at random... bug solved"Sure enough, when I changed the parameter’s data type from column%TYPE (which mapped to a CHAR) to just a plain VARCHAR2, everything worked.

Yet another reason to avoid CHAR, I guess.

Apex and Application Contexts

My current project uses Oracle RLS (Row Level Security) to ensure each user can only see the data they’re allowed to see. This has made a potentially very complex application very simple. My Apex pages can query any table with no WHERE clauses needed (except for user-entered filters), and I can be confident users won’t inadvertently see rows they’re not supposed to, even if I forget to add the right predicate on every page that queries each table.

Now, remember that Apex is a stateless environment as far as database sessions are concerned: when a page is requested, it might be generated by any session running on the database; it might re-use a session that had just finished rendering a page for someone else entirely. Therefore, it is imperative that all my database code assumes nothing about the state of package variables, for example. Instead, we use Oracle Application Contexts to hold session-specific information. For example, a context might hold the current user’s identifier, or a flag that indicates they have been granted a particular privilege. As an added advantage, when we use contexts, we can easily pass data into views.
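As a minimal sketch of that last point (the context, attribute and table names here are invented for illustration), a view can pick up a context value directly:

-- a view that filters rows using a value held in the application context
CREATE OR REPLACE VIEW my_orders_v AS
SELECT o.*
FROM   orders o
WHERE  o.owner_id = SYS_CONTEXT('MY_CONTEXT','user_id');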

How do we guarantee that the context has been set correctly for the current user’s session? We could follow the guidelines in the Apex help (quoted below) to set the context at the start of every page call, then clear them at the end of the page call. If we were to do this, these are the steps we’d take (these are taken from Apex 4.1.1 but should be comparable in other releases):

1. Create the context:

CREATE OR REPLACE CONTEXT MY_CONTEXT USING MY_PACKAGE;

2. Create a database package with a procedure to Set the context variable(s), e.g.:

PROCEDURE set_session IS
BEGIN
  DBMS_SESSION.set_context
    (namespace => 'MY_CONTEXT'
    ,attribute => 'is_admin'
    ,value     => 'Y');
END set_session;

3. Edit Application Properties; under Security, in the Database Session section, set Initialization PL/SQL Code to call MY_PACKAGE.set_session.

It is interesting that the Apex help page for Initialization PL/SQL Code has this to say:

“Use this attribute to enter a PL/SQL block that sets a context for the database session associated with the current “show page” or “accept page” request. The block you enter here is executed at a very early point during the page request, immediately after the APP_USER value is established. The value of APP_USER (using :APP_USER or v(‘APP_USER’)) may be used within the block. Values of other items in session state may be referenced as well, but any such items must have been established in session state before the initiation of the current page request. Consider the following example:

dbms_session.set_context(‘CTX_USER_QRY’,’USERPRIV’,my_package.my_function(:APP_USER));
It sets the value of USERPRIV in the context named CTX_USER_QRY to the value returned by the function my_function in package my_package. The function is passed the current value of APP_USER as an input argument. Presumably, the named context would be used in a VPD policy (created within the application’s parsing schema) to effect the generation of predicates appropriate to the authenticated user.

Virtual Private Database, also known as Fine-Grained Access Control or FGAC, is an Oracle database feature that provides an application programming interface (API) that enables developers to assign security policies to database tables and views. Using PL/SQL, developers can create security policies with stored procedures and bind the procedures to a table or view by means of a call to an RDBMS package. Such policies are based on the content of application data stored within the database, or based on context variables provided by Oracle database. In this way, VPD permits access security mechanisms to be removed from applications, and to be situated closer to particular schemas.

…”

This works fine. Before every page is rendered, Apex calls our package to set the context variables. All our procedures, views, etc. as well as our RLS package can use these context variables as required.

Is there a simpler way? Yes.

Just this morning I attended an “Odds and Ends” session presented by Connor McDonald at the AUSOUG Monthly meeting in Perth. He briefly mentioned a cool feature of Oracle contexts – the fact that they can be made globally accessible. I had seen this in the documentation but didn’t know what the words meant or how it might be useful.

What Connor’s talk alerted me to was that a Globally Accessible context could be very useful for our Apex application: it allows a context to be simultaneously available from any session on the database, to any user who logs in and sets the correct Client Identifier. We can use this to avoid calling any code from Initialization PL/SQL Code (which runs before every page request and may have a performance impact); instead, the context variables will follow the user around, no matter which database session services their page request.

To do this, you simply:

1. Create our context to be globally accessible:

CREATE OR REPLACE CONTEXT MY_CONTEXT USING MY_PACKAGE
  ACCESSED GLOBALLY;

2. Alter our database package to set the context variables, this time sending the client_id parameter, e.g.:

PROCEDURE set_session IS
  v_real_client_id VARCHAR2(200);
BEGIN
  v_real_client_id := REPLACE(
    SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER')
    ,'nobody'
    ,v('APP_USER'));
  DBMS_SESSION.set_context
    (namespace => 'MY_CONTEXT'
    ,attribute => 'is_admin'
    ,value     => 'Y'
    ,client_id => v_real_client_id);
END set_session;

3. In Apex, remove the Initialization PL/SQL Code – we don’t need it anymore.

4. In the Authentication scheme, set Post-Authentication Procedure Name to MY_PACKAGE.set_session.

Now, our context is only set once – just after login. Excellent.

“Now,” you’ll no doubt want to ask me, “where did you actually set the Client Identifier? And what’s the deal with the REPLACE?”

We don’t have to worry about setting Client Identifier – Apex already sets it for us automatically for each page request. If you log into an ordinary Apex application, and examine the Client Identifier, you’ll get something like this:

SCOTT:15135454564654

Where “SCOTT” is the login username, and “15135454564654” is the Apex session ID. We can bind our context to this combination of values simply by passing the Client Identifier to the client_id parameter of DBMS_SESSION.set_context. In fact, most of the time this sort of code will work fine:

DBMS_SESSION.set_context
 (namespace => 'MY_CONTEXT'
 ,attribute => 'my_attribute'
 ,value     => 'xyz'
 ,client_id => SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER'));

Unfortunately, however, if we look at CLIENT_IDENTIFIER from the post-login procedure (i.e. immediately after the user has successfully logged in), we instead find it is something like this:

nobody:15135454564654

So the session ID is correct, but the username hasn’t been set yet. Looking at the Apex debug logs it’s apparent that the client identifier gets set with the username very soon after the post-login procedure is run – which is a tad inconvenient. However, APP_USER is set to SCOTT prior to the post-login procedure; therefore, my set_session procedure simply replaces “nobody” with v(‘APP_USER’), so that the client_id will agree with the client identifier that will be used from that point on.

That works great – my views, packages and RLS predicates can use SYS_CONTEXT to get context variables with negligible performance impact (remembering that context variables are always strings, so don’t forget to convert your datatypes if necessary).
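For example (a sketch only; the org_id attribute is invented here), a numeric value stored in a context needs to be converted back explicitly:

-- context values are always VARCHAR2, so convert on the way out
SELECT TO_NUMBER(SYS_CONTEXT('MY_CONTEXT','org_id')) AS org_id
FROM   dual;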

As an added bonus, as a developer or DBA we can log into the database and diagnose an active session by pretending to be it. For example, Joe sends us an email with a screenshot of an Apex page showing “No Data Found”. When you log into your tool of choice and run the same query, you can see plenty of data. You want your session to pretend to be Joe. To do this, you would get Joe’s username and the session ID from the URL on the Apex page (it’s the 3rd parameter after the f?p= bit in the URL), and run this:

DBMS_SESSION.set_identifier('JOE:18765421554654');

What this does is tell Oracle that your session is Joe’s session. Your session will automatically see the same context variables that Joe’s session currently sees. You run the query again, and you can see that it returns no rows – ah, maybe there’s something wrong with the RLS predicate your application is generating. You now have a testbed under which you can test any change you might want to make in order to resolve the issue (or at least, to be able to tell Joe why he’s not allowed to see any data).
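For example, once the identifier is set you can immediately see the context values that Joe’s session sees (a sketch using the context from earlier):

BEGIN
  DBMS_SESSION.set_identifier('JOE:18765421554654');
END;
/
-- SYS_CONTEXT now returns the globally accessible context values for Joe's session
SELECT SYS_CONTEXT('MY_CONTEXT','is_admin') AS is_admin FROM dual;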

Alexandria Updated

Just a quick post to point out that the Alexandria PL/SQL Library has been updated to v1.7, including updates to the Amazon S3 package and a new package for generating iCalendar objects – more details on Morten’s blog.

Select All / Unselect All Checkbox in Interactive Report Header

I want a checkbox in an Interactive Report (IR), and I want the users to be able to quickly Select All or Unselect All of them. I don’t want two big clunky buttons to do this, I just want a single checkbox in the header like I see on cool people’s websites.
To do this:
1. In the Region Definition for the IR, add the checkbox to the query, e.g.:
SELECT ...,
       apex_item.checkbox(1, record_id) selected
FROM   ...
2. In the Report Attributes for the IR, modify the attributes of column “SELECTED”:
Heading = <input type="checkbox" id="selectunselectall">
Display Text As = Standard Report Column
Allow Users To = (uncheck all options, including Hide, Sort, etc.)
3. In the page definition, add a Dynamic Action:
Event = Change
Selection Type = jQuery Selector
jQuery Selector = #selectunselectall
Condition = - No Condition -
True Action = Execute JavaScript Code
Fire On Page Load = (uncheck)
Code =
if ( $( '#selectunselectall' ).is(':checked') ) {
  // tick every report checkbox (f01 is the name generated by apex_item.checkbox(1, ...))
  $('input[type=checkbox][name=f01]').attr('checked',true);
}
else {
  // untick them all
  $('input[type=checkbox][name=f01]').attr('checked',false);
}

The only issue with this is that if the user clicks “Actions” and then “Select Columns”, the checkbox column shows the HTML code (<input type="checkbox"…) instead of a nice name. I think I’ll be able to get away with this.

Dynamic Action on a Tabular Form

I want to visually enable/disable certain items in each row of a tabular form depending on the value of another item in that row. I’m using Apex 4.1.
My tabular form has a number of editable fields representing budgets. Each line might be an Annual budget (AMOUNT_TYPE = ‘YEAR’) with a single amount for the year, or a Monthly budget (AMOUNT_TYPE = ‘MONTH’) with separate amounts for each of the twelve months.
The first editable item (internal id f02) is AMOUNT_TYPE which is a Select List with an LOV. The second editable item (internal id f03) is the Annual Amount and should only be enabled if AMOUNT_TYPE = ‘YEAR’. The 3rd, 4th … 14th items (internal ids f04..f15) are the Monthly Amounts and should only be enabled if AMOUNT_TYPE = ‘MONTH’.
To do this:
1. Define a visual style to be applied to items that are disabled.
Add this to the Page’s “HTML Header” attribute:
<style>
.textinputdisabled {
  color:grey;
  background-color:lightgrey;
  text-decoration:line-through;
}
</style>
In this instance, I’ve set the background color to a light grey, the text color to darker grey, and I’ve added a strikethrough effect.
2. Set the class on the AMOUNT_TYPE item
Edit the Column Attributes on the AMOUNT_TYPE column, set Element Attributes to:
class="typeselect"
3. Define the Dynamic Action
Event = Change
Selection Type = jQuery Selector
jQuery Selector = .typeselect
Condition = - No Condition -
True Action = Execute JavaScript Code
Fire On Page Load = yes
Code =
var row_id = $(this.triggeringElement ).attr('id').substr(4);
var column_id;
if( $(this.triggeringElement ).val() == 'MONTH')
{
  // monthly budget: disable the Annual amount item (f03)...
  $( "#f03_" + row_id ).prop( 'readOnly', 'readonly');
  $( "#f03_" + row_id ).prop( 'class', 'textinputdisabled');
  // ...and re-enable the twelve Monthly amount items (f04..f15)
  for (var i=4;i<16;i++)
  {
    column_id = ("0" + i).slice(-2);
    $( "#f" + column_id + "_" + row_id ).prop( 'readOnly', false);
    $( "#f" + column_id + "_" + row_id ).prop( 'class', false);
  }
}
else
{
  // annual budget: enable the Annual amount item and disable the Monthly items
  $( "#f03_" + row_id ).prop( 'readOnly', false);
  $( "#f03_" + row_id ).prop( 'class', false);
  for (var i=4;i<16;i++)
  {
    column_id = ("0" + i).slice(-2);
    $( "#f" + column_id + "_" + row_id ).prop( 'readOnly', 'readonly');
    $( "#f" + column_id + "_" + row_id ).prop( 'class', 'textinputdisabled');
  }
}
The above code first determines the id for the row; $(this.triggeringElement).attr(‘id’) returns ‘f02_nnnn’ where nnnn is the row number left-padded with zeroes. For Oracle peeps, substr(4) is equivalent to SUBSTR(x,5).
If the value of the triggering item is MONTH, we want to disable the Annual amount item and re-enable (in case they were previously disabled) the Month amount items. And vice-versa.
To disable an item, we set the readOnly property (note the capital O: this is case sensitive!) to the value “readonly” (all lowercase); this makes it so that the user cannot modify the value in the field. Note that we don’t set the “disabled” property because that would stop the item being posted to the database, which will break the tabular form processing.

Side Note: at first, I was using the .attr and .removeAttr jquery functions to set/unset readOnly as per some notes I’d found on the web; this worked for Chrome, but it made all the items permanently read-only in IE7; after some googling I found this is a feature, not a bug; and that .prop is the correct function to use in this instance.

We also set the class to the CSS style we defined earlier.
Because I have 12 items in a row to modify, I use a JavaScript loop. The expression to generate the column id, ("0" + i).slice(-2), does the same job as the Oracle expression TO_CHAR(i, 'fm00').
Next, I want to enhance this page further, so that when the user is entering monthly amounts, the Total field automatically calculates the sum of all the months (while still saving the original annual amount, if any, on the database). I had to get outside help [stackoverflow] to get this working – I’ll have to follow up with another post describing this solution in detail.

Workaround for ORA-04063: view “x” has errors

When you deploy a procedure, function or package that has a compilation error, the object is still created, and you can still apply grants on it. This is convenient when deploying a large number of objects, because you don’t have to get them all in exactly the right order. After deploying your schema, you can just recompile the invalid objects.
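For example, one simple way to recompile everything afterwards (a sketch; substitute your own schema name) is:

-- recompile any invalid objects left over after the deployment
BEGIN
  DBMS_UTILITY.compile_schema(schema => 'MY_SCHEMA', compile_all => FALSE);
END;
/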

Unfortunately, this doesn’t work for views. Normally, if you create a view with compilation errors, the view will not be created at all; in a deployment script, however, you can use CREATE FORCE VIEW, which means the view will be created (but marked invalid).

Let’s say you have a view that depends on a table that doesn’t exist yet – and won’t exist until much later in your deployment scripts. So you create the view with the FORCE option – success. Then, you apply the GRANTs for the view, and get this:

ORA-04063: view "x" has errors

Why? If you try to grant on a procedure, function or package that has errors, it works fine. For views, apparently, this is not allowed.

Obviously, to solve this you could do the hard work and reorder your deployment scripts so that they create every object in the correct order, avoiding compilation errors entirely. If you have a large number of objects to deploy, this might be more trouble than it’s worth. Well, there is a workaround:

1. Create the view, minus the bit that causes a compilation error.

2. Apply the grant.

3. Recreate the view, compilation error and all. The grant will remain.

Why might this be useful? In my case, we have two databases connected by database links (on both sides); and we need to deploy a large number of objects to both instances. They are managed by different teams, so we want to be able to deploy the changes to each independently. For the most part, the objects on “our side” compile fine, except for some views that refer to objects on the other side of the database link; but they won’t exist until the other team deploys their changes. We could even have a chicken-and-egg problem, when their views refer to objects on our instance; either way, some of the objects cannot be created error-free until both deployments have been completed.

As it stands, we have two options: deploy everything as best we can and then, once both deployments have completed, recompile the invalid objects and apply the view grants; or use this workaround.

Demonstration

TEST CASE #1: cannot grant on a view with errors

SQL> create or replace force view testview as
     select 1 as col from bla;
Warning: View created with compilation errors.
SQL> grant select on testview to someone;
ORA-04063: view "USER.TESTVIEW" has errors
SQL> select grantee, privilege from user_tab_privs
     where table_name = 'TESTVIEW';
no rows selected

TEST CASE #2: grant on a view with errors

SQL> create or replace view testview as
     select 1 as col from dual;
View created.
SQL> grant select on testview to someone;
Grant succeeded.
SQL> create or replace force view testview as
     select 1 as col from bla;
Warning: View created with compilation errors.
SQL> select grantee, privilege from user_tab_privs
     where table_name = 'TESTVIEW';
GRANTEE PRIVILEGE
======= =========
SOMEONE SELECT

The above has been tested on Oracle 10gR2 and 11gR2. Does this mean that Oracle should not really raise ORA-04063 in this case? I think so.

Mobile, Cloud, Oracle 12c: Oracle with 20:20 Foresight

The Australian Oracle User Group is holding the Oracle with 20:20 Foresight National Conference in Perth, 29-30 October. Yikes, that’s only 3 weeks away – if you’re in Perth, you have to sign up right now. If you’re not in Perth, grab your skateboard or canoe (depending on the intervening terrain) and get over here!

We’re going to be treated to talks by Tom Kyte, Connor McDonald, Chris Muir, Scott Wesley, Graham Wood and many others. Check out the conference program to see what’s on offer. A lot of the topics seem to be very Mobile and Cloudy…

I’ll be presenting twice, if you’re interested I’d love to see you there:

1. Alexandria – A Guided Tour – an overview of just a few of the goodies that you’ll find in the Alexandria PL/SQL Library, and how you can use them out-of-the-box to do things that you might have thought could not be done in PL/SQL.

2. Top 20 Gotchas with Old Database Versions – most probably you’ll be working with Oracle 10g or 11g nowadays – but sometimes you have no choice but to deal with older versions like 8i or 9i. If so, you may very well pick up a few hints and tips that will save you time and headaches.

If you missed out on that “open world” conference, you’ll have to come and hear about the new features planned for Oracle 12c. Even if you did manage to get to that big conference, you’ll want to come to this one as well, not least because it’s in beautiful sunny Perth :)

EDIT: Slide decks and demo scripts for the presentations are available from here: http://jeffkemponoracle.com/presentations/

“Automate Amazon S3 Storage with Alexandria”

In Perth this morning, at a breakfast courtesy of the local AUSOUG, I spoke about using the Alexandria PL/SQL Library to automate various tasks with Amazon’s Simple Storage Service (S3). If you haven’t used Amazon Web Services before, or haven’t looked at Alexandria yet, and you enjoy discovering new capabilities with PL/SQL, I think you’ll find this interesting.

The powerpoint slides and demo script are now available on my Presentations page.

In late October I’ll be speaking at the 20:20 Foresight Perth Conference - more details later.

Top 10 Confusing Things in Oracle

Every mature language, platform or system has little quirks, eccentricities and anachronisms that aficionados just accept as “that’s the way it is” but that look weird, or even outlandish, to newbies and outsiders. The more mature and more widely used the product, the more resistance to change there will be – friction that helps to ensure these misfeatures survive.

Oracle, due to the priority placed on backwards compatibility, and its wide adoption, is not immune to this phenomenon. Unless a feature is actively causing things to break, as long as there are a significant number of sites using it, it’s not going to change. In some cases, the feature might be replaced and the original deprecated and eventually removed; but for core features such as SQL and PL/SQL syntax, especially the semantics of the basic data types, it is highly unlikely these will ever change.

So here I’d like to list what I believe are the things in Oracle that most frequently confuse people. These are not necessarily intrinsically complicated – merely unintuitive, especially to a child of the ’90s or ’00s who was not around when these things were first implemented, when the idea of “best practice” had barely been invented; or to someone more experienced in other technologies like SQL Server or Java. These are things I see questions about over and over again – both online and in real life. Oh, and before I get flamed, another disclaimer: some of these are not unique to Oracle – some are more to do with the SQL standard, and some are caused by a lack of understanding of the relational model of data.

Once you know them, they’re easy – you come to understand the reasons (often historical) behind them; eventually, the knowledge becomes so ingrained, it’s difficult to remember what it was like beforehand.

Top 10 Confusing Things in Oracle

  1. Empty strings are NULL

  2. DATEs are not dates

  3. Nothing equals NULL

  4. ROWNUM … ’nuff said?

  5. Use VARCHAR2, not CHAR or VARCHAR

  6. Double-quotes delimit identifiers; single quotes delimit strings

  7. No intrinsic sort order – you can’t insert a record at the “top”, “middle” or “end” of a table

  8. You don’t add a column in the “middle” of a table

  9. Don’t create temporary tables at runtime

  10. Every User is/has a Schema, every Schema is/has a User
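As a minimal sketch of items 1 and 3 (runnable in any SQL client):

-- 1. the empty string is treated as NULL
SELECT COUNT(*) FROM dual WHERE '' IS NULL;    -- returns 1

-- 3. nothing equals NULL: the comparison is unknown, so no row qualifies
SELECT COUNT(*) FROM dual WHERE NULL = NULL;   -- returns 0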

Got something to add to the list? Drop me a note below.


Non-Overlapping Dates Constraint

If you have a table that represents time-varying info, e.g. with From and To date/time columns, you have a few options with regards to the problem of overlapping dates:

1. Check for overlapping dates in the application layer.

2. Use an off-the-shelf product to generate the appropriate triggers, e.g. Oracle CDM*RuleFrame or Toon Koppelaars’ RuleGen.

3. Roll your own, in the database.

4. Use a different data model that can use a unique constraint.

5. Forget about it.

One reason it’s difficult is that this is an example of a cross-row constraint, i.e. one that cannot merely be checked for the current row by itself. Oracle supports a few cross-row constraints, i.e. Primary Key, Unique and Foreign Key constraints; but it doesn’t natively support arbitrary assertions, which would allow us to easily declare this sort of constraint.

The real challenge comes from the fact that Oracle is a multi-user system and concurrent sessions cannot see the uncommitted data from other sessions; so some form of serialization will be required to ensure that when one session wishes to insert/update the data for a particular entity, no other session is allowed to start working on the same entity until the first session commits (or issues a rollback).

The problem is not new; it’s been around for a long time, and tripped many a new (and old) programmer.

There are two problems with option #1 (code in the application layer): firstly, you have to repeat the code for each different type of client (e.g. you might have a Java UI on one side, as well as some batch script somewhere else); secondly, usually the programmer involved will not understand the concurrency problem mentioned above and will not take it into account.

Option #2 is probably the best, most of the time. The solution is implemented at the database level, and is more likely to work correctly and efficiently.

Option #4 (change the data model) involves not storing the From and To dates, but instead dividing up all time ranges into discrete chunks, and each row represents a single chunk of time. This solution is valid if the desired booking ranges are at predictable discrete ranges, e.g. daily. You can then use an ordinary unique constraint to ensure that each chunk of time is only booked by one entity at any one time. This is the solution described here.
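For example, a minimal sketch of that approach for daily bookings (the table and column names are invented here):

-- each row books one room for one whole day; the unique constraint
-- guarantees a room cannot be double-booked for any given day
CREATE TABLE room_booking_day
  (room_no     NUMBER       NOT NULL
  ,booked_date DATE         NOT NULL
  ,booked_by   VARCHAR2(30) NOT NULL
  ,CONSTRAINT room_booking_day_uk
     UNIQUE (room_no, booked_date)
  );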

Option #5 (forget about it) is also a viable option, in my opinion. Basically it entails designing the rest of the application around the fact that overlapping date ranges might exist in the table – e.g. a report might simply merge the date ranges together prior to output.

Option #3, where you implement the triggers yourself in the database, has the same advantage as Option #2: it doesn’t matter which application the data is coming from, the constraint will hold true. However, you have to be really careful, because it’s much easier to get it wrong than to get it right, due to concurrency.

I hear you scoffing, “Triggers?!?”. I won’t comment, except to refer you to this opinion, which says it better than I could: The fourth use-case for Triggers.

There is another Option #3 using a materialized view instead of triggers; I’ll describe this alternative at the end of this post.

So, here is a small example showing how an overlapping date constraint may be implemented. It is intentionally simple to illustrate the approach: it assumes that the From and To dates cannot be NULL, and its rule for detecting overlapping dates requires that the dates not overlap at all, to the nearest second.

1. Create the tables

CREATE TABLE room
  (room_no NUMBER NOT NULL
  ,CONSTRAINT room_pk PRIMARY KEY (room_no)
  );

CREATE TABLE room_booking
  (room_no NUMBER NOT NULL
  ,booked_from DATE NOT NULL
  ,booked_to DATE NOT NULL
  ,CONSTRAINT room_booking_pk
     PRIMARY KEY (room_no, booked_from)
  ,CONSTRAINT room_booking_fk
     FOREIGN KEY (room_no) REFERENCES room (room_no)
  );

2. Create the validation trigger (note – I’ve used an Oracle 11g compound trigger here, but it could easily be rewritten for earlier versions by using two triggers + a database package):

CREATE OR REPLACE TRIGGER room_booking_trg
  FOR INSERT OR UPDATE OF room_no, booked_from, booked_to
    ON room_booking
  COMPOUND TRIGGER

  TYPE room_no_array IS TABLE OF CHAR(1)
    INDEX BY BINARY_INTEGER;

  room_nos room_no_array;

PROCEDURE lock_room (room_no IN room.room_no%TYPE) IS
  dummy room.room_no%TYPE;
BEGIN
  SELECT r.room_no
  INTO dummy
  FROM room r
  WHERE r.room_no = lock_room.room_no
  FOR UPDATE;
END lock_room;

PROCEDURE validate_room (room_no IN room.room_no%TYPE) IS
  overlapping_booking EXCEPTION;
  dummy CHAR(1);
BEGIN
  -- check for overlapping date/time ranges
  BEGIN
    SELECT 'X' INTO dummy
    FROM room_booking rb1
        ,room_booking rb2
    WHERE rb1.room_no = validate_room.room_no
    AND rb2.room_no = validate_room.room_no
    AND rb1.booked_from != rb2.booked_from
    AND (
         rb1.booked_from BETWEEN rb2.booked_from
                             AND rb2.booked_to
         OR
         rb1.booked_to BETWEEN rb2.booked_from
                           AND rb2.booked_to
        )
    AND ROWNUM = 1;
    RAISE overlapping_booking;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      -- good, no constraint violations
      NULL;
  END;
EXCEPTION
  WHEN overlapping_booking THEN
    RAISE_APPLICATION_ERROR(-20000,
      'Overlapping booking for room #' || room_no);
END validate_room;

PROCEDURE validate_rooms IS
  room_no room.room_no%TYPE;
BEGIN
  room_no := room_nos.FIRST;
  LOOP
    EXIT WHEN room_no IS NULL;
    validate_room (room_no);
    room_no := room_nos.NEXT(room_no);
  END LOOP;
  room_nos.DELETE;
EXCEPTION
  WHEN OTHERS THEN
    room_nos.DELETE;
    RAISE;
END validate_rooms;

BEFORE EACH ROW IS
BEGIN
  -- lock the header record (so other sessions
  -- can't modify the bookings for this room
  -- at the same time)
  lock_room(:NEW.room_no);
  -- remember the room_no to validate later
  room_nos(:NEW.room_no) := 'Y';
END BEFORE EACH ROW;

AFTER STATEMENT IS
BEGIN
  validate_rooms;
END AFTER STATEMENT;

END room_booking_trg;
/

That’s all you need. The trigger locks the header record for the room, so only one session can modify the bookings for a particular room at any one time. If you don’t have a table like “room” in your database that you can use for this purpose, you could use DBMS_LOCK instead (similarly to that proposed in the OTN forum discussion here).
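For instance, a sketch of the DBMS_LOCK alternative (it requires EXECUTE on DBMS_LOCK, and the lock id scheme here is purely for illustration) could replace lock_room with something like:

PROCEDURE lock_room (room_no IN room_booking.room_no%TYPE) IS
  lock_status INTEGER;
BEGIN
  -- take an exclusive user lock derived from the room number;
  -- released automatically when the transaction commits
  lock_status := DBMS_LOCK.request
    (id                => 1000000 + room_no
    ,lockmode          => DBMS_LOCK.x_mode
    ,timeout           => DBMS_LOCK.maxwait
    ,release_on_commit => TRUE);
  IF lock_status NOT IN (0, 4) THEN -- 0 = success, 4 = already held
    RAISE_APPLICATION_ERROR(-20001,
      'Unable to lock room #' || room_no || ' (status ' || lock_status || ')');
  END IF;
END lock_room;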

It would not be difficult to adapt this example for alternative requirements, e.g. where the From and To dates may be NULL, or where the overlapping criteria should allow date/time ranges that coincide at their endpoints (e.g. so that the date ranges (1-Feb-2000 to 2-Feb-2000) and (2-Feb-2000 to 3-Feb-2000) would not be considered to overlap). You’d just need to modify the comparison in the query in validate_room to take these requirements into account.
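For instance, if ranges that merely touch at their endpoints should be allowed, the inner query in validate_room might become something like this (a sketch only – test it against your own requirements):

    SELECT 'X' INTO dummy
    FROM room_booking rb1
        ,room_booking rb2
    WHERE rb1.room_no = validate_room.room_no
    AND rb2.room_no = validate_room.room_no
    AND rb1.booked_from != rb2.booked_from
    -- two distinct bookings overlap only if each starts before the other ends
    AND rb1.booked_from < rb2.booked_to
    AND rb2.booked_from < rb1.booked_to
    AND ROWNUM = 1;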

Test case #1

INSERT INTO room (room_no) VALUES (101);
INSERT INTO room (room_no) VALUES (201);
INSERT INTO room (room_no) VALUES (301);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (101, DATE '2000-01-01', DATE '2000-01-02' - 0.00001);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (101, DATE '2000-01-02', DATE '2000-01-03' - 0.00001);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-02-01', DATE '2000-02-05' - 0.00001);

Expected: no errors

Test case #2

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-02-01', DATE '2000-02-02' - 0.00001);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-02-02', DATE '2000-02-04' - 0.00001);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-02-03', DATE '2000-02-05' - 0.00001);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-02-03', DATE '2000-02-06' - 0.00001);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-01-31', DATE '2000-02-01');

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-01-31', DATE '2000-02-02' - 0.00001);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-01-31', DATE '2000-02-06' - 0.00001);

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (201, DATE '2000-02-05' - 0.00001, DATE '2000-02-06' - 0.00001);

UPDATE room_booking SET booked_to = DATE '2000-01-02' - 0.00001
WHERE room_no = 101 AND booked_from = DATE '2000-01-02';

Expected: constraint violation on each statement

Test case #3

in session #1:

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (301, DATE '2000-01-01', DATE '2000-02-01' - 0.00001);

in session #2:

INSERT INTO room_booking (room_no, booked_from, booked_to)
VALUES (301, DATE '2000-01-15', DATE '2000-01-16' - 0.00001);

Expected: session #2 will wait until session #1 issues a COMMIT or ROLLBACK. If session #1 COMMITs, session #2 will then report a constraint violation. If session #1 rolls back, session #2 will complete without error.

The No-Trigger option #3: Materialized View

This is similar to a solution proposed by Rob Van Wijk. It uses a constraint on a materialized view to stop overlapping date ranges.

So, instead of the trigger, you would do something like this:

CREATE MATERIALIZED VIEW LOG ON room_booking WITH ROWID;

CREATE MATERIALIZED VIEW room_booking_date_ranges
  REFRESH FORCE ON COMMIT
  AS SELECT 'X' AS dummy
     FROM room_booking rb1
         ,room_booking rb2
     WHERE rb1.room_no = rb2.room_no
     AND rb1.booked_from != rb2.booked_from
     AND (
          rb1.booked_from BETWEEN rb2.booked_from
                              AND rb2.booked_to
          OR
          rb1.booked_to BETWEEN rb2.booked_from
                            AND rb2.booked_to
         );

ALTER TABLE room_booking_date_ranges
  ADD CONSTRAINT no_overlapping_dates_ck
  CHECK ( dummy = 'Z' );

The nice thing about this solution is that it is simpler to code, and seems more “declarative” in nature. Also, you don’t have to worry about concurrency at all.

The constraint is checked at COMMIT-time when the materialized view is refreshed; so it behaves like a deferred constraint, which may be an advantage for some scenarios.

I believe it may perform better than the trigger-based option when large volumes of data are inserted or updated; however it may perform worse than the trigger-based option when you have lots of small transactions. This is because, unfortunately, the query here cannot be a “REFRESH FAST ON COMMIT” (if you know how this could be changed into a REFRESH FAST MV, please let me know!).

 

What do you think? If you see any potential issues with the above solutions please feel free to comment.

EDIT 30/8: added some more test cases
