
Change an item Label dynamically

Get it? “an item with many hats”… yeah ok.

Need to change the label of an item on-the-fly? When I run my Apex page it renders item labels like this:

<label for="P1_CONTACT_NUMBER">
  <span>Contact Number</span>
</label>

If the label needs to change based on another item, I could set the label to the value of another item, e.g. “&P1_CONTACT_NUMBER_LABEL.”, and when the page is refreshed it would pick up the new label. But at runtime, if the label needs to change dynamically in response to changes in other items, we need to do something else.
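If you go the substitution route, the hidden item can be populated with a computation. Here’s a minimal sketch – P1_CONTACT_NUMBER_LABEL is a hypothetical hidden item, and the item names and values simply mirror the example below:

-- hypothetical "PL/SQL Function Body" computation on P1_CONTACT_NUMBER_LABEL,
-- firing Before Header
BEGIN
  RETURN CASE :P1_CONTACT_METHOD
           WHEN 'SMS'   THEN 'Contact Mobile'
           WHEN 'EMAIL' THEN 'Contact Email'
           ELSE 'Contact Number'
         END;
END;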

Caveat: The need for changing the label should be very rare – it’s bad practice to overload one field with multiple meanings. But if you must, this is what you can do.

It’s easy with a Dynamic Action running some JavaScript. This changes the label text for the P1_CONTACT_NUMBER item depending on the value chosen for P1_CONTACT_METHOD, which might be a radio group or select list. The method uses jQuery to search for a “label” tag whose “for” attribute associates it with the desired item; we then navigate down to the “span” element, and call the “text” function to change the label text:

if ($v("P1_CONTACT_METHOD")=='SMS') {
    $("label[for=P1_CONTACT_NUMBER]>span").text("Contact Mobile");
} else if ($v("P1_CONTACT_METHOD")=='EMAIL') {
    $("label[for=P1_CONTACT_NUMBER]>span").text("Contact Email");
} else {
    $("label[for=P1_CONTACT_NUMBER]>span").text("Contact Number");
}

The Dynamic Action is set up as follows:

Event = Change
Selection Type = Item(s)
Item(s) = P1_CONTACT_METHOD
Condition = (no condition)

True Action = Execute JavaScript Code
Fire On Page Load = Yes
Selection Type = (blank)
Code = (the javascript shown above)

Parallel Development in Apex

(Image source: http://paulhammant.com/files/multi-branch.jpg)

My current client has a large number of Apex applications, one of which is a doozy. It is a mission-critical and complex application in Apex 4.0.2 used throughout the business, with an impressively long list of features and an equally long list of enhancement requests in the queue.

They always have a number of projects on the go with it, and they wanted us to develop two major revisions to it in parallel. In other words, we’d have v1.0 (so to speak) in Production, which still needed support and urgent defect fixing, v1.1 in Dev1 for project A, and v1.2 in Dev2 for project B. Oh, and we don’t know if Project A will go live before Project B, or vice versa. We have source control, so we should be able to branch the application and have separate teams working on each branch, right?

We said, “no way”. Trying to merge changes from a branch of an Apex app into an existing Apex app is not going to work, practically speaking. The merged script would most likely fail to run at all, or if it somehow magically runs, it’d probably break something.

So we pushed back a bit, and the terms of the project were changed so that development of project A would be done first, and the development of project B would follow straight after. So at least now we know that v1.2 can be built on top of v1.1 with no merge required. However, we still had the problem that production defect fixes would need to be made on a separate version of the application in dev, and deployed to sit/uat/prod without carrying any changes from our projects.

The solution we have used is to have two copies of dev, each with its own schema, apex application and version control folder: I’ll call them APP and APP2. We took an export of APP and created APP2, and instructed the developer who was tasked with production defect fixes to manually duplicate his changes in both APP and APP2. That way the defect fixes were “merged” in a manual fashion as we went along; it also meant that the project development gained the benefit of the defect fixes straight away. The downside was that everything worked and acted as if they were two completely separate applications, which made things tricky for integration.

Next, for developing project A and project B, we needed to be able to make changes for both projects in parallel, but we needed to be able to deploy just Project A to SIT/UAT/PROD without carrying the changes from project B with it. The solution was to use Apex’s Build Option feature (which has been around for donkey’s years but I never had a use for it until now), in combination with Conditional Compilation on the database schema.

I created a build option called (e.g.) “Project B”. I set Status = “Include”, and Default on Export = “Exclude”. What this means is that in dev, my Project B changes will be enabled, but when the app is exported for deployment to SIT etc the build option’s status will be set to “Exclude”. In fact, my changes will be included in the export script, but they just won’t be rendered in the target environments.

When we created a new page, region, item, process, condition, or dynamic action for project B, we would mark it with our build option “Project B”. If an existing element was to be removed or replaced by Project B, we would mark it as “{NOT} Project B”.

Any code on the database side that was only for project B would be switched on with conditional compilation, e.g.:

$IF $$projectB $THEN
  PROCEDURE my_proc (new_param IN ...) IS...
$ELSE
  PROCEDURE my_proc IS...
$END

When the code is compiled, if the projectB flag has been set (e.g. with ALTER SESSION SET PLSQL_CCFLAGS='projectB:TRUE';), the new code will be compiled.
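For instance, a dev compile script might look like the following sketch (my_pkg is a placeholder name; USER_PLSQL_OBJECT_SETTINGS is handy for verifying which flags each unit was last compiled with):

-- enable Project B code in this dev schema, then recompile affected units
ALTER SESSION SET PLSQL_CCFLAGS = 'projectB:TRUE';

ALTER PACKAGE my_pkg COMPILE;

-- verify the flags each unit was compiled with
SELECT name, type, plsql_ccflags
FROM   user_plsql_object_settings
WHERE  plsql_ccflags IS NOT NULL;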

Build Options can be applied to:

  • Pages & Regions
  • Items & Buttons
  • Branches, Computations & Processes
  • Lists & List Entries
  • LOV Entries
  • Navigation Bar & Breadcrumb Entries
  • Shortcuts
  • Tabs & Parent Tabs

This works quite well for 90% of the changes required. Unfortunately it doesn’t handle the following scenarios:

1. Changed attributes for existing Apex components – e.g. some layout changes that would re-order the items in a form cannot be isolated to a build option.

2. Templates and Authorization Schemes cannot be marked with a build option.

On the database side, it is possible to detect at runtime if a build option has been enabled or not. In our case, a lot of our code was dependent on schema structural changes (e.g. new table columns) which would not compile in the target environments anyway – so conditional compilation was a better solution.
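For the record, the runtime check can be done with something like the following sketch. I believe apex_util.get_build_option_status returns 'INCLUDE' or 'EXCLUDE'; the application ID here is just a placeholder:

BEGIN
  IF apex_util.get_build_option_status
       (p_application_id    => 100  -- placeholder app ID
       ,p_build_option_name => 'Project B') = 'INCLUDE' THEN
    NULL; -- Project B-only behaviour goes here
  END IF;
END;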

Apart from these caveats, the use of Build Options and Conditional Compilation has made the parallel development of these two projects feasible. Not perfect, mind you – but feasible. The best part? There’s a feature in Apex that allows you to view a list of all the components that have been marked with a Build Option – this is accessible from Shared Components -> Build Options -> Utilization (tab).

Enhancement Requests:

1. If Build Options could be improved to allow the scenarios listed above, I’d be glad. In a perfect world, I should be able to go into Apex, select “Project B”, and all my changes (adding/modifying/removing items, regions, pages, LOVs, auth schemes, etc) would be marked for Project B. I could switch to “Project A”, and my changes for Project B would be hidden. I think this would require the Apex engine to be able to have multiple definitions of each item, region or page, one for each build option. Merging changes between build options would need to be made possible, somehow – I don’t hold any illusions that this would be a simple feature for the Apex team to deliver.

2. Make the items/regions/pages listed in the Utilization tab clickable, so I can easily click through and change properties on them.

3. Another thing I’d like to see from the Apex team is built-in GUI support for exporting applications as a collection of individual scripts, each independently runnable – one for each page and shared component. I’m aware there is a Java tool for this purpose, but the individual scripts it generates cannot be run on their own. For example, if I export a page, I should be able to import that page into another copy of the same application (but with a different application ID) to replace the existing version of that page. I should be able to check in a change to an authorization scheme or an LOV or a template, and deploy just the script for that component to other applications, even in other workspaces. The export feature for all this should be available and supported using a PL/SQL API so that we can automate the whole thing and integrate it with our version control and deployment software.

4. What would be really cool, would be if the export scripts from Apex were structured in such a way that existing source code merge tools could merge different versions of the same Apex script and result in a usable Apex script. This already works quite well for our schema scripts (table scripts, views, packages, etc), so why not?


Fixing phone numbers

An enhancement request I was assigned was worded thus:

“User will optionally enter the Phone number (IF the phone was blank the system will default the store’s area code).”

I interpret this to mean that the Customer Phone number (land line) field should remain optional, but if a value is entered, the system should check whether the local area code was included, and if not, prepend the local store’s area code. We can assume that the area code has already been entered if the phone number starts with a zero (0).

This is for a retail chain with stores throughout Australia and New Zealand, and the Apex session knows the operator’s store ID. I can look up the country code and phone number for their store with a simple query, which will return values such as (these are just made up examples):

Country AU, Phone: +61 8 9123 4567 – area code should be 08
Country AU, Phone: 08 91234567 – area code should be 08
Country AU, Phone: +61 2 12345678 – area code should be 02
Country AU, Phone: 0408 123 456 – no landline area code
Country NZ, Phone: +64 3 123456 – area code should be 03
Country NZ, Phone: 0423 456 121 – area code should be 04

They only want to default the area code for landlines, so if the store’s phone number happens to be a mobile phone number it should not do any defaulting.

Step 1: create a database function (in a database package, natch) to return the landline area code for any given store ID.

FUNCTION get_store_landline_area_code (p_store_id IN VARCHAR2) RETURN VARCHAR2 IS
  v_area_code VARCHAR2(2);
  v_country_code stores_vw.country_code%TYPE;
  v_telephone_number stores_vw.telephone_number%TYPE;
BEGIN
  IF p_store_id IS NOT NULL THEN

    BEGIN

      SELECT country_code
            ,telephone_number
      INTO   v_country_code
            ,v_telephone_number
      FROM   stores_vw
      WHERE  store_id = p_store_id;

      v_area_code
        := CASE
           -- Australian International land line
           WHEN v_country_code = 'AU'
           AND REGEXP_LIKE(v_telephone_number, '^\+61( ?)[2378]')
             --e.g. +61 8 9752 6100
             THEN '0' || SUBSTR(REPLACE(v_telephone_number,' '), 4, 1)
           -- Australian Local land line
           WHEN v_country_code = 'AU'
           AND REGEXP_LIKE(v_telephone_number, '^0[2378]')
             THEN SUBSTR(v_telephone_number, 1, 2)
           -- New Zealand International land line
           WHEN v_country_code = 'NZ'
           AND REGEXP_LIKE(v_telephone_number, '^\+64( ?)[34679]')
             -- e.g. +64 3 1234 567
             THEN '0' || SUBSTR(REPLACE(v_telephone_number,' '), 4, 1)
           -- New Zealand Local land line
           WHEN v_country_code = 'NZ'
           AND REGEXP_LIKE(v_telephone_number, '^0[34679]')
             THEN SUBSTR(v_telephone_number, 1, 2)
           ELSE
             NULL
           END;

    EXCEPTION
      WHEN NO_DATA_FOUND OR TOO_MANY_ROWS THEN
        NULL;
    END;

  END IF;
  RETURN v_area_code;
END get_store_landline_area_code;

Phone number references:
http://en.wikipedia.org/wiki/Telephone_numbers_in_Australia
http://en.wikipedia.org/wiki/Telephone_numbers_in_New_Zealand
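A quick sanity check of the function from SQL*Plus might look like this (the store ID and its data are made up, borrowing from the examples above):

-- assuming store '1001' exists in stores_vw with Country AU,
-- Phone '+61 8 9123 4567'
SELECT my_util_pkg.get_store_landline_area_code('1001') AS area_code
FROM   dual;
-- expected: 08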

Step 2: add a Dynamic Action to prepend the area code to the phone number, if it wasn’t entered already:

Event: Change
Selection Type: Item(s)
Item(s): P1_CUSTOMER_PHONE_NUMBER
Condition: Javascript expression
Value: $v("P1_CUSTOMER_PHONE_NUMBER").length > 0 && $v("P1_CUSTOMER_PHONE_NUMBER").charAt(0) != "0"
True Action: Set Value
Set Type: PL/SQL Expression
PL/SQL Expression: my_util_pkg.get_store_landline_area_code(:F_USER_STORE_ID) || :P1_CUSTOMER_PHONE_NUMBER

Now, when the user types in a local land line but forgets the prefix, the system will automatically add it in as soon as they tab out of the field. If the phone number field is unchanged, or is left blank, this will do nothing.

It assumes that the customer’s phone number uses the same prefix as the store, which in most cases will be true. Ultimately the user will still need to check that the phone number is correct for the customer.

Apex Interactive Report raising javascript error

I recently was working on an application in Apex 4.2.1.00.08, where the application had several pages with Interactive Reports.

On all these pages, the IR worked fine – except for one crucial page, where the IR’s action menu didn’t work (Select Columns, for example, showed a little circle instead of the expected shuttle region; all the column headings menus would freeze the page; and other issues).

In Console I could see the following errors get raised (depending on which IR widget I tried):

Uncaught SyntaxError: Unexpected token ) desktop_all.min.js?v=4.2.1.00.08:14
$u_eval desktop_all.min.js?v=4.2.1.00.08:14
_Return widget.interactiveReport.min.js?v=4.2.1.00.08:1
b.onreadystatechange desktop_all.min.js?v=4.2.1.00.08:15
Uncaught TypeError: Object #<error> has no method 'cloneNode' desktop_all.min.js?v=4.2.1.00.08:14
dhtml_ShuttleObject desktop_all.min.js?v=4.2.1.00.08:14
_Return widget.interactiveReport.min.js?v=4.2.1.00.08:1
b.onreadystatechange desktop_all.min.js?v=4.2.1.00.08:15
Uncaught TypeError: Cannot read property 'undefined' of undefined widget.interactiveReport.min.js?v=4.2.1.00.08:1
dialog.column_check widget.interactiveReport.min.js?v=4.2.1.00.08:1
_Return widget.interactiveReport.min.js?v=4.2.1.00.08:1
b.onreadystatechange desktop_all.min.js?v=4.2.1.00.08:15

After a lot of head scratching and some investigative work from the resident javascript guru (“it looks like ajax is not getting the expected results from the server”), I found the following:

https://forums.oracle.com/message/10496937

The one thing in common was that my IR also had a Display Condition on it. In my case, the condition was based on an application item, not REQUEST. I removed the condition, and the problem went away.

I’ve tried to make a reproducible test case with a fresh application, but unfortunately with no success – which means I haven’t yet isolated the actual cause of the issue. A PL/SQL condition like “1=1” doesn’t reproduce the problem. If I have a PL/SQL Expression like “:P1_SHOW = 'Y'”, or a Value of Item / Column in Expression 1 = Expression 2 condition with a similar effect, the problem is reproduced – but only in this application.

As a workaround I’ve used a Dynamic Action to hide the IR on page load if required.

Never mind…

Update: thanks Christian for pointing out that I mistook this – it’s not CLIENT_INFO that I was using, but CLIENT_IDENTIFIER – and that behaviour hasn’t changed.

I’ve just gotten around to reading the Patch Set Notes for Apex 4.2.3, and noticed this bit:

8.5 Changes in How Oracle Application Express Populates CLIENT_INFO in V$SESSION and GV$SESSION
The Oracle Application Express 4.2.2.00.11 patch set changes how Application Express populates the CLIENT_INFO value in V$SESSION and GV$SESSION. The new information in this field is workspace ID, followed by colon (:), followed by the authenticated username.

Tip: You may have to adapt database instance monitoring scripts which interpret CLIENT_INFO and expect the previous content for Oracle Application Express sessions (username ‘:’ workspace id).

I have no idea why the patch set notes talk about “workspace id” here, since as far as I can tell, Apex actually puts the session ID there. I haven’t tested this in 4.2.3 yet though. Anyone care to verify this for me?
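If you’d like to check, a query along these lines (run as a suitably privileged user while an Apex session is active) should show what’s actually in there:

SELECT sid, module, client_info, client_identifier
FROM   v$session
WHERE  module LIKE '%APEX%'; -- the module filter is a guess; loosen it if needed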

Performance of Apex Conditions

Just a little tip I picked up at the InSync13 conference from listening to Scott Wesley. If you have a lot of conditions that look like this:

(screenshot: a condition based on a PL/SQL Expression, where the PL/SQL itself doesn’t actually call anything outside of Apex – it’s only dependent on variables that Apex already knows)

Because it’s a PL/SQL expression, the Apex engine must execute this as dynamic PL/SQL – requiring a parse, execute and fetch. This might take around 0.03 seconds. If there’s only one condition like this on a page, it won’t make any difference. But if there are 50 such conditions on a page, they can add up to a whole second or more per page request, which can be noticeable.
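To get a feel for the overhead, here’s a rough sketch – this is not how the Apex engine evaluates conditions internally, it just illustrates the cost of hard-parsing distinct dynamic PL/SQL blocks (each Apex condition has different text, so they can’t share a cached cursor):

DECLARE
  l_result NUMBER;
  l_start  NUMBER := DBMS_UTILITY.GET_TIME;
BEGIN
  FOR i IN 1 .. 50 LOOP
    -- embedding i in a comment makes each block distinct text,
    -- forcing a fresh parse, just as 50 different conditions would
    EXECUTE IMMEDIATE
      'BEGIN /*' || i || '*/ :r := CASE WHEN :x = ''Y'' THEN 1 ELSE 0 END; END;'
      USING OUT l_result, IN 'Y';
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('50 dynamic PL/SQL evaluations: '
    || (DBMS_UTILITY.GET_TIME - l_start) / 100 || ' sec');
END;
/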

The better alternative is to use the condition type Value of Item / Column in Expression 1 = Expression 2, e.g.:

(screenshot: the same condition expressed as Value of Item / Column in Expression 1 = Expression 2)

This condition type requires no dynamic PL/SQL – no parsing – which can reduce the time required to an almost negligible amount.

Your ‘PL/SQL Code’

Am I the only one who finds this help message vaguely insulting?

(image: dr-evil-plsql-code)

Oracle VPD/RLS on Apex at InSync2013

AUSOUG is holding a series of conferences this year right across the country – starting in Sydney on 15-16 August, touring the other major city centres, and ending in Perth on 12-13 November.

The Perth program is still being finalized but the lineup is looking good. You can see the current list here: http://www.ausoug.org.au/insync13/insync13-perth-program.html

I’ll be talking about Oracle Virtual Private Database (also known as Row Level Security, or RLS) and its use in Apex applications. I’ve made good use of this technology in a recent project which is now live, and I’m looking forward to presenting what I’ve learned.

Make sure you register now – pre-registrations close soon for some locations.

UPDATE: The Perth program is now published: INSYNC13_Program_Perth.pdf

UPDATE 2: The slide deck if you’re interested can be seen here.


My Function Result Cache talk

(image: one-does-not-simply-result-cache)

If you’re interested in my presentation on the Function Result Cache, it’s now available from my presentations page. It was given this morning at Oracle’s offices in Perth to the local AUSOUG branch; it seemed to go down well and I got some good feedback. It was only a little overshadowed by all the hoopla over the release of 12c :)

Using compound triggers to boost your journal table performance

If your schemas are like those I deal with, almost every table has a doppelgänger which serves as a journal table; an “after insert, update or delete” trigger copies each and every change into the journal table. It’s a bit of a drag on performance for large updates, isn’t it?

I was reading through the docs (as one does) and noticed this bit:

Scenario: You want to record every change to hr.employees.salary in a new table, employee_salaries. A single UPDATE statement will update many rows of the table hr.employees; therefore, bulk-inserting rows into employee_salaries is more efficient than inserting them individually.

Solution: Define a compound trigger on updates of the table hr.employees, as in Example 9-3. You do not need a BEFORE STATEMENT section to initialize idx or salaries, because they are state variables, which are initialized each time the trigger fires (even when the triggering statement is interrupted and restarted).

http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/triggers.htm#CIHFHIBH

The example shows how to use a compound trigger to not only copy the records to another table, but to do so with a far more efficient bulk insert. Immediately my journal table triggers sprang to mind – would this approach give me a performance boost?

The answer is, yes.

My test cases are linked below – emp1 is a table with an ordinary set of triggers, which copies each insert/update/delete into its journal table (emp1$jn) individually for each row. emp2 is a table with a compound trigger instead, which does a bulk insert of 100 journal entries at a time.
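For comparison, the ordinary journal trigger is the familiar row-by-row pattern. This sketch is trimmed to a couple of columns for brevity – the full versions are in the scripts linked below – and it assumes emp1$jn mirrors emp1 plus jn_action and jn_timestamp columns:

CREATE OR REPLACE TRIGGER emp1$trg
  AFTER INSERT OR UPDATE OR DELETE ON emp1
  FOR EACH ROW
BEGIN
  -- one single-row insert per affected row
  IF DELETING THEN
    INSERT INTO emp1$jn (id, name, jn_action, jn_timestamp)
    VALUES (:OLD.id, :OLD.name, 'D', SYSTIMESTAMP);
  ELSE
    INSERT INTO emp1$jn (id, name, jn_action, jn_timestamp)
    VALUES (:NEW.id, :NEW.name,
            CASE WHEN INSERTING THEN 'I' ELSE 'U' END,
            SYSTIMESTAMP);
  END IF;
END emp1$trg;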

I ran a simple test case involving 100,000 inserts and 100,000 updates into both tables; the first time, I did emp1 first followed by emp2; the second time, I reversed the order. From the results below you’ll see I got a consistent improvement, shaving about 4-7 seconds off of about 21 seconds – an improvement of 19% to 35%. This is with the default value of 100 for the bulk operation; tweaking this might wring a bit more speed out of it (at the cost of using more memory per session).

Of course, this performance benefit only occurs for multi-row operations; if your application only does single-row inserts, updates or deletes, you won’t see any difference in performance. However, I still think this method is neater (only one trigger) than the alternative, so I would recommend it. The only reason I wouldn’t use it is if my target might be a pre-11g database, which doesn’t support compound triggers.

Here are the test case scripts if you want to check it out for yourself:

ordinary_journal_trigger.sql
compound_journal_trigger.sql
test_journal_triggers.sql

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

insert emp1 (test run #1)
100000 rows created.
Elapsed: 00:00:21.19

update emp1 (test run #1)
100000 rows updated.
Elapsed: 00:00:21.40

insert emp2 (test run #1)
100000 rows created.
Elapsed: 00:00:16.01

update emp2 (test run #1)
100000 rows updated.
Elapsed: 00:00:13.89

Rollback complete.

insert emp2 (test run #2)
100000 rows created.
Elapsed: 00:00:15.94

update emp2 (test run #2)
100000 rows updated.
Elapsed: 00:00:16.60

insert emp1 (test run #2)
100000 rows created.
Elapsed: 00:00:21.01

update emp1 (test run #2)
100000 rows updated.
Elapsed: 00:00:20.48

Rollback complete.

And here, in all its glory, is the fabulous compound trigger:

CREATE OR REPLACE TRIGGER emp2$trg
  FOR INSERT OR UPDATE OR DELETE ON emp2
  COMPOUND TRIGGER
  
  FLUSH_THRESHOLD CONSTANT SIMPLE_INTEGER := 100;
  TYPE jnl_t IS TABLE OF emp2$jn%ROWTYPE
    INDEX BY SIMPLE_INTEGER;
  jnls  jnl_t;
  rec   emp2$jn%ROWTYPE;
  blank emp2$jn%ROWTYPE;
  
  PROCEDURE flush_array (arr IN OUT jnl_t) IS
  BEGIN
    FORALL i IN 1..arr.COUNT
      INSERT INTO emp2$jn VALUES arr(i);
    arr.DELETE;
  END flush_array;
  
  BEFORE EACH ROW IS
  BEGIN
    IF INSERTING THEN
      IF :NEW.db_created_by IS NULL THEN
        :NEW.db_created_by := NVL(v('APP_USER'), USER);
      END IF;
    ELSIF UPDATING THEN
      :NEW.db_modified_on := SYSDATE;
      :NEW.db_modified_by := NVL(v('APP_USER'), USER);
      :NEW.version_id     := :OLD.version_id + 1;
    END IF;
  END BEFORE EACH ROW;
  
  AFTER EACH ROW IS
  BEGIN
    rec := blank;
    IF INSERTING OR UPDATING THEN
      rec.id             := :NEW.id;
      rec.name           := :NEW.name;
      rec.db_created_on  := :NEW.db_created_on;
      rec.db_created_by  := :NEW.db_created_by;
      rec.db_modified_on := :NEW.db_modified_on;
      rec.db_modified_by := :NEW.db_modified_by;
      rec.version_id     := :NEW.version_id;
      IF INSERTING THEN
        rec.jn_action := 'I';
      ELSIF UPDATING THEN
        rec.jn_action := 'U';
      END IF;
    ELSIF DELETING THEN
      rec.id             := :OLD.id;
      rec.name           := :OLD.name;
      rec.db_created_on  := :OLD.db_created_on;
      rec.db_created_by  := :OLD.db_created_by;
      rec.db_modified_on := :OLD.db_modified_on;
      rec.db_modified_by := :OLD.db_modified_by;
      rec.version_id     := :OLD.version_id;
      rec.jn_action      := 'D';
    END IF;
    rec.jn_timestamp := SYSTIMESTAMP;
    jnls(NVL(jnls.LAST,0) + 1) := rec;
    IF jnls.COUNT >= FLUSH_THRESHOLD THEN
      flush_array(arr => jnls);
    END IF;
  END AFTER EACH ROW;
  
  AFTER STATEMENT IS
  BEGIN
    flush_array(arr => jnls);
  END AFTER STATEMENT;
  
END emp2$trg;

Just to be clear: it’s not that it’s a compound trigger that impacts the performance; it’s the bulk insert. However, using the compound trigger made the bulk operation much simpler and neater to implement.

UPDATE 14/08/2014: I came across a bug in the trigger which caused it to not flush the array when doing a MERGE. I found I had to pass the array as a parameter internally.
