
Your ‘PL/SQL Code’

Am I the only one who finds this help message vaguely insulting?

[Image: Dr. Evil air-quoting “PL/SQL Code”]

Oracle VPD/RLS on Apex at InSync2013

AUSOUG is holding a series of conferences this year right across the country – starting in Sydney on 15-16 August, touring the other major city centres, and ending in Perth on 12-13 November.

The Perth program is still being finalized but the lineup is looking good. You can see the current list here: http://www.ausoug.org.au/insync13/insync13-perth-program.html

I’ll be talking about Oracle Virtual Private Database (VPD), also known as RLS, and its use in Apex applications. I’ve made good use of this technology in a recent project, which is now live, and I’m looking forward to presenting what I’ve learned. Abstract

Make sure you register soon – pre-registrations close early for some locations.

UPDATE: The Perth program is now published: INSYNC13_Program_Perth.pdf

UPDATE 2: The slide deck, if you’re interested, can be seen here.


My Function Result Cache talk

If you’re interested in my presentation on the Function Result Cache, it’s now available from my presentations page. It was given this morning at Oracle’s offices in Perth to the local AUSOUG branch; it seemed to go down well and I got some good feedback. It was only a little overshadowed by all the hoopla over the release of 12c :)

Using compound triggers to boost your journal table performance

If your schemas are like those I deal with, almost every table has a doppelgänger which serves as a journal table; an “after insert, update or delete” trigger copies each and every change into the journal table. It’s a bit of a drag on performance for large updates, isn’t it?
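(To make this concrete: the journal tables in the test cases below look roughly like this – a sketch, with column types being my guesses based on the trigger code later in this post.)

CREATE TABLE emp1$jn
  (id             NUMBER
  ,name           VARCHAR2(100)
  ,db_created_on  DATE
  ,db_created_by  VARCHAR2(30)
  ,db_modified_on DATE
  ,db_modified_by VARCHAR2(30)
  ,version_id     NUMBER
  ,jn_action      VARCHAR2(1)  -- 'I', 'U' or 'D'
  ,jn_timestamp   TIMESTAMP
  );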

I was reading through the docs (as one does) and noticed this bit:

Scenario: You want to record every change to hr.employees.salary in a new table, employee_salaries. A single UPDATE statement will update many rows of the table hr.employees; therefore, bulk-inserting rows into employee_salaries is more efficient than inserting them individually.

Solution: Define a compound trigger on updates of the table hr.employees, as in Example 9-3. You do not need a BEFORE STATEMENT section to initialize idx or salaries, because they are state variables, which are initialized each time the trigger fires (even when the triggering statement is interrupted and restarted).

http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/triggers.htm#CIHFHIBH

The example shows how to use a compound trigger to not only copy the records to another table, but to do so with a far more efficient bulk insert. Immediately my journal table triggers sprang to mind – would this approach give me a performance boost?

The answer is, yes.

My test cases are linked below – emp1 is a table with an ordinary set of triggers, which copies each insert/update/delete into its journal table (emp1$jn) individually for each row. emp2 is a table with a compound trigger instead, which does a bulk insert of 100 journal entries at a time.

I ran a simple test case involving 100,000 inserts and 100,000 updates into both tables. The first time, I did emp1 first, followed by emp2; the second time, I reversed the order. From the results below you’ll see I got a consistent improvement, shaving about 4-7 seconds off a total of about 21 seconds – an improvement of 19% to 35%. This is with the default value of 100 for the bulk operation; tweaking this might wring a bit more speed out of it (at the cost of using more memory per session).

Of course, this performance benefit only occurs for multi-row operations; if your application is only doing single-row inserts, updates or deletes you won’t see any difference in performance. However, I still think this method is neater (only one trigger) than the alternative, so I’d recommend it. The only reason I wouldn’t use it is if the target might be a pre-11g database, which doesn’t support compound triggers.

Here are the test case scripts if you want to check it out for yourself:

ordinary_journal_trigger.sql
compound_journal_trigger.sql
test_journal_triggers.sql
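In outline, each test run does something like this (a simplified sketch – the real scripts, linked above, are authoritative):

SET TIMING ON

-- one multi-row insert: 100,000 rows in a single statement
INSERT INTO emp1 (id, name)
SELECT ROWNUM, 'test ' || ROWNUM
FROM   dual
CONNECT BY LEVEL <= 100000;

-- one multi-row update of all 100,000 rows
UPDATE emp1 SET name = name || ' upd';

ROLLBACK;

And the results: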

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

insert emp1 (test run #1)
100000 rows created.
Elapsed: 00:00:21.19

update emp1 (test run #1)
100000 rows updated.
Elapsed: 00:00:21.40

insert emp2 (test run #1)
100000 rows created.
Elapsed: 00:00:16.01

update emp2 (test run #1)
100000 rows updated.
Elapsed: 00:00:13.89

Rollback complete.

insert emp2 (test run #2)
100000 rows created.
Elapsed: 00:00:15.94

update emp2 (test run #2)
100000 rows updated.
Elapsed: 00:00:16.60

insert emp1 (test run #2)
100000 rows created.
Elapsed: 00:00:21.01

update emp1 (test run #2)
100000 rows updated.
Elapsed: 00:00:20.48

Rollback complete.

And here, in all its glory, is the fabulous compound trigger:

CREATE OR REPLACE TRIGGER emp2$trg
  FOR INSERT OR UPDATE OR DELETE ON emp2
  COMPOUND TRIGGER

  -- number of journal entries to buffer before bulk-inserting them
  FLUSH_THRESHOLD CONSTANT SIMPLE_INTEGER := 100;
  TYPE jnl_t IS TABLE OF emp2$jn%ROWTYPE
    INDEX BY SIMPLE_INTEGER;
  jnls  jnl_t;
  rec   emp2$jn%ROWTYPE;
  blank emp2$jn%ROWTYPE;

  -- bulk-insert the buffered journal entries, then clear the buffer
  PROCEDURE flush_array (arr IN OUT jnl_t) IS
  BEGIN
    FORALL i IN 1..arr.COUNT
      INSERT INTO emp2$jn VALUES arr(i);
    arr.DELETE;
  END flush_array;

  -- maintain the audit columns on the base table
  BEFORE EACH ROW IS
  BEGIN
    IF INSERTING THEN
      IF :NEW.db_created_by IS NULL THEN
        :NEW.db_created_by := NVL(v('APP_USER'), USER);
      END IF;
    ELSIF UPDATING THEN
      :NEW.db_modified_on := SYSDATE;
      :NEW.db_modified_by := NVL(v('APP_USER'), USER);
      :NEW.version_id     := :OLD.version_id + 1;
    END IF;
  END BEFORE EACH ROW;

  -- buffer one journal entry for each affected row
  AFTER EACH ROW IS
  BEGIN
    rec := blank;
    IF INSERTING OR UPDATING THEN
      rec.id             := :NEW.id;
      rec.name           := :NEW.name;
      rec.db_created_on  := :NEW.db_created_on;
      rec.db_created_by  := :NEW.db_created_by;
      rec.db_modified_on := :NEW.db_modified_on;
      rec.db_modified_by := :NEW.db_modified_by;
      rec.version_id     := :NEW.version_id;
      IF INSERTING THEN
        rec.jn_action := 'I';
      ELSIF UPDATING THEN
        rec.jn_action := 'U';
      END IF;
    ELSIF DELETING THEN
      rec.id             := :OLD.id;
      rec.name           := :OLD.name;
      rec.db_created_on  := :OLD.db_created_on;
      rec.db_created_by  := :OLD.db_created_by;
      rec.db_modified_on := :OLD.db_modified_on;
      rec.db_modified_by := :OLD.db_modified_by;
      rec.version_id     := :OLD.version_id;
      rec.jn_action      := 'D';
    END IF;
    rec.jn_timestamp := SYSTIMESTAMP;
    jnls(NVL(jnls.LAST,0) + 1) := rec;
    -- flush in batches of FLUSH_THRESHOLD
    IF jnls.COUNT >= FLUSH_THRESHOLD THEN
      flush_array(arr => jnls);
    END IF;
  END AFTER EACH ROW;

  -- flush any journal entries left over after the statement completes
  AFTER STATEMENT IS
  BEGIN
    flush_array(arr => jnls);
  END AFTER STATEMENT;

END emp2$trg;
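For comparison, the ordinary row-by-row journal trigger on emp1 (as in ordinary_journal_trigger.sql) is along these lines – a sketch, with the audit-column handling trimmed for brevity:

CREATE OR REPLACE TRIGGER emp1$trg
  AFTER INSERT OR UPDATE OR DELETE ON emp1
  FOR EACH ROW
DECLARE
  rec emp1$jn%ROWTYPE;
BEGIN
  IF DELETING THEN
    rec.id        := :OLD.id;
    rec.name      := :OLD.name;
    rec.jn_action := 'D';
  ELSE
    rec.id        := :NEW.id;
    rec.name      := :NEW.name;
    rec.jn_action := CASE WHEN INSERTING THEN 'I' ELSE 'U' END;
  END IF;
  rec.jn_timestamp := SYSTIMESTAMP;
  -- one single-row insert per affected row – this is the slow part
  INSERT INTO emp1$jn VALUES rec;
END;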

Just to be clear: it’s not that it’s a compound trigger that impacts the performance; it’s the bulk insert. However, using the compound trigger made the bulk operation much simpler and neater to implement.

UPDATE 14/08/2014: I came across a bug in the trigger which caused it to not flush the array when doing a MERGE. I found I had to pass the array as a parameter internally.

AUSOUG WA Breakfast Meeting: Function Result Cache

If you’re in Perth on Wednesday the 26th, come for breakfast at the Oracle offices and hear me talk about my experiences with the PL/SQL Function Result Cache.

More details here: www.ausoug.org.au/cms/rest/event/1936

Deploying Application Express on the Command Line

I love the Apex UI; it makes development so much easier and more convenient – and makes it easy to impress clients when I can quickly fix issues right there and then, using nothing but their computer and their browser, with no additional software tools needed.

However, my main client has a fairly strict “scripted releases only” policy – an excellent policy, in my view: deployments must always be provided as a script to run on the command line. This makes for fewer errors and a little less work for the person who runs the deployment.

In Apex it’s easy to create deployment scripts that will run right in SQL*Plus. You can export a workspace, an application, images, etc. as scripts that will run in SQL*Plus with almost no problems. There are just a few little things to be aware of, and that’s the subject of this post.

1. Setting the session workspace

Normally if you log into Apex and import an application export script, it will be imported without problem. Also, if you log into SQL*Plus and try to run it, it will work fine as well.

The only difference comes if you want to deploy it into a workspace with a different ID from the one the application was exported from – e.g. if you have two workspaces on one database, one for dev and one for test. When you log into your test schema and try to run the script, you’ll see something like this:

SQL> @f118.sql
APPLICATION 118 - My Wonderful App
Set Credentials...
Check Compatibility...
Set Application ID...
begin
*
ERROR at line 1:
ORA-20001: Package variable g_security_group_id must be set.
ORA-06512: at "APEX_040100.WWV_FLOW_API", line 73
ORA-06512: at "APEX_040100.WWV_FLOW_API", line 342
ORA-06512: at line 4

Side note: if you’re using Windows, the SQL*Plus window will disappear too quickly for you to see the error (as the generated Apex script sets it to exit on error) – so you should SPOOL to a log file to see the output.

To fix this issue, you need to run a little bit of PL/SQL before you run the export, to override the workspace ID that the script should use:

declare
  v_workspace_id NUMBER;
begin
  select workspace_id into v_workspace_id
  from apex_workspaces where workspace = 'TESTWORKSPACE';
  apex_application_install.set_workspace_id (v_workspace_id);
  apex_util.set_security_group_id
    (p_security_group_id => apex_application_install.get_workspace_id);
  apex_application_install.set_schema('TESTSCHEMA');
  apex_application_install.set_application_id(119);
  apex_application_install.generate_offset;
  apex_application_install.set_application_alias('TESTAPP');
end;
/

This will tell the Apex installer to use a different workspace – and a different schema, application ID and alias as well, since 118 already exists on this server. If your app doesn’t have an alias you can omit that last step. Since we’re changing the application ID, we need to get all the other IDs (e.g. item and button internal IDs) throughout the application changed as well, so we call generate_offset which makes sure they won’t conflict.
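Putting it together, a deployment wrapper might look something like this (a sketch – the file names are made up, and I’m assuming the block above is saved as set_workspace.sql):

-- remember to spool to a log so errors aren't lost when the window closes
SPOOL deploy_f118.log

-- point the Apex installer at the target workspace/schema/app id
@set_workspace.sql

-- then run the application export itself
@f118.sql

SPOOL OFF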

2. Installing Images

This is easy. The same remarks as above apply if you’re installing the image script into a different workspace.

3. Installing CSS Files

If you export your CSS files using the Apex export facility, these will work just as well as the above, and the same considerations apply if you’re installing into a different workspace.

If you created your CSS export file manually using Shared Components -> Cascading Style Sheets, clicking on your stylesheet and then “Display Create File Script”, you will find it doesn’t quite work as well as you might expect. It does work, except that the file doesn’t include a COMMIT at the end. Normally that wouldn’t be much of a problem – if the person deploying your scripts doesn’t know to issue a commit, it merely means the file isn’t imported. In my case, however, the deployer not only didn’t commit, they also didn’t close their session straight away – they just left it open on their desktop while they went to lunch or a meeting or something.

This meant that when I sent the test team onto the system, the application looked a little “strange”, and all the text was black instead of the pretty colours they’d asked for – because the CSS file wasn’t found. And when I tried to fix this by attempting to re-import the CSS, my session hung (should that be “hanged”? or “became hung”?) – because the deployment person’s session was still holding the relevant locks. Eventually they committed their session and closed it, and the autocommit nature of SQL*Plus ended up fixing the issue magically for us anyway. Which made things interesting the next day as I was trying to work out what had gone wrong, when the system was now working fine, as if innocently saying to me, “what problem?”.
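The moral: if you use a manually-generated CSS script, make sure the deployment explicitly commits, e.g. with a trivial wrapper like this (the file name is made up):

@my_stylesheet.sql
COMMIT;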

4. A little bug with Data Load tables

We’re on Apex 4.1.1. If you have any CSV import function in your application using Apex’s Data Loading feature, and you export the application from one schema and import it into another schema, you’ll find that the Data Load simply won’t work, because the export incorrectly hardcodes the owner of the data load table in the call to create_load_table. This bug is described here: https://forums.oracle.com/forums/thread.jspa?messageID=10307103&#10307103 – and apparently there’s a patch for it.

wwv_flow_api.create_load_table(
 p_id =>4846012021772170+ wwv_flow_api.g_id_offset,
 p_flow_id => wwv_flow.g_flow_id,
 p_name =>'IMPORT_TABLE',
 p_owner =>'MYSCHEMA',
 p_table_name =>'IMPORT_TABLE',
 p_unique_column_1 =>'ID',
 p_is_uk1_case_sensitive =>'Y',
 p_unique_column_2 =>'',
 p_is_uk2_case_sensitive =>'N',
 p_unique_column_3 =>'',
 p_is_uk3_case_sensitive =>'N',
 p_wizard_page_ids =>'',
 p_comments =>'');

The workaround I’ve been using is, before importing into a different schema, I just edit the application script to fix the p_owner in the calls to wwv_flow_api.create_load_table.

5. Automating the Export

I don’t know if this is improved in later versions, but at the moment you can only export applications using the provided API – not other objects (such as images or CSS files). Here’s a sample bit of code (you’ll need to put other bits around this to do what you want with the CLOB – e.g. my script spits it out via serverout so that SQL*Plus will write it to a sql file):

l_clob := WWV_FLOW_UTILITIES.export_application_to_clob
  (p_application_id => &APP_ID.
  ,p_export_ir_public_reports => 'Y'
  ,p_export_ir_private_reports => 'Y'
  ,p_export_ir_notifications => 'Y'
  );
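For what it’s worth, here’s a minimal sketch of a complete SQL*Plus script along those lines – it emits the CLOB line by line via DBMS_OUTPUT so the spooled file matches the export exactly (the SET/SPOOL scaffolding and the line-by-line approach are just my way of doing it):

SET SERVEROUTPUT ON SIZE UNLIMITED
SET LINESIZE 32767
SET TRIMSPOOL ON FEEDBACK OFF HEADING OFF
SPOOL f&APP_ID..sql

DECLARE
  l_clob CLOB;
  l_pos  PLS_INTEGER := 1;
  l_eol  PLS_INTEGER;
  l_len  PLS_INTEGER;
BEGIN
  l_clob := WWV_FLOW_UTILITIES.export_application_to_clob
    (p_application_id            => &APP_ID.
    ,p_export_ir_public_reports  => 'Y'
    ,p_export_ir_private_reports => 'Y'
    ,p_export_ir_notifications   => 'Y'
    );
  l_len := DBMS_LOB.getlength(l_clob);
  -- write the CLOB out one line at a time so line breaks survive intact
  WHILE l_pos <= l_len LOOP
    l_eol := DBMS_LOB.instr(l_clob, CHR(10), l_pos);
    IF l_eol = 0 THEN
      l_eol := l_len + 1;
    END IF;
    DBMS_OUTPUT.put_line(DBMS_LOB.substr(l_clob, l_eol - l_pos, l_pos));
    l_pos := l_eol + 1;
  END LOOP;
END;
/

SPOOL OFF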

That’s all my tips for scripting Apex deployments for now. If I encounter any more I’ll add them here.


EDIT:

Related: “What’s the Difference” – comparing exports to find diffs on an Apex application – http://blog.sydoracle.com/2011/11/whats-difference.html

Trivia question: Why does the 2nd Drop Work?

I have an Oracle database (10g or 11g, doesn’t matter) with an empty schema. I am the only user on the system and I only have one session open.

I create a single table (FOO) and run a few simple SQL commands. Then, I run this, twice in a row without any intermediate steps:

DROP TABLE foo;

The first time, some errors are raised and the table is not dropped.

The second time, it succeeds – the table is dropped without error.

Q. What did I do to cause this behaviour (which is 100% reproducible) – and what errors were reported?

A. Congratulations to Kirill Leontiev, Matthias Rogel, Charles Hooper and Mette who all found the answer!

Spoiler Alert: Don’t read the comments if you don’t want to know the answer straight away.

Apex Dynamic Action silently fails – a story

Here is a short story about a little problem that caused me a bit of grief; but in the end had a simple cause and a simple fix.

I had a dynamic action in my Apex 4.1 app that had to run some PL/SQL – which was working fine, except the PL/SQL kept on getting longer and longer and more complex; so quite naturally I wanted it to be encapsulated in a database procedure.

I did so, but it didn’t work: the page ran without error, but it seemed like the dynamic action wasn’t firing. It was supposed to change the value of some display items on the page in response to the change of a radio button item, but now they weren’t changing! There were no debug warnings or logs to give a hint, either. I tried using Chrome’s developer tools to trace it, but that just showed me a very high-level view of what the client was doing, and didn’t report any errors or warnings.

I reverted to my original code, and it worked fine. Ok, so that means it’s probably a problem with my procedure.

I checked and rechecked my procedure. There didn’t seem to be anything wrong with it. I added a line to the procedure to raise an exception, and the Apex page dutifully reported the PL/SQL error in the Ajax call – which meant my procedure was being called successfully. I also included the return values in the exception message, which proved that my procedure was correctly determining the values. They just weren’t being returned to the items on the page.

I tried raising an exception in the Apex dynamic action’s PL/SQL Code. That worked. The exception message correctly showed the new values were being returned; they still weren’t being populated on the page.

I tried removing all the items from the Page Items to Return setting; then I gradually added them back in, one by one. I narrowed it down to just one item. If I included that item, none of the items were being updated when the procedure returned. If I excluded that item, all the other items were correctly being updated when the procedure returned. Of course, that wasn’t a solution, because there was a cascade of other dynamic actions that were dependent on that particular item, so it has to be updated.

After lunch and a short walk, it occurred to me: unlike the other parameters, that particular parameter was anchored to a database column defined as CHAR(1). Could that be a problem?

"change code at random... bug solved"Sure enough, when I changed the parameter’s data type from column%TYPE (which mapped to a CHAR) to just a plain VARCHAR2, everything worked.

Yet another reason to avoid CHAR, I guess.
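If you’ve never been bitten by CHAR before, here’s a quick illustration of the blank-padding semantics that cause this kind of grief (run it with SET SERVEROUTPUT ON):

DECLARE
  v_char CHAR(10)     := 'Y';
  v_vc   VARCHAR2(10) := 'Y';
BEGIN
  -- CHAR values are blank-padded to their declared length
  DBMS_OUTPUT.put_line('[' || v_char || ']'); -- prints [Y         ]
  -- CHAR vs VARCHAR2 uses nonpadded comparison semantics, so...
  IF v_char = v_vc THEN
    DBMS_OUTPUT.put_line('equal');
  ELSE
    DBMS_OUTPUT.put_line('not equal'); -- ...this branch runs
  END IF;
END;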

Apex and Application Contexts

My current project uses Oracle RLS (Row Level Security) to ensure each user can only see the data they’re allowed to see. This has made a potentially very complex application very simple. My apex pages can query any table, no WHERE clauses needed (except for user-entered filters), and I can be confident they won’t be able to inadvertently see rows they’re not supposed to see if I forget to add the right predicate on every page that queries each table.

Now, remember that Apex is a stateless environment as far as database sessions are concerned: when a page is requested, it might be generated by any session running on the database; it might re-use a session that had just finished rendering a page for someone else entirely. Therefore, it is imperative that all my database code assumes nothing about the state of package variables, for example. Instead, we use Oracle Application Contexts to hold session-specific information. For example, a context might hold the current user’s identifier, or a flag that indicates they have been granted a particular privilege. As an added advantage, when we use contexts, we can easily pass data into views.

How do we guarantee that the context has been set correctly for the current user’s session? We could follow the guidelines in the Apex help (quoted below) to set the context at the start of every page call, then clear them at the end of the page call. If we were to do this, these are the steps we’d take (these are taken from Apex 4.1.1 but should be comparable in other releases):

1. Create the context:

CREATE OR REPLACE CONTEXT MY_CONTEXT USING MY_PACKAGE;

2. Create a database package with a procedure to Set the context variable(s), e.g.:

PROCEDURE set_session IS
BEGIN
  DBMS_SESSION.set_context
    (namespace => 'MY_CONTEXT'
    ,attribute => 'is_admin'
    ,value     => 'Y');
END set_session;

3. Edit Application Properties; under Security, in the Database Session section, set Initialization PL/SQL Code to call MY_PACKAGE.set_session.

It is interesting that the Apex help page for Initialization PL/SQL Code has this to say:

“Use this attribute to enter a PL/SQL block that sets a context for the database session associated with the current “show page” or “accept page” request. The block you enter here is executed at a very early point during the page request, immediately after the APP_USER value is established. The value of APP_USER (using :APP_USER or v(‘APP_USER’)) may be used within the block. Values of other items in session state may be referenced as well, but any such items must have been established in session state before the initiation of the current page request. Consider the following example:

dbms_session.set_context('CTX_USER_QRY','USERPRIV',my_package.my_function(:APP_USER));
It sets the value of USERPRIV in the context named CTX_USER_QRY to the value returned by the function my_function in package my_package. The function is passed the current value of APP_USER as an input argument. Presumably, the named context would be used in a VPD policy (created within the application’s parsing schema) to effect the generation of predicates appropriate to the authenticated user.

Virtual Private Database, also known as Fine-Grained Access Control or FGAC, is an Oracle database feature that provides an application programming interface (API) that enables developers to assign security policies to database tables and views. Using PL/SQL, developers can create security policies with stored procedures and bind the procedures to a table or view by means of a call to an RDBMS package. Such policies are based on the content of application data stored within the database, or based on context variables provided by Oracle database. In this way, VPD permits access security mechanisms to be removed from applications, and to be situated closer to particular schemas.

…”

This works fine. Before every page is rendered, Apex calls our package to set the context variables. All our procedures, views, etc. as well as our RLS package can use these context variables as required.
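For instance, an RLS policy function might build its predicate from the context like this (a sketch – the is_admin and user_id attributes are just illustrations, not the real ones):

CREATE OR REPLACE FUNCTION my_policy
  (object_schema IN VARCHAR2
  ,object_name   IN VARCHAR2
  ) RETURN VARCHAR2 IS
BEGIN
  -- admins see everything: return no predicate
  IF SYS_CONTEXT('MY_CONTEXT','is_admin') = 'Y' THEN
    RETURN NULL;
  END IF;
  -- everyone else sees only their own rows
  RETURN q'[owner_id = SYS_CONTEXT('MY_CONTEXT','user_id')]';
END my_policy;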

Is there a simpler way? Yes.

Just this morning I attended an “Odds and Ends” session presented by Connor McDonald at the AUSOUG Monthly meeting in Perth. He briefly mentioned a cool feature of Oracle contexts – the fact that they can be made globally accessible. I had seen this in the documentation but didn’t know what the words meant or how it might be useful.

What Connor’s talk alerted me to was that a globally accessible context could be very useful for our Apex application: it allows a context to be simultaneously available from any session on the database, to any user who logs in and sets the correct Client Identifier. We can use this to avoid having to call any code from Initialization PL/SQL Code (which runs before every page request, and so may have a performance impact); instead, the context variables will follow the user around, no matter which database session services their page request.

To do this, you simply:

1. Create the context to be globally accessible:

CREATE OR REPLACE CONTEXT MY_CONTEXT USING MY_PACKAGE
  ACCESSED GLOBALLY;

2. Alter the database package to set the context variables, this time passing the client_id parameter, e.g.:

PROCEDURE set_session IS
  v_real_client_id VARCHAR2(200);
BEGIN
  -- at this point the client identifier is 'nobody:<session id>';
  -- substitute the real username so the context follows the user
  v_real_client_id := REPLACE(
    SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER')
    ,'nobody'
    ,v('APP_USER'));
  DBMS_SESSION.set_context
    (namespace => 'MY_CONTEXT'
    ,attribute => 'is_admin'
    ,value     => 'Y'
    ,client_id => v_real_client_id);
END set_session;

3. In Apex, remove the Initialization PL/SQL Code – we don’t need it anymore.

4. In the Authentication scheme, set Post-Authentication Procedure Name to MY_PACKAGE.set_session.

Now, our context is only set once – just after login. Excellent.

“Now,” you’ll no doubt want to ask me, “where did you actually set the Client Identifier? And what’s the deal with the REPLACE?”

We don’t have to worry about setting Client Identifier – Apex already sets it for us automatically for each page request. If you log into an ordinary Apex application, and examine the Client Identifier, you’ll get something like this:

SCOTT:15135454564654

Where “SCOTT” is the login username, and “15135454564654” is the Apex session ID. We can bind our context to this combination of values simply by passing the Client Identifier to the client_id parameter of DBMS_SESSION.set_context. In fact, most of the time this sort of code will work fine:

DBMS_SESSION.set_context
 (namespace => 'MY_CONTEXT'
 ,attribute => 'my_attribute'
 ,value     => 'xyz'
 ,client_id => SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER'));

Unfortunately, however, if we look at CLIENT_IDENTIFIER from the post-login procedure (i.e. immediately after the user has successfully logged in), we instead find it is something like this:

nobody:15135454564654

So the session ID is correct, but the username hasn’t been set yet. Looking at the Apex debug logs it’s apparent that the client identifier gets set with the username very soon after the post-login procedure is run – which is a tad inconvenient. However, APP_USER is set to SCOTT prior to the post-login procedure; therefore, my set_session procedure simply replaces “nobody” with v(‘APP_USER’), so that the client_id will agree with the client identifier that will be used from that point on.

That works great – my views, packages and RLS predicates can use SYS_CONTEXT to get context variables with negligible performance impact (remembering that context variables are always strings, so don’t forget to convert your datatypes if necessary).

As an added bonus, as a developer or DBA we can log into the database and diagnose an active session by pretending to be it. For example, Joe sends us an email with a screenshot of an Apex page showing “No Data Found”. When you log into your tool of choice and run the same query, you can see plenty of data. You want your session to pretend to be Joe. To do this, you would get Joe’s username and the session ID from the URL on the Apex page (it’s the 3rd parameter after the f?p= bit in the URL), and run this:

DBMS_SESSION.set_identifier('JOE:18765421554654');

What this does is tell Oracle that your session is Joe’s session. Your session will automatically see the same context variables that Joe’s session currently sees. You run the query again, and you can see that it returns no rows – ah, maybe there’s something wrong with the RLS predicate your application is generating. You now have a testbed under which you can test any change you might want to make in order to resolve the issue (or at least, to be able to tell Joe why he’s not allowed to see any data).
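Incidentally, you can also see what a globally accessible context currently holds – for every client identifier – by querying the GLOBAL_CONTEXT view, e.g.:

SELECT attribute, value, client_identifier
FROM   global_context
WHERE  namespace = 'MY_CONTEXT';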

Alexandria Updated

Just a quick post to point out that the Alexandria PL/SQL Library has been updated to v1.7, including updates to the Amazon S3 package and a new package for generating iCalendar objects – more details on Morten’s blog.
