Add colours to your Shuttle item


I wanted to allow users to select one or more colours from a list, and to control the order of the colours, so I’ve used a Shuttle item.

I wanted to have a different background colour for each value in the list, so I went looking for help. I’m not very strong with JavaScript (yet), but with a bit of looking around and experimenting I ended up with what I wanted.

I created an item on the page with the following attributes:

Display As: Shuttle

List of values definition: STATIC2:Yellow;#FFFF00, Green;#00FF00, Turquoise;#00FFFF, Pink;#FF90FF, Blue;#9090FF, Purple;#FF00FF, Red;#FF9090

Post Element Text:
<script type="text/javascript">
(function() {
  // the shuttle is rendered as two select lists:
  // #CURRENT_ITEM_NAME#_2 (left-hand) and #CURRENT_ITEM_NAME# (right-hand)
  var leftList = $x("#CURRENT_ITEM_NAME#_2");
  var rightList = $x("#CURRENT_ITEM_NAME#");
  var i;
  // each option's value is an HTML colour code, so use it as the background
  for (i = 0; i < leftList.length; i++) {
    leftList[i].style.backgroundColor = leftList[i].value;
  }
  for (i = 0; i < rightList.length; i++) {
    rightList[i].style.backgroundColor = rightList[i].value;
  }
})();
</script>

Notes:

  • the value of each item in the list is an HTML colour code, which is used to set the background colour of that item in the list.
  • the shuttle item actually involves two select lists in the generated page. If the item name is P1_SHUTTLE, the generated items will be P1_SHUTTLE_2 (the left-hand list) and P1_SHUTTLE (the right-hand list). These are referenced in the JavaScript via #CURRENT_ITEM_NAME# and #CURRENT_ITEM_NAME#_2.
  • $x() returns the select list DOM object, which supports the “length” property – this returns the count of options in the list
  • the select list index starts at 0 and goes up to length-1

A small problem is that when the “reset” button is clicked, the colours disappear. They reappear if the page is refreshed, however.


Prefix searches, Wildcard characters and LIKE

Want to run a prefix search using the LIKE operator, but your search pattern includes wildcards like _ or %? A common example is searching the v$parameter view for parameters starting with the underscore (_) character. So, why doesn’t this work:

SELECT * FROM v$parameter WHERE name LIKE '_%';

This returns ALL the rows in the view, because the “_” in the pattern matches any single character. There are all sorts of ways to get around this:

Option 1. Use TRANSLATE to change any underscores to some other character

SELECT * FROM my_table
WHERE TRANSLATE(my_column, '_', '#') LIKE '#%';

No good if you’re after a general solution, e.g. what if the original string already has some “#”s in it? Also, even if the table has an ordinary index on the column, the query cannot use it.
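
To see the false-positive problem, consider a value that already starts with “#” (a quick check against DUAL):

SELECT TRANSLATE('#ABC', '_', '#') FROM dual;
-- returns '#ABC' unchanged (there are no underscores to translate),
-- and '#ABC' matches LIKE '#%' even though it doesn't start with '_'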

Option 2. Use SUBSTR so that we can use an equality condition instead of LIKE

SELECT * FROM my_table
WHERE SUBSTR(my_column,1,1) = '_';

Easy, simple. Except that an ordinary index on the column will still not be used (of course, an appropriate function-based index could be used).
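
The “appropriate function-based index” would be something like this (the index name is just illustrative), which makes the SUBSTR predicate indexable:

CREATE INDEX my_table_first_char ON my_table (SUBSTR(my_column,1,1));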

Option 3. Use a regular expression

SELECT * FROM my_table
WHERE REGEXP_LIKE(my_column, '^_');

or

SELECT * FROM my_table
WHERE REGEXP_INSTR(my_column, '^_') > 0

Rather simple, once you know how to write regular expressions. Still can’t use an ordinary index, though – although, again, a very specific (and otherwise generally useless) function-based index could be used.

Option 4. Read the manual and learn something about the LIKE operator

SELECT * FROM my_table
WHERE my_column LIKE '\_%' ESCAPE '\';

Easy, simple; and if there’s an index on the column, the optimiser may use it. No need for any silly function-based indexes or new-fangled regular expressions.
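
Applied to the original v$parameter question, the search for underscore parameters becomes:

SELECT name, value FROM v$parameter
WHERE name LIKE '\_%' ESCAPE '\';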


Add a “Who’s Online Now” box to your APEX app

Something to file under “pointless fun”…

Allow your users to feel like they’re part of a community by letting them know who else is using the app at the same time.

  1. Open APEX Application Builder, and open the page you want to add this to.
  2. Click the Create Region icon.
  3. Choose Report, then SQL Report.
  4. Enter a title, e.g. “Who’s Online Now”. Click Next.
  5. Copy the following for the SQL Query:
    SELECT apex_user || ' (' || minutes_ago
           || DECODE(minutes_ago, 1, ' min ago)', ' mins ago)')
    FROM  (SELECT DISTINCT
                  apex_user,
                  FIRST_VALUE(TRUNC(seconds_ago/60))
                    OVER (PARTITION BY apex_user ORDER BY seconds_ago)
                    AS minutes_ago
           FROM   apex_workspace_activity_log
           WHERE  application_id = :APP_ID
           AND    seconds_ago < 3600
           AND    apex_user NOT IN (:APP_USER, 'nobody')
           ORDER BY 2)
    
  6. Click Next. Click Create Region.

You can customise the region as much as you like. For example, I use:

  • Template (under Region Definition) = “Sidebar Region”
  • Report Template (under Report Attributes) = “template: 16. One Column Unordered List”
  • Pagination Scheme = “- No Pagination Selected -“
  • Headings Type = “None”

The query is based on the APEX view apex_workspace_activity_log which is supplied with the product. It looks at all session activity within the last hour by users other than the current user, and returns a list showing their most recent activity.

The only slight problem is that it doesn’t detect when someone logs out – they’ll stay in the report for up to an hour.


LIKE with wildcard (%) at start CAN use an index

A tester for our client raised a problem where searches like ‘ABC%’ would perform satisfactorily, but searches like ‘%ABC’ would time out. Of course, the reason is that the index on the column cannot be used if a wildcard is at the start of the string, so a full table scan is always performed.

I think we’re going to accept the slow response for now, but a possible solution that we might consider in future is to use a function-based index on the REVERSE() function. Its drawback is that it requires modifying the code.

First, some test data:

CREATE TABLE jka AS
SELECT ROWNUM id, dbms_random.string('x',10) v
FROM dual CONNECT BY LEVEL <= 10000;

Now, create an ordinary index:

CREATE INDEX jka_normal ON jka (v);

BEGIN
  dbms_stats.gather_table_stats
    (USER
    ,'JKA'
    ,estimate_percent=>100
    ,cascade=>TRUE);
END;
/

This query can use a range scan on jka_normal:

SELECT * FROM jka WHERE v LIKE 'ABC%';

But this query will use a full table scan (can’t use the index):

SELECT * FROM jka WHERE v LIKE '%ABC';

Now, create a function-based index (not to be confused with a reverse key index):

CREATE INDEX jka_reverse ON jka(REVERSE(v));

BEGIN
  dbms_stats.gather_table_stats
    (USER
    ,'JKA'
    ,estimate_percent=>100
    ,cascade=>TRUE);
END;
/

This query can use a range scan on jka_reverse:

SELECT * FROM jka WHERE REVERSE(v) LIKE REVERSE('%ABC');
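
This is where the code change mentioned above comes in: the application’s predicate itself has to be rewritten in the reversed form. A sketch, assuming the search term arrives in a bind variable (:search_term is my own name, not part of the test case):

SELECT * FROM jka
WHERE REVERSE(v) LIKE REVERSE('%' || :search_term);

Reversing '%' || :search_term puts the wildcard at the end of the pattern, so the optimiser can consider a range scan on jka_reverse.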

Edit: it looks like Richard Foote beat me to it, in a discussion of reverse key indexes (which unfortunately don’t contribute anything useful to this particular problem). I did do a quick Google search beforehand without results – Foote’s article must not have been indexed yet, or was too far down the list.


Will Oracle use my regexp function-based index?

It was mentioned at the AUSOUG conference by one of the speakers that he couldn’t get the database to use a function-based index based on the regexp functions. I thought this was a little strange, so I decided to try it for myself.


SQL> select * from v$version;
Oracle Database 10g Express Edition Release 10.2.0.1.0

Create a test table and gather stats on it:

SQL> create table testtable as
select rownum rn,
dbms_random.string('a',10) string10,
rpad('x',2000,'x') padding
from all_objects
where rownum <= 5000;

SQL> exec dbms_stats.gather_table_stats(user, 'TESTTABLE', cascade=>TRUE);

This is our test query using regexp_instr:

SQL> explain plan for
select string10, regexp_instr(string10, 'XE')
from testtable
where regexp_instr(string10, 'XE') > 0;

| Id | Operation          | Name      | Rows | Bytes | Cost |
|  0 | SELECT STATEMENT   |           |  243 |  475K |  465 |
|* 1 |  TABLE ACCESS FULL | TESTTABLE |  243 |  475K |  465 |

   1 - filter(REGEXP_INSTR("STRING10",'XE')>0)

Note: dynamic sampling used for this statement

How many rows are actually returned by this query?

SQL> select string10, regexp_instr(string10, 'XE')
from testtable
where regexp_instr(string10, 'XE') > 0;

STRING10   REGEXP_INSTR(STRING10,'XE')
---------- ---------------------------
fwXEKwoDhG                           3
rSmdOXEkeu                           6
WXEbrrjXcW                           2
qWEThNXEBO                           7
XEQtOwjOCW                           1
MRXEoAicUQ                           3
IpECtZbjXE                           9
LdmXjyePXE                           9
gXEaiGrjSX                           2
PoqtvdGcXE                           9
ZunAgePXXE                           9
evXEFhBpzX                           3
ZxkXstXEwJ                           7
yVpjHzXECY                           7
FkaPMpXEgR                           7
RuXUnXEQSO                           6
OooRCjXXEK                           8
XzceiWPXEr                           8
XEZlpdNMhG                           1
jJjzfXEdDw                           6
CowyXEuHDm                           5

21 rows selected

Hmmm… there should be a better way.
Let’s try a simple function-based index to speed things up:

SQL> create index idx_regexp on testtable
(regexp_instr(string10, 'XE'));

Will this query use the index?

SQL> explain plan for
select string10, regexp_instr(string10, 'XE')
from testtable
where regexp_instr(string10, 'XE') > 0;

| Id | Operation          | Name      | Rows | Bytes | Cost |
|  0 | SELECT STATEMENT   |           | 4444 | 48884 |  465 |
|* 1 |  TABLE ACCESS FULL | TESTTABLE | 4444 | 48884 |  465 |

   1 - filter(REGEXP_INSTR("STRING10",'XE')>0)

Why not? Because the calculated cost of using the index turns out to be higher than the cost of a full table scan, as we can see by forcing it with a hint:

SQL> explain plan for
select /*+ index(testtable idx_regexp) */ string10, regexp_instr(string10, 'XE')
from testtable
where regexp_instr(string10, 'XE') > 0;

| Id | Operation                    | Name       | Rows | Bytes | Cost |
|  0 | SELECT STATEMENT             |            | 4444 | 48884 | 1512 |
|  1 |  TABLE ACCESS BY INDEX ROWID | TESTTABLE  | 4444 | 48884 | 1512 |
|* 2 |   INDEX RANGE SCAN           | IDX_REGEXP | 4444 |       |   10 |

   2 - access(REGEXP_INSTR("STRING10",'XE')>0)

Why was the cost higher? Let’s look at what would be in that index…

SQL> select regexp_instr(string10, 'XE') from testtable;

REGEXP_INSTR(STRING10,'XE')
---------------------------
0
0
0
0
0
1
0
0
0
0
… (lots of zeroes and the occasional positive integer) …
0
0
0

5000 rows selected

The index has an entry for every row in the table, because REGEXP_INSTR returns 0 (not NULL) when there is no match; and with no histogram on it, the optimiser doesn’t know that most of those entries are zeroes.
What if we only want the expression that’s stored in the index?

SQL> explain plan for
select regexp_instr(string10, 'XE')
from testtable
where regexp_instr(string10, 'XE') > 0;

| Id | Operation             | Name       | Rows | Bytes | Cost |
|  0 | SELECT STATEMENT      |            | 4444 | 48884 |    4 |
|* 1 |  INDEX FAST FULL SCAN | IDX_REGEXP | 4444 | 48884 |    4 |

   1 - filter(REGEXP_INSTR("STRING10",'XE')>0)

Yes, that uses the index. So how do we get the rest of the data from the table? Let’s try something else.

SQL> drop index idx_regexp;

We can take advantage of the fact that NULLS are not stored in an index by converting any zeroes (i.e. the regular expression didn’t match) to NULL:

SQL> create index idx_regexp_better on testtable
(CASE WHEN regexp_instr(string10, 'XE') > 0
 THEN regexp_instr(string10, 'XE')
 ELSE NULL END);

SQL> exec dbms_stats.gather_table_stats(user, 'TESTTABLE', cascade=>TRUE);

Will our query use the new improved index?

SQL> explain plan for
select string10, regexp_instr(string10, 'XE')
from testtable
where regexp_instr(string10, 'XE') > 0;

| Id | Operation          | Name      | Rows | Bytes | Cost |
|  0 | SELECT STATEMENT   |           |  250 |  2750 |  465 |
|* 1 |  TABLE ACCESS FULL | TESTTABLE |  250 |  2750 |  465 |

   1 - filter(REGEXP_INSTR("STRING10",'XE')>0)

No. Why not? It can’t use the index even if we try to force it with a hint: the optimiser doesn’t know it can transform my expression into the one that was used to build the index.

SQL> explain plan for
select /*+ index(testtable idx_regexp_better) */
string10, regexp_instr(string10, 'XE')
from testtable
where regexp_instr(string10, 'XE') > 0;

| Id | Operation          | Name      | Rows | Bytes | Cost |
|  0 | SELECT STATEMENT   |           |  250 |  2750 |  465 |
|* 1 |  TABLE ACCESS FULL | TESTTABLE |  250 |  2750 |  465 |

   1 - filter(REGEXP_INSTR("STRING10",'XE')>0)

What if we copy the expression from the index into the query? Yes, that’s better.

SQL> explain plan for
select string10, regexp_instr(string10, 'XE')
from testtable
where CASE WHEN regexp_instr(string10, 'XE') > 0
      THEN regexp_instr(string10, 'XE')
      ELSE NULL END > 0;

| Id | Operation                    | Name              | Rows | Bytes | Cost |
|  0 | SELECT STATEMENT             |                   |   21 |   231 |   22 |
|  1 |  TABLE ACCESS BY INDEX ROWID | TESTTABLE         |   21 |   231 |   22 |
|* 2 |   INDEX RANGE SCAN           | IDX_REGEXP_BETTER |   21 |       |    1 |

   2 - access(CASE WHEN REGEXP_INSTR("STRING10",'XE')>0 THEN REGEXP_INSTR("STRING10",'XE') ELSE NULL END > 0)

Brilliant!
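
As an aside, the same zero-to-NULL trick could be written a little more compactly with NULLIF, which returns NULL when its two arguments are equal. This is just a sketch I haven’t run through explain plan, and the index name is my own invention:

create index idx_regexp_nullif on testtable  -- hypothetical index name
(NULLIF(regexp_instr(string10, 'XE'), 0));

select string10, regexp_instr(string10, 'XE')
from testtable
where NULLIF(regexp_instr(string10, 'XE'), 0) > 0;

As with the CASE version, the query has to repeat the indexed expression verbatim for the optimiser to be able to match it to the index.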


Faster APEX using mod_expires

This is a follow-up to my earlier series on setting up APEX on Linux, in which I described how I used the Apache web server in front of APEX instead of accessing APEX directly via OWA (this was so that I could use the same port to serve ordinary web pages and files via HTTP).

A consequence of that setup is that all my APEX pages are a little slow to load, because none of the images are being cached on the client. To solve this I added the following lines to my httpd.conf:

<LocationMatch /i>
   ExpiresActive on
   ExpiresDefault "access plus 1 month"
</LocationMatch>

<LocationMatch /apex>
   ExpiresActive on
   ExpiresDefault "access"
   ExpiresByType image/gif "access plus 1 month"
   ExpiresByType image/jpeg "access plus 1 month"
   ExpiresByType image/bmp "access plus 1 month"
</LocationMatch>

My location “/i” points to the images used by APEX. Since there are only images in there I’ve told it to expire everything (ExpiresDefault) after a month.

The location “/apex” is for the actual APEX pages, including APEX application images which are not in /i/. I’ve set the default to “access” (in other words, don’t cache them), but added exceptions for the various image types in use to cache for a month.
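
One thing to check: these directives only take effect if mod_expires is loaded. If it isn’t already, a line like the following is needed in httpd.conf (the module path varies by distribution, so check it against your install):

LoadModule expires_module modules/mod_expires.so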


Which APEX am I logging into?

I’ve got two computers at home, both running APEX, and when I log into APEX the pages look exactly the same – except for the IP address in the URL, which is only slightly different. If you’re forgetful like me, it’s easy to lose track of which machine you’re logged in to. Here’s a tip:

Log in to apex_admin on each machine in turn and click Manage Service, then Messages. Then select a Custom Message for the Login Message and/or a System Message, and enter the name of the machine, or some other message that will remind you which machine it is. The message will be shown whenever you see the login screen, and when you log in to APEX.

If you find the text isn’t visible enough, you can insert HTML into the message, e.g. <B>Development</B>.


Un-riching Rich Text Format

Let’s just call it Legacy, because I’m not going to say what the source is. It is a single-user desktop application that, after a little investigation (i.e. searching the app’s online forum), was found to be storing its data in tables readable by MS Access. I wanted to get at this data, analyse it, maybe do some smart things to it, and then present it via APEX.

Step 1: Get the data into Oracle.

Simple matter of exporting from MS Access via ODBC. At least, it was simple once I replaced the Oracle ODBC drivers with the latest download from OTN. Before that I was getting a number of annoying TNS errors.

Step 2: Transform the data.

Most of the tables are easy-to-understand normalized relational tables. One of them, however, has a column that came through as a CLOB containing strange values like this:

{\rtf1\ansi\deff0\deftab254
{\fonttbl{\f0\fnil\fcharset0 Arial;}
{\f1\fnil\fcharset0 Verdana;}}{\colortbl\red0\green0\blue0;\red255\green0\blue0;\red0\green128\blue0;\red0\green0\blue255;\red255\green255\blue0;\red255\green0\blue255;\red128\green0\blue128;\red128\green0\blue0;\red0\green255\blue0;\red0\green255\blue255;\red0\green128\blue128;\red0\green0\blue128;\red255\green255\blue255;\red192\green192\blue192;\red128\green128\blue128;\red255\green255\blue255;}
\paperw12240\paperh15840\margl1880\margr1880\margt1440\margb1440
{\*\pnseclvl1\pnucrm\pnstart1\pnhang\pnindent720
{\pntxtb}{\pntxta{.}}}
...

Now, based on my knowledge of the application, I knew that this column was used to store small pieces of text (typically 8 to 30 short lines) with some amount of formatting (e.g. fonts, alignment, etc.). Again the online forum came in useful: a side comment from one of the developers (regarding a small bug, or undocumented anti-feature) revealed that they stored the formatted text as RTF – Rich Text Format. I should have known from the opening 6 bytes of the data ({\rtf1).

Somewhere in these oceans of RTF codes swam the plain text I craved. So I googled RTF, skimmed an old copy of the RTF specification, and ended up with the admittedly poorly-performing PL/SQL below, which, for the 651 rows in this table (each with an RTF value averaging 3KB), works just well enough for my purposes. As it turned out, the only RTF codes I was interested in were \fcharset and \*, both of which I used to ignore bits of text I didn’t want in my output; oh, and \par, which denotes the end of a paragraph. I can run this script once a month on the freshly exported data and apply the full weight of Oracle’s analytic capabilities to it.

The code below exemplifies the use of a pipelined function. This is not a good idea, by the way, if you want to use it in regular queries (e.g. from a view); in my case, however, I only wanted to call it from within PL/SQL, and then only once a month. Because of the way parameters work with functions like this, I had to call it with dynamic SQL (EXECUTE IMMEDIATE); there’s a sketch of that at the end of this post.

That was kind of fun, but I’d rather not have to deal with RTF ever again, thank you.

create or replace package myutil_rtf is
  type t_v4000_table is table of varchar2(4000);
  function extract_text (p_recid in number)
  return t_v4000_table pipelined deterministic;
end;
/

create or replace package body myutil_rtf is
  function extract_text (p_recid in number)
    return t_v4000_table pipelined deterministic is

    l_ch varchar2(1);
    l_ctrl varchar2(4000);
    l_line varchar2(4000);
    l_rtf clob;

    --don't output any text between
    --\fcharset and closing }
    l_fcharset boolean := false;

    --increments for each enclosed pair of { }
    --within a discard section
    l_discard number;

  begin

    select rtf_clob into l_rtf
    from rtf_table where recid = p_recid;

    for i in 1..dbms_lob.getlength(l_rtf) loop

      l_ch := substr(l_rtf,i,1);

      if l_ch = '}' then

        if l_fcharset then
          --closing } found; re-enable output
          l_fcharset := false;
          l_line := null;
        end if;

        if l_discard > 0 then
          l_discard := l_discard - 1;
          if l_discard = 0 then
            l_discard := null;
          end if;
        end if;

      elsif l_ch = '{' then

        if l_discard is not null then
          l_discard := l_discard + 1;
        end if;

      elsif l_ch = '\' then

        --controls start with a backslash
        l_ctrl := '\';

      elsif l_ctrl is not null then

        --controls are always ended by some
        --non-alphanumeric character
        if instr('abcdefghijklmnopqrstuvwxyz'
        || '0123456789',lower(l_ch)) > 0 then
          l_ctrl := l_ctrl || lower(l_ch);
        else
          if l_ctrl = '\par' then
            pipe row (l_line);
            l_line := null;
          elsif substr(l_ctrl,1,9) = '\fcharset' then
            l_fcharset := true;
          elsif l_ctrl || l_ch = '\*' then
            --{\* ... } means you can ignore
            --anything between the { }
            if l_discard is null then
              l_discard := 1;
            end if;
          end if;
          l_ctrl := null;
        end if;

      elsif l_ch not in (chr(10), chr(13), '{')
        and not l_fcharset and l_discard is null then

        l_line := l_line || l_ch;

      end if;

    end loop;

    if l_line is not null and not l_fcharset then
      pipe row (l_line);
    end if;

    return;
  end extract_text;
end myutil_rtf;
/

To extract the text from the row with recid = 1:

select column_value line_of_text
      ,rownum line_number
from table(myutil_rtf.extract_text(1));
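
And, as mentioned earlier, calling it from PL/SQL with a variable parameter meant resorting to dynamic SQL. A minimal sketch of how that call might look (the local type and variable names are mine, not from the original script):

declare
  type t_lines is table of varchar2(4000);
  l_lines t_lines;
  l_recid number := 1;
begin
  --bind the record id into the pipelined function call via dynamic SQL
  execute immediate
    'select column_value from table(myutil_rtf.extract_text(:recid))'
    bulk collect into l_lines
    using l_recid;
  for i in 1 .. l_lines.count loop
    dbms_output.put_line(l_lines(i));
  end loop;
end;
/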