Unique constraint WWV_FLOW_WORKSHEET_RPTS_UK violated

If your APEX application import log shows something like this:

...PAGE 73: Transaction Lines Report
declare
*
ERROR at line 1:
ORA-00001: unique constraint (APEX_040200.WWV_FLOW_WORKSHEET_RPTS_UK)
violated
ORA-06512: at "APEX_040200.WWV_FLOW_API", line 16271
ORA-06512: at line 6

(this is on an Apex 4.2.4 instance)

This is due to a Saved Report on an Interactive Report that was included in the export, which conflicts with a different Saved Report in the target instance. The log will, conveniently, tell you which page the IR is on.

The solution for this problem is simple – either:

(a) Export the application with Export Public Interactive Reports and Export Private Interactive Reports set to No;
OR
(b) Delete the Saved Report(s) from the instance you’re exporting from.

You can find all Saved Reports in an instance by running a query like this:

select workspace
      ,application_id
      ,application_name
      ,page_id
      ,application_user
      ,report_name
      ,report_alias
      ,status
from APEX_APPLICATION_PAGE_IR_RPT
where application_user not in ('APXWS_DEFAULT'
                              ,'APXWS_ALTERNATIVE');

You can delete Saved Reports from the Application Builder by going to the page with the Interactive Report, right-clicking on the IR and choosing Edit Saved Reports, then selecting the report(s) and clicking Delete Checked.
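If there are a lot of Saved Reports to remove, you can also delete them programmatically using APEX_UTIL.IR_DELETE_REPORT, which takes the REPORT_ID from the view above. A minimal sketch (the report ID is made up, and note this procedure is intended to be run from within an APEX session for the application concerned):

declare
  -- hypothetical report_id, taken from apex_application_page_ir_rpt
  l_report_id number := 1234567890;
begin
  apex_util.ir_delete_report (p_report_id => l_report_id);
  commit;
end;
/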


SQL Problem Solved: Merge History Tables

Yesterday I published a SQL problem (that I had solved – or at least, thought I had solved!) and invited readers to submit their own solutions.

SPOILER ALERT: if you haven’t had a chance to have a go yourself, go back, don’t read the comments (yet), but see if you can solve it yourself.

I was very pleased to see two excellent submissions, both of which were correct as far as I could tell, and both (in my opinion) sufficiently simple, elegant and efficient. Note that this was not a competition and I will not be ranking or rating them. If you need to solve a similar problem, I would suggest you try all of these solutions out and pick the one that works best for you.

Solution #1

Stew Ashton supplied the first solution which was very simple and elegant:

with dates as (
  select distinct deptid, dte from (
    select deptid, from_date, to_date+1 to_date from t1
    union all
    select deptid, from_date, to_date+1 to_date from t2
  )
  unpivot(dte for col in(from_date, to_date))
)
select * from (
  select distinct a.deptid, dte from_date,
  lead(dte) over(partition by a.deptid order by dte) - 1 to_date,
  data1, data2 from dates a
  left join t1 b on a.deptid = b.deptid and a.dte between b.from_date and b.to_date
  left join t2 c on a.deptid = c.deptid and a.dte between c.from_date and c.to_date
)
where data1 is not null or data2 is not null
order by 1,2;

SQL Fiddle

Stew’s solution does a quick pass over the two tables and generates a “date dimension” table in memory; it then does left joins to each table to generate each row. It may have a disadvantage due to the requirement to make two passes over the tables, which might be mitigated by suitable indexing. As pointed out by Matthias Rogel, this solution can be easily generalised to more than two tables, which is a plus.
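For example, extending it to a hypothetical third table T3 (deptid, from_date, to_date, data3) just means adding a branch to the union and one more left join – a sketch (untested):

with dates as (
  select distinct deptid, dte from (
    select deptid, from_date, to_date+1 to_date from t1
    union all
    select deptid, from_date, to_date+1 to_date from t2
    union all
    select deptid, from_date, to_date+1 to_date from t3 -- hypothetical third table
  )
  unpivot(dte for col in(from_date, to_date))
)
select * from (
  select distinct a.deptid, dte from_date,
  lead(dte) over(partition by a.deptid order by dte) - 1 to_date,
  data1, data2, data3 from dates a
  left join t1 b on a.deptid = b.deptid and a.dte between b.from_date and b.to_date
  left join t2 c on a.deptid = c.deptid and a.dte between c.from_date and c.to_date
  left join t3 d on a.deptid = d.deptid and a.dte between d.from_date and d.to_date
)
where data1 is not null or data2 is not null or data3 is not null
order by 1,2;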

Solution #2

The second solution was provided by Kim Berg Hansen which was a bit longer but in my opinion, still simple and elegant:

with v1 as (
   select t.deptid
        , case r.l
             when 1 then t.from_date
             when 2 then date '0001-01-01'
             when 3 then t.max_to + 1
          end from_date
        , case r.l
             when 1 then t.to_date
             when 2 then t.min_from -1
             when 3 then date '9999-01-01'
          end to_date
        , case r.l
             when 1 then t.data1
          end data1
     from (
      select deptid
           , from_date
           , to_date
           , data1
           , row_number() over (partition by deptid order by from_date) rn
           , min(from_date) over (partition by deptid) min_from
           , max(to_date) over (partition by deptid) max_to
        from t1
     ) t, lateral( 
         select level l
           from dual
         connect by level <= case t.rn when 1 then 3 else 1 end
     ) r
), v2 as (
   select t.deptid
        , case r.l
             when 1 then t.from_date
             when 2 then date '0001-01-01'
             when 3 then t.max_to + 1
          end from_date
        , case r.l
             when 1 then t.to_date
             when 2 then t.min_from -1
             when 3 then date '9999-01-01'
          end to_date
        , case r.l
             when 1 then t.data2
          end data2
     from (
      select deptid
           , from_date
           , to_date
           , data2
           , row_number() over (partition by deptid order by from_date) rn
           , min(from_date) over (partition by deptid) min_from
           , max(to_date) over (partition by deptid) max_to
        from t2
     ) t, lateral( 
         select level l
           from dual
         connect by level <= case t.rn when 1 then 3 else 1 end
     ) r
)
select v1.deptid
     , greatest(v1.from_date, v2.from_date) from_date
     , least(v1.to_date, v2.to_date) to_date
     , v1.data1
     , v2.data2
  from v1
  join v2
      on v2.deptid = v1.deptid
     and v2.to_date >= v1.from_date
     and v2.from_date <= v1.to_date
 where v2.data2 is not null or v1.data1 is not null
 order by v1.deptid
        , greatest(v1.from_date, v2.from_date)

Since it takes advantage of 12c’s new LATERAL feature, I wasn’t able to test it on my system, but I was able to test it on Oracle’s new Live SQL and it passed with flying colours. I backported it to 11g and it worked just fine on my system as well. An advantage of Kim’s solution is that it should only make one pass over each of the two tables.

Oracle Live SQL: merge-history-kimberghansen (this is the 12c version – Oracle login required)

SQL Fiddle (this is my attempt to backport to 11g – you can judge for yourself if I was faithful to the original)
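For reference, the usual way to emulate LATERAL on 11g is a plain join to a fixed-size row generator, moving the correlated limit into the join condition – a sketch of the pattern (not necessarily faithful to my actual backport):

-- generate up to 3 rows per driving row, but only 1 where rn > 1
select t.deptid, t.rn, r.l
from  (select deptid
             ,row_number() over (partition by deptid order by from_date) rn
       from   t1) t
join  (select level l from dual connect by level <= 3) r
on    r.l <= case t.rn when 1 then 3 else 1 end;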

Solution #3

When my colleague came to me with this problem, I started by creating a test suite including as many test cases as we could think of. Developing and testing the solution against these test cases proved invaluable and allowed a number of ideas to be tested and thrown away. Big thanks go to Kim Berg Hansen for providing an additional test case, which revealed a small bug in my original solution.

with q1 as (
select nvl(t1.deptid, t2.deptid) as deptid
      ,greatest(nvl(t1.from_date, t2.from_date), nvl(t2.from_date, t1.from_date)) as gt_from_date
      ,least(nvl(t1.to_date, t2.to_date), nvl(t2.to_date, t1.to_date))            as lt_to_date
      ,t1.from_date AS t1_from_date
      ,t1.to_date   AS t1_to_date
      ,t2.from_date AS t2_from_date
      ,t2.to_date   AS t2_to_date
      ,min(greatest(t1.from_date, t2.from_date)) over (partition by t1.deptid) - 1 as fr_to_date
      ,max(least(t1.to_date, t2.to_date)) over (partition by t2.deptid)        + 1 as lr_from_date
      ,t1.data1
      ,t2.data2
      ,row_number() over (partition by t1.deptid, t2.deptid order by t1.from_date) as rn_fr1
      ,row_number() over (partition by t1.deptid, t2.deptid order by t2.from_date) as rn_fr2
      ,row_number() over (partition by t1.deptid, t2.deptid order by t1.to_date desc) as rn_lr1
      ,row_number() over (partition by t1.deptid, t2.deptid order by t2.to_date desc) as rn_lr2
from   t1
full outer join t2
on     t1.deptid = t2.deptid
and    (t1.from_date between t2.from_date and t2.to_date
        or t1.to_date between t2.from_date and t2.to_date
        or t2.from_date between t1.from_date and t1.to_date
        or t2.to_date between t1.from_date and t1.to_date)
order by 1, 2
)
select deptid
      ,gt_from_date AS from_date
      ,lt_to_date   AS to_date
      ,data1
      ,data2
from   q1
union all
select deptid
      ,t1_from_date as from_date
      ,fr_to_date AS to_date
      ,data1
      ,'' AS data2
from   q1
where  fr_to_date between t1_from_date and t1_to_date
and    rn_fr1 = 1
union all
select deptid
      ,t2_from_date as from_date
      ,fr_to_date AS to_date
      ,'' AS data1
      ,data2
from   q1
where  fr_to_date between t2_from_date and t2_to_date
and    rn_fr2 = 1
union all
select deptid
      ,lr_from_date as from_date
      ,t2_to_date AS to_date
      ,'' AS data1
      ,data2
from   q1
where  lr_from_date between t2_from_date and t2_to_date
and    rn_lr2 = 1
union all
select deptid
      ,lr_from_date as from_date
      ,t1_to_date AS to_date
      ,data1
      ,'' AS data2
from   q1
where  lr_from_date between t1_from_date and t1_to_date
and    rn_lr1 = 1
order by deptid, from_date

SQL Fiddle

This solution is probably the ugliest of the three, and though it only requires one pass over the two tables, it materializes all the data in a temp table so probably requires a fair amount of memory for large data sets.

I’m so glad I posted this problem – I received help in the form of two quite different solutions, which help to show different ways of approaching problems like this. Not only that, but it highlighted the value of test-driven development – instead of groping in the dark to come up with a quick-and-dirty solution (that might or might not work), starting with comprehensive test cases is invaluable for both developing and validating potential solutions.


SQL Problem: Merge record history tables

A colleague asked me if I could help solve a problem in SQL, and I was grateful because I love solving problems in SQL! So we sat down, built a test suite, and worked together to solve it.

I will first present the problem and invite you to have a go at solving it for yourself; tomorrow I will post the solution we came up with. I think our solution is reasonably simple and elegant, but I’m certain there are other solutions as well, some of which may be more elegant or efficient.

I have two “record history” tables, from different data sources, that record the history of changes to two sets of columns describing the same entity. Here are the example tables:

T1 (deptid, from_date, to_date, data1)
T2 (deptid, from_date, to_date, data2)

So T1 records the changes in the column “data1” over time for each deptid; and T2 records the changes in the column “data2” over time for each deptid.

Assumptions/Constraints on T1 and T2:

  • All columns are not Null.
  • All dates are just dates (no time values).
  • All date ranges are valid (i.e. from_date <= to_date).
  • Within each table, all date ranges for a particular deptid are gap-free and have no overlaps.

We want to produce a query that returns the following structure:

T3 (deptid, from_date, to_date, data1, data2)

Where the dates for a particular deptid overlap between T1 and T2, we want to see the corresponding data1 and data2 values. Where the dates do not overlap (e.g. where we have a value for data1 in T1 but not in T2), we want to see the corresponding data1 but NULL for data2; and vice versa.

Below are the setup script and test cases you can use if you wish. Alternatively, you can use the SQL Fiddle here.

create table t1
  (deptid number not null
  ,from_date date not null
  ,to_date date not null
  ,data1 varchar2(30) not null
  ,primary key (deptid, from_date)
  );

create table t2
  (deptid number not null
  ,from_date date not null
  ,to_date date not null
  ,data2 varchar2(30) not null
  ,primary key (deptid, from_date)
  );

--Test Case 1 (4001)
--T1  |-------------||-------------|
--T2         |-------------||-------------||-------------|

insert into t1 values (4001, date'2015-01-01', date'2015-01-31', 'jan data 1');
insert into t1 values (4001, date'2015-02-01', date'2015-02-28', 'feb data 1');
insert into t2 values (4001, date'2015-01-15', date'2015-02-14', 'jan-feb data 2');
insert into t2 values (4001, date'2015-02-15', date'2015-03-14', 'feb-mar data 2');
insert into t2 values (4001, date'2015-03-15', date'2015-04-14', 'mar-apr data 2');

--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4001    01-JAN-2015  14-JAN-2015  jan data 1
--4001    15-JAN-2015  31-JAN-2015  jan data 1      jan-feb data 2
--4001    01-FEB-2015  14-FEB-2015  feb data 1      jan-feb data 2
--4001    15-FEB-2015  28-FEB-2015  feb data 1      feb-mar data 2
--4001    01-MAR-2015  14-MAR-2015                  feb-mar data 2
--4001    15-MAR-2015  14-APR-2015                  mar-apr data 2

--Test Case 2 (4002)
--T1         |-------------||-------------||-------------|
--T2  |-------------||-------------|

insert into t1 values (4002, date'2015-01-15', date'2015-02-14', 'jan-feb data 1');
insert into t1 values (4002, date'2015-02-15', date'2015-03-14', 'feb-mar data 1');
insert into t1 values (4002, date'2015-03-15', date'2015-04-14', 'mar-apr data 1');
insert into t2 values (4002, date'2015-01-01', date'2015-01-31', 'jan data 2');
insert into t2 values (4002, date'2015-02-01', date'2015-02-28', 'feb data 2');

--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4002    01-JAN-2015  14-JAN-2015                  jan data 2
--4002    15-JAN-2015  31-JAN-2015  jan-feb data 1  jan data 2
--4002    01-FEB-2015  14-FEB-2015  jan-feb data 1  feb data 2
--4002    15-FEB-2015  28-FEB-2015  feb-mar data 1  feb data 2
--4002    01-MAR-2015  14-MAR-2015  feb-mar data 1
--4002    15-MAR-2015  14-APR-2015  mar-apr data 1

--Test Case 3 (4003)
--T1  |-------------|
--T2  |-------------|

insert into t1 values (4003, date'2015-01-01', date'2015-01-31', 'jan data 1');
insert into t2 values (4003, date'2015-01-01', date'2015-01-31', 'jan data 2');

--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4003    01-JAN-2015  31-JAN-2015  jan data 1      jan data 2

--Test Case 4 (4004)
--T1  |-------------|
--T2      |-----|

insert into t1 values (4004, date'2015-01-01', date'2015-01-31', 'jan data 1');
insert into t2 values (4004, date'2015-01-10', date'2015-01-20', 'jan data int 2');

--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4004    01-JAN-2015  09-JAN-2015  jan data 1
--4004    10-JAN-2015  20-JAN-2015  jan data 1      jan data int 2
--4004    21-JAN-2015  31-JAN-2015  jan data 1

--Test Case 5 (4005)
--T1      |-----|
--T2  |-------------|

insert into t1 values (4005, date'2015-01-10', date'2015-01-20', 'jan data int 1');
insert into t2 values (4005, date'2015-01-01', date'2015-01-31', 'jan data 2');

--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4005    01-JAN-2015  09-JAN-2015                  jan data 2
--4005    10-JAN-2015  20-JAN-2015  jan data int 1  jan data 2
--4005    21-JAN-2015  31-JAN-2015                  jan data 2

--Test Case 6 (4006)
--T1  ||------------|
--T2  |-------------|

insert into t1 values (4006, date'2015-01-01', date'2015-01-01', 'jan data a 1');
insert into t1 values (4006, date'2015-01-02', date'2015-01-31', 'jan data b 1');
insert into t2 values (4006, date'2015-01-01', date'2015-01-31', 'jan data 2');

--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4006    01-JAN-2015  01-JAN-2015  jan data a 1    jan data 2
--4006    02-JAN-2015  31-JAN-2015  jan data b 1    jan data 2

--Test Case 7 (4007)
--T1  |-------------|
--T2  ||------------|

insert into t1 values (4007, date'2015-01-01', date'2015-01-31', 'jan data 1');
insert into t2 values (4007, date'2015-01-01', date'2015-01-01', 'jan data a 2');
insert into t2 values (4007, date'2015-01-02', date'2015-01-31', 'jan data b 2');

--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4007    01-JAN-2015  01-JAN-2015  jan data 1      jan data a 2
--4007    02-JAN-2015  31-JAN-2015  jan data 1      jan data b 2

--Test Case 8 (4008)
--T1  |-------------|
--T2    |---||---||---|
 
insert into t1 values (4008, date'2015-01-01', date'2015-01-31', 'jan data 1');
insert into t2 values (4008, date'2015-01-09', date'2015-01-17', 'jan data a 2');
insert into t2 values (4008, date'2015-01-18', date'2015-01-26', 'jan data b 2');
insert into t2 values (4008, date'2015-01-27', date'2015-02-04', 'jan-feb data 2');
 
--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4008    01-JAN-2015  08-JAN-2015  jan data 1
--4008    09-JAN-2015  17-JAN-2015  jan data 1      jan data a 2
--4008    18-JAN-2015  26-JAN-2015  jan data 1      jan data b 2
--4008    27-JAN-2015  31-JAN-2015  jan data 1      jan-feb data 2
--4008    01-FEB-2015  04-FEB-2015                  jan-feb data 2

--Test Case 9 (4009)
--T1    |---||---||---|
--T2  |-------------|
 
insert into t1 values (4009, date'2015-01-09', date'2015-01-17', 'jan data a 1');
insert into t1 values (4009, date'2015-01-18', date'2015-01-26', 'jan data b 1');
insert into t1 values (4009, date'2015-01-27', date'2015-02-04', 'jan-feb data 1');
insert into t2 values (4009, date'2015-01-01', date'2015-01-31', 'jan data 2');
 
--Expected Output:
--DEPTID  FROM_DATE    TO_DATE      DATA1           DATA2
--======  ===========  ===========  ==============  ==============
--4009    01-JAN-2015  08-JAN-2015                  jan data 2
--4009    09-JAN-2015  17-JAN-2015  jan data a 1    jan data 2
--4009    18-JAN-2015  26-JAN-2015  jan data b 1    jan data 2
--4009    27-JAN-2015  31-JAN-2015  jan-feb data 1  jan data 2
--4009    01-FEB-2015  04-FEB-2015  jan-feb data 1

EDIT 15/10/2015: added Kim Berg Hansen’s test cases

UPDATE 15/10/2015: solutions (spoiler alert!)


Static File not updating in runtime APEX environment

The UAT environment is a runtime APEX installation (4.2.4.00.08) and all deployments are done via SQL scripts. My application uses a small number of static files that, for convenience, we serve from APEX (at least for now); to deploy changes to these static files, I export f100.sql and static_file.sql from APEX in dev, and we run them in the target environment after setting a few variables, like this:

declare
  v_workspace CONSTANT VARCHAR2(100) := 'MYWORKSPACE';
  v_workspace_id NUMBER;
begin
  select workspace_id into v_workspace_id
  from apex_workspaces where workspace = v_workspace;
  apex_application_install.set_workspace_id (v_workspace_id);
  apex_util.set_security_group_id
    (p_security_group_id => apex_application_install.get_workspace_id);
  apex_application_install.set_schema('MYSCHEMA');
  apex_application_install.set_application_id(100);
end;
/

@f100.sql
@static_file.sql

Many months after this application went live, and after multiple deployments in all the environments, we suddenly had an issue where the static files being served from one instance (UAT) were an older version. The logs showed the correct files had been deployed, and re-deploying into DEV seemed to work fine. I got the DBA to temporarily change the schema password in UAT so I could log in and see what was going on.

When I ran this query in DEV, I got the expected two records:

select * from apex_workspace_files
where file_name in ('myapp.css', 'myapp.js');

When I ran it in UAT, I got four records – two copies of each file, and the BLOB contents showed that the older copies were the ones being served to the clients. I have no idea how the extra copies got created in that environment. It must have been due to a failed deployment but the deployment logs didn’t seem to show any errors or anomalies.
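A quick sanity check for duplicates like this is a simple group-by on the same view:

select file_name, count(*)
from   apex_workspace_files
group by file_name
having count(*) > 1;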

Killing the Zombie Static File

I tried editing the static_file.sql script to remove the files (as below), but it only ever removed the newer copies of the files; re-running it never caused it to drop the old copies.

...
declare
  l_name    varchar2(255);
begin
  l_name := 'myapp.css';
  wwv_flow_html_api.remove_html(
    p_html_name => l_name,
    p_flow_id   => nvl(wwv_flow.g_flow_id, 0) );
end;
/
...

Next thing I tried was something I picked up from here:

NOTE: run this at your own risk! It is not supported by Oracle.

declare
  v_workspace CONSTANT VARCHAR2(100) := 'MYWORKSPACE';
  v_workspace_id NUMBER;
begin
  /* WARNING: DO NOT RUN THIS UNLESS YOU KNOW WHAT YOU ARE DOING */
  select workspace_id into v_workspace_id
  from apex_workspaces where workspace = v_workspace;
  apex_application_install.set_workspace_id (v_workspace_id);
  apex_util.set_security_group_id
    (p_security_group_id => apex_application_install.get_workspace_id);
  delete from wwv_flow_files where filename like 'myapp.%';
  commit;
end;
/

That seemed to do the trick. Thankfully this problem only occurred in a test environment – I would be uncomfortable running this in Prod.


“Smart quotes” showing as “?” in emails

When some of my users were using my system to send emails, they’d often copy-and-paste their messages from their favourite word processor, but when my system sent the emails they’d have question marks dotted around, e.g.

“Why doesn’t this work?”

would get changed to

?Why doesn?t this work??

The simple fix was to detect and replace those fancy-pants quote characters with the equivalent HTML entities, e.g.:

function enc_chars (m in varchar2) return varchar2 is
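  -- note: the chr() values are the decimal forms of the UTF-8 byte
  -- sequences for the curly-quote characters (e.g. 14844060 = E2809C,
  -- i.e. U+201C), so this assumes an AL32UTF8 database character set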
begin
  return replace(replace(replace(replace(m
    ,chr(14844060),'&#8220;')/*left double quote*/
    ,chr(14844061),'&#8221;')/*right double quote*/
    ,chr(96)      ,'&#8216;')/*left single quote*/
    ,chr(14844057),'&#8217;')/*right single quote*/
    ;
end enc_chars;
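For instance, you might pass the message through enc_chars just before handing it to your mail API – a hypothetical example using APEX_MAIL (the addresses and message are made up):

declare
  l_message varchar2(4000) := '...text pasted from a word processor...';
begin
  apex_mail.send(
    p_to        => 'someone@example.com'  -- hypothetical addresses
   ,p_from      => 'noreply@example.com'
   ,p_subj      => 'Test message'
   ,p_body      => enc_chars(l_message)
   ,p_body_html => enc_chars(l_message)
  );
end;
/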

P.S. Stupid wordpress keeps mucking around with my code, trying to replace the html entities with the unencoded versions. In case this doesn’t work, here’s an image of what the above code is supposed to look like:
[image: enc_chars]


Help for your keyboard users

APEX’s Blue Responsive Theme 25 is a great theme for building a user-friendly website, and unlike many other themes which make the item labels clickable to get help, it renders little question-mark icons to the right of each item that has a help message defined.

[image: theme-25-help-icon]

One issue with this design, however, is that a keyboard-savvy user (such as myself) hates to switch between keyboard and mouse – so they Tab between each field as they fill in a form. With this theme, however, those little question-mark icons are in the tab order, so the user has to Tab twice between each field. If they’re not looking at the page they might forget; and if one item doesn’t happen to have a Help icon, they have to remember to only Tab once. All of this adds up to a poor user experience.

To fix this, we tell the browser to move the Help icons out of the tab order – and yet again, jQuery comes to the rescue: we simply pick out all the elements with the class itemHelpButton and set their tabindex to “-1”:

$(".itemHelpButton").attr('tabindex',-1);

Put this code in the page attribute Execute when Page Loads – when the page loads, this will set the tabindex on all the help icons on the page, so now the user can tab through the page without interruption.


Add a Dynamic Total to a Tabular Report

I have a Tabular Report with an editable Amount item. When the page loads, the total amount should be shown below the report; and if the user updates any amount on any row, the total amount should be updated automatically.

Note: this method does not work if you have a tabular report that might have a very large number of records (as it relies on all records being rendered in the page at one time).

1. Make sure the report always shows all the records. To do this, set the Number of Rows and the Maximum Row Count to a large number (e.g. 1000).

2. Add an item to show the total, e.g. P1_TOTAL_AMOUNT. I use a Number field, and add “disabled=true” to the HTML Form Element Attributes so that the user won’t change it.

3. Examine the generated HTML to see what ID is given to the amount fields in the tabular report. In my case, the amount field is rendered with input items with name “f04” and id “f04_0001”, “f04_0002”, etc.

4. Add the following code to the page’s Function and Global Variable Declaration:

function UpdateTotal () {
  var t = 0;
  $("input[name='f04']").each(function() {
    t += parseFloat($(this).val().replace(/,/g,''))||0;
  });
  $s("P1_TOTAL_AMOUNT",t.formatMoney());
}

This strips out any commas from the amounts before parsing them as Floats and adding them to a running total; it finally formats the total using my formatMoney function and updates the total amount item.

5. Add the following to the page’s Execute when Page Loads:

$("input[name='f04']").change(function(){UpdateTotal();});

To prime the total amount field when the page is loaded, I have a Before Header process that calculates the total based on a simple query on the table.
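That process is nothing fancy – something along these lines (the table, column and item names here are made up for illustration):

-- hypothetical Before Header process to prime the total amount item
begin
  select to_char(nvl(sum(amount),0), 'FM999G999G999G999G990D00')
    into :P1_TOTAL_AMOUNT
    from transaction_lines
   where transaction_id = :P1_TRANSACTION_ID;
end;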

Now, in my case I want to have two running totals: one for “Cash” lines and another for “Salary” lines. My tabular report renders a radio button on each record, with which the user can select “Cash” or “Salary”. So instead of just the one total amount field, I have two: P1_TOTAL_CASH and P1_TOTAL_SALARY. The radio buttons have hidden input items holding the value, rendered with id “f05_nnnn” (where nnnn is the row number).

My UpdateTotals function therefore looks like this:

function UpdateTotals () {
  var sal = 0, cash = 0, amt, rownum, linetype;
  $("input[name='f04']").each(function() {
    amt = parseFloat($(this).val().replace(/,/g,''))||0;
    // determine if this is a Cash or Salary line
    rownum = $(this).prop("id").split("_")[1];
    linetype = $("input[id='f05_"+rownum+"']").val();
    if (linetype == "SALARY") {
      sal += amt;
    } else if (linetype == "CASH") {
      cash += amt;
    }
  });
  $s("P52_TOTAL_SALARY",sal.formatMoney());
  $s("P52_TOTAL_CASH",cash.formatMoney());
}

And my Execute when Page Loads has an additional call:

$("input[name='f05']").change(function(){UpdateTotals();});

Now, when the user changes the amounts or changes the line type, the totals are updated dynamically.

EDIT: simplified jquery selectors based on Tom’s feedback (see comments) and use the hidden field for the radio buttons instead of querying for “checked”

[image: tabularreportdynamictotal]

UPDATE: If the tabular form has an “Add Row” button, the above code won’t work on the newly added rows. In this case, the Execute when Page Loads code should be this instead:

$(document).on("change", "input[name='f05']", function(){UpdateTotals();});

Show/Hide Multi-row Delete button for a Tabular Report

I have a standard tabular report with checkboxes on each row, and a multi-record delete button called MULTI_ROW_DELETE.

If the user clicks the button before selecting any records (or if there are no records), they get an error message. Instead, I’d rather hide the button and only show it when they have selected one or more records.

[image: showhidemultirowdelete]

To do this:

1. Edit the MULTI_ROW_DELETE button to have a Static ID (e.g. “MULTI_ROW_DELETE”).

2. Add this function to the page’s Function and Global Variable Declaration:

function ShowHideMultiRowDelete () {
  if ($("input[id^='f01_']:checked").length==0) {
    $x_Hide("MULTI_ROW_DELETE");
  } else {
    $x_Show("MULTI_ROW_DELETE");
  }
}

This looks to see whether any checkboxes are selected: if none are found, it hides the delete button; otherwise it shows it.

3. Add this code to the page’s Execute when Page Loads:

ShowHideMultiRowDelete();
$("input[id^='f01_']").change(function(){ShowHideMultiRowDelete();});
$x_Hide("check-all-rows");

This does the initial check on form load (i.e. it initially hides the button, since none of the checkboxes will be selected yet), and adds a listener to the checkboxes so that if any of them are changed, the function is re-run to show or hide the button as needed.

Unfortunately this doesn’t work with the “all rows” checkbox that was generated by the tabular report, so I’ve added a step to hide that checkbox (“check-all-rows”) until I can find a solution for that.


Autoformat ANY amount item, anywhere

If you’re building a “finance-ey” application you probably have plenty of fields that should show and accept monetary amounts – and quite possibly these items may be implemented in a variety of ways – ordinary apex number items, edit fields in tabular reports, or even dynamically generated items using APEX_ITEM.text.

In my case I had all three, scattered throughout the application. Our users routinely deal with multi-million dollar amounts and they had trouble checking the amounts visually, especially when there are a lot of zeros, e.g. “10000010.00” – so they asked for them all to be formatted with commas, e.g. “10,000,010.00”.

Step 1. SQL number format

So in my first release of the apex application I applied the “FM999G999G999G999G990D00” format to all the money amount items, including in reports etc. The users were reasonably happy with this, but thought it wasn’t working in all cases – e.g. they’d type in a new amount, and the item wouldn’t get formatted until after they Saved the record. This is because the format is only applied when the APEX rendering engine is formatting the page for display – it doesn’t apply it dynamically as the items are changed.
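For example (with NLS settings where the group separator G is a comma and the decimal separator D is a period):

select to_char(10000010, 'FM999G999G999G999G990D00') as amount
from dual;

-- AMOUNT
-- -------------
-- 10,000,010.00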

Step 2. Dynamic Actions using SQL

So I started adding dynamic actions to all the apex items which would call the database to format the amount every time the item was changed. This was ok, but performance wasn’t that great – there was a visible sub-second delay while the page did an ajax call to the database just to do the formatting.
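These were “Set Value” style dynamic actions, running a query along these lines on each change event (a sketch; the item name is hypothetical):

select to_char(
         to_number(replace(:P1_AMOUNT, ','))
        ,'FM999G999G999G999G990D00')
from   dual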

Step 3. Dynamic Actions using Javascript

So then I found a Javascript money formatter and modified my dynamic actions to call that instead. The only downside is that it is not internationally-aware. In my case this application’s target users are all here in Australia, are in the education industry, and they haven’t complained about the lack of international money-formatting support (yet).

Number.prototype.formatMoney = function(decPlaces, thouSep, decSep) {
  var n         = this
     ,decPlaces = isNaN(decPlaces = Math.abs(decPlaces)) ? 2 : decPlaces
     ,decSep    = decSep == undefined ? "." : decSep
     ,thouSep   = thouSep == undefined ? "," : thouSep
     ,sign      = n < 0 ? "-" : ""
     ,i         = parseInt(n = Math.abs(+n || 0).toFixed(decPlaces)) + ""
     ,j         = (j = i.length) > 3 ? j % 3 : 0;
  return sign
    + (j ? i.substr(0, j) + thouSep : "")
    + i.substr(j).replace(/(\d{3})(?=\d)/g, "$1" + thouSep)
    + (decPlaces ? decSep + Math.abs(n - i).toFixed(decPlaces).slice(2) : "");
};

That worked really well, there was no visible delay, and the users were pleased. But I wasn’t satisfied – this trick doesn’t work on the tabular reports or on my APEX_ITEM-generated items.

Step 4. jQuery to the rescue!

So I’ve gone back to the drawing board and decided that I don’t want to have to add Dynamic Actions to each and every item that needs formatting – and that approach doesn’t work anyway for items that are generated dynamically (e.g. when the user adds a record to a tabular report). This formatting should be applied automatically to each item, and the only thing I’m going to add to each item is a CSS class. I needed some jQuery to dynamically bind some javascript to every item that has a particular class, even if the item is added after the page has loaded.

This stackoverflow question came in useful. I added the following to my global javascript file:

$(document).ready(function() {
  $( document ).on('change', '.edit_money', function(){
    var i = "#"+$(this).attr("id")
       ,v = parseFloat($(i).val().replace(/,/g,''))||0;
    $(i).val( v.formatMoney() );
  });
});

All I have to do is add the “edit_money” class to all my money items. For ordinary Apex items, you put the class in the HTML Form Element CSS Classes attribute. For items in a tabular report, the same attribute is under Column Attributes, called Element CSS Classes.

For items generated using APEX_ITEM, I just had to add some extra parameters (p_attributes and p_item_id), e.g.

SELECT APEX_ITEM.text
  (p_idx        => 2
  ,p_size       => 16
  ,p_maxlength  => 22
  ,p_attributes => 'class="edit_money" style="text-align:right"'
  ,p_item_id    => 'f02_'||TO_CHAR(ROWNUM,'fm0000')
  )
...

So, that was a reasonably good couple of hours’ work, I think. I’m not the world’s expert on javascript or jquery by any stretch of the imagination, but I’m quite happy with the result so far. I’m sure there are even better ways of doing this, so if you know of a better way please comment.


Don’t mess with my page, bro

One of my clients reported an issue – they were seeing “Waiting for 1.2.3.4” and a blank screen when they tried to access the Apex web site I’d built for them. They were using Mozilla on a Windows PC, connecting via Vodafone 3G – the problem was consistent, and it went away when they used their ADSL connection.

My initial response was “don’t use Vodafone 3G”, because the problem seemed to be outside of my area. It appears to be a common issue – something that some mobile operators do to reduce image sizes – cf. http://support.mozilla.org/en-US/questions/791180 and http://www.geekstogo.com/forum/topic/277895-suspected-issue-waiting-for-1234-in-firefox-on-at/

My client did a little more digging (he’s a techie as well) and found this: http://stackoverflow.com/questions/4113268/how-to-stop-javascript-injection-from-vodafone-proxy

After reading that I said I’d give it another go and see what could be done. As far as I could see, the only really viable solution is to add the “Cache-Control: no-transform” header to the responses. Since I’m using Apache, to do this I added the following to my apache config as per http://httpd.apache.org/docs/current/mod/mod_headers.html:

Header merge Cache-Control no-transform

That seemed to fix the problem. What this header does is instruct all intermediaries not to modify the content in any way – i.e. don’t try to recompress the images, don’t inject any extra CSS or javascript into the page, nothing. Adding this header does carry the risk that performance on some mobile networks may suffer (because they will no longer compress the images), so it’s now up to me to make sure my pages and images are as small as possible.