Disable IE Compatibility Mode

Most places I’ve worked at allow employees to use any of the major browsers to do their work, but mandate an “SOE” that only supports IE, presumably because that generates the most work for us developers. I’d conservatively estimate that 99% of the rendering bugs I’ve had to deal with are only reproducible in IE. (cue one of the thousands of IE joke images… nah, just do a Google Image search, there are plenty!)

Anyway, we had a number of these rendering issues in Apex on IE8, IE9 and IE10, mainly in edge cases involving some custom CSS or plugins. In some cases I was never able to reproduce the issue until we noticed that the user had inadvertently switched “IE Compatibility Mode” on:

iecompatibility1

We told them to make sure the icon was grey, like this:

iecompatibility2

– and most of the issues went away.

Since there’s nothing in our Apex application that requires compatibility mode, we would rather the option not be available at all. To this end, we simply add this code to all the Page templates in the application:

<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">

This is added just after the <head> tag, like this:

iecompatibility3
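In text form, the top of the page template Header ends up looking something like this (purely illustrative – the substitution strings and the rest of the header will vary with your theme):

<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>#TITLE#</title>
#HEAD#
...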

Now, the compatibility button doesn’t appear at all – one less choice for users and fewer bug reports to deal with:

iecompatibility5

For more information, see this Stack Overflow question and read all the answers. Note that it may be better to add this as a header in the response generated by your web server; in our case it was simpler to just add it into the HTML.


Deploying APEX: showing an “Under Maintenance” web page

systemdown

I’ve added this script to our toolbelt for future upgrades. We have a friendly “System is under maintenance, sorry for any inconvenience” web page that we want to show to users while we run upgrades, and we want it to be shown even if we’re just making database schema changes.

So I took the script from here and adapted it slightly; here’s our version:

declare PRAGMA AUTONOMOUS_TRANSACTION;
  v_workspace CONSTANT VARCHAR2(100) := 'MYSCHEMA';
  v_workspace_id NUMBER;
begin
  select workspace_id into v_workspace_id
  from apex_workspaces where workspace = v_workspace;
  apex_application_install.set_workspace_id (v_workspace_id);
  apex_util.set_security_group_id
    (p_security_group_id => apex_application_install.get_workspace_id);
  wwv_flow_api.set_flow_status
    (p_flow_id             => 100
    ,p_flow_status         => 'UNAVAILABLE_URL'
    ,p_flow_status_message => 'http://www.example.com/system_unavailable.html'
    );
  commit;
end;
/

It uses an autonomous transaction because we want the system to be unavailable immediately for all users while the deployment is running.

Warning: WWV_FLOW_API is an undocumented package so this is not supported.
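As an aside, if your APEX release documents apex_util.set_application_status, that should be a supported way to achieve the same thing. A minimal sketch (the parameter names are from the APEX_UTIL documentation – check them against your version):

declare
  v_workspace CONSTANT VARCHAR2(100) := 'MYSCHEMA';
  v_workspace_id NUMBER;
begin
  select workspace_id into v_workspace_id
  from apex_workspaces where workspace = v_workspace;
  apex_util.set_security_group_id (p_security_group_id => v_workspace_id);
  -- documented API (availability depends on your APEX version)
  apex_util.set_application_status
    (p_application_id     => 100
    ,p_application_status => 'UNAVAILABLE_URL'
    ,p_unavailable_value  => 'http://www.example.com/system_unavailable.html'
    );
  commit;
end;
/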

The opposite script to make the application available again is:

declare PRAGMA AUTONOMOUS_TRANSACTION;
  v_workspace CONSTANT VARCHAR2(100) := 'MYSCHEMA';
  v_workspace_id NUMBER;
begin
  select workspace_id into v_workspace_id
  from apex_workspaces where workspace = v_workspace;
  apex_application_install.set_workspace_id (v_workspace_id);
  apex_util.set_security_group_id
    (p_security_group_id => apex_application_install.get_workspace_id);
  wwv_flow_api.set_flow_status
    (p_flow_id             => 100
    ,p_flow_status         => 'AVAILABLE'
    );
  commit;
end;
/

However, if we run the f100.sql script to deploy a new version of the application, we don’t need to run the “set available” script since the redeployment of the application (which would have been exported in an “available” state already) will effectively make it available straight away.
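So a typical deployment run ends up looking something like this (the script names here are just placeholders for our own files):

-- show the maintenance page (the script above)
@set_unavailable.sql
-- run the schema changes / data migrations
@schema_changes.sql
-- import the new version of the application (exported in an "available" state)
@f100.sql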


Estimating APEX Development

You’ve finished the design for an Apex application, and the manager asks you “when will you have it ready to test?” You resist the temptation to respond snarkily with “how long is a piece of string?” – which, by the way, is often the only appropriate answer if they ask for an estimate before the design work has started.

chuck-norris-knows-how-long-a-piece-of-string-is-thumb

Since you have a design and a clear idea of exactly what this application will do, you can build a reasonable estimate of development time. The starting point is to break the design down into small, discrete modules of work, where each chunk is something you can realistically estimate to take between half a day and a maximum of three days. Where a chunk involves something you haven’t done before, you can reduce uncertainty by creating a small Proof-of-Concept application or code snippet to see how the abstract ideas in the design might work in reality. This is where Apex comes in handy – by the time the design is complete you’ll have a database schema (you created the schema in a tool like SQL Developer, didn’t you – so you can generate the DDL for that in a jiffy), and you can just point Apex at that and use the New Application wizard to create a simple application.

I plan all my projects using Trello, recording estimates using the Scrum for Trello extension. I don’t know much about the “Scrum” method, but this extension allows me to assign “points” to each task (the blue numbers on each card, with a total at the top of each list). I used to use a 1 point = 1 day (8 hours) convention, but I found that most tasks were finer-grained than that (I was assigning 0.5 and 0.25 points to most of them), so I’ve now switched to a convention of 1 point = 1 hour (more or less). In other words, I’d report my estimates with the assumption that my “velocity” would be 8 points per day.

Trello-SUM

(note: the blue numbers are the Scrum for Trello points, and the orange numbers are simply the number of cards in the list)

My points system looks roughly like this:

  • Simple report, form or process = 4 points
  • Complex report or form or process = 8 points
  • Very complex form or process = 24 points
  • Simple bug fix / tweak = 1 point
  • Complex bug fix / enhancement = 2 to 8 points depending on complexity

In addition to that, I keep a daily diary of work done in Evernote which has allowed me to do what I’ve wanted to do for a while now: measure my actuals vs. estimates for my projects, in order to calibrate my estimates for future projects. I made up a quick spreadsheet showing the development work for two projects, showing the original estimate, the start and finish dates, and actual development days worked (accurate roughly to the nearest half day, not counting non-development time like meetings and other project work). This allows me to see how my actual development time compares to my original estimates.

SAM Development (16 Jun 2014 to 11 Dec 2014):

  • Estimate: 715 points (“18 weeks”)
  • Actual: 103.5 days (21 weeks)
  • Avg. points per day (“velocity”): 6.9

SAM Support (12 Dec 2014 to 29 Jul 2015):

  • Estimate: 460 points (“12 weeks”)
  • Actual: 64.5 days (13 weeks)
  • Avg. points per day (“velocity”): 7.1

SUM Development (4 Jun 2015 to present):

  • Estimate: 238 points (“6 weeks”)
  • Actual: 31 days (6 weeks)
  • Avg. points per day (“velocity”): 7.7

Details: EstimatesCalibration.xlsx

Since my reported estimates were roughly 8 points = 1 day, it seems I tend to underestimate. It may seem that my estimates are getting better (since my Points per Day is approaching 8), but that trend is more likely a result of SUM involving a lot less uncertainty and risk and being a smaller project overall. SAM was in a new environment and new client, whereas SUM is merely an additional application in the same environment for the same client. I was also able to re-use or adapt a lot of infrastructure-type code and Apex components from SAM in SUM.

The other thing that I can see from the details is that my “velocity” (points per day) is higher in the earlier stages of development, probably because I have plenty of work planned out, and I can work for several days in a row, uninterrupted by meetings and other distractions. In later stages, my attention gets divided by bug fixes, enhancement requests, meetings, doing design work for the next project, and general waiting-for-other-people-to-do-their-jobs.

For my next project I’ll estimate using the same points system as above, but factor in a velocity of 7 points per day for the estimates I report – for example, an estimate of 238 points would be reported as roughly 34 working days, or about 7 weeks.

Do you have a system of your own you use to create estimates and measure their accuracy? If not, why not try this for yourself? Do you keep track of your estimates and progress? If not, I recommend you start 🙂


Tabular Form – set default values with jQuery

tabular-form-default-jquery4

I have a simple tabular form with a numeric “sort order” column. I want the value of this column to be defaulted automatically, based on the maximum value of the other records on the screen. Unfortunately the built-in Apex default type for columns in a tabular form can only be based on an Item, or a PL/SQL expression or function. I didn’t want to make a database call to get the maximum value, because the user may have added multiple records which have not yet been saved to the database.

I tried basing the default on a hidden page item which I kept updated as values were entered, but it seems the tabular form only reads the item’s value on page load and doesn’t re-examine it when you click the “Add” button. Instead, I had to turn to javascript and jQuery to get the job done.

1. Add a class to the tabular form column by setting the Element CSS Classes column attribute (I’ve used “sortorder” as the class name).

tabular-form-default-jquery2

2. Add a javascript function to the page that searches for all the “sortorder” items, calculates the maximum value and adds 10, and assigns it to the last sortorder item on the page.

function setNextSortOrder() {
  var highest=0;
  $(".sortorder").each(function(){
    if (this.value != "") {
      highest=Math.max(highest,parseFloat(this.value));
    }
  });
  $(".sortorder").last().val(highest+10);
}

tabular-form-default-jquery1

3. Modify the javascript for the “Add” button to call the setNextSortOrder function immediately after adding a row.

javascript:{apex.widget.tabular.addRow();setNextSortOrder();}

tabular-form-default-jquery3

It’s a bit hackish and might not work correctly in some cases but it’ll do for now.


APEX CSV Import: “Data Loading Failed”

If you are using the APEX built-in Data Loading feature to allow your users to upload CSV files, you may have encountered this error.

Data-Loading-Failed
(Note: the error may appear differently in your application as I have built a custom error handling function)

It’s not a particularly useful message, and the logs don’t seem to shed much light on the problem either – reporting only the following:

DATA_LOAD - Final collection is created
...Execute Statement: select 1 from "DEMO"."MY_TABLE" where "RECORD_ID" = :uk_1
Add error onto error stack
...Error data:
......message: Data Loading Failed
......additional_info: ORA-01403: no data found
...
......ora_sqlerrm: ORA-01403: no data found
......error_backtrace:
  ORA-06512: at "APEX_040200.WWV_FLOW_DATA_UPLOAD", line 4115
  ORA-06512: at "APEX_040200.WWV_FLOW_PROCESS_NATIVE", line 213
  ORA-06512: at "APEX_040200.WWV_FLOW_PROCESS_NATIVE", line 262
  ORA-06512: at "APEX_040200.WWV_FLOW_PLUGIN", line 1808
  ORA-06512: at "APEX_040200.WWV_FLOW_PROCESS", line 453

After some trial and error I tracked down one potential cause of this error, so I thought I’d share it in case it happens again – no doubt I’ll come across it later, having forgotten what the solution was, and find this article.

In my case (APEX 4.2.4), the problem was caused by an invalid entry in the Column Name Aliases list of values. I was using a custom List of Values so that alternative names for the columns would be mapped automatically, without the user having to select them every time. To do this, I had to edit the List of Values directly to add the alternative names; but I had mistyped one of the Return Values, which must map exactly to a real column name on the target table (for example, an alias entry might have Display Value “CUSTOMER NO” and Return Value “CUSTOMER_ID” – the Return Value has to match the column name exactly). Whenever I picked this column for an import, I’d get the “Data Loading Failed” error message. Correcting the return value resolved the issue.

In order to stop this happening again, I added the following check to my Apex QA script (this is run whenever the application is deployed):

PROMPT Invalid dataload column mappings (expected: none)
SELECT REPLACE(lt.owner,'#OWNER#',USER) AS owner
      ,lt.table_name
      ,le.return_value  AS target_column_not_found
      ,le.list_of_values_name
      ,le.display_value AS col_alias
      ,lt.application_id
      ,lt.application_name
      ,lt.name AS dataload_definition
FROM   apex_appl_load_tables lt
JOIN   apex_application_lov_entries le
ON     le.lov_id = lt.column_names_lov_id
WHERE NOT EXISTS (
  SELECT NULL
  FROM   all_tab_columns tc
  WHERE  tc.owner = REPLACE(lt.owner,'#OWNER#',USER)
  AND    tc.table_name = lt.table_name
  AND    tc.column_name = le.return_value)
ORDER BY lt.name, le.display_sequence;

If the above query returns any rows, those aliases point to columns that don’t exist on the target table and will need to be fixed.


Remove punctuation from string using Javascript

I’m a morning person, and my mind is usually sharpest on Monday or Tuesday mornings, so these are the best times for me to work on fiddly javascript stuff. Today was one of those mornings and here are the results, just in case I want to refer back to them later on.

I had many items dotted around an Apex application where the user is allowed to enter “Codes” – values that must be uppercase and contain no spaces or other punctuation characters, except that underscores (_) are allowed.

To make things easier for the user, I wanted the page to automatically strip these characters out when they exit the field, instead of just giving validation errors (Note: I still included the validations, but the javascript just makes the process a bit smoother for the user doing the data entry).

My APEX application already has a global .js file that is loaded with each page, so all I had to do was add the following code to it:

function cleanCode (c) {
  return c.replace(/[^A-Za-z0-9_]/g,"");
}

$(document).ready(function() {

  //automatically remove non-code characters from
  //"edit_code" class
  $( document ).on('change', '.edit_code', function(){
    var i = "#"+$(this).attr("id");
    $(i).val( cleanCode($(i).val()) );
  });

});

EDIT: greatly simplified regexp based on the excellent contribution by Jacopo 🙂
EDIT #2: corrected, thanks to Sentinel
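Side note: the codes in my case also have to be uppercase – I’ve left that to the validations, but if you wanted the javascript to enforce it as well, a small variation on the function would be:

function cleanCode (c) {
  // strip disallowed characters, then force uppercase
  return c.replace(/[^A-Za-z0-9_]/g,"").toUpperCase();
}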

Finally, on each “Code” page item, I set the following attribute (or append, if other classes have already been added):

HTML Form Element CSS Classes = edit_code

For code items within a tabular form, I set the following column attribute:

Element CSS Classes = edit_code


Static File not updating in runtime APEX environment

The UAT environment is a runtime APEX installation (4.2.4.00.08) and all deployments are done via SQL scripts. My application uses a small number of static files that for convenience we are serving from APEX (at least for now); to deploy changes to these static files, I export f100.sql and static_files.sql from APEX in dev and we run them in the target environment after setting a few variables, like this:

declare
  v_workspace CONSTANT VARCHAR2(100) := 'MYWORKSPACE';
  v_workspace_id NUMBER;
begin
  select workspace_id into v_workspace_id
  from apex_workspaces where workspace = v_workspace;
  apex_application_install.set_workspace_id (v_workspace_id);
  apex_util.set_security_group_id
    (p_security_group_id => apex_application_install.get_workspace_id);
  apex_application_install.set_schema('MYSCHEMA');
  apex_application_install.set_application_id(100);
end;
/

@f100.sql
@static_file.sql

Many months after this application went live, and after multiple deployments in all the environments, we suddenly had an issue where the static files being served from one instance (UAT) were an older version. The logs showed the correct files had been deployed, and re-deploying into DEV seemed to work fine. I got the DBA to temporarily change the schema password in UAT so I could log in and see what was going on.

When I ran this query in DEV, I got the expected two records:

select * from apex_workspace_files
where file_name in ('myapp.css', 'myapp.js');

When I ran it in UAT, I got four records – two copies of each file, and the BLOB contents showed that the older copies were the ones being served to the clients. I have no idea how the extra copies got created in that environment. It must have been due to a failed deployment but the deployment logs didn’t seem to show any errors or anomalies.
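A quick way to spot this sort of thing is to look for duplicate file names in the same view, e.g.:

select file_name, count(*)
from apex_workspace_files
group by file_name
having count(*) > 1;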

Killing the Zombie Static File

I tried editing the static_file.sql script to remove the files (as below), but it only ever removed the new copies of the files; re-running it never dropped the old copies.

...
declare
  l_name    varchar2(255);
begin
  l_name := 'myapp.css';
  wwv_flow_html_api.remove_html(
    p_html_name => l_name,
    p_flow_id   => nvl(wwv_flow.g_flow_id, 0) );
end;
/
...

Next thing I tried was something I picked up from here:

NOTE: run this at your own risk! It is not supported by Oracle.

declare
  v_workspace CONSTANT VARCHAR2(100) := 'MYWORKSPACE';
  v_workspace_id NUMBER;
begin
  -- *** WARNING: DO NOT RUN THIS UNLESS YOU KNOW WHAT YOU ARE DOING ***
  select workspace_id into v_workspace_id
  from apex_workspaces where workspace = v_workspace;
  apex_application_install.set_workspace_id (v_workspace_id);
  apex_util.set_security_group_id
    (p_security_group_id => apex_application_install.get_workspace_id);
  delete from wwv_flow_files where filename like 'myapp.%';
  commit;
end;
/

That seemed to do the trick. Thankfully this problem only occurred in a test environment – I would be uncomfortable running this in Prod.


Make Tabular Form Conditionally Read-only

If you decide to use an editable tabular form to present a number of records for viewing and editing, but some of your users are only allowed to view the data, not edit it, you’d think you could set the “Readonly” condition on the region; but this condition is only applied to any extra region items you add, not to the editable items within the report itself.

tabular-form-readonly1

Here’s my tabular form, with the records still editable:

tabular-form-readonly3

One way to get around this is to have two separate report regions on the page – one is the editable tabular report, the other is an ordinary standard report that doesn’t have any of the edit capabilities – and use conditions to hide one or the other depending on the user’s authorisation.

Another way is to use conditions and jQuery to make all the items in the tabular form readonly:

1. Put a condition on all the buttons (e.g. “Add Row”, “Delete”, “Save”, etc.) so they are not shown if the user doesn’t have edit privilege.

2. Put the same condition on the Multi-Row processes so that they will not run if the user doesn’t have edit privilege.

3. Set the static ID on the region so jquery can find it:

tabular-form-readonly2

4. Add a Dynamic Action to make all the input items within that region disabled:

Event: Page Load

Authorization Scheme: {Not Editor} (this is just an example where I have an Authorization scheme called “Editor”; alternatively you could set a Condition instead)

True Action: Execute Javascript Code

$("#linesreport input, #linesreport select").prop("disabled",true)

Now, when the page loads, if the user doesn’t have edit privilege the items are rendered readonly, e.g.:

tabular-form-readonly4

There are other variations on this theme – e.g. we could target the jQuery expression at just the text inputs while still allowing the user to use the checkboxes (say, if there were some action we still wanted to allow); see the snippet below. Of course, if I wanted to hide the checkboxes completely, I’d just put the authorization on the [row selector] column in the tabular report definition.
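For instance, a text-inputs-only version of the dynamic action code might look like this (illustrative – adjust the selectors and the region’s static ID to suit your page):

$("#linesreport input:text, #linesreport select").prop("disabled",true)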


“Smart quotes” showing as “?” in emails

When some of my users were using my system to send emails, they’d often copy-and-paste their messages from their favourite word processor, but when my system sent the emails they’d have question marks dotted around, e.g.

“Why doesn’t this work?”

would get changed to

?Why doesn?t? this work??

The simple fix was to detect and replace those fancy-pants quote characters with the equivalent HTML entities, e.g.:

function enc_chars (m in varchar2) return varchar2 is
begin
  return replace(replace(replace(replace(m
    ,chr(14844060),'&#8220;')/*left double quote*/
    ,chr(14844061),'&#8221;')/*right double quote*/
    ,chr(14844056),'&#8216;')/*left single quote*/
    ,chr(14844057),'&#8217;')/*right single quote*/
    ;
end enc_chars;

P.S. Stupid wordpress keeps mucking around with my code, trying to replace the html entities with the unencoded versions. In case this doesn’t work, here’s an image of what the above code is supposed to look like:
enc_chars
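For illustration, a quick test of the function (this assumes an AL32UTF8 database character set, so the chr() values above correspond to the UTF-8 encodings of the quote characters):

select enc_chars('“Hello,” she said. “Don’t you work?”') as result from dual;
-- expected: &#8220;Hello,&#8221; she said. &#8220;Don&#8217;t you work?&#8221;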


Review all item help texts

The business analyst or QA wants to check all the help texts for all items in your apex application – don’t force them to navigate to each page and click on the labels, one by one; instead, give them a spreadsheet to review at their leisure.

Method 1: use the APEX data dictionary viewer

1. Open your application in the APEX application builder

2. Utilities -> Application Express Views

3. Choose APEX Application Page Items

4. Include PAGE_ID, PAGE_NAME, REGION, ITEM_NAME, LABEL, DISPLAY_AS, ITEM_HELP_TEXT

5. Click Filter >

6. Select APPLICATION_ID = <your app id>

7. Select ITEM_HELP_TEXT IS NOT NULL

8. Click Results >

9. Click Download

Method 2: query the data dictionary directly using your tool of choice

select page_id, page_name, region, item_name, label, display_as, item_help_text
from apex_application_page_items
where application_id = :my_app_id and item_help_text is not null
order by page_id, region, display_sequence;
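If the reviewer wants an actual file to open in their spreadsheet program, recent versions of SQL*Plus (12.2 onwards) and SQLcl can spool the same query straight to CSV – a quick sketch (the file name and application ID are just examples):

set markup csv on
spool item_help.csv
select page_id, page_name, region, item_name, label, display_as, item_help_text
from apex_application_page_items
where application_id = 100 and item_help_text is not null
order by page_id, region, display_sequence;
spool off
set markup csv off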