Got a lot of APEX applications, and/or a schema with lots of objects?
Not sure exactly what database objects are used by which application?
Not sure dropping a particular schema object might break something in your application?
Not sure if all the SQL and PL/SQL in every page of your application still runs without error?
If your answer to any of the above is “Yes”, you may be interested in a new API that has been added in APEX 24.1.
You can use this API to scan your application for references to schema objects, whether they appear in region sources, SQL queries, PL/SQL processes, or even plugins. Run this in your workspace, passing the ID of the application you want to scan:
begin
    apex_app_object_dependency.scan(p_application_id => :app_id);
end;
This scans through the whole application, generating a small temporary procedure to test each schema object name, SQL query, or PL/SQL fragment. As it goes, it checks that the code compiles without error; if it does, it saves the list of dependencies detected by the database, including tables, views, stored functions and procedures, packages, and synonyms. The temporary procedure is dropped at the end.
Depending on how big your application is, the scan may take some time to complete (e.g. 30 to 60 seconds) due to the time required to compile and analyze each temporary procedure. If you find it gets stopped prematurely due to a timeout error, you can run it in the background (for example, I like to use an “Execution Chain” process in an APEX application with “Run in Background” enabled).
Note that none of your application code is actually executed, so there should be no side effects of running the scan. However, if your database has any DDL triggers, they may fire as the temporary procedure is created and dropped.
Viewing the Scan Results
Once the scan is complete, you can review the results at your leisure by querying these views:
APEX_USED_DB_OBJECTS
APEX_USED_DB_OBJECT_COMP_PROPS
APEX_USED_DB_OBJ_DEPENDENCIES
APEX_USED_DB_OBJECTS
This lists each schema object that is used at least once in your application.
select
    referenced_type, referenced_owner,
    referenced_name, referenced_sub_name,
    usage_count
from apex_used_db_objects
where application_id = :app_id;
Note that “USAGE_COUNT” is the number of distinct component properties that refer to the schema object; if a single component (e.g. a Process) refers to an object multiple times, it will only count as one usage.
If your database package is compiled with PL/Scope enabled, the dependency analysis will also report fine-grained dependencies on the functions, procedures, and other components within the database package.
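For example, you could recompile a package with PL/Scope identifier collection enabled before running the scan (my_package here is just a placeholder):

alter session set plscope_settings = 'IDENTIFIERS:ALL';
alter package my_package compile;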
APEX_USED_DB_OBJECT_COMP_PROPS
This lists each component property in your application that references at least one schema object.
select
    page_id,
    component_type_name, component_display_name,
    property_group_name, property_name
from apex_used_db_object_comp_props
where application_id = :app_id;
You can also include the column CODE_FRAGMENT to show the object name, SQL or PL/SQL that was analyzed.
If a component property cannot be analyzed due to a compile error (e.g. if an expected database object is missing) the same view will tell you what the compile error was, which may help you to determine what’s gone wrong.
select
    page_id,
    component_type_name, component_display_name,
    property_group_name, property_name,
    code_fragment, error_message
from apex_used_db_object_comp_props
where application_id = :app_id
and error_message is not null;
APEX_USED_DB_OBJ_DEPENDENCIES
This is the complete report, showing, for each component property, all the database objects it refers to.
select
    page_id,
    component_type_name, component_display_name,
    property_group_name, property_name,
    code_fragment,
    referenced_type, referenced_owner,
    referenced_name, referenced_sub_name
from apex_used_db_obj_dependencies
where application_id = :app_id;
Using the API
The results of a scan will be retained until the application is re-scanned. You can scan any number of applications in your workspace and analyze the results all together if you need. If an application is modified, you can re-scan it to refresh the report.
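For example, you could scan every application in your workspace in one go; a quick sketch using the same API:

begin
    -- scan each application visible to the current workspace
    for app in (
        select application_id
        from apex_applications
    ) loop
        apex_app_object_dependency.scan(p_application_id => app.application_id);
    end loop;
end;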
You may find it useful, so I encourage you to give it a try. If you do, please note a few caveats:
The documentation for this API is not yet available, but is being worked on.
When APEX is upgraded to a new version, all report results are wiped. You can then re-scan your applications to get up-to-date results.
The reports do not include recursive dependencies – e.g. if your application refers to a view, the report will not list the underlying tables of the view.
If the application includes any plugins, the dependencies report will include references to some internal plugin-related APIs even if your code doesn’t directly reference them.
If your application executes any dynamic SQL or PL/SQL (e.g. using “execute immediate”), any dependencies arising from the dynamic code will not be reported.
There are some component properties that are not included in the scan, such as the column names in a report (however, the data source for the region is scanned).
In spite of the caveats, I’m sure there are quite a few ways this new API will prove useful. We expect it will be further improved in future releases, including being integrated into the APEX Application Builder.
Wherein I natter on about gear and how I choose what to buy. Perhaps with a point to make about software development.
At least once a year we pack the Prado with a tent, gear and food and go camping with a group of friends. For a few days to a week or so, we unplug, relax and enjoy God’s creation somewhere in our vast state of Western Australia.
I’m a frugal person, so at first we collected only the bare essentials, and got a tent on special for $100, big enough to fit the whole family. Every time we’d go camping we’d think about what worked well last time, and what could be improved, and perhaps get some extra gear or replace something to make things a bit easier. Got a 4WD vehicle so we could go to more places. Got a fast frame tent to reduce the time to set up (this makes a huge difference when we rock up to the site with 5 minutes of light left and three hungry kids). Bought a collapsible basin and bucket which allowed us to fit more gear in. Got some hardy camping boxes for gear. A waterproof rooftop bag for the car which fits all our clothing, pillows and sleeping bags. Ditched the self-inflating mattress that got all warped and took up heaps of space, for some cheap inflatable ones that would only last a year or so but pack down very tightly.
Improvements to the car have also been made. Instead of spending thousands of dollars getting a custom set up done, while not really knowing exactly how we want it set up, we have gradually added small upgrades a bit at a time. Auxiliary battery installed with an isolator so that our fridge and chargers would run without draining the starter battery. My friend installed some power points in the boot so I could run the fridge there, as well as some USB points for charging phones. This year he upgraded my setup to a DC/DC charger which has improved things a lot, and I got a folding solar panel so that I can keep things running (as long as there’s sunlight). Ran two fridges, one as a freezer and one as a fridge – and learned that my aux battery doesn’t have the capacity to keep them both running well, especially in high temperatures. So now I have an idea of what sort of fridge/freezer combo I’d like, and what sort of power usage efficiency I need to be looking for in such a unit.
Early on we were short on funds and we did everything “on the cheap”. This required being very selective about what things we bought, looking for bargains, and buying some things secondhand. Over time we got better at managing our finances, got our spending under control – and so were able to afford the occasional big upgrade when it seemed worthwhile. Instead of just buying cheap gear that frustrates us for a couple of years before they break, we can invest in higher quality products that last many years. Our spending is reasonable and intentional.
Software development works well when done this way, too. Sometimes it’s a good idea to quickly throw together a rough solution to get the job done. It costs only a small amount of time and gives us the opportunity to see if it really works: where the rough edges are, what areas need work, and what features were never really needed in the first place. Later, we can throw part of it out and replace it with a more expensive component (expensive, as in taking a longer period of time and perhaps more manpower to build).
The “big bang” approach never really works in my opinion. It’s so tempting to want to divert down the “tear it all down and rebuild from scratch” road, especially when in the depths of trying to analyse and understand some complex behaviour arising from code that some developer (probably myself) wrote many months ago. You want to throw it all out and start again because that’s a lot more fun. Don’t believe it!
Any non-trivial project, whether you are on your own or in a team, will have many issues that are difficult to resolve, many of which could have been resolved early on if a small prototype had been built first. This complexity is a simple consequence of the fact that every non-trivial system is composed of multiple smaller systems which need to work together; there are exponentially more ways for these mini-systems to interact and fail than you have consciously considered or tested.
I’m currently several years into a large project that is being used in production but still has a long list of features yet to be built. It started with a small prototype, itself an adaptation of a much older, smaller application. The prototype involved a few changes, improvements and refactorings, but it was still recognizable from its progenitor. This prototype became a system that was used worldwide with great success, though not without some internal issues that made it difficult to extend and adapt.
These were architectural design issues that required a major redesign – but instead of building a whole new system “from scratch”, we designed the new system with the prototype as a basis. The new data model was an evolution of the original rather than being completely new. In some ways it might have been nicer to start with a completely new data model, but the approach we took meant that a lot of the code from the original simply needed to be adapted to the changes, and meant that we could get our beta version out for testing and evaluation quicker.
Several years later, we have gone back and rewritten, refactored, or completely replaced large portions of the code; I suspect that very few lines of code, if any, have survived from the original source. We’ve changed approaches, technologies and frameworks gradually, replacing one thing at a time. This has not always been smooth; sometimes edge cases and smaller features are forgotten or lost in a rewrite. On the whole, however, we have maintained continuity, and releasing the product in small(ish) releases has been successful and generally been received positively.
Recently we went on a 4WD trip in my home state, Western Australia. The photos in this post were from that trip.
Quite often I need to export some data from one system, such as system setup metadata or preferences, that needs to be included in a repository and imported when the application is installed elsewhere.
I might export the data in JSON or CSV or some other text format as a CLOB (character large object) variable. I then need to wrap this in suitable commands so that it will execute as a SQL script when installed in the target system. To do this I use a simple script that takes advantage of the APEX_STRING API to split the CLOB into chunks and generate a SQL script that will re-assemble those chunks back into a CLOB on the target database, then call a procedure that will process the data (e.g. it might parse the JSON and insert metadata into the target tables).
This will work even if the incoming CLOB has lines that exceed 32K in length, e.g. a JSON document that includes embedded image data encoded in base 64, or documents with multibyte characters.
This is clob_to_sql_script:
function clob_to_sql_script (
    p_clob           in clob,
    p_procedure_name in varchar2,
    p_chunk_size     in integer := 8191
) return clob is
    -- Takes a CLOB, returns a SQL script that will call the given procedure
    -- with that clob as its parameter.
    l_strings apex_t_varchar2;
    l_chunk   varchar2(32767);
    l_offset  integer;
begin
    apex_string.push(
        l_strings,
        q'[
declare
    l_strings apex_t_varchar2;
    procedure p (p_string in varchar2) is
    begin
        apex_string.push(l_strings, p_string);
    end p;
begin
]');
    while apex_string.next_chunk (
        p_str    => p_clob,
        p_chunk  => l_chunk,
        p_offset => l_offset,
        p_amount => p_chunk_size )
    loop
        apex_string.push(
            l_strings,
            q'[p(q'~]'
            || l_chunk
            || q'[~');]');
    end loop;
    apex_string.push(
        l_strings,
        replace(q'[
    #PROC#(apex_string.join_clob(l_strings));
end;
]',
        '#PROC#', p_procedure_name)
        || '/');
    return apex_string.join_clob(l_strings);
end clob_to_sql_script;
Note that the default chunk size is 8,191 characters, which is the safe limit for multi-byte characters. You can choose a smaller chunk size if you want, although if the incoming CLOB is very large, the smaller the chunk size, the bigger the expanded SQL script will be.
A simple test case will demonstrate what it will do:
declare
    l_input  clob;
    l_output clob;
begin
    l_input := q'[
{
"data": "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."
}
]';
    l_output := clob_to_sql_script(
        p_clob           => l_input,
        p_procedure_name => 'mypackage.import',
        p_chunk_size     => 60 );
    dbms_output.put_line( l_output );
end;
/
The above script would output this:
declare
    l_strings apex_t_varchar2;
    procedure p (p_string in varchar2) is
    begin
        apex_string.push(l_strings, p_string);
    end p;
begin
p(q'~
{
"data": "Lorem ipsum dolor sit amet, consectetur adip~');
p(q'~iscing elit, sed do eiusmod tempor incididunt ut labore et d~');
p(q'~olore magna aliqua. Ut enim ad minim veniam, quis nostrud ex~');
p(q'~ercitation ullamco laboris nisi ut aliquip ex ea commodo con~');
p(q'~sequat. Duis aute irure dolor in reprehenderit in voluptate ~');
p(q'~velit esse cillum dolore eu fugiat nulla pariatur. Excepteur~');
p(q'~ sint occaecat cupidatat non proident, sunt in culpa qui off~');
p(q'~icia deserunt mollit anim id est laborum."
}
~');
    mypackage.import(apex_string.join_clob(l_strings));
end;
/
Recently I’ve been reviewing and updating my knowledge of APEX security, especially protection from URL tampering. I’ve read the documentation, a number of blogs, and heard from people with experience in the field such as Lino. By default, when you create a new application in APEX you get the following security settings set automatically, which is a good start:
Application Session State Protection is Enabled.
Each page has Page Access Protection set to Arguments Must Have Checksum.
Each Application Item has Protection Level set to Restricted – May not be set from browser.
Each Primary Key Item* created by a wizard has Protection Level set to Checksum Required – Session Level.
(* that is, any item mapped from a table column that is, or forms part of, a Primary Key constraint).
These default settings are considered best practice. If you change these, it becomes your responsibility to ensure that your application is protected against security vulnerabilities from URL tampering.
For page items, however, the Protection Level defaults to Unrestricted. This is ok for Form items because the page fetch process will set their values on page load, rendering any attempt at URL tampering ineffective.
For non-form page items, unless the Page Access Protection is relaxed (Unrestricted), leaving items unrestricted is safe since URL tampering is blocked for the entire page anyway. At runtime, if a malicious visitor tries to modify the item value via the URL, they will get the error “No checksum was provided to show processing for a page that requires a checksum when one or more request, clear cache, or argument values are passed as parameters.“
However, what if a developer later needs to change the page to Unrestricted? They may unwittingly introduce a potential URL tampering issue because one or more items were not protected.
UPDATE: in fact, this applies even if it’s a different page in the same application. As Martin D’Souza pointed out a decade ago, URL tampering for any item in the application is possible from any page in the application that is Unrestricted.
The majority of these items are editable input items, so the fact that someone may input a value via the URL is not a big deal. However, for Hidden and Display Only items, it is common for application logic to depend on their values; this logic may be adversely affected by malicious values supplied via the URL.
In some cases, this default is needed in order for the application to work. Some examples of when an item must be left Unrestricted:
An item is changed by a Dynamic Action (whether via a Set Item Value, via the Items to Return of an Execute Server-side Code action, or in some custom JavaScript), and cannot have Value Protected set because the page may be submitted.
We do actually intend the item to be set via the URL, e.g. when an external web page has a link that sets the item’s value.
In all these cases, the application must be designed to ensure it does not “trust” the value of these items; it should apply suitable checks to ensure the values are valid.
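For example, a page process or validation might re-check a hidden item’s value server-side before it is used; a minimal sketch (the table, column, and item names here are hypothetical):

declare
    l_count integer;
begin
    -- verify that the hidden item refers to a record this user may access
    select count(*)
    into   l_count
    from   my_documents
    where  doc_id     = :P1_DOC_ID
    and    owner_user = :APP_USER;
    if l_count = 0 then
        raise_application_error(-20001, 'Access denied');
    end if;
end;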
In most cases, it is best practice to set the item Protection Level to Checksum Required – Session Level (or Restricted – May not be set from browser where supported).
You can use a query like this to discover all items that may need to be reviewed:
select
    i.application_id,
    i.page_id,
    i.page_name,
    i.region,
    i.item_name,
    i.display_as
from apex_application_page_items i
where i.application_id = :app_id
and i.item_protection_level = 'Unrestricted'
and i.display_as_code in ('NATIVE_HIDDEN','NATIVE_DISPLAY_ONLY')
order by i.application_id, i.page_id, i.region, i.item_name;
Other excellent options are using third-party security scanners such as APEXSec and APEX-SERT to alert you to potential issues in your application. We mandate the use of tools like these internally at Oracle for our own applications and they are a great help.
Using the Session State Protection Wizard
One easy step you can take right now is to use the Session State Protection wizard. It gives you a quick overview of what level of protection your application has against URL tampering, and gives an easy way of fixing the relevant attributes in bulk.
You can access the wizard via Shared Components > Session State Protection, or via Edit Application Definition > Security > Session State Protection > Manage Session State Protection.
The wizard starts by showing an overview of the current state of your application’s protection against URL tampering.
You can see if your application has Session State Protection enabled (which it should, really), and if any pages, page items, and/or application items are unprotected. In my sample app here, it’s obvious that there are some potential security issues that need to be reviewed.
You can click the > buttons next to each category to list all the pages and items that need to be reviewed.
The main things to watch out for are Pages, Page Items, and Application Items that are set to Unrestricted. Other values are generally fine.
If you see any Items which are set to Checksum Required but not at the Session Level, you may find that a developer has simply set them incorrectly and you should consider changing them to Session Level. However, there are some scenarios where the other levels (Application Level, or User Level) are required.
Now, I might go through the application page by page and set the protection level on each page and item as appropriate. This could be a laborious process for a large application.
A good alternative is to use this wizard to set the protection level in bulk. In this case, I’m going to click Set Protection.
Perfect!
Final Steps
Now, I need to check for hidden page items that are now restricted but might need to be returned to Unrestricted; otherwise, if a dynamic action has changed them, users will see the error “Session state protection violation” when they submit the page.
The following query will alert me to any Hidden items that have Value Protected switched off (e.g. because they need to be submitted):
select
    i.application_id,
    i.page_id,
    i.page_name,
    i.region,
    i.item_name,
    i.display_as
from apex_application_page_items i
where i.application_id = :app_id
and i.item_protection_level != 'Unrestricted'
and i.display_as_code = 'NATIVE_HIDDEN'
and i.attribute_01 = 'N' -- Value Protected
order by i.application_id, i.page_id, i.region, i.item_name;
Now I can review these items to check whether Value Protected really needed to be switched off. If the page is never submitted, or the item is never changed by any dynamic actions, it could be switched On. Otherwise, I need to set the item protection to Unrestricted in order for the page to work.
Having made changes to the application, I need to test to ensure I haven’t introduced any issues. My focus will be mainly on the following areas:
1. Navigation – e.g. do the View or Edit buttons in all reports still work?
2. Dynamic actions – e.g. do all the dynamic actions and custom JavaScript that set item values still work, on all pages?
For #1, I’m looking for any links including item values that were not built correctly. If the application generates any links using simple string concatenation, they will fail if the target page expects a checksum. The application should build these links using declarative link attributes if possible, or by calling apex_page.get_url (or at least apex_util.prepare_url).
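For example, a report query might build its links safely like this (the page and item names here are hypothetical):

select t.id,
       t.name,
       apex_page.get_url(
           p_page   => 3,
           p_items  => 'P3_ID',
           p_values => t.id ) as edit_link  -- includes a checksum when required
from   my_table t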
For #2, I would test to ensure that after triggering a dynamic action or JavaScript code that modifies an item’s value, the form can still be submitted (saved) without error.
I had an APEX page based on a Form region that I’d built by hand (rather than using the wizard). I was wondering why the user always got an unexpected warning “Changes that you have made may not be saved.” – even though they hadn’t changed anything on the page.
I noticed that the item had a List of Values, and it had the Display Null Value setting set to No; however, the value in the underlying column was NULL. What was happening was that the item could not represent a null value, so it changed to the first value in the LOV; this in turn marked the item as “changed”, which caused the “unsaved changes” warning to show when the user tried to navigate away from the page.
When I set Display Null Value to Yes, the problem was resolved. Alternatively, I could have ensured that the underlying column would always have a value (e.g. by putting a NOT NULL constraint on it), which would also have resolved this problem.
Within the APEX development team it has been recognised for some time that maps are frequently needed, and building them into the core product has been on the roadmap.
Since 2016 I have built and maintained plugins that make it easy to integrate Google Maps in your APEX applications. The most popular of these is the ReportMap Plugin which has received a wealth of features and enhancements, and is being used widely.
Introducing: Map regions
In the latest pre-production release APEX 21.1 which you can try today for yourself at apex.oracle.com, the APEX development team has introduced the new Map region. This component allows the developer to show a world map on the page, and add one or more “Layers”, each based on a table or SQL query, to show features on the map.
This is a welcome move because it means maps will be easy for all APEX developers to use, including those who avoid plugins (e.g. due to the lack of direct support from Oracle). It also means that maps added to applications based on the core feature will be fully supported in future releases of APEX without needing to rely on a 3rd party.
Note: this post is based solely on my first impressions of the new feature and is not intended to answer all questions about it. There will be more official blog posts coming directly from the APEX development team about this and other new features being introduced in APEX 21.1, so keep a look out for those!
The APEX Map region renders the map using the MapBox API, with a number of tile backgrounds included based on OpenStreetMap and HERE map data.
Implementing a Map Region
Let’s try the new Map region, and compare it to the jk64 ReportMap plugin. To add a map to a page, I added a region of type “Map” and nominated a data source for the Layer to show on it.
First thing you might notice is what I didn’t need:
no 3rd party plugin to install
no account on a 3rd party cloud service
no API key, and no APIs need to be enabled
no credit card details
The Map region is free to use within your applications, just like any other built-in feature of APEX.
If you already have a map based on the ReportMap plugin, you can, if you wish, convert it simply by changing the region Type to Map – the SQL data source will be preserved and you can use the same query for the new Map region type. However, the danger with this approach is that you may lose some attributes and customisations that you had on the plugin region. Instead, I recommend using the Duplicate feature to create a copy of your map region, then changing the attributes on the copy, using the original as a reference as you go. Once you’re happy with the new map, you can delete the original plugin region.
You can choose to set the data source directly on each Layer (e.g. if you want to get data from multiple data sources), or you can set the data source on the map Region. If you set it on the Region, each Layer may refer to the Region data source by setting the layer’s Location attribute to Region Source.
If you have a single data source that you want to split up into multiple Layers, you can set the Row Assignment column which will be used to differentiate the data for each layer. This would be more efficient than adding separate SQL queries for each layer if the queries are based on the same table or view.
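For example, a single query could feed two layers by computing a column to nominate as the Row Assignment column (the table and categories here are hypothetical):

select t.lat,
       t.lng,
       t.name,
       t.id,
       case
           when t.category = 'DEPOT' then 'Depots'
           else 'Customers'
       end as layer_name  -- nominate this as the Row Assignment column
from   my_locations t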
The first thing I noticed when converting my demo application was that since my data source was based on an APEX collection, the Lat and Lng columns are strings, not numbers; to use this as a data source for the Map region I had to convert the strings to numbers first using TO_NUMBER.
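The conversion was along these lines (the collection name and column mappings here are assumptions for illustration):

select to_number(c001) as lat,
       to_number(c002) as lng,
       c003            as name,
       seq_id          as id
from   apex_collections
where  collection_name = 'MAP_DATA'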
Once that was done, I could map the LAT and LNG columns to the layer by setting the Geometry Column Data Type to Longitude/Latitude:
This is the result:
I wanted to style it as close as possible to my plugin demo so I tweaked a few attributes on the Region:
Appearance / Template Options / Remove Body Padding = on
Region Attributes:
Background = Custom
Standard (tile background) = Oracle World Map
Height = 350
Controls / Options / Mousewheel Zoom = on
Legend / Show = off
Layer Attributes:
Point Objects / Shape Scale = 1.5
Appearance / Fill Color = #e21818 (reddish)
Tooltip / Column = NAME
Info Window / Title Column = NAME
Info Window / Body Column = INFO
Much nicer colours, in my humble opinion 🙂
Map Dynamic Actions
The next thing I need to do is handle click events on the pins. To do this, I added a Dynamic Action to the map region, on the event Map Object Clicked.
Notice that the Map region can trigger the following events:
Map Initialized – the map has loaded, including all the data
Map Changed – the user pans the map or zooms in or out
Map Clicked – the user clicks in the map area (but not on a pin)
Map Object Clicked – the user clicks on a pin
Whenever I’m experimenting with dynamic actions, one of the first things I’ll add is an Action to run some JavaScript along these lines:
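// log the event's data payload so we can inspect its structure
console.log(this.data);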
At runtime in debug mode, the console log will then show the content of this.data, revealing the structure it takes.
Based on this, I can see that we get the id, infoWindow, lat, lng, and tooltip attributes – similar to the structure provided by the ReportMap plugin events. If I want to display the Latitude and Longitude values on the page, I create a display-only item (e.g. P1_POSITION), and add a “Set Value” action to the dynamic action along these lines:
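// Set Value action (a sketch): Set Type = JavaScript Expression,
// Affected Elements = P1_POSITION
this.data.lat + ", " + this.data.lng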
When a pin is clicked, we now see this data on the page:
Of course, we could make this item hidden if we want, and use its value in other ways.
The Map Clicked event returns this.data.lat and this.data.lng to indicate where the user clicked.
The Map Changed event returns:
this.data.changeType – e.g. ‘map-resized’, ‘map-drag’, ‘map-zoom’
this.data.bbox – bounding box, an array of numbers of longitudes and latitudes [west, south, east, north]
this.data.layers – an array with attributes for each layer (id, name, visible)
this.data.bearing
this.data.pitch
this.data.zoom
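For example, a dynamic action on Map Changed could keep a page item in sync with the zoom level with one line of JavaScript (P1_ZOOM is a hypothetical item):

// run in a "Map Changed" dynamic action
apex.item("P1_ZOOM").setValue(this.data.zoom);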
New Features
The following is a short list of features or enhancements that the new Map region has, which are not (currently) supported by the ReportMap plugin:
Multiple data layers, each with their own distinct data source; you can also use one SQL query for multiple layers if you want
REST data sources
Declarative styles – options for customising how each data layer is rendered, including pin and feature icon, colour and size
Min/Max Zoom – each layer can be shown or hidden automatically depending on the zoom level (e.g. show a more detailed layer as the user zooms in)
Legend – each data layer (optionally) can be shown in a legend, allowing the user to selectively show/hide each layer
Extruded polygons – show a 3D polygon rising from the ground with the height determined by a column from your data source
Initial Position and Zoom is remembered between page requests
Circle tool – allows the user to draw a circle on the map; you can then respond to this by filtering data by points that fall within the circle
Distance tool – allows the user to calculate the distance between two points
Conclusion
That’s page 1 of my side-by-side demo done. Not too shabby!
Only 34 other pages to go over… eventually I plan to release my demo application for you to play with. Some of the demo pages will not be so easy or even possible to replicate, however, until we get the ability to use the Google Maps JavaScript API in the native Map region – so they’ll need to wait for some potential future release.
More posts to come…
Q: Should we switch now, or wait?
If you have a simple map on your page showing some pins, with no custom JavaScript code providing specific custom behaviour, I think you should consider switching now. Of course, you should make sure to test the new region thoroughly – as with any major new feature, there will be a few bugs here and there. Overall, however, the Map region looks solid so far and is based on a well-known open source API with a long history.
One of the things to remember if you’re planning to switch from the ReportMap plugin to the new Map region is that not every feature is implemented the same way, and some features are not available at all (at least, not yet, if ever). This is partly because they are based on different JavaScript APIs (Google Maps vs MapBox). In addition, some UI behaviour is different and will likely always be different, although a lot of it will be customisable.
Q: Is this the end for the JK64 ReportMap plugin?
No, the ReportMap plugin is not going away any time soon. I plan to enhance and support it for years to come. My intent is to continue supporting the plugin at least until the APEX Map component supports integration of the Google Maps JavaScript API, and provides declarative support for most (ideally, all) of the declarative features of the ReportMap plugin. I expect that after this, most developers will naturally choose the native Map region over the plugin for new projects, and I’ll only need to provide bug fix support for the plugin while existing users of the plugin migrate gradually to the new component.
I consider this one of the strengths of the plugin system: the APEX development team will never have the capacity to support every last feature needed for every last application built for any customer in the world, so developers and companies should be empowered to create and use plugins to fill those gaps.
These plugins often serve to prove there is demand and need for a particular feature, and this often prompts the APEX development team to consider building these features into the core product, making it better with each release. Exciting times are ahead of us.
The plugin now supports Overlays, which allows you to add almost any arbitrary HTML or image content to particular points on the map. For example, instead of showing the default red pin, you might want to show a pretty Information card at a location. The cards are clickable as well, and you can use a dynamic action to make your app respond however you wish.
You can even show an image as a map overlay, which will be scaled automatically as the user zooms and pans the map.
In the previous release I added a companion Dynamic Action “Action” plugin that allows you to declaratively perform actions on the map. This release fixes a few bugs in that plugin, and also adds a new, second companion Dynamic Action plugin: Show Directions. This plugin allows you to invoke the Directions API on the map between one location and a destination without needing to write JavaScript. The locations may be specified with static values (lat,lng coordinates or addresses), from items on your page, JavaScript expressions, or jQuery selectors. The travel mode (e.g. driving, bicycling, etc.) can also be specified the same way.
The Dynamic Action plugin, called JK64 Report Google Map R1 Action, allows you to implement any of a range of custom behaviours on your map region. If installed, you can make the map respond to user behaviour or other events without needing to resort to writing your own custom JavaScript.
You can add a dynamic action to modify a variety of options and attributes of the map, execute searches, pan and zoom, load features via geoJson, and more – and these can be based on the value of items on your page, or via JavaScript expressions that you specify.
This is implemented as a radio item with a dynamic action on the Change event:
The dynamic action has the following attributes:
Note that in this case, it sets an Option – Map Type – based on the triggering element (the P28_MAP_TYPE item). The Affected Elements setting is required, and must refer to the map region that we want to change.
Notice anything missing? That’s right – No Code needed!
The plugin makes it easy to customise which of the default Google Map controls (buttons, etc.) are shown to the user:
Full Screen control
Map Type control
Rotate control
Scale control
Street View Pegman control
Zoom control
Other options that can be set include:
Clickable Icons
Disable default UI
Gesture Handling
Heading
Keyboard shortcuts
Map Type
Maximum Zoom level
Minimum Zoom level
Restrict search to Country
Styles
Tilt
Zoom level
In addition, the plugin allows you to restrict the map to a set of bounds, via the Restrict to Bounds or Restrict to Bounds Strict Mode actions.
Another enhancement included in this release is explicit support for the Table / View data source. This is simple to use, although not quite as flexible as the SQL Query option. Your table or view must include columns with the correct column names expected by the selected Visualisation – for example, if your Visualisation is Pins, the table or view must have columns named lat, lng, name and id. Click the Help tab on Table Name for more details, or review the WIKI (https://github.com/jeffreykemp/jk64-plugin-reportmap/wiki/SQL-Query-Examples).
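For example, a view like this would suit the Pins visualisation (the underlying table and column names are hypothetical):

create or replace view map_pins_v as
select l.latitude    as lat,
       l.longitude   as lng,
       l.description as name,
       l.location_id as id
from   my_locations l;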
NOTE: the plugin supports APEX 18.2 and later. It is no longer planned to include backports for older versions of APEX.
A big thanks to many APEX developers around the world who have installed and used the map plugin over the years. Your suggestions, questions and bug reports have contributed a great deal to improving the plugin.
Sometimes we can too quickly say “No” when we should take a moment to think about whether we can actually say “Yes”. This was one of those times for me.
We had just gone live with a fairly big system move + upgrade – a suite of APEX applications shifted from one database instance running APEX 5 to another instance running version 19.1. The migration went smoothly. After the new instance was released to the business to do some final shakedown testing, they noticed one problem:
“All our saved reports are gone.”
Oops. We had built the new instance by simply importing each application from source control; since these scripts don’t include the Saved Report formats that users had customised in many of the Interactive Reports in prod, they didn’t get migrated to the new database.
When they asked if we can transfer the saved reports across, I initially replied, “Sorry, we can’t” – thinking that redoing the migration from scratch with a full export from the old database, followed by re-doing all the app changes, was going to take too much time and effort.
A minute later, I sent another email. “Hold that thought – there is a way.” I’d forgotten that my script for exporting the applications uses an APEX API with some parameters that until now I’d rarely used, but which would solve our problem. The crux of my script was a call to apex_export.get_application, sketched here from memory:
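declare
    l_files apex_t_export_files;
begin
    l_files := apex_export.get_application(
        p_application_id          => 100,  -- repeated for each application
        p_with_ir_public_reports  => true,
        p_with_ir_private_reports => true,
        p_with_ir_notifications   => true );
    -- l_files(1).contents holds the export script, including all the
    -- Interactive Report saved reports; spool it to a file (e.g. f100_ir.sql)
end;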
I call this the “Yes” script. Using it, we performed the following steps:
Log into the schema on the old database (we hadn’t shut it down yet, thankfully – we’d just locked the schemas and set all the applications to “Unavailable”)
Run the “Yes” script. Rename the resulting export files, e.g. f100_ir.sql, f110_ir.sql, etc.
Log into the new database.
Run the IR scripts (f100_ir.sql, etc.). This reverted all the applications to their old versions, but they included all the users’ saved reports.
Re-import the new versions of the applications from source control. This upgraded all the applications, leaving the users’ saved reports intact.
Version 1.2 of the ReportMap Google Map plugin has been released today. While the rest of you have been idling away under Covid-19 restrictions, I’ve been happy as a clam working on some exciting enhancements to the plugin.
Included in this release are the following new features:
New visualisation: Spiderfier
Show turn-by-turn Directions
Customise each Marker with your own JavaScript function
Load large data sets in batches
Show spinner while data is loading
Localisation options
Also fixed in this release is a bug that occurred when the new Friendly URLs feature of Oracle APEX 20.1 was used with the Clustering visualisation.
The full list of enhancements and bugfixes, with links to the issues register, may be viewed here.
The documentation has been updated. The plugin now has four new plugin attributes, as well as a number of other attributes that can be set via JavaScript (the officially supported ones are documented on the plugin attributes page). Three new plugin events have also been added to support the new features.
Spiderfier
If you have a map that needs to show a lot of pins, especially ones that are close together, the plugin previously had the option of Clustering them at high zoom levels. The user could click on a cluster to zoom in enough to show the individual pins. One weakness of this approach is that if one or more pins are almost (or exactly) overlapping, the cluster never “unclusters” – the user cannot zoom in far enough to get the pins to show individually.
This release provides another Visualisation option, Spiderfier. This uses the OverlappingMarkerSpiderfier to control how pins react when clicked. When the user clicks a pin that is close or overlapping with other pins, it shifts the pins in that area into a ring, or a spiral (depending on how many pins are there) with lines pointing back to their original location. It also colours them blue to indicate they’ve been “spiderfied”. The user can then hover and click each marker separately.
If the user zooms in, the Spiderfier automatically returns all the pins to their original location.
I think the defaults I’ve set work reasonably well. If needed, you can customise the Spiderfier by setting its options via the JavaScript Initialisation Code (refer to “spiderfier options” here for details). You can also provide your own formatting function to change how the markers look when they are “spiderfied”.
Marker Icons
The WIKI has been augmented with a handy guide to Map Icons. The plugin has long supported the ability to specify custom images for the marker icons. This release gives a whole lot more control over the markers to the developer:
If all the icons in the query are being loaded from the same location, you can now set the iconBasePath option once and use just a relative icon file name in the query. When there is a lot of data to show on the map, this can significantly reduce the volume of data sent to the client, which can lead to a noticeable performance improvement.
The developer can now supply a custom JavaScript function (via the markerFormatFn option) to format each marker using whatever logic they need.
For example, if the marker icon needs to be different according to some data value, you can send the data via one of the flex fields, and then write your custom function to set the marker icon depending on the value of the flex field.
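As a rough sketch, via the map region’s JavaScript Initialisation Code (I’m assuming the function receives the marker object and the row’s data, with flex field values in data.f01 and so on; check the plugin WIKI for the exact interface):

function( options ) {
    options.markerFormatFn = function( marker, data ) {
        // hypothetical rule: use a special icon when flex field f01 is flagged
        if ( data.f01 === "ALERT" ) {
            marker.setIcon( "alert-pin.png" );  // hypothetical icon file
        }
    };
    return options;
}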
You could also modify other characteristics of the marker, such as the title (hover text), info text (popup window), icon anchor point, opacity, and even position (although usually I’d expect your query would provide the correct lat/lng coordinates).
If you have a large number of custom icons you wish to use, along with a large data set of pins to render on the map, you could even compile the icons into a single sprite map to reduce network overhead. This means the image file is loaded once to the client, and then the map “cuts out” bits of the sprite map to render the marker icons. This can be done by setting just a few attributes of the marker’s Icon object. I haven’t tried it myself yet, but this tool looks like it would be useful for this purpose.
Loading Large Datasets
This release adds the Show Spinner and the Rows Per Batch attributes. These attributes are independent of each other, and they help to improve the quality of the experience for your end users when you are rendering a large number of pins on the map.
By default, new maps will have Show Spinner set on. For any existing maps, after upgrading you can turn this option on by setting it in the plugin attributes. This option causes the map to show the default APEX spinner while the data is loaded. The spinner is then removed when the last marker has been rendered. The effect is to give the user an indication that the map is “working”, and gives them immediate feedback when the data has finished loading and they may now interact with the map.
If the spinner seems to stay forever, it may indicate an issue with connectivity to the server (or perhaps that the server is under severe load or has stopped responding to requests).
When the APEX page has been rendered on the client, the Google Map is shown but the data is not immediately loaded; instead, a separate AJAX request is sent to the server to run your query and download all the data to render the pins on the map. By default, this is all done in one single AJAX call, which is the fastest way to get from start to finish; the downside is that the user will not see any pins on the map until all the data has been downloaded. You can change this behaviour by setting Rows Per Batch to some number (e.g. 1000). With this attribute set, the plugin will send a series of AJAX calls to the database (one at a time) and get a batch of records at a time. After loading a batch, the plugin will render the pins on the map (and if necessary, it will pan / zoom the map to show them all) and then send another AJAX request to get the next batch. When it has finished receiving all the batches, it adds any finishing touches needed (e.g. for a visualisation) and returns control to the user.
The advantage of this approach is that the user can see the pins being shown gradually, and they will know that “something is working”. This may help to give them a nicer user experience.
The downside of this approach is that it may cause a bigger load on the server (because each AJAX request requires running a new query, with an offset) and will usually take longer from start to finish. Generally, if your data comprises only a few hundred records at most, you will probably want to leave the Rows Per Batch setting blank.
The Future
There are still a few little enhancements on my “todo” list, but I’m keen to hear how you are using (or perhaps planning to use) this plugin, and if there are any new features or improvements that you need or want. If so, please raise them on the GitHub Issues page.
Quite a few people have raised questions or ideas in the past and sometimes I’ve incorporated them straight away, and other times it’s taken a little longer but I get there eventually. If you’re keen to contribute, feel free to have a poke around in the code and perhaps even do a pull request on the GitHub source to suggest a change. It would be great to collaborate with you because everyone has something unique to offer.
Long-term, I’m watching with interest the future direction of Oracle APEX. I remember at one point they were talking about incorporating some sort of new map region into the product, although the mention of this seems to have been dropped from the Statement of Direction (or maybe my memory is misleading me). I guess time will tell.