I recently saw this question on StackOverflow (“Is there any way to determine if a package has state in Oracle?”) which caught my attention.
You’re probably already aware that when a package is recompiled, any sessions that were using that package won’t even notice the change – unless that package has “state”, i.e. one or more package-level variables or constants. The current value of these variables and constants is kept in the PGA for each session, but if you recompile the package (or modify something on which it depends), Oracle cannot know for certain whether the new version of the package needs the values reset or not, so it errs on the side of caution and discards them. The next time the session tries to access the package in any way, Oracle raises ORA-04068 and resets the package state. After that, the session can try again and it will work fine.
Side Note: There are a number of approaches to solving the ORA-04068 problem, some of which are given as answers to this question here. Not all of them are appropriate for every situation. Another approach not mentioned there is to avoid or minimize it – move all the package variables to a separate package, which hopefully will be invalidated less often.
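To make “state” concrete, here is a minimal sketch of a package that has it (the package name and variable are purely illustrative):

CREATE OR REPLACE PACKAGE stateful_pkg IS
  -- this package-level variable is what gives the package "state";
  -- each session keeps its own copy of the value in its PGA
  g_counter NUMBER := 0;
END stateful_pkg;

Recompile stateful_pkg (or something it depends on) while a session holds a value in g_counter, and that session’s next reference to the package will raise ORA-04068.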
It’s quite straightforward to tell whether a given package has “state” and thus has the potential for causing ORA-04068: look for any variables or constants declared in the package specification or body. If you have a lot of packages, however, you might want to get a listing of all of them. To do this, you can use the new PL/Scope feature introduced in Oracle 11g.
select object_name AS package,
type,
name AS variable_name
from user_identifiers
where object_type IN ('PACKAGE','PACKAGE BODY')
and usage = 'DECLARATION'
and type in ('VARIABLE','CONSTANT')
and usage_context_id in (
select usage_id
from user_identifiers
where type = 'PACKAGE'
);
If you have compiled the packages in the schema with PL/Scope enabled (i.e. alter session set plscope_settings='IDENTIFIERS:ALL'; before compiling), this query will list all the packages, along with the variables and constants that mean they will potentially have state.
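For example, to gather the identifier data for an existing package (my_pkg here is just a placeholder name), you could recompile it with PL/Scope switched on:

ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL';
ALTER PACKAGE my_pkg COMPILE; -- recompiles spec and body, collecting identifier data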
Before this question was raised, I hadn’t used PL/Scope for real; it was quite pleasing to see how easy it was to use to answer this particular question. This also illustrates a good reason why I like to hang out on Stackoverflow – it’s a great way to learn something new every day.
What’s the biggest clue you can give that your database is vulnerable to SQL injection? When your list of “forbidden words” looks suspiciously like a sample of SQL / PL/SQL keywords:
I notice that they haven’t forbidden BEGIN, CREATE, MERGE, or TRUNCATE …
Congressman Peters, your IT staff are doing it wrong.
Via: http://thedailywtf.com/Articles/Out-of-Service.aspx#pic4
Recently I refactored some PL/SQL for sending emails – code that I wrote way back in 2004. The number of “WTF”s per minute has not been too high; however, I’ve cringed more times than I’d like…
1. Overly-generic parameter types
When you send an email, it will have at least one recipient, and it may have many recipients. However, no email will have more than one sender. Yet, I wrote the package procedure like this:
TYPE address_type IS RECORD
(name VARCHAR2(100)
,email_address VARCHAR2(200)
);
TYPE address_list_type IS TABLE OF address_type
INDEX BY BINARY_INTEGER;
PROCEDURE send
(i_sender IN address_list_type
,i_recipients IN address_list_type
,i_subject IN VARCHAR2
,i_message IN VARCHAR2
);
Why I didn’t have i_sender be a simple address_type, I can’t remember. Internally, the procedure only looks at i_sender(1) – if a caller passes in a table with more than one sender, it raises an exception.
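With hindsight, something like this would have been better – just a sketch, with i_sender as a single address_type:

PROCEDURE send
  (i_sender     IN address_type       -- exactly one sender
  ,i_recipients IN address_list_type  -- one or more recipients
  ,i_subject    IN VARCHAR2
  ,i_message    IN VARCHAR2
  );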
2. Functional programming to avoid local variables
Simple is best, and there’s nothing wrong with using local variables. I wish I’d realised these facts when I wrote functions like this:
FUNCTION address
(i_name IN VARCHAR2
,i_email_address IN VARCHAR2
) RETURN address_list_type;
FUNCTION address
(i_address IN address_list_type
,i_name IN VARCHAR2
,i_email_address IN VARCHAR2
) RETURN address_list_type;
All that so that callers can avoid one local variable:
EMAIL_PKG.send
(i_sender => EMAIL_PKG.address('joe','joe@company.com')
,i_recipients => EMAIL_PKG.address(
EMAIL_PKG.address(
'jill', 'jill@company.com')
,'bob', 'bob@company.com')
,i_subject => 'hello'
,i_message => 'world'
);
See what I did there with the recipients? Populating an array on the fly with just function calls. Smart, eh? But rather useless, as it turns out; when we need to send to multiple recipients, the list is usually populated within a loop of unknown size, so this method doesn’t work anyway.
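These days I’d just declare a local variable and fill it in a loop – a rough sketch (the subscribers query is made up for illustration):

DECLARE
  l_sender     EMAIL_PKG.address_list_type;
  l_recipients EMAIL_PKG.address_list_type;
  i            PLS_INTEGER := 0;
BEGIN
  l_sender(1).name          := 'joe';
  l_sender(1).email_address := 'joe@company.com';
  -- populate the recipient list in a loop of whatever size is needed
  FOR r IN (SELECT name, email_address FROM subscribers) LOOP
    i := i + 1;
    l_recipients(i).name          := r.name;
    l_recipients(i).email_address := r.email_address;
  END LOOP;
  EMAIL_PKG.send
    (i_sender     => l_sender
    ,i_recipients => l_recipients
    ,i_subject    => 'hello'
    ,i_message    => 'world'
    );
END;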
Go ahead – face your past and dig up some code you wrote 5 years ago or more. I think, if you don’t go “WTF!” every now and then, you probably haven’t learned anything or improved yourself in the intervening years. Just saying 🙂
Ever since I downloaded the Alexandria PL/SQL library, I haven’t been able to put it down. Just recently I decided I wanted to serve up a whole lot of media files directly from Amazon’s S3 (Simple Storage Service), instead of serving them from within my EC2 (Elastic Compute Cloud) instance. They were just wasting my Linux server’s time responding to HTTP requests.
So, I quickly wrote the following code to transfer them:
DECLARE
l_blob BLOB;
BEGIN
/* initialise my AWS session */
ALEX.amazon_aws_auth_pkg.init
( 'yyy-my-aws-id-yyy'
, 'xxx-not-telling-xxx'
, p_gmt_offset => -8);
FOR rec IN (
SELECT id, filename, mime_type, location
FROM myfiles
WHERE location = 'http://myserver/media/'
) LOOP
/* read the file from its current location */
l_blob := ALEX.http_util_pkg.get_blob_from_url
(rec.location || rec.filename);
IF DBMS_LOB.getLength(l_blob) > 0 THEN
/* upload the file to Amazon S3 */
ALEX.amazon_aws_s3_pkg.new_object
( 'mybucket'
, rec.filename
, l_blob
, rec.mime_type
, ALEX.amazon_aws_s3_pkg.g_acl_public_read);
UPDATE myfiles
SET location = 'http://mybucket.s3-ap-southeast-1.amazonaws.com/'
WHERE id = rec.id;
COMMIT;
END IF;
END LOOP;
END;
After a short while, all the files had been copied across to my bucket on S3, and my table updated so that my web site now points people’s browsers to the new location for those files.
Of course, I could have used UTL_FILE to read the files from disk, but then I’d have to first create a directory, and write a loop to read each file in chunks into the BLOB. Why bother with all that when I can just call http_util_pkg.get_blob_from_url and get it all in one go?
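For comparison, the UTL_FILE version would look something like this rough sketch (it assumes a directory object MEDIA_DIR pointing at the files, and the file name is invented):

DECLARE
  l_file UTL_FILE.file_type;
  l_raw  RAW(32767);
  l_blob BLOB;
BEGIN
  DBMS_LOB.createtemporary(l_blob, TRUE);
  l_file := UTL_FILE.fopen('MEDIA_DIR', 'photo1.jpg', 'rb', 32767);
  BEGIN
    LOOP
      UTL_FILE.get_raw(l_file, l_raw, 32767);
      DBMS_LOB.writeappend(l_blob, UTL_RAW.length(l_raw), l_raw);
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      NULL; -- reached end of file
  END;
  UTL_FILE.fclose(l_file);
  -- l_blob now holds the file contents
END;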
That’s the trouble with powerful utilities like Alexandria: they’re too easy to use, make tasks like this trivial, and you start finding all sorts of uses for them. All of a sudden, Alexandria is your hammer, and the world is full of nails.
See also: this quick intro to using Alexandria’s API for Amazon S3.
Normally, when you see code like this in a production system, you should duck your head and run:
SELECT NVL( MAX( id ), 0 ) + 1
INTO :new_id
FROM mytable;
What’s wrong with this code?
I hope the first answer that rolls off your tongue has something to do with concurrency – i.e. two sessions that run this around the same time will not see uncommitted rows from each other, and so are likely to try to insert rows with conflicting identifiers.
I hope the second answer that you might mention has to do with performance – even considering there’s a unique index on the column, this code will still need to read at least one index block to get the latest ID (assuming the query optimiser chooses to do a MIN/MAX index scan so that it doesn’t have to scan the entire index before returning a result). In a high load system this cost might be unacceptable.
Of course, the first problem (concurrency) could be solved by serializing access to the “get the next ID” function, e.g. with a DBMS_LOCK. We all know, however, that there’s no sane reason to serialize this when Oracle already provides a perfectly good mechanism for generating unique IDs, with virtually no serialization – sequences.
CREATE SEQUENCE my_id_seq;
SELECT my_id_seq.NEXTVAL INTO :new_id FROM DUAL;
Sequences have the benefit of guaranteeing uniqueness and, if their cache setting is set appropriately, add a negligible amount of overhead for serialization.
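For example, the default cache size is 20; on a busy system you might bump it up (the figure here is just illustrative):

CREATE SEQUENCE my_id_seq CACHE 1000;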
Problem solved. Easy, right? I bet you’re wondering why I added the word “Normally” to my first sentence in this post….
Question: When might using “SELECT MAX(id) + 1” ever be an acceptable source of unique identifiers?
Answer: Global Temporary tables.
If I’ve inserted any rows into a global temporary table, by definition no other session can see my data, so the first consideration, concurrency, is not an issue.
Also, if I’m not expecting to ever insert many rows into my global temporary table, I can be reasonably confident that performance will not be an issue either. Plus, if I put an index on the ID column, that query will be quite inexpensive.
Conclusion: if you are using global temporary tables, you don’t have to use sequences to generate unique identifiers for them. I’m not saying you shouldn’t, of course – a sequence may be faster, and may even lead to simpler code in some cases – but in other cases you might decide to forgo the sequence: one less object (along with its grants and synonyms, perhaps) to deploy.
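A minimal sketch of what that looks like (table, columns and bind variable are illustrative):

CREATE GLOBAL TEMPORARY TABLE my_gtt
  (id      NUMBER PRIMARY KEY
  ,payload VARCHAR2(100)
  ) ON COMMIT PRESERVE ROWS;

-- only this session's rows are visible, so MAX(id)+1 is safe here
SELECT NVL( MAX( id ), 0 ) + 1
INTO   :new_id
FROM   my_gtt;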
…
Now, of course, you have to ask yourself, why query the table at all? Why not store that latest ID in a private global variable in a package? In fact, we can create a simple package to replace the sequence, e.g.:
CREATE OR REPLACE PACKAGE my_table_pkg IS
  FUNCTION next_id RETURN my_table.id%TYPE;
END my_table_pkg;

CREATE OR REPLACE PACKAGE BODY my_table_pkg IS

  g_latest_id my_table.id%TYPE;

  FUNCTION next_id RETURN my_table.id%TYPE IS
  BEGIN
    g_latest_id := NVL(g_latest_id, 0) + 1;
    RETURN g_latest_id;
  END next_id;

END my_table_pkg;
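Usage is then as simple as (a sketch, assuming my_table is the global temporary table in question):

INSERT INTO my_table (id)
VALUES (my_table_pkg.next_id);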
Well, now you know what to do. Whenever you need to generate a unique set of identifiers for a global temporary table, you’ve got a choice of options: sequence, package variable, or a “max(id)+1” query.
A simple question: you’re designing an API to be implemented as a PL/SQL package, and you don’t (yet) know the full extent to which your API may be used, so you want to cover a reasonable variety of possible usage cases.
You have a function that will return a BOOLEAN – i.e. TRUE or FALSE (or perhaps NULL). Should you implement it this way, or should you return some other kind of value – say, a CHAR (‘Y’ for TRUE, ‘N’ for FALSE), or a NUMBER (1 for TRUE, 0 for FALSE)?
This debate has raged since 2002, and probably earlier – e.g. http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6263249199595
Well, if I use a BOOLEAN, it makes the code simple and easy to understand – and callers can call my function in IF and WHILE statements without having to compare the return value to anything. However, I can’t call the function from a SQL statement, which can be annoyingly restrictive.
If I use a CHAR or NUMBER, I can now call the function from SQL, and store it in a table – but it makes the code just a little more complicated – now, the caller has to trust that I will ONLY return the values agreed on. Also, there is no way to formally restrict the values as agreed – I’d have to just document them in the package comments. I can help by adding some suitable constants in the package spec, but note that Oracle Forms cannot refer to these constants directly. Mind you, if the value is being stored in a table, a suitable CHECK constraint would be a good idea.
Perhaps a combination? Have a function that returns BOOLEAN, and add a wrapper function that converts the BOOLEAN into a ‘Y’ or ‘N’ as appropriate? That might be suitable.
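A rough sketch of that combination (is_valid is just a hypothetical example, and its implementation is not shown):

-- in the package spec
FUNCTION is_valid (p_id IN NUMBER) RETURN BOOLEAN;
FUNCTION is_valid_yn (p_id IN NUMBER) RETURN VARCHAR2; -- 'Y' / 'N' / NULL, callable from SQL

-- in the package body
FUNCTION is_valid_yn (p_id IN NUMBER) RETURN VARCHAR2 IS
BEGIN
  RETURN CASE
           WHEN is_valid(p_id)     THEN 'Y'
           WHEN NOT is_valid(p_id) THEN 'N'
         END; -- a NULL BOOLEAN maps to NULL
END is_valid_yn;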
Personally, though, I hate the NUMBER (1 or 0) idea for PL/SQL. That’s so C-from-the-1970’s. Who codes like that anymore?
It starts out as a fairly simple, innocent business requirement. Create a report to list records meeting some criteria, one of which is:
“List only records where today’s date is more than 35 business days after the due date from the record.”
When you delve deeper you find that querying the table with “DUE_DATE + 35 < SYSDATE” is not going to cut it – “business days” do not include weekends. You might start with something similar to this. But even that’s not good enough, because business days should not include public holidays. How do you code that?
So, here’s my solution.
1. We need to know what days are public holidays for the region. In our case this application is only applicable for a single region, so we use a simple table:
CREATE TABLE holidays (holiday_date DATE PRIMARY KEY);
We create a simple form for users to enter new holidays every year, and give someone the job of making sure it’s up-to-date every year when the public holidays are announced.
2. Create a view that lists all non-business days – i.e. list all weekends and public holidays. To retain reasonable performance, we limit our solution to dates in the years 2000 to 2050.
CREATE VIEW non_business_days AS
SELECT TO_DATE('01012000','DDMMYYYY') + ROWNUM * 7
AS day -- Saturdays 2000 to 2050
FROM DUAL CONNECT BY LEVEL <= 2661
UNION ALL
SELECT to_date('02012000','DDMMYYYY') + ROWNUM * 7
AS day -- Sundays 2000 to 2050
FROM DUAL CONNECT BY LEVEL <= 2661
UNION ALL
SELECT holiday_date FROM holidays;
3. Now, when we need to take a date and add x business days to it, we query this view to find all the non-business days that are applicable, e.g.:
SELECT day
,COUNT(*) OVER (ORDER BY day
ROWS BETWEEN UNBOUNDED PRECEDING
AND CURRENT ROW)
AS count_so_far
,(day - p_date) AS base_days
FROM NON_BUSINESS_DAYS
WHERE day > p_date;
If you run this query and examine each row in order of day, base_days minus count_so_far gives the number of business days between p_date and that day. Take the last row where this value is still less than x: then x minus (base_days - count_so_far) is the number of extra days to add to that day to give us the answer. You’ll find this logic in the function below.
In our final solution, we also need to UNION in the starting date itself (with zero for count_so_far and base_days), to cover the case where there are no non-business days between the starting date and the result.
Here’s our function to take any date (at least, any date between 2000 and 2050) and add x business days (positive or negative):
FUNCTION add_working_days (p_date IN DATE, p_working_days IN NUMBER)
RETURN DATE IS
l_date DATE;
BEGIN
IF p_date IS NULL OR p_working_days IS NULL THEN
RETURN NULL;
END IF;
IF p_working_days != TRUNC(p_working_days) THEN
RAISE_APPLICATION_ERROR(-20000,
'add_working_days: cannot handle fractional p_working_days ('
|| p_working_days || ')');
END IF;
IF p_working_days > 0 THEN
SELECT MAX(day + p_working_days - (base_days - count_so_far))
INTO l_date
FROM (SELECT day
,COUNT(*) OVER (ORDER BY day
ROWS BETWEEN UNBOUNDED PRECEDING
AND CURRENT ROW)
AS count_so_far
,(day - p_date) AS base_days
FROM NON_BUSINESS_DAYS
WHERE day > p_date
UNION
SELECT p_date, 0, 0 FROM DUAL
)
WHERE base_days - count_so_far < p_working_days;
ELSIF p_working_days < 0 THEN
SELECT MIN(day - (ABS(p_working_days) - (base_days - count_so_far)))
INTO l_date
FROM (SELECT day
,COUNT(*) OVER (ORDER BY day DESC
ROWS BETWEEN UNBOUNDED PRECEDING
AND CURRENT ROW)
AS count_so_far
,(p_date - day) AS base_days
FROM NON_BUSINESS_DAYS
WHERE day < p_date
UNION
SELECT p_date, 0, 0 FROM DUAL
)
WHERE base_days - count_so_far < ABS(p_working_days);
ELSE
l_date := p_date;
END IF;
RETURN l_date;
END add_working_days;
Test cases (these are some public holidays in Western Australia):
insert into holidays values (to_date('27/12/2010','DD/MM/YYYY'));
insert into holidays values (to_date('28/12/2010','DD/MM/YYYY'));
insert into holidays values (to_date('03/01/2011','DD/MM/YYYY'));
insert into holidays values (to_date('26/01/2011','DD/MM/YYYY'));
-- Expected: 06/01/2011
select cls_util.add_working_days(to_date('13/12/2010','DD/MM/YYYY')
,15) from dual;
-- Expected: 31/01/2011
select cls_util.add_working_days(to_date('25/01/2011','DD/MM/YYYY')
,3) from dual;
-- Expected: 13/12/2010
select cls_util.add_working_days(to_date('06/01/2011','DD/MM/YYYY')
,-15) from dual;
-- Expected: 25/01/2011
select cls_util.add_working_days(to_date('31/01/2011','DD/MM/YYYY')
,-3) from dual;
I have two queries that need to be executed by a PL/SQL program. Both of them are quite complex, and both of them have a large section which is identical – because they are different views of the same underlying source data.
One option is to expand out both queries in full, e.g.:
Query 1:
SELECT <complicated expressions>
FROM (
<large complicated query>
), <other tables>
WHERE <complicated predicates>;
Query 2:
SELECT <different complicated expressions>
FROM (
<large complicated query>
), <other different tables>
WHERE <different complicated predicates>;
I don’t like the fact that my <large complicated query> is repeated in full in both cursor definitions. I’d rather have one place where that subquery is defined, because it should remain the same for both queries, since they are supposed to be different views of the same underlying data.
Another option is to create a view on the <large complicated query>, and refer to that in both queries. This is a perfectly acceptable option, and one which I often use. The only downside is if there are any parameters that need to be “fed” to the view. One way is for the view to expose the parameter as a column, and for the calling query to filter on that column. This is not always the most efficient method, however, depending on the complexity of the view and how well Oracle can “push down” the predicate into the view at execution time. Another solution to the parameter problem is to use a user-defined context as described here.
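For reference, the user-defined context approach looks roughly like this (all the names are invented for the sketch):

CREATE CONTEXT my_ctx USING my_ctx_pkg;

CREATE OR REPLACE PACKAGE my_ctx_pkg IS
  PROCEDURE set_param (p_value IN VARCHAR2);
END my_ctx_pkg;

CREATE OR REPLACE PACKAGE BODY my_ctx_pkg IS
  PROCEDURE set_param (p_value IN VARCHAR2) IS
  BEGIN
    DBMS_SESSION.set_context('MY_CTX', 'PARAM', p_value);
  END set_param;
END my_ctx_pkg;

The view can then refer to SYS_CONTEXT('MY_CTX','PARAM') in its WHERE clause, and the caller sets the value just before querying.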
The other downside which I don’t like for this case is that the view moves the query away from the package – I’d prefer to have the definitions close together and maintained in one location.
The solution which I used in this case is a pipelined function. For example:
FUNCTION large_complicated_query
RETURN source_data_table_type
PIPELINED IS
rc source_data_type;
BEGIN
FOR r IN (<large complicated query>) LOOP
rc.col1 := r.col1;
rc.col2 := r.col2;
-- etc.
PIPE ROW (rc);
END LOOP;
RETURN;
END;
Now, the two queries in my package can re-use it like this:
SELECT <complicated expressions>
FROM TABLE(my_package.large_complicated_query)
,<other tables>
WHERE <complicated predicates>;
In the package spec I have:
-- *** dev note: for internal use only ***
TYPE source_data_type IS
RECORD (col1 col1_data_type, etc....);
TYPE source_data_table_type IS TABLE OF source_data_type;
FUNCTION large_complicated_query
RETURN source_data_table_type PIPELINED;
-- *** ******************************* ***
Because the pipelined function is going to be called by SQL (in fact, two queries defined in the same package), its declaration must also be added to the package spec.
In the package body, I use private global variable(s) to hold the parameter for the large complicated query.
When the queries are run, the global variable(s) must first be set to the required parameter. The queries are run, then the global variables are cleared.
The pipelined function is deliberately not useful to other processes – if a developer tried to call it, they’d get no results because they can’t set the parameters (since they are declared as private globals).
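In outline, the relevant parts of the package body look something like this (the parameter and source query are invented for the sketch):

-- private global: only code in this package can set it
g_as_at_date DATE;

FUNCTION large_complicated_query
  RETURN source_data_table_type
  PIPELINED IS
  rc source_data_type;
BEGIN
  FOR r IN (SELECT s.col1 FROM source_table s
            WHERE s.as_at_date = g_as_at_date) LOOP
    rc.col1 := r.col1;
    PIPE ROW (rc);
  END LOOP;
  RETURN;
END large_complicated_query;

PROCEDURE run_both_queries (p_as_at_date IN DATE) IS
BEGIN
  g_as_at_date := p_as_at_date;  -- "feed" the parameter to the pipelined function
  -- ... run the two queries that SELECT FROM TABLE(large_complicated_query) ...
  g_as_at_date := NULL;          -- clear it when done
END run_both_queries;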
A downside to this approach is that the optimizer will not be able to optimize the entire queries “as a whole” – it will execute the entire query in the pipelined function (at least, until the calling queries decide to stop fetching from it). For my case, however, this is not a problem. The entire process runs in less than a second – and this is 10 times faster than it needs to be. In other words, in this case maintainability is more important than performance.
There may be other ways to do this (in fact, I’m quite sure there are), but this way worked for me.
This is the query that never ends,
It just goes on and on, my friends.
Some people started fetching not knowing what it was,
And now they can’t stop fetching forever just because…
This is the query that never ends,
…
CREATE TYPE number_table_type IS TABLE OF NUMBER;

CREATE FUNCTION row_generator
  RETURN number_table_type
  PIPELINED IS
BEGIN
  LOOP
    FOR i IN 1..100 LOOP
      PIPE ROW (i);
    END LOOP;
  END LOOP;
  RETURN;
END;
SELECT * FROM TABLE(row_generator);
…inspired by…
I wrote this simple SQL*Plus script to loop through all the user’s tables and report how many rows were in them at this point in time. Normally I might estimate this information based on the table statistics, but in this case I had to know if any rows had been inserted since the last stats were gathered. Also, I preferred not to just run the stats gathering job. Obviously, it doesn’t pick up any uncommitted data from other sessions.
It doesn’t list the tables which have zero (0) rows, but it reports how many tables it counted, and the total number of rows across the entire schema.
If run on an 11g database, it will skip the count on tables which have had no segment created yet.
WARNING: this script is suitable for development environments only. Use at your own risk.
SET SERVEROUT ON FEEDBACK OFF
DECLARE
c INTEGER;
tc INTEGER := 0;
tr INTEGER := 0;
BEGIN
$IF DBMS_DB_VERSION.VER_LE_10 $THEN
DBMS_OUTPUT.enable(1000000);
$ELSE
DBMS_OUTPUT.put_line('Only tables with segment created are counted.');
$END
FOR r IN (
SELECT table_name
FROM user_tables
$IF DBMS_DB_VERSION.VER_LE_10 $THEN
$ELSE
WHERE segment_created = 'YES'
$END
ORDER BY table_name
)
LOOP
tc := tc + 1;
BEGIN
EXECUTE IMMEDIATE REPLACE(
'SELECT COUNT(*) FROM "#TABLE#"'
,'#TABLE#',r.table_name)
INTO c;
tr := tr + c;
IF c > 0 THEN
DBMS_OUTPUT.put_line(
RPAD(r.table_name,31,' ')
|| TO_CHAR(c,'9G999G999G990'));
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.put_line(r.table_name);
DBMS_OUTPUT.put_line(DBMS_UTILITY.format_error_stack);
END;
END LOOP;
DBMS_OUTPUT.put_line('counted '
|| tc || ' table(s)');
DBMS_OUTPUT.put_line('total rows '
|| TO_CHAR(tr,'9G999G999G990'));
END;
/
SET SERVEROUT OFF FEEDBACK ON
Example output:
SQL> @count_all_tables
Only tables with segment created are counted.
counted 1 table(s)
total rows 0
This tells me that there are no rows yet, and of all the tables, only one has had a segment created for it (e.g. perhaps an INSERT was attempted but rolled back, or the row was deleted). I’m sure you’ll be able to adapt the script to suit your obsessive record-counting needs.