Category: SQL

Psychology and Large Tables

Today a project manager asked me about a change to a query being implemented for a search function on our 11gR2 database. His concern was that the plan for the new query involved several full table scans, whereas the old version used primarily index range scans.

The query used to look something like this:

SELECT score(1) AS score, a.*
FROM (SELECT *
      FROM entity_details ed
      JOIN associations a
      ON ed.entity_id = a.entity_id
      LEFT JOIN addresses ad
      ON ed.entity_id = ad.entity_id
     ) a
WHERE CONTAINS(a.entity_name,
  :criterion, 1) > 0
ORDER BY a.entity_name;

Its plan looked like this:

SELECT STATEMENT ALL_ROWS cost=52 card=13
- SORT ORDER BY cost=52 card=13
  - NESTED LOOPS OUTER cost=51 card=13
    - NESTED LOOPS cost=31 card=12
      - TABLE ACCESS BY INDEX ROWID entity_details cost=7 card=12
        - DOMAIN INDEX entity_name_ic cost=4
      - TABLE ACCESS BY INDEX ROWID associations cost=2 card=1
        - INDEX RANGE SCAN asso_entity_i cost=1 card=1
    - TABLE ACCESS BY INDEX ROWID addresses cost=2 card=1
      - INDEX RANGE SCAN address_entity_fk_i cost=1 card=1

The new query had an additional predicate:

SELECT score(1) AS score, a.*
FROM (SELECT *
      FROM entity_details ed
      JOIN associations a ON ed.entity_id = a.entity_id
      LEFT JOIN addresses ad ON ed.entity_id = ad.entity_id
     ) a
WHERE CONTAINS(a.entity_name, :criterion, 1) > 0
OR a.entity_name LIKE '%' || UPPER(:criterion) || '%'
ORDER BY a.entity_name;

The plan for the original query used index range scans on all the tables, whereas the plan for the new query used full table scans on associations and addresses:

SELECT STATEMENT ALL_ROWS cost=348 card=1269
- SORT ORDER BY cost=348 card=1269
  - HASH JOIN OUTER cost=347 card=1269
    - HASH JOIN cost=244 card=1187
      - TABLE ACCESS BY INDEX ROWID entity_details cost=107 card=1187
        - BITMAP CONVERSION TO ROWIDS
          - BITMAP OR
            - BITMAP CONVERSION FROM ROWIDS
              - SORT ORDER BY
                - INDEX RANGE SCAN entity_name_i cost=4
            - BITMAP CONVERSION FROM ROWIDS
              - SORT ORDER BY
                - DOMAIN INDEX entity_name_ic cost=4
      - TABLE ACCESS FULL association cost=136 card=2351
    - TABLE ACCESS FULL addresses cost=102 card=18560

Initial testing in dev revealed no noticeable performance difference between the two queries, so he was just concerned about the impact of the full table scans on the system.

As you can see, the new plan still uses the domain index, as well as the ordinary index on entity_name, concatenating the two sets of ROWIDs (BITMAP OR) and then accessing the table as before. Previously, the cardinality estimate for the CONTAINS predicate alone was 12 (side note: I'm curious how predicates using context indexes are costed; anyone know any details or references?); now, the total cardinality estimate for entity_details is 1187. The estimate for the LIKE predicate alone would be 1176, which is simply 5% of the number of rows in the table. This higher cardinality has pushed the rest of the query away from index accesses on associations and addresses, towards hash joins and full table scans on those tables.
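For what it's worth, that 1187 is consistent with the standard OR-selectivity arithmetic. Here's a quick sketch, assuming a row count of roughly 23,520 (implied by 1176 being 5% of the table):

-- sel(A OR B) = sel(A) + sel(B) - sel(A)*sel(B), applied to the two estimates
-- (the ~23,520 row count is an assumption derived from 1176 = 5% of rows)
SELECT ROUND( (12/23520 + 1176/23520 - (12/23520)*(1176/23520)) * 23520 )
       AS combined_card
FROM dual;
-- returns 1187, matching the plan's estimate for entity_details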

If I override the cardinality estimate with a lower figure, e.g.

SELECT /*+CARDINALITY(a.ed 50)*/ score(1) AS score, a.* ...

the query plan changes into a typical nested-loops-with-index-access one:

SELECT STATEMENT ALL_ROWS cost=295 card=53
- SORT ORDER BY cost=295 card=53
  - NESTED LOOPS OUTER cost=294 card=53
    - NESTED LOOPS cost=207 card=50
      - TABLE ACCESS BY INDEX ROWID entity_details cost=107 card=50
        - BITMAP CONVERSION TO ROWIDS
          - BITMAP OR
            - BITMAP CONVERSION FROM ROWIDS
              - SORT ORDER BY
                - INDEX RANGE SCAN entity_name_i cost=4
            - BITMAP CONVERSION FROM ROWIDS
              - SORT ORDER BY
                - DOMAIN INDEX entity_name_ic cost=4
      - TABLE ACCESS BY INDEX ROWID association cost=2 card=1
        - INDEX RANGE SCAN asso_entity_i cost=1 card=1
    - TABLE ACCESS BY INDEX ROWID addresses cost=2 card=1
      - INDEX RANGE SCAN address_entity_fk_i cost=1 card=1

Substituting various cardinalities reveals the “tipping point” for this particular instance (with its particular set of statistics and optimizer parameters) to be around 50.
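For illustration only (the exact crossover depends on the statistics and optimizer parameters), the plan flips somewhere between hinted values like these:

SELECT /*+CARDINALITY(a.ed 40)*/ score(1) AS score, a.* ...
-- nested loops with index access

SELECT /*+CARDINALITY(a.ed 60)*/ score(1) AS score, a.* ...
-- hash joins with full table scans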

The cardinality estimates for the full table scans should be the major giveaway: they’re all less than 20,000. In fact, the total number of records in each of these tables does not exceed 25,000, and our dev instance has a full set of data.

I advised this manager not to worry about the new plan. These tables are unlikely to grow beyond 100,000 rows over the expected life of the product (they hold records for only about 25,000 associations, entered over the past five years), and the entire tables fit in under 500 blocks. With a db_file_multiblock_read_count of 128, it's likely that the first query of the day will load all the blocks of the tables into the buffer cache with just a dozen or so reads.
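The arithmetic is only back-of-envelope, and reads that figure as roughly 500 blocks per table (an assumption on my part):

-- rough sketch only: ~500 blocks per table, three tables
SELECT CEIL(500 / 128) * 3 AS approx_multiblock_reads FROM dual;
-- about 12 multiblock reads to pull all three tables into the buffer cache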

If this query used index range scans plus table accesses for each search, performance would be only marginally better (and probably imperceptibly so), at the cost of slower queries whenever users enter poor search criteria. With full table scans, on the other hand, even the worst search criteria will typically return results in less than 5 seconds.

We’re so accustomed to tables like “entities” and “addresses” having millions or tens of millions of rows that we instinctively recoil from full table scans on them; but in this instance at least, to Oracle these tables are tiny, and for tiny tables full table scans are often the better choice.

Infinite Query

This is the query that never ends,
It just goes on and on, my friends.
Some people started fetching not knowing what it was,
And now they can’t stop fetching forever just because…

This is the query that never ends,

CREATE TYPE number_table_type IS TABLE OF NUMBER;

CREATE FUNCTION row_generator
RETURN number_table_type
PIPELINED IS
BEGIN
  LOOP
    FOR i IN 1..100 LOOP
      PIPE ROW (i);
    END LOOP;
  END LOOP;
  RETURN;
END;

SELECT * FROM TABLE(row_generator);
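Incidentally, because the rows are piped back as they are produced, capping the fetch lets the query finish. A quick sanity check (the ROWNUM cutoff is just for illustration):

SELECT * FROM TABLE(row_generator) WHERE ROWNUM <= 10;
-- returns ten rows and stops, instead of fetching forever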

…inspired by…

Question: why can’t the optimizer do better with these?

I’m scratching my head over this one. I thought the cost-based optimizer would be smart enough to eliminate certain predicates, joins and sorts automatically, and pick a cheaper plan accordingly; but in my tests it doesn’t seem to. Can you shed any light on this?

First, the setup for my test case:

select * from v$version;
-- Oracle Database 11g Enterprise Edition
--     Release 11.2.0.1.0 - 64bit Production
-- PL/SQL Release 11.2.0.1.0 - Production
-- CORE  11.2.0.1.0  Production
-- TNS for Linux: Version 11.2.0.1.0 - Production
-- NLSRTL Version 11.2.0.1.0 - Production

create table parent_table
(parent_id number(12)     not null
,padding   varchar2(1000)
);

create table child_table
(child_id  number(12)     not null
,parent_id number(12)     not null
,padding   varchar2(1000)
);

-- I find I need at least 100,000 rows for the
-- differences of costs to become significant

insert into parent_table
select rownum, lpad('x',1000,'x')
from dual connect by level <= 100000;

insert into child_table
select rownum, rownum, lpad('x',1000,'x')
from dual connect by level <= 100000;
alter table parent_table add (
  constraint parent_pk primary key (parent_id) );
alter table child_table add (
  constraint child_pk primary key (child_id)
 ,constraint fk foreign key (parent_id)
  references parent_table (parent_id) );
create index child_table_parent_i on child_table (parent_id);
begin dbms_stats.gather_table_stats(ownname => USER
, tabname => 'PARENT_TABLE'
, estimate_percent => 100
, method_opt => 'for all columns size auto'
, cascade => TRUE); end;
begin dbms_stats.gather_table_stats(ownname => USER
, tabname => 'CHILD_TABLE'
, estimate_percent => 100
, method_opt => 'for all columns size auto'
, cascade => TRUE); end;

Case 1

explain plan for
select count(*)
from child_table;

-- index fast full scan (unique) on child_pk - cost 58 - great

explain plan for
select count(*)
from child_table
where parent_id is not null;

-- index fast full scan on child_table_parent_i - cost 62

Q.1: Why can’t this query with the NOT NULL predicate on a NOT NULL column eliminate the predicate – and use the pk index instead?

Case 2

explain plan for
select count(*)
from parent_table inner join child_table
on (parent_table.parent_id = child_table.parent_id);

-- index fast full scan (unique) on child_pk - cost 58 - great

explain plan for
select count(*)
from parent_table inner join child_table
on (parent_table.parent_id = child_table.parent_id)
order by parent_table.padding;

-- hash join
--    index fast full scan on child_table_parent_i
--    full table scan parent_table
-- - cost 9080

Q.2: The first query eliminated the join; why can’t it eliminate the ORDER BY as well, since it’s irrelevant to the results?

Handling unique constraint violations by Hibernate

A particular table in our system is an M:M link table between Bonds and Payments, imaginatively named BOND_PAYMENTS; and to make the Java devs’ jobs easier it has a surrogate key, BOND_PAYMENT_ID. Its structure, therefore, is basically:

BOND_PAYMENTS
  (BOND_PAYMENT_ID,
   BOND_NUMBER,
   PAYMENT_ID)

This is a very simple design, and quite common in relational databases. There is a primary key constraint on BOND_PAYMENT_ID, and we’ve also added a unique constraint on (BOND_NUMBER, PAYMENT_ID), since it makes no sense to have more than one link between a Bond and a Payment.
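For reference, a minimal sketch of the table as described (column data types are assumed; the unique constraint shown here is the original one, which gets replaced by the deferrable version later in this post):

CREATE TABLE bond_payments
  (bond_payment_id NUMBER(12)   NOT NULL
  ,bond_number     VARCHAR2(20) NOT NULL  -- data type assumed
  ,payment_id      NUMBER(12)   NOT NULL
  ,CONSTRAINT bond_payment_pk PRIMARY KEY (bond_payment_id)
  ,CONSTRAINT bond_payment_uk UNIQUE (bond_number, payment_id)
  );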

The application allows a user to view all the Payments linked to a particular Bond; and it allows them to create new links, and delete existing links. Once they’ve made all their desired changes on the page, they hit “Save”, and Hibernate does its magic to run the required SQL on the database. Unfortunately, this was failing with ORA-00001: unique constraint violated.

Now, the way this page works is that it compares the old set of payments for the bond with the new target set, and Hibernate works out which records need to be deleted, which need to be inserted, and leaves the rest untouched. Unfortunately, in its infinite wisdom it does the INSERTs first, then it does the DELETEs. Apparently this order can’t be changed.

This is the cause of the unique constraint violation: if the user deletes a link to a payment, then changes their mind and re-adds a link to the same payment, Hibernate quite happily issues the INSERT for the new link before the DELETE for the old one. Since these inserts/deletes run as separate SQL statements, Oracle validates the constraint immediately on the insert, which finds the old row still present and fails.
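In raw SQL terms, the sequence Hibernate emits looks something like this (the IDs and values are purely illustrative):

-- the new link row is inserted first (new surrogate key, same natural key)...
INSERT INTO bond_payments (bond_payment_id, bond_number, payment_id)
VALUES (102, 'B1234', 5678);
-- ORA-00001 is raised here, because the old row still exists;
-- the DELETE that would have removed it never gets a chance to run:
DELETE FROM bond_payments
WHERE bond_payment_id = 101;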

We really had only two options:

  1. Make the constraint deferrable
  2. Remove the unique constraint

Option 2 was not very palatable, because the constraint provides excellent protection from nasty application bugs that might allow inconsistent data to be saved. We went with option 1.

ALTER TABLE bond_payments ADD
  CONSTRAINT bond_payment_uk UNIQUE (bond_number, payment_id)
  DEFERRABLE INITIALLY DEFERRED;

This solved it, with no changes required to the application. If a bug in the application were to cause it to try to insert a duplicate row, it would fail with ORA-02091 (transaction rolled back) and ORA-00001 (unique constraint violated) when the session COMMITs.
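With the deferred constraint in place, the same insert-then-delete sequence now goes through cleanly (again, the values are illustrative); the constraint is only checked at COMMIT, by which time the duplicate is gone:

INSERT INTO bond_payments (bond_payment_id, bond_number, payment_id)
VALUES (102, 'B1234', 5678);   -- no error here; the check is deferred
DELETE FROM bond_payments
WHERE bond_payment_id = 101;   -- old row removed
COMMIT;                        -- constraint validated here, and passes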

The only downside is that the index created to police this constraint is now a non-unique index, so it may be somewhat less efficient for queries. We decided this was an acceptable trade-off in this particular case.

If you know of any other options that we should have considered, let me know :)