Psychology and Large Tables
Today a project manager asked me about a change to a query being implemented for a search function on our 11gR2 database. His concern was that the new query had a plan involving several full table scans, whereas the old version used primarily index range scans.
The query used to look something like this:
SELECT score(1) AS score, a.*
FROM   (SELECT *
        FROM   entity_details ed
        JOIN   associations a ON ed.entity_id = a.entity_id
        LEFT JOIN addresses ad ON ed.entity_id = ad.entity_id
       ) a
WHERE  CONTAINS(a.entity_name, :criterion, 1) > 0
ORDER BY a.entity_name;
Its plan looked like this:
SELECT STATEMENT ALL_ROWS cost=52 card=13
  SORT ORDER BY cost=52 card=13
    NESTED LOOPS OUTER cost=51 card=13
      NESTED LOOPS cost=31 card=12
        TABLE ACCESS BY INDEX ROWID entity_details cost=7 card=12
          DOMAIN INDEX entity_name_ic cost=4
        TABLE ACCESS BY INDEX ROWID associations cost=2 card=1
          INDEX RANGE SCAN asso_entity_i cost=1 card=1
      TABLE ACCESS BY INDEX ROWID addresses cost=2 card=1
        INDEX RANGE SCAN address_entity_fk_i cost=1 card=1
The new query had an additional predicate:
SELECT score(1) AS score, a.*
FROM   (SELECT *
        FROM   entity_details ed
        JOIN   associations a ON ed.entity_id = a.entity_id
        LEFT JOIN addresses ad ON ed.entity_id = ad.entity_id
       ) a
WHERE  CONTAINS(a.entity_name, :criterion, 1) > 0
OR     a.entity_name LIKE '%' || UPPER(:criterion) || '%'
ORDER BY a.entity_name;
Whereas the plan for the original query used index range scans on all the tables, the plan for the new query involved full table scans on associations and addresses:
SELECT STATEMENT ALL_ROWS cost=348 card=1269
  SORT ORDER BY cost=348 card=1269
    HASH JOIN OUTER cost=347 card=1269
      HASH JOIN cost=244 card=1187
        TABLE ACCESS BY INDEX ROWID entity_details cost=107 card=1187
          BITMAP CONVERSION TO ROWIDS
            BITMAP OR
              BITMAP CONVERSION FROM ROWIDS
                SORT ORDER BY
                  INDEX RANGE SCAN entity_name_i cost=4
              BITMAP CONVERSION FROM ROWIDS
                SORT ORDER BY
                  DOMAIN INDEX entity_name_ic cost=4
        TABLE ACCESS FULL association cost=136 card=2351
      TABLE ACCESS FULL addresses cost=102 card=18560
Initial testing in dev revealed no noticeable performance difference between the two queries, so he was just concerned about the impact of the full table scans on the system.
As you can see, the new plan was still using the domain index, as well as using the ordinary index on entity_name; concatenating the two sets of ROWIDs (BITMAP OR) and then accessing the table as before. Previously, the cardinality estimate for just the CONTAINS predicate was 12 (side note: I’m curious as to how predicates using context indexes are costed – anyone know any details or references?); now, the total cardinality estimate for entity_details is 1187. The cardinality estimate if we just did the LIKE predicate is 1176 (which is simply 5% of the number of rows in the table). The higher cardinality has pushed the rest of the query away from index accesses for association and addresses, towards hash joins and full table scans on those tables.
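As an aside, that 5% figure is easy to sanity-check against the table statistics; a minimal sketch (assuming statistics have been gathered on entity_details) that should reproduce the 1176 estimate:

-- The 1176 estimate for the LIKE predicate is simply 5% of the table's
-- row count as recorded in the optimizer statistics.
SELECT num_rows,
       ROUND(num_rows * 0.05) AS approx_like_cardinality
FROM   user_tables
WHERE  table_name = 'ENTITY_DETAILS';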
If I override the cardinality estimate with a lower figure, e.g.
SELECT /*+CARDINALITY(a.ed 50)*/ score(1) AS score, a.* ...
the query plan changes into a typical nested-loops-with-index-access one:
SELECT STATEMENT ALL_ROWS cost=295 card=53
  SORT ORDER BY cost=295 card=53
    NESTED LOOPS OUTER cost=294 card=53
      NESTED LOOPS cost=207 card=50
        TABLE ACCESS BY INDEX ROWID entity_details cost=107 card=50
          BITMAP CONVERSION TO ROWIDS
            BITMAP OR
              BITMAP CONVERSION FROM ROWIDS
                SORT ORDER BY
                  INDEX RANGE SCAN entity_name_i cost=4
              BITMAP CONVERSION FROM ROWIDS
                SORT ORDER BY
                  DOMAIN INDEX entity_name_ic cost=4
        TABLE ACCESS BY INDEX ROWID association cost=2 card=1
          INDEX RANGE SCAN asso_entity_i cost=1 card=1
      TABLE ACCESS BY INDEX ROWID addresses cost=2 card=1
        INDEX RANGE SCAN address_entity_fk_i cost=1 card=1
Substituting various cardinalities reveals the “tipping point” for this particular instance (with its particular set of statistics and optimizer parameters) to be around 50.
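For anyone who wants to repeat the experiment, a minimal sketch of the approach using EXPLAIN PLAN and DBMS_XPLAN (the query is the new one from above; only the hinted cardinality changes between runs):

EXPLAIN PLAN FOR
SELECT /*+CARDINALITY(a.ed 50)*/ score(1) AS score, a.*
FROM   (SELECT *
        FROM   entity_details ed
        JOIN   associations a ON ed.entity_id = a.entity_id
        LEFT JOIN addresses ad ON ed.entity_id = ad.entity_id
       ) a
WHERE  CONTAINS(a.entity_name, :criterion, 1) > 0
OR     a.entity_name LIKE '%' || UPPER(:criterion) || '%'
ORDER BY a.entity_name;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);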
The cardinality estimates for the full table scans should be the major giveaway: they’re all less than 20,000. In fact, the total number of records in each of these tables does not exceed 25,000, and our dev instance has a full set of data.
I advised this manager not to worry about the new plan. These tables are unlikely to grow beyond 100,000 rows for the expected life of the product (they only hold records for about 25,000 associations, entered over the past 5 years), and the entire tables fit in under 500 blocks. With a db_file_multiblock_read_count of 128, it’s likely that the first query of the day will load all the blocks of the tables into the buffer cache with just a dozen or so reads.
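Those size figures come straight from the data dictionary; a quick sketch of the kind of check involved (assuming the optimizer statistics are reasonably current):

-- Row counts and block counts from the optimizer statistics.
SELECT table_name, num_rows, blocks
FROM   user_tables
WHERE  table_name IN ('ENTITY_DETAILS', 'ASSOCIATIONS', 'ADDRESSES');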
If this query were to use index range scans plus table accesses for each search, performance would be only marginally better (and probably imperceptibly so), at the cost of slower queries whenever users enter poor search criteria. With full table scans, on the other hand, even the worst search criteria will typically return results in less than 5 seconds.
We’re so accustomed to tables like “entities” and “addresses” having millions or tens of millions of rows that we instinctively recoil from full table scans on them; but in this instance at least, to Oracle these tables are tiny, and for tiny tables full table scans are often the better choice.
Gary
7 May 2011 - 1:18 pm
Curious, because it’s been a while since I used Oracle Text, but it seems the CONTAINS and LIKE are doing a similar job here. I don’t see the point of both predicates in the same query. A simple text criterion should work with both, and one using the CONTAINS syntax wouldn’t match with LIKE.
I’m not sure how the CONTAINS estimate is made. I know Oracle Text is built on the Oracle Data Cartridge structure and that would get into the extensible optimizer. So there’d be some PL/SQL function that looks at the criterion parameter and returns a selectivity and that would be applied to the number of rows (not sure whether it is the number of rows in the table or the index).
Are the estimates generally accurate? If they were underestimating, the original ‘index access’ path may have had a higher cost in practice than the estimates showed.
Also, I think those bitmap conversion plans chew a bit of CPU. Not sure whether that might be an issue in your environment.
Jeffrey Kemp
7 May 2011 - 2:30 pm
CONTAINS and LIKE do provide similar results; however, in this case the client wants a full set of results – CONTAINS provides most of the results they want, but misses some (e.g. parts of words) that LIKE finds, and vice versa. For them, a missed entry is worse than having too many results.
I haven’t even gotten as far as checking the accuracy of the estimates – at the moment I don’t see any need for further tuning (all the numbers are so low that any tuning is unlikely to provide any significant gains), and we don’t even know yet what sort of searches the users are likely to do. Also, there are only a dozen or so users in total, so I expect the load on the server will be minimal.
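If it ever does become worth checking, comparing estimated vs actual rows is just a matter of running the query with the GATHER_PLAN_STATISTICS hint and then looking at DBMS_XPLAN.DISPLAY_CURSOR; a rough sketch:

SELECT /*+ GATHER_PLAN_STATISTICS */ score(1) AS score, a.*
FROM   (SELECT *
        FROM   entity_details ed
        JOIN   associations a ON ed.entity_id = a.entity_id
        LEFT JOIN addresses ad ON ed.entity_id = ad.entity_id
       ) a
WHERE  CONTAINS(a.entity_name, :criterion, 1) > 0
OR     a.entity_name LIKE '%' || UPPER(:criterion) || '%'
ORDER BY a.entity_name;

-- then, in the same session, compare E-Rows (estimated) with A-Rows (actual):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));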
Sergey
17 May 2011 - 10:10 am
Jeff,
As regards costing of predicates, there is Oracle documentation here:
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10765/ext_optimizer.htm#i1006541
You might’ve already come across it, but just in case.
Jeffrey Kemp
17 May 2011 - 10:14 am
Ah – I failed to RTFM… thanks 🙂 so it’ll be down to whether Oracle supplies an ODCIStats package for context indexes.
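A quick way to find out would be to look for a statistics type associated with the CTXSYS objects in the *_ASSOCIATIONS data dictionary views; something like this sketch (not to be confused with our application table of the same name):

-- Dictionary view listing ASSOCIATE STATISTICS definitions
-- (unrelated to the application table called "associations").
SELECT *
FROM   all_associations
WHERE  object_owner = 'CTXSYS';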
Sergey
17 May 2011 - 10:49 am
Yes, the task seems to be to trace CONTAINS down to the level of the ODCIStats interface. Looking forward to your research 🙂