Too much TABLE ACCESS FULL in your Oracle Database? SQL elapsed times too slow for the demand? Plenty of Buffer Cache available for a temporary fix? Then you may want to consider Automatic Big Table Caching. Usually, Oracle only caches small tables in full. Big tables only occupy the Buffer Cache with the current chunk of blocks being read (depending on the access method). Oracle 12c's Automatic Big Table Caching reserves a part (by percent) of the Buffer Cache for full table scans; its filling priority is based on a heat map for segments: the more full table scans a segment receives, the higher its "temperature" gets, and the higher its caching priority becomes. I calculated the target size of the cache simply from the size of the segment I hoped to get cached.
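As a sketch of that sizing step: the following query derives a candidate percentage from the segment size relative to the current Buffer Cache size. The owner SCOTT and segment MY_BIG_TABLE are placeholders for your own table:

```sql
-- Sketch: suggested db_big_table_cache_percent_target for one segment.
-- SCOTT / MY_BIG_TABLE are hypothetical; substitute your own owner and table.
select round((s.bytes / c.bytes) * 100) + 1 as suggested_pct
  from dba_segments s,
       (select bytes from v$sgastat where name = 'buffer_cache') c
 where s.owner = 'SCOTT'
   and s.segment_name = 'MY_BIG_TABLE';
```

The `+ 1` just rounds up a little headroom; whether that is enough depends on how many other segments compete for the big table cache.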
Activation is simple and needs no restart:
alter system set db_big_table_cache_percent_target=20 scope=memory;
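To verify that the cache is active and sized as requested, you can query V$BT_SCAN_PROPERTIES; and setting the parameter back to 0 switches the feature off again, likewise without a restart:

```sql
-- Show the big table cache target and current state:
select * from v$bt_scan_properties;

-- Switch the feature off again (also needs no restart):
alter system set db_big_table_cache_percent_target=0 scope=memory;
```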
See the success:
select * from V$BT_SCAN_OBJ_TEMPS;
And after a while, you can see which segments are operated using the cache, and why (by temperature):
select ot.*,
       round((ot.cached_in_mem / ot.size_in_blks) * 100, 0) as pct_in_memory,
       o.*
  from dba_objects o,
       V$BT_SCAN_OBJ_TEMPS ot
 where ot.dataobj# = o.object_id
 order by ot.temperature desc;
That’s amazing, so simple, so intriguing, and so SEDUCTIVE…! But after all it’s still just a full table scan, and if we can get rid of it, we should get rid of it, at least in OLTP environments. As a quick fix, though (instead of hinting every statement), I think Automatic Big Table Caching has real value.
Keep your eyes wide open and your head on