Archive for the 'Performance' Category

Oracle: Create a Materialized View with Fast Refresh on commit and debug upcoming errors

Sometimes our users and customers need rapid access to complex data structures. Materialized Views in their various forms are a very potent way to shift effort from query time to the insert/update time of the base data. That’s the nature of a Materialized View: one way or another, we need to put the data into a redundant structure. With Fast Refresh, this path goes through Materialized View Logs, causing additional round trips and buffer modifications during transactions, or more precisely, during DML operations.

Anyway, sometimes it’s just the best way to solve a problem, so here is how to create a Materialized View with Fast Refresh on commit in Oracle Database 12.1 and later, including debugging with DBMS_MVIEW.EXPLAIN_MVIEW.

Create a Materialized View with Fast Refresh on commit

Let’s drop the Base View first

drop view MYVIEW;

Define a Base View (not mandatory; direct queries against the base tables would also work, even better):

create view MYVIEW as
select a.id    as a_id,    -- column aliases avoid ORA-00957 (duplicate column names)
       b.id    as b_id,
       c.id    as c_id,
       a.rowid as a_rowid, -- ROWID columns are necessary if a table has no Primary Key Constraint defined
       b.rowid as b_rowid,
       c.rowid as c_rowid
  from MYTABLE_A a, MYTABLE_B b, MYTABLE_C c
 where a.id = b.aid
   and b.id = c.bid
;
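
For completeness, here is a minimal sketch of the remaining steps, under my own assumptions (MYMV as the name of the Materialized View, and the MV query referencing the base tables directly rather than the view, which is the safer route for fast refresh); the full walk-through is behind the link below:

-- Materialized View Logs with ROWID are required for a fast refreshable join MV
create materialized view log on MYTABLE_A with rowid;
create materialized view log on MYTABLE_B with rowid;
create materialized view log on MYTABLE_C with rowid;

-- The Materialized View itself, refreshed fast on commit
create materialized view MYMV
  refresh fast on commit
as
select a.id    as a_id,
       b.id    as b_id,
       c.id    as c_id,
       a.rowid as a_rowid,
       b.rowid as b_rowid,
       c.rowid as c_rowid
  from MYTABLE_A a, MYTABLE_B b, MYTABLE_C c
 where a.id = b.aid
   and b.id = c.bid;

-- If the CREATE fails, ask DBMS_MVIEW.EXPLAIN_MVIEW why
-- (MV_CAPABILITIES_TABLE is created by ?/rdbms/admin/utlxmv.sql)
exec dbms_mview.explain_mview('MYMV');

select capability_name, possible, msgtxt
  from mv_capabilities_table
 order by seq;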

Read more…



Restarting Oracle for Performance is like a Coitus for Virginity

Today I had another case of that … a total breakdown of a live environment caused by an infrastructure defect, bad enough. The customer rushed to bring up the database and all application services, just to see thousands of locks and hanging transactions. Big panic, frequent restarts of databases and applications. Things got worse, and finally they called me.

Told them I’d manage it and asked them to shut down all apps until I told them to start up again. Started the DB, read the newspaper for 10 minutes, then they started calling me. Told them I was working on it. After 20 minutes of reading the paper and answering calls, I asked them to start the apps. => Voila, all was well.

 

The answer? Just let your SMON do its job.

 

Ok, this initially was not exactly about performance, but the initial mistake was old Windows-like thinking: reboot it, and the reboot will activate magic self-healing powers. Wrong. Crashing Oracle (which is what the service infrastructure did in this case) is a very bad thing to do, and even worse, it “registers a mortgage” on the system: as soon as you later start up an instance for this database, it not only has to do crash recovery, but ALSO has to roll back all open transactions after the DB was opened. The latter was the problem in the case above: SMON kicks in late. The swarm of busy application bees buzzing around after such a crash now has to share resources with SMON, and has to wait for still-open row locks, spinning on them, eating more resources… and now it is a performance issue – a pointless, home-made performance catastrophe.
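
Instead of hitting the reset button again, you can watch SMON’s rollback progress and simply wait. A minimal sketch of such a check (my own addition, not from the original post):

-- Progress of deferred transaction recovery / rollback after a crash
select state,
       undoblocksdone,
       undoblockstotal,
       round(undoblocksdone / nullif(undoblockstotal, 0) * 100, 1) as pct_done
  from v$fast_start_transactions;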

Keep calm
Martin



Oracle: How to identify SQL doing TABLE ACCESS FULL on a given partition

In my last post, I described a quick show case for Oracle 12c Automatic Big Table Caching. ABTC is just a crutch, but nevertheless a useful one. The biggest help Automatic Big Table Caching offers for “sustainable” database performance management is the temperature of segments, heated up by TABLE ACCESS FULL. In my case, after enabling ABTC, the hottest segment was one I did not expect: a sink containing lots of passive data, a LIST partition holding the passive status. (Not) nice, and now the next question is: who does that?

Finding SQL_IDs doing Full Table Scans on a given table is no magic; V$SQL_PLAN allows it in a very simple way:

select sql_id, plan_hash_value
  from v$sql_plan
 where object_owner = 'ME'
   and operation    = 'TABLE ACCESS'
   and options like '%FULL%'
   and object_name  = 'MYTABLE'
 group by sql_id, plan_hash_value
 order by sql_id, plan_hash_value;

But the challenge is to find every TABLE ACCESS FULL that comes from a PARTITION LIST operation. Thus, we need a query that allows looking back one row in V$SQL_PLAN to decide whether the TABLE ACCESS FULL is relevant.
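
One possible way to do that is a sketch using the LAG() analytic function (my own query with 'ME' and 'MYTABLE' as placeholders, not necessarily the solution behind the link below):

-- Look back one row per cursor to see whether the TABLE ACCESS FULL
-- sits directly below a PARTITION LIST step
select sql_id, plan_hash_value, object_name, prev_operation
  from (select sql_id,
               plan_hash_value,
               child_number,
               operation,
               options,
               object_owner,
               object_name,
               lag(operation) over (partition by sql_id, child_number
                                    order by id) as prev_operation
          from v$sql_plan)
 where operation = 'TABLE ACCESS'
   and options like '%FULL%'
   and object_owner = 'ME'
   and object_name = 'MYTABLE'
   and prev_operation = 'PARTITION LIST'
 group by sql_id, plan_hash_value, object_name, prev_operation;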

How to identify SQL doing TABLE ACCESS FULL on a given partition

Read more…



Oracle Automatic Big Table Caching – an improvement, not a fix

Too much TABLE ACCESS FULL in your Oracle Database? Thus, SQL elapsed time too slow for the demand? Plenty of Buffer Cache available to create a temporary fix? Then maybe you want to consider Automatic Big Table Caching. Usually, Oracle only does full table caching for small tables. Big ones will only use the Buffer Cache for the current chunk of blocks being transported (depending on the access method). Now Oracle 12c’s Automatic Big Table Caching will reserve a part (by percent) of the Buffer Cache for full table scans; its filling priority is based on a heat map for segments: the more full table scans you have, the higher the “temperature” gets, and the higher the priority becomes. I calculated the target size of the cache by simply using the size of the segment I hoped to get cached.
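
As a rough aid for picking the percentage (my own sketch with placeholder owner/segment names, not from the post), you can compare the segment you hope to cache with the current size of the Buffer Cache:

-- Which db_big_table_cache_percent_target would roughly fit the segment?
select round(seg.bytes / cache.bytes * 100) as pct_needed
  from (select sum(bytes) as bytes
          from dba_segments
         where owner = 'ME' and segment_name = 'MYTABLE') seg,
       (select sum(bytes) as bytes
          from v$sgastat
         where name = 'buffer_cache') cache;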

Activation is simple and needs no restart:

alter system set db_big_table_cache_percent_target=20 scope=memory;

See the success:

select * from V$BT_SCAN_OBJ_TEMPS;

And after a while, you can see which segments are operated using the cache, and why (by temperature):

select ot.*,
       round((ot.cached_in_mem / ot.size_in_blks) * 100, 0) as pct_in_memory,
       o.*
  from dba_objects o,
       V$BT_SCAN_OBJ_TEMPS ot
 where ot.dataobj# = o.data_object_id -- DATAOBJ# is the data object number
 order by ot.temperature desc;

That’s amazing, and so simple and so intriguing and so SEDUCTIVE…! But after all, it’s just a full table scan, and if we can get rid of it, we should still get rid of it, at least in OLTP environments. But for a quick fix (instead of using a hint), I think Automatic Big Table Caching has real value.

Keep your eyes wide open and your head on
Martin Klier



Oracle SQL showing current expensive queries plus most expensive object from execution plan

Sometimes it’s useful to see what’s currently going on in your Oracle Database. This query is meant to help with that task, which haunts every DBA from time to time. It only uses v$sql and v$sql_plan, so there is no need for Enterprise Edition and the Diagnostics Pack. A rough sketch of the idea is shown after the lists below.

Key features:

  • finds the top buffer getters per execution from v$sql
  • adds the most costly object access from the corresponding execution plan (from all cursors in question), using v$sql_plan
  • shows possible duplicates of the latter with the same cost as a list aggregate, to reduce the number of lines
  • shows the percentage of cost impact the most expensive object access has, relative to the overall cost of the execution plan

Exclusions:

  • does not show plans without SQL, nor SQL without a plan
  • has a limited view on object impact: object accesses are rated, but not the bigger join costs etc. they may cause
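
To make the idea a bit more concrete, here is a rough sketch along those lines (my own query, not the one behind the link below; the top-20 cutoff and the column choices are assumptions):

-- Top SQL by buffer gets per execution, plus the most expensive object access
-- from its plan; objects tied at the same cost are aggregated into one line
with plan_objects as (
  select sql_id,
         child_number,
         object_owner,
         object_name,
         cost,
         max(case when object_name is not null then cost end)
             over (partition by sql_id, child_number) as max_obj_cost,
         max(cost) over (partition by sql_id, child_number) as plan_cost
    from v$sql_plan
)
select s.sql_id,
       round(s.buffer_gets / s.executions) as gets_per_exec,
       p.max_obj_cost,
       round(p.max_obj_cost / nullif(p.plan_cost, 0) * 100, 1) as pct_of_plan_cost,
       listagg(p.object_owner || '.' || p.object_name, ', ')
           within group (order by p.object_name) as most_expensive_objects
  from v$sql s
  join plan_objects p
    on p.sql_id = s.sql_id
   and p.child_number = s.child_number
 where s.executions > 0
   and p.object_name is not null
   and p.cost = p.max_obj_cost
 group by s.sql_id, s.buffer_gets, s.executions, p.max_obj_cost, p.plan_cost
 order by gets_per_exec desc
 fetch first 20 rows only;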

Read more…



