Effecting Oracle Miracles With Standard Edition And Statspack (=without AWR)

Most of the time, I work with Oracle Standard Edition database systems, with no chance to use AWR, ASH or the Diagnostics Pack in general. But performance problems have to be analysed in budget environments as well, and many people complain about Oracle’s licensing policy in this area. In my experience, though, most response time issues can be solved with SE too – you may just have to go without a GUI and know a little more than the average click-and-buy EE DBA. (pardon!) :)

Today and from my position I can’t change the first fact for you, but maybe the latter: here comes my personal Oracle SE grimoire – not much, but it works like a charm.

Your personal environment

I assume you are using SQL*Plus or SQL Developer on your workstation, with a direct TNS/JDBC connection to the database in question.

Database settings

Especially if you can’t change the SQL in question or run it with a hint added (/*+ gather_plan_statistics */), it can be necessary to switch on STATISTICS_LEVEL=ALL temporarily. This gives you additional, very useful runtime information. I assume you did so for the next steps.
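For illustration, the two approaches look like this (the query is a made-up example; the ALTER statements are standard Oracle syntax):

```sql
-- Per statement: collect row-source statistics for this cursor only
-- (EMPLOYEE and SALARY are invented example names)
SELECT /*+ gather_plan_statistics */ name, salary
  FROM employee
 WHERE salary > 100000;

-- Per session: affects only your own connection
ALTER SESSION SET statistics_level = ALL;

-- Instance-wide: use temporarily, it adds overhead on busy systems
ALTER SYSTEM SET statistics_level = ALL;
```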

Statspack

  1. Create a PERFSTAT tablespace.
  2. Install Statspack with $ORACLE_HOME/rdbms/admin/spcreate.sql and choose the PERFSTAT tablespace when prompted.
  3. Create a snapshot job with DBMS_SCHEDULER. I don’t like the old-style DBMS_JOB stuff; especially in RAC environments, the new 10g+ scheduler is top.
    Use an interval of 10 minutes and a snapshot level of 5 or 6 to get enough – but not too much – information into your repository. Go up to ten only if necessary.
  4. Create a clean-up job with DBMS_SCHEDULER that purges snapshots after 5 days. (Scripts for creating both jobs can be found in a text file here.)
  5. Extract the information you need from a report generated with ?/rdbms/admin/spreport. For example, look for the SQL causing the most buffer gets (indicating a bad plan or many executions of the statement). If you don’t have an instance or database configuration issue, you will end up with an OLD_HASH_VALUE here, since Statspack does not give us the SQL_ID we know from AWR. But that’s just a little inconvenience, nothing more.
  6. Keep in mind that Adaptive Cursor Sharing only takes effect once the statement has been executed three times. In that case, you can see both plans: the bad one and the adapted one. Before the third execution, you will only see the old plan.
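The snapshot and clean-up jobs from steps 3 and 4 could be sketched like this (job names, schedule times and the 5-day retention are my assumptions; adjust to your environment):

```sql
BEGIN
  -- Take a level-6 Statspack snapshot every 10 minutes
  DBMS_SCHEDULER.create_job(
    job_name        => 'PERFSTAT.SP_SNAP_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN statspack.snap(i_snap_level => 6); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=10',
    enabled         => TRUE,
    comments        => 'Statspack snapshot every 10 minutes');

  -- Purge snapshots older than 5 days, once per night
  DBMS_SCHEDULER.create_job(
    job_name        => 'PERFSTAT.SP_PURGE_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN statspack.purge(i_num_days => 5); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE,
    comments        => 'Purge Statspack snapshots older than 5 days');
END;
/
```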

Analysing The Culprit

Under the assumption that the SQL causing trouble is still in the shared pool, we can have a look at all of its execution plans directly from the cursor cache. I use DBMS_XPLAN.DISPLAY_CURSOR for that, and love it: it prints all the information in a neatly formatted ASCII-art table. My script for creating the execution plan from the cursor cache is suitable for most environments and works best in SQL*Plus or SQL Developer, as described above.
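A minimal version of such a call might look like this (the SQL_ID below is a placeholder – find the real one in V$SQL first; the ALLSTATS LAST format needs the elevated statistics level from above):

```sql
-- Plan of the last statement executed in this session:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST'));

-- Or for a specific cursor from the shared pool:
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR(
    sql_id          => '1a2b3c4d5e6f7',  -- placeholder
    cursor_child_no => 0,
    format          => 'ALLSTATS LAST'));
```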

Now you should be able to see lots of information, but the most interesting parts are those provided by the elevated statistics generation (see Database Settings above).

  1. For me, the most important part is the Buffer Gets per operation. If we are talking about slow SQL, Buffer Gets will indirectly show you what makes the SQL slow. Why? Because Buffer Gets are, at the very least, RAM operations, which cause CPU load (latching, serialisation, whatever…), which costs time and affects others. This is your first and most basic indicator of what is going wrong at all.
  2. If you are unlucky, the execution plan shows an additional column with disk reads. And those are REALLY slow.
  3. The columns A-Rows and E-Rows are worth comparing: E-Rows is the number of rows the cost-based optimizer estimated for this operation; A-Rows is fact – what we really did. If the CBO estimated far too many or far too few rows, you have an additional lead towards a solution. It might be worth a glance or ten at your statistics collection concept.
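To find candidates with the most Buffer Gets directly in the cursor cache, a simple query like this can serve as a starting point (a sketch only – not tuned for RAC or very large shared pools):

```sql
-- Top 10 statements by buffer gets still present in the shared pool
SELECT *
  FROM (SELECT sql_id, child_number, buffer_gets, executions,
               ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec,
               SUBSTR(sql_text, 1, 60)                    AS sql_text
          FROM v$sql
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;
```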

Countermeasures

This is thin ice – since I don’t know your problem, I can’t tell what will help in your case. But under another assumption (wrong CBO decisions, which make up 75% of my optimisation business), I can continue.

  1. Are all your statistics “good”? That means: do they represent the CURRENT situation – were they collected at a time comparable to when users complain and plans are bad? If not, try to change that. If cardinalities are different at night but users are working during the day, it’s pointless to collect those stats at night!
  2. Do you have any selectivity that can’t be calculated by looking at single columns? The CBO can’t combine selectivities (and thus cardinalities) across multiple columns of the same table without a little help from us. Use extended statistics to fill that gap!
    Example:
    select dbms_stats.create_extended_stats(null, 'EMPLOYEE', '(NAME,SALARY)') from dual;
  3. Do you join through tables without using further fields from them? Consider creating combined indexes for both directions (for example, PrimaryKey+ForeignKey and ForeignKey+PrimaryKey).
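Step 3 could look like this for a hypothetical EMPLOYEE table joining its primary key with a department foreign key (all names invented for illustration):

```sql
-- Support the join in both driving directions with two combined indexes
CREATE INDEX emp_pk_fk_ix ON employee (employee_id, department_id);
CREATE INDEX emp_fk_pk_ix ON employee (department_id, employee_id);
```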

Conclusion

It’s not impossible to find things without GUI tools and expensive add-ons. But you should develop a feeling for issues, and maybe visualise things with pen and paper – especially in an unknown environment. Try to understand what’s going on.

Think big, start small – you can find almost anything by starting with a decent Statspack report!

 

Good luck
Martin Klier



