DOAG Noon2Noon Event Nürnberg / MySQL vs. Oracle – Review

It was time to try something new in our DOAG Database Community. And please, please, not just another lecture-style conference: twenty people facing forward, one facing back, like an eight with coxswain. I don’t know whether DOAG invented it, but it was a success: the Noon2Noon event.

How does it work? It’s like 24 hours of BarCamp: starting with lunch, built around a topic of the day, running overnight with a winter barbecue, and elder-friendly (meaning a hotel with beds, no after-midnight hacking :) ). Ah, I forgot to mention: one single talk completely off-topic, but somehow related to our work.

This first time, the topic was “MySQL versus Oracle Database”. Johannes Ahrends and Oli Sennhauser, as “headliners”, ignited discussions about features, technologies and strategies known from Oracle, and whether (and how) they exist in MySQL – and vice versa, but less of that. Participants came from all over Germany, plus Denmark and Switzerland: end users, consultants, technocrats, “boys” who charge ahead and fail, and “girls” who test first and succeed…

I greatly enjoyed the open format – listening, talking, drawing, discussing, swaggering, ignoring, pushing, pulling – the full repertoire. :)

From the technological side, it was great to learn how MySQL works – consistent reads, clustering, and lots of other things we never imagined could be done differently from the way the Oracle Database does them. And that MyISAM isn’t the norm, just another PITA. :) Go for InnoDB. What is MariaDB? And what the fork is Percona Server?
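If you come from the Oracle side and inherit a MySQL instance, a quick first check is which storage engine the tables actually use – only InnoDB gives you the MVCC consistent reads and row-level locking an Oracle person expects by default. A minimal sketch, with mydb and mytable as placeholder names:

```sql
-- List the storage engine per table (mydb is a placeholder schema name)
SELECT table_name, engine
FROM   information_schema.tables
WHERE  table_schema = 'mydb';

-- Move a single table from MyISAM to InnoDB
-- (this rebuilds the table; try it on a copy first)
ALTER TABLE mydb.mytable ENGINE = InnoDB;
```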


Oli swiss-talks about MySQL.

Read more…

Oracle 12c Multitenant: impdp fails w/ ORA-31625 and ORA-01031 because of Database Vault

Things are different in Oracle Database 12c with the multitenant option. My most recent example:

Running as SYSTEM, I tried to import a schema (new name “NEWSCHEMA”) with Data Pump IMPDP and REMAP_SCHEMA into the same pluggable database it had been exported from with EXPDP immediately before (old name “OLDSCHEMA”). I do things like that with DBA permissions, since my users hold lots of grants and objects in their schemas, and when a DBA does the export and import, everything comes out right. (See the details for commands and parfiles below.)

But IMPDP fails with
ORA-39083: Object type INDEX failed to create with error:
ORA-31625: Schema NEWSCHEMA is needed to import this object, but is unaccessible
ORA-01031: insufficient privileges

So what? I’m SYSTEM and thus a DBA, and the user NEWSCHEMA exists. And SYSTEM of course has the “IMPORT FULL DATABASE” privilege – it’s a DBA! Or so you might think.
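As the title gives away, the blocker here is Database Vault: in a DV-enabled database, even SYSTEM needs an explicit Data Pump authorization. A sketch of the usual remedy – run it as a Database Vault owner/administrator account, and double-check the exact package signature against your 12c Database Vault documentation:

```sql
-- Authorize SYSTEM to run Data Pump against schema NEWSCHEMA
-- (needs the DV_OWNER or DV_ADMIN role; verify the signature in the docs)
BEGIN
  DBMS_MACADM.AUTHORIZE_DATAPUMP_USER('SYSTEM', 'NEWSCHEMA');
END;
/
```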
Read more…

Oracle: Did my SQL get worse over time? (AWR query)

Sometimes we get statements to look at and are told “it’s getting worse and worse”. Since DBAs are well advised not to take anything for granted, and to believe only what they see with their own eyes, here is a SQL query against AWR that shows buffer gets per minute, over time.

select t.snap_id,
       s.begin_interval_time,
       round(t.buffer_gets_delta /
             ( (extract (day    from (s.end_interval_time-s.begin_interval_time))*24*60)+
               (extract (hour   from (s.end_interval_time-s.begin_interval_time))*60)+
               (extract (minute from (s.end_interval_time-s.begin_interval_time)))+
               (extract (second from (s.end_interval_time-s.begin_interval_time))/60)
             )) as buffer_gets_per_minute
from dba_hist_sqlstat t,
     dba_hist_snapshot s
where t.snap_id = s.snap_id
  and t.dbid = s.dbid
  and t.instance_number = s.instance_number
  and t.sql_id = 'vwxyz'
  and s.begin_interval_time between sysdate-90 and sysdate
order by t.snap_id

Pseudocode explanation:
Get the statistics for one SQL ID from the historical SQL STAT view. Join them to the snapshot details to get the real-world date/time of each interval. Since nobody knows how long the AWR snapshot interval was at the time of interest, normalize BUFFER GETS to a per-minute rate by dividing each BUFFER GETS DELTA by the length of its interval.
Configure the SQL_ID and the time range to be reviewed in the WHERE clause.

You can create a chart like this by exporting the result to the spreadsheet software of your choice:


Basically, this concept also works with all the other columns available in dba_hist_sqlstat, such as CPU consumption, interconnect load, disk I/O etc.
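For example, swapping the delta column turns the same skeleton into a CPU profile – a sketch, keeping the placeholder SQL_ID (note that CPU_TIME_DELTA is stored in microseconds, so this yields CPU seconds per minute):

```sql
-- Same join as above, but tracking CPU time instead of buffer gets
select t.snap_id,
       s.begin_interval_time,
       round(t.cpu_time_delta/1e6 /   -- microseconds -> seconds
             ( (extract (day    from (s.end_interval_time-s.begin_interval_time))*24*60)+
               (extract (hour   from (s.end_interval_time-s.begin_interval_time))*60)+
               (extract (minute from (s.end_interval_time-s.begin_interval_time)))+
               (extract (second from (s.end_interval_time-s.begin_interval_time))/60)
             ), 2) as cpu_seconds_per_minute
from dba_hist_sqlstat t,
     dba_hist_snapshot s
where t.snap_id = s.snap_id
  and t.dbid = s.dbid
  and t.instance_number = s.instance_number
  and t.sql_id = 'vwxyz'
  and s.begin_interval_time between sysdate-90 and sysdate
order by t.snap_id;
```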

“Everybody lies”, says Dr. House :)

PS: Please keep in mind that any system you run this query on needs Oracle’s Diagnostics Pack licensed on top of Enterprise Edition.

Martin Klier in Interview: Oracle Standard Edition

A while ago, I was interviewed about Oracle’s Standard Edition and database system migrations. The material was published just recently, so I’d like to share it. Enjoy, and if there are questions, just let me know!


I have to add a corrigendum: SE RAC is not – at least not at the moment – limited to a number of nodes. It’s currently limited by CPU sockets, four of them to be exact. So a four-node SE RAC is possible, if I did not completely misunderstand the licensing policies.

Disclaimer: The licensing and pricing statements here are my OPINIONS, not a reliable source for making decisions or confronting Oracle with. :) If you need tailored licensing information, feel free to email me for advice.


DOAG 2014 Presentation and Whitepaper online: Database I/O


My #DOAG2014 presentation and whitepaper are online now!

“Oracle Core für Einsteiger: Datenbank I/O”





Thank you all for attending!

Martin Klier

It’s #DOAG2014 time!

Hello World!

It’s time for all Oracle folks to congregate in Nuremberg for DOAG Konferenz 2014!


I’d love to meet and greet you there – maybe you are also interested in my talk for database rookies: “Oracle Core für Einsteiger: Database I/O”.

Hope to have a great week with you!
Martin Klier


Visited Germany’s first Spatial Database, Size 26 kiloStones

Last week, I had the chance to visit Bavaria’s (and thus Germany’s) oldest spatial database. It’s buried deep below Munich and contains all the geo information about Bavaria in scale 1:5000, some of it in 1:2500. It was introduced in 1808 and was in use until 1950. That’s also the current state of the data.

Each of the 26,000 official maps is painted in oil, mirror-inverted, on polished lime sand brick. Each “page” is 1m x 1m (3.3ft x 3.3ft) in size and 4-6cm (1.6-2.4in) thick, and each stone “disk” weighs approximately 70kg (154lbs). That makes the database 26 kilostones in size, with a total “dump” of 1,820,000 kg or roughly 4,012,000 lbs.

Read more…

New German Linux Forum

In the last few weeks, some folks have been busy building a new German Linux forum, “”, since the predecessor was systematically ruined by its commercial owners.

Jean (wdp) and Hendrik (Nilpferd) in particular invested a lot of time and money into building the new environment. The new forum is completely free of ads and commercials, and the content is QA’ed by a team of experienced Linux admins acting as moderators.

Please hang out there and help us (re)build a cool community.

Martin Klier (Usn)

Performance is rarely an accident (Deutsch)

Some time ago, I saw a great presentation by Cary Millsap: “Thinking Clearly About Performance”. It was obviously relevant for our internal developers, so he unhesitatingly granted me permission to reproduce some of his ideas for us. Cary, thank you very much!

Here you can see what I made out of the topic, mostly for visualization purposes.



Martin Klier: Performance is Rarely an Accident (pdf)

As I said, the intention was to show development teams how beneficial it is to think about performance at all, and that you need code instrumentation (i.e. runtime meta information about application behavior) to get better.

I hope you enjoy the slide deck.
Martin Klier

Edit: Exchanged the basic version for Second Edition in Wiesau and Munich

Oracle 12c InMemory – don’t stop thinking about performance

Oracle has released its new database version including the famous In-Memory column store. The In-Memory option promises a big advantage for OLAP-like workloads by keeping table contents in a columnar in-memory structure. Caching data in memory is not new – databases have done that for decades – the interesting part is “columnar”. There’s plenty written about it on the net and in the Oracle Concepts Guide, no need to reproduce that here.

But even though the new feature is very young, we can already see a “you can stop using your brain, we have a new catch-them-all feature” attitude – at least that’s how the marketing sounds. It’s quite easy to show that this is not true. As with many other features we have received over the years, using Oracle In-Memory still needs a concept, designed by an architect who knows the ups and downs.

What I can see from playing with Oracle In-Memory is that it’s only beneficial when all the data you (might) have to query is already in the columnar cache (the Oracle term is “populated”). If not, query response times don’t improve much. Let me show you my test case.
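For reference, a minimal sketch of such a setup and the population check – sales is a placeholder table name, the PRIORITY clause is optional, and the syntax should be verified against the 12.1 documentation:

```sql
-- Mark a table for the In-Memory column store
-- (sales is a placeholder table name)
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- A full scan triggers population if it has not happened yet
SELECT /*+ FULL(s) */ COUNT(*) FROM sales s;

-- Check population state: BYTES_NOT_POPULATED should reach 0
-- before you benchmark, otherwise you measure a mixed access path
SELECT segment_name, populate_status, bytes_not_populated
FROM   v$im_segments;
```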

Read more…