Last weekend, I had the chance to attend the Linux Days Chemnitz, an annual meeting of the German Linux family with around 2,500 attendees and a FANTASTIC atmosphere. It was two days of tech talks, rich nerd content and talkin’ shop.
I feel VERY proud, honoured and grateful that Oracle awarded me the Oracle ACE title in December 2014. The Oracle ACE program is a community award that encourages us to participate in, enrich, promote and organise Oracle community events.
When speaking about the community, first of all I’d like to highlight my Oracle user group in Germany, the DOAG (Deutsche Oracle Anwendergruppe). They do lots of good, educational work, lifting the fog and practising free and self-assured community work. Helping to spread this in our region, at conferences and events, is a pleasure, and I’m very proud to be part of this great team.
Due to the geographical distance a little less, but with very similar motivation and experience, I also feel connected to and involved with the U.S. counterpart, the IOUG (Independent Oracle Users Group). They are open to the international crowd, and it simply feels good to be there.
Just in case somebody cares, here’s my Oracle ACE profile.
I hope to keep up the level, and will continuously try hard to find the time to give back knowledge to the community.
The DOAG 2015 Database conference is on the horizon: June 16, 2015 in Düsseldorf, Germany.
I’m proud to announce that I will take part as a speaker again, as I have been honoured to in the years before. This year, my part will be a new beginners’ talk in German: “Oracle Core für Einsteiger: InMemory Column Store”
The talk is aimed at beginners and IT professionals who do not work as full-time DBAs but are interested in database technology, or who are looking for guidance when making decisions about the technologies, features and licences in use.
The In-Memory Column Store is a relatively new structure in Oracle Database 12c, and it is heavily promoted by the vendor. The talk aims to show how this so-called “in-memory database” is designed, how it works, and in which scenarios it can be put to good use.
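To make the subject a little more concrete: with the In-Memory option licensed, using the column store boils down to reserving memory for it and marking tables for population. A minimal sketch (the table name and sizing are invented for illustration):

```sql
-- Reserve memory for the In-Memory area (takes effect after a restart;
-- the size here is just an example)
ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;

-- Mark a table for population into the In-Memory Column Store
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- Check what has been populated, and how big it is in memory
SELECT segment_name, populate_status, inmemory_size
FROM   v$im_segments;
```

Note that the row store on disk stays untouched; the column store is a purely in-memory copy that the optimizer can use transparently.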
I’m looking forward to seeing you there, for tech talk, hanging out and more tech talk. ;)
It was time to try something new in our DOAG Database Community. And please, please, not just another lecture-style conference: twenty people looking forward, one looking back, like an eight with coxswain. I don’t know whether DOAG invented it, but it was a success: the Noon2Noon Event.
How does it work? It’s like 24 hours of BarCamp: starting with a lunch, built around a topic of the day, lasting overnight with a winter barbecue, and compatible with elders (thus with hotel and beds, no after-midnight hacking :) ). Ah, I forgot to mention: one single talk completely away from the topic, but somehow related to our work.
This first time, the topic was “MySQL versus Oracle Database”. Johannes Ahrends and Oli Sennhauser, as “headliners”, ignited discussions about features, technologies and strategies known from Oracle, and how they are (or aren’t) implemented in MySQL, and vice versa, but less so. Participants came from all over Germany, plus Denmark and Switzerland: end users, consultants, technocrats, “boys” who go ahead and fail, and “girls” who test and succeed…
I greatly enjoyed the open format – listening, talking, drawing, discussing, swaggering, ignoring, pushing, pulling – the full repertoire. :)
From the technological aspect, it was great to learn something about MySQL: how consistent reads, clustering and lots of other things work that we never thought could be done differently from the way the Oracle Database does them. And that MyISAM isn’t the norm, but just another PITA. :) Go for InnoDB. What is MariaDB? And what the fork is Percona Server?
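One detail that surprises Oracle folks: in MySQL the storage engine is chosen per table, not per database. A small sketch (the table name is made up):

```sql
-- Create a transactional table explicitly on InnoDB
CREATE TABLE orders (
  id     INT PRIMARY KEY,
  amount DECIMAL(10,2)
) ENGINE=InnoDB;

-- Verify which engine each table in the current schema actually uses
SELECT table_name, engine
FROM   information_schema.tables
WHERE  table_schema = DATABASE();
```

Only InnoDB gives you transactions, row-level locking and crash recovery; a MyISAM table silently does without all of that.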
Things are different in Oracle Database 12c with multitenancy option. My most recent example:
I tried to import a schema (new name “NEWSCHEMA”) with Data Pump IMPDP and REMAP_SCHEMA into the same pluggable database from which it had been exported with EXPDP immediately before (name “OLDSCHEMA”), running as SYSTEM. I do things like that with DBA permissions, since my users have lots of grants and other objects in the schemas, and when a DBA does the export and import, everything is set up correctly. (See the details for commands and parfiles below.)
But IMPDP fails with
ORA-39083: Object type INDEX failed to create with error:
ORA-31625: Schema NEWSCHEMA is needed to import this object, but is unaccessible
ORA-01031: insufficient privileges
So what? I am SYSTEM and thus a DBA, and the user NEWSCHEMA is there. And SYSTEM of course has the “IMPORT FULL DATABASE” privilege; it’s a DBA! So you may think.
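A quick way to check such an assumption, rather than trusting it, is to ask the pluggable database itself what is in effect. A sketch, nothing specific to my environment:

```sql
-- Connected as SYSTEM inside the pluggable database:
-- which system privileges are actually active in this session?
SELECT * FROM session_privs ORDER BY privilege;

-- which roles does SYSTEM hold there, and are they granted
-- commonly (across the CDB) or locally in this PDB?
SELECT granted_role, common
FROM   dba_role_privs
WHERE  grantee = 'SYSTEM';
```

In a multitenant database, a privilege can exist in the root and still not be usable in the PDB, depending on how it was granted, which is exactly the kind of surprise the error above hints at.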
Sometimes we get statements to look at and are told “it’s getting worse and worse”. Since DBAs are well advised not to take anything for granted and only believe what they see with their own eyes, here is a SQL query on AWR to show Buffer Gets per minute over time.
select s.BEGIN_INTERVAL_TIME,
       round( t.BUFFER_GETS_DELTA /
              ( 0.0001
              + (extract (DAY    from (s.END_INTERVAL_TIME - s.BEGIN_INTERVAL_TIME)) * 24 * 60)
              + (extract (HOUR   from (s.END_INTERVAL_TIME - s.BEGIN_INTERVAL_TIME)) * 60)
              + (extract (MINUTE from (s.END_INTERVAL_TIME - s.BEGIN_INTERVAL_TIME)))
              + (extract (SECOND from (s.END_INTERVAL_TIME - s.BEGIN_INTERVAL_TIME)) / 60)
              )
            , 0) as BG_PER_MINUTE
from   dba_hist_sqlstat  t,
       dba_hist_snapshot s
where  t.snap_id         = s.snap_id
and    t.dbid            = s.dbid
and    t.instance_number = s.instance_number
and    t.SQL_ID = 'vwxyz'
and    s.begin_interval_time between sysdate-90 and sysdate
order by t.SNAP_ID;
Get all SQL IDs from the historical SQL STAT view and join them to the snapshot details to get the real-world date/time of the events. Since nobody knows how long the AWR snapshot interval was at the time of interest, normalise BUFFER GETS to per-minute values by dividing each BUFFER_GETS_DELTA by the length of its interval in minutes (the tiny 0.0001 just guards against division by zero).
Configure the SQL_ID and the interval to be reviewed in the WHERE clause.
You can create a chart from this by exporting the result to the spreadsheet software of your choice.
Basically, this concept will also work with all other columns available in dba_hist_sqlstat, such as CPU consumption, Interconnect load, Disk IO etc.
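As a sketch of that idea, here is the same query with CPU time instead of buffer gets. CPU_TIME_DELTA is recorded in microseconds, so this yields CPU seconds per minute (the SQL_ID is still a placeholder):

```sql
SELECT s.begin_interval_time,
       ROUND( (t.cpu_time_delta / 1e6)   -- microseconds -> seconds
              / ( 0.0001
                + EXTRACT(DAY    FROM (s.end_interval_time - s.begin_interval_time)) * 24 * 60
                + EXTRACT(HOUR   FROM (s.end_interval_time - s.begin_interval_time)) * 60
                + EXTRACT(MINUTE FROM (s.end_interval_time - s.begin_interval_time))
                + EXTRACT(SECOND FROM (s.end_interval_time - s.begin_interval_time)) / 60 )
            , 2) AS cpu_sec_per_minute
FROM   dba_hist_sqlstat  t,
       dba_hist_snapshot s
WHERE  t.snap_id         = s.snap_id
AND    t.dbid            = s.dbid
AND    t.instance_number = s.instance_number
AND    t.sql_id          = 'vwxyz'
AND    s.begin_interval_time BETWEEN SYSDATE - 90 AND SYSDATE
ORDER BY t.snap_id;
```

Only the measured column and its unit conversion change; the interval normalisation stays the same.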
“Everybody lies”, says Dr. House :)
PS: Please keep in mind that the system(s) you run this query on will need Oracle’s Diagnostics Pack licensed on top of Enterprise Edition.
A while ago, DOAG.tv interviewed me about Oracle’s Standard Edition and database system migrations. The material was published just recently, so I’d like to share it. Enjoy, and if there are questions, just let me know!
I have to add a corrigendum: SE RAC is not, at least not at the moment, limited to a number of nodes. It is currently limited to CPU sockets, four of them to be exact. So a four-node SE RAC is possible, if I did not completely misunderstand the licensing policies.
Disclaimer: Licensing and pricing here are my OPINIONS, and not a reliable source to make decisions or confront Oracle with. :) If you need some tailored licensing information, feel free to email info-at-performing-db.com for advice.
My #DOAG2014 presentation and whitepaper are online now!
“Oracle Core für Einsteiger: Datenbank I/O”
Thank you all for attending!
It’s time for all Oracle folks to congregate in Nuremberg for DOAG Konferenz 2014!
I’d love to meet and greet you there – maybe you are also interested in my talk for Database Rookies: “Oracle Core für Einsteiger: Database I/O”:
Hope to have a great week with you!
Last week, I had the chance to visit Bavaria’s (and thus Germany’s) oldest spatial database. It’s buried deep below Munich and contains all the geo information about Bavaria at a scale of 1:5000, some of it at 1:2500. It was introduced in 1808 and remained in use until 1950, which is also the current state of the data.
Each of the 26,000 official maps is painted in oil, mirror-inverted, on polished lime sand brick. Each “page” is 1 m x 1 m (3.3 ft x 3.3 ft) in size and 4-6 cm (1.6-2.4 in) thick, and each stone “disk” weighs approximately 70 kg (154 lbs). That makes 26 kilostones in total, with a dump size of 1,820,000 kg, or roughly 4,000,000 lbs.