Oracle 11gR2 ASM / ACFS: A first (rough) benchmark

Hi folks,

Since Oracle 11g Release 2 is out now, I had to test one of the most sorely missed ASM features: the ASM Cluster File System, ACFS.

My Setup:

  • Two VMware nodes with 2 CPUs and 1.5 GB of RAM each
  • Oracle Enterprise Linux 5.3 x86_64
  • Four virtual cluster disks from the ESX server, 10 GB each
  • Disk group DATA built from them with redundancy NORMAL,
  • containing four failgroups, one disk in each
  • In DATA, one ACFS volume of 1 GB, mounted to /acfs1 (a sketch of the setup commands follows below)

(Thanks to arup for the new asmcmd commands!)
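
For the record, here is roughly how such a setup can be built. This is only a sketch: the disk paths are shortened to placeholders, and the volume name acfsvol1 plus the /dev/asm device suffix are examples, not my exact names.

-- as SYSASM on the ASM instance (disk paths are placeholders):
create diskgroup DATA normal redundancy
   failgroup DATA_0000 disk '/dev/disk/by-id/scsi-...'
   failgroup DATA_0001 disk '/dev/disk/by-id/scsi-...'
   failgroup DATA_0002 disk '/dev/disk/by-id/scsi-...'
   failgroup DATA_0003 disk '/dev/disk/by-id/scsi-...';

-- in asmcmd: create the ADVM volume and look up its device name
ASMCMD> volcreate -G DATA -s 1G acfsvol1
ASMCMD> volinfo -G DATA acfsvol1

# as root: put an ACFS file system on the volume device and mount it
# (the -123 suffix is an example, volinfo shows the real device name)
mkfs -t acfs /dev/asm/acfsvol1-123
mkdir /acfs1
mount -t acfs /dev/asm/acfsvol1-123 /acfs1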

Create test file:

Now I wrote the file “test” into /acfs1:

[root@ASM01 acfs1]# LANG=C dd if=/dev/zero of=test bs=100M count=100
dd: writing `test': No space left on device
10+0 records in
9+0 records out
943718400 bytes (944 MB) copied, 66.6082 seconds, 14.2 MB/s

(=> 14 MB per second write rate, mirrored over four disks)

Rebalance down to two disks (from 4×1):

Now I rebalanced the disk group to three disks and after that to two disks, to see what ASM and the I/O subsystem can do at all.

alter diskgroup DATA drop disks in failgroup DATA_0003;
alter diskgroup DATA drop disks in failgroup DATA_0002;

I watched the I/O rate with gkrellm on those disks, and it was around 10 MB/s reading from the dropped disk.
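
A quick way to verify that the rebalance has really finished: once it is done, the dropped disks should show up in v$asm_disk with HEADER_STATUS = FORMER and without a disk group assignment. For example:

-- dropped disks should read FORMER here after the rebalance completes
select path, header_status, failgroup, group_number
   from v$asm_disk
 order by path;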

Rebalance back to four disks (2×2):

To have a four-wheel-driven disk group again, I reconfigured it to have only two failgroups with two disks each (which I consider useful for two storage locations):

alter diskgroup data add failgroup DATA_0000 disk '/dev/disk/by-id/scsi-36000c290892e697922d52c0c37122a03';
alter diskgroup data add failgroup DATA_0001 disk '/dev/disk/by-id/scsi-36000c293719153128ecd26c8f35d48a9';

Once again I watched the I/O rate with gkrellm on those disks, and again it was always about 10 MB/s writing to the added disk.
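
To double-check the 2×2 layout after this rebalance, counting disks per failgroup is enough (assuming DATA is disk group number 1 here):

-- expect two failgroups with two disks each
select failgroup, count(*) disks, sum(os_mb) mb
   from v$asm_disk
  where group_number = 1
  group by failgroup;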

Read the test file:
Now I read the test file back (/acfs1 has been mounted all the time, look at this great cluster storage manager!):

[root@ASM01 acfs1]# LANG=C dd of=/dev/null if=uga bs=100M count=100
9+0 records in
9+0 records out
943718400 bytes (944 MB) copied, 53.4053 seconds, 17.7 MB/s

(=> 17 MB per second read rate, S.A.M.E. over four disks)

Clarifications:

Of course I ran the dd's more than once, and cross-checked read and write cycles with the same disk configuration as well. The values did not change. The fact that all four disks come from the same media makes S.A.M.E. a joke here. And of course, benchmarking in VMware is always nasty, but comparing the results within this virtual world may give you an impression of the relations.

Question:

Have a look at the difference between the Linux dd performance (about 4 MB/s per disk) and the I/O that ASM can do natively during rebalance (about 10 MB/s per disk). I wonder where the losses have to be accounted to: is it frictional heat in the end? 🙂
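
One way to narrow that down would be to benchmark one of the virtual disks directly, bypassing ASM and ACFS. Something like the following sketch, where /dev/sdX is a placeholder and must be a disk whose contents you no longer need:

# raw write and read test against the bare virtual disk (destructive!)
LANG=C dd if=/dev/zero of=/dev/sdX bs=1M count=1000 oflag=direct
LANG=C dd if=/dev/sdX of=/dev/null bs=1M count=1000 iflag=direct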

Some SQL and configuration:
This one is to see some disk characteristics:

select GROUP_NUMBER,DISK_NUMBER,NAME,HEADER_STATUS,
   FAILGROUP,FREE_MB,OS_MB,LABEL,PATH from v$asm_disk;

And this query is to see the ongoing rebalance operation:

select * from v$asm_operation;

That’s how to configure the rebalance power (parallel threads for rebalancing):

alter system set asm_power_limit=3 scope=both sid='*';
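
The power can also be given for a single rebalance operation instead of instance-wide, for example:

alter diskgroup DATA rebalance power 8;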

Take care
Usn

PS: Have a look at ASM permission pitfalls as well.
