Oracle 11gR2 ASM / ACFS: A first benchmark (poorly)

Hi folks,

Since Oracle 11g Release 2 is out now, I had to test one of the most-missed ASM features: the ASM Cluster File System, ACFS.

My Setup:

  • Two VMware nodes with 2 CPUs and 1.5 GB of RAM each
  • Oracle Enterprise Linux 5.3 x86_64
  • Four virtual cluster disks from the ESX server, 10 GB each
  • Building disk group DATA from them with NORMAL redundancy,
  • containing four failgroups, one disk each
  • In DATA, one ACFS volume of 1 GB, mounted at /acfs1

(Thanks to arup for the new asmcmd commands!)
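For completeness, a sketch of the commands involved in setting up such a volume – the volume name and the /dev/asm device suffix below are made up for illustration; the real device name comes from asmcmd volinfo on your system:

```shell
# As the grid/oracle user: carve a 1 GB ADVM volume out of disk group DATA
asmcmd volcreate -G DATA -s 1G acfsvol1
asmcmd volinfo -G DATA acfsvol1      # shows the /dev/asm/acfsvol1-... device

# As root: create the ACFS file system and mount it (on every node)
mkfs -t acfs /dev/asm/acfsvol1-123
mkdir -p /acfs1
mount -t acfs /dev/asm/acfsvol1-123 /acfs1
```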

Create test file:

Now I wrote the file “test” into /acfs1:

[root@ASM01 acfs1]# LANG=C dd if=/dev/zero of=test bs=100M count=100
dd: writing `test': No space left on device
10+0 records in
9+0 records out
943718400 bytes (944 MB) copied, 66.6082 seconds, 14.2 MB/s

(=> 14 MB per second write rate mirrored over four disks)
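As a quick sanity check, dd's reported rate can be recomputed from the byte count and runtime it printed (dd counts 1 MB as 10^6 bytes) – a throwaway awk one-liner, nothing Oracle-specific:

```shell
# Recompute the dd write rate from the transcript above:
# 943718400 bytes in 66.6082 seconds, with MB = 10^6 bytes
awk 'BEGIN { printf "%.1f MB/s\n", 943718400 / 66.6082 / 1000000 }'
# prints: 14.2 MB/s
```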

Rebalance down to two disks (4×1 → 2×1):

Now I rebalanced the disk group down to three disks and after that to two disks, to see what ASM and the IO subsystem can do at all.

alter diskgroup DATA drop disks in failgroup DATA_0003;
alter diskgroup DATA drop disks in failgroup DATA_0002;

I watched the IO rate with gkrellm on those disks, and it was around 10 MB/s of reads on each dropped disk.

Rebalance to four-disk (2×2):

To have a four-wheel-driven disk group again, I reconfigured it to have only two failgroups (which I consider useful for a setup with two storage locations):

alter diskgroup data add failgroup DATA_0000 disk '/dev/disk/by-id/scsi-36000c290892e697922d52c0c37122a03';
alter diskgroup data add failgroup DATA_0001 disk '/dev/disk/by-id/scsi-36000c293719153128ecd26c8f35d48a9';

Once again I watched the IO rate with gkrellm on those disks, and again it was always about 10 MB/s of writes on each added disk.

Read the test file:
Now I read the test file (/acfs1 has been mounted the whole time – look at this great cluster storage manager!):

[root@ASM01 acfs1]# LANG=C dd of=/dev/null if=test bs=100M count=100
9+0 records in
9+0 records out
943718400 bytes (944 MB) copied, 53.4053 seconds, 17.7 MB/s

(=> 17 MB per second read rate, S.A.M.E. over four disks)

Clarifications:

Of course I ran the dd commands more than once, and cross-checked read and write cycles with the same disk configuration as well. The values did not change. The fact that all four disks come from the same physical media makes S.A.M.E. a joke here. And of course, benchmarking in VMware is always nasty, but comparing the results within this virtual world may give you an impression of the relations.

Question:

Have a look at the difference between the Linux dd performance (about 4 MB/s per disk) and the IO rate ASM can achieve natively (about 10 MB/s per disk) – I wonder what the losses should be attributed to: is it frictional heat in the end? :)
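For the record, the per-disk figure in the question is just the dd rate split naively across the four disks (mirroring overhead not counted):

```shell
# Naive per-disk share of the 14.2 MB/s dd rate across four disks
awk 'BEGIN { printf "%.2f MB/s per disk\n", 14.2 / 4 }'
# prints: 3.55 MB/s per disk
```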

Some SQL and configuration:
This query shows some disk characteristics:

select GROUP_NUMBER,DISK_NUMBER,NAME,HEADER_STATUS,
   FAILGROUP,FREE_MB,OS_MB,LABEL,PATH from v$asm_disk;

And this query is to see the ongoing rebalance operation:

select * from v$asm_operation;

That’s how to configure the rebalance power (parallel threads for rebalancing):

alter system set asm_power_limit=3 scope=both sid='*';

Take care
Usn

PS: Have a look at ASM permission pitfalls as well.





3 Responses to “Oracle 11gR2 ASM / ACFS: A first benchmark (poorly)”

  1. Anonymous Says:

    [...] completely over. I tested the whole thing today (anyone interested can read about it here: http://www.usn-it.de/index.php/2009/…chmark-poorly/). My impression regarding administration was good; the locking behaviour also seems [...]

  2. Anonymous Says:

    What is your volume redundancy? If it is NORMAL, then every write you do is mirrored to two disks. If your underlying VM storage is a single disk, every mirrored write results in two writes to the same disk – that would account for the throughput being halved. If you had two real disks, the mirrored writes would be issued in parallel.

  3. usn Says:

    Easy to read above:
    “Building disk group DATA from them, with redundancy NORMAL”

    How ASM redundancy works isn’t the topic here; it does not affect the comparison between database ASM usage and ACFS if I want to compare them as-is.
    The entry above describes how I changed the mirroring within NORMAL redundancy – sorry if that wasn’t clear?

    Regards
    Usn
