Archive for the ‘Oracle Grid Control’ Category

Using EM 12c repository to find ASM growth rate

January 9, 2013

ASM disk group growth from EM 12c repository

Our company collocates multiple databases on Linux RAC grids to reduce hardware, OS, and HA license expenses. All the databases on a grid share a few ASM disk groups.

Sharing disk groups helps reduce the storage overallocation commonly seen when each database runs on its own dedicated storage, be it file system-based or ASM-based. Of course, the databases continue to grow, but the hope is that with shared storage the overallocation can be reduced, with the extremes canceling each other out.

In this environment, when an ASM disk group's usage does grow, the question becomes how to track which database is responsible for most of the growth.

Unfortunately, the EM 12c ASM disk group page shows only the last 7 days of individual database sizes in a particular disk group. For capacity planning purposes you need more than that.

One solution is to query EM 12c repository directly.
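As a hedged sketch of that approach: the SYSMAN rollup views such as MGMT$METRIC_DAILY keep long-term history. The metric and column names for ASM vary between EM releases, so the names below ('osm_instance', 'DiskGroup_Usage', 'used_mb', disk group 'DATA') are assumptions to be verified against your own repository first.

```sql
-- Discover which ASM metrics the repository actually rolls up daily
-- (metric names vary between EM releases):
select distinct metric_name, metric_column
from   sysman.mgmt$metric_daily
where  target_type = 'osm_instance'
order  by 1, 2;

-- Then pull the daily history for the disk group of interest.
-- 'DiskGroup_Usage'/'used_mb' and the 'DATA' key are assumptions --
-- substitute the names returned by the query above.
select rollup_timestamp, key_value as disk_group, average as used_mb
from   sysman.mgmt$metric_daily
where  target_type   = 'osm_instance'
and    metric_name   = 'DiskGroup_Usage'
and    metric_column = 'used_mb'
and    key_value     = 'DATA'
order  by rollup_timestamp;
```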

Read more…


Extracting data from OEM 12c repository for database Capacity Planning

January 8, 2013

Recently I was asked to give a recommendation for next year's business case for buying additional storage to accommodate database growth.

With several dozen databases to support, there is simply no time to instrument each database with custom-built capacity planning scripts.

Since I had already invested a considerable amount of time and energy in setting up an EM 12c system to monitor my databases, I turned to the OEM metrics to see if anything useful was being collected there.

I encountered two issues when looking at EM.

First, to my surprise, I could not find a database size metric anywhere in the web interface. There were tablespace-level used space metrics, but no full database size. This is such an obvious and important metric that I hope it is just me missing something.

The second issue was that metric pages never display data beyond the last 31 days: OEM 12c has a bug in which a metric page does not render historical data older than 31 days. This is confirmed as a bug (need to find the bug #). The repository has the data, but the web page rendering simply does not display it. Needless to say, 31 days is nowhere near the time frame needed for long-term capacity planning.

I decided to look directly into the repository and bypass the web interface.

Below are 3 simple repository queries.
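As an illustration of the kind of query involved (not necessarily one of the three behind the link), daily tablespace-level history can be summed per database from the repository roughly as below. The metric names 'tbspAllocation'/'spaceUsed' and the target name 'PROD1' are assumptions; verify them against your own repository.

```sql
-- Hedged sketch: daily used space per database target, summed over its
-- tablespaces (key_value) to approximate total database size.
-- Verify metric names first with:
--   select distinct metric_name, metric_column
--   from sysman.mgmt$metric_daily where target_type = 'oracle_database';
select rollup_timestamp,
       sum(average) as db_used_mb
from   sysman.mgmt$metric_daily
where  target_type   = 'oracle_database'
and    target_name   = 'PROD1'          -- hypothetical target name
and    metric_name   = 'tbspAllocation' -- assumed metric name
and    metric_column = 'spaceUsed'      -- assumed column name
group  by rollup_timestamp
order  by rollup_timestamp;
```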

Read more…

Using Grid Control 10g to Build a Cumulative Counter Based Graph

November 17, 2010

[Figure: redo write speed graph]

Our redolog disk devices were experiencing periodic sudden slowdowns, but diagnosing the issue proved tricky. Yes, it is possible to use Unix tools such as iostat -x or sar -d to collect device service times, but in our case the storage is a NAS array with 190 devices mounted by a Volume Manager, which makes it difficult to map exactly which devices are used for the redologs.

We wanted to get an idea of what redo device speed looks like from the database perspective and how it changes over time.

Unfortunately EM 10g does not have a built-in metric to monitor redo write speed.

The system view v$sysstat maintains the cumulative counters ‘redo write time’ and ‘redo writes’. Since ‘redo write time’ is recorded in centiseconds, the average LGWR write speed since instance startup can be calculated as

SQL> select /*+ all_rows */ t.value*10/w.value as ms_per_write
from
(select value from v$sysstat
where statistic# = (select statistic# from v$statname where name = 'redo writes')
) w,
(select value from v$sysstat
where statistic# = (select statistic# from v$statname where name = 'redo write time')
) t;

The average redo speed since startup, while helpful, is still far from what we need to verify suspected temporary slowdowns.

For this purpose we need a time graph, and a time graph is something that can be derived from cumulative counters using deltas.
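As a sketch of the delta idea, suppose periodic samples of the two counters land in a hypothetical table redo_snap(snap_time, redo_writes, redo_write_time); the per-interval write speed then falls out of the LAG() analytic function:

```sql
-- Per-interval redo write speed from cumulative counters:
-- delta('redo write time', centiseconds) * 10 / delta('redo writes')
-- gives milliseconds per write for each sampling interval.
select snap_time,
       (redo_write_time - lag(redo_write_time) over (order by snap_time)) * 10
       / nullif(redo_writes - lag(redo_writes) over (order by snap_time), 0)
         as ms_per_write
from   redo_snap
order  by snap_time;
```

NULLIF guards against a zero delta (an idle interval), returning NULL instead of raising a divide-by-zero error.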

Read more…