Archive for the ‘Oracle 11g’ Category

Moving 8TB database to Flash Storage

December 7, 2015 Leave a comment

In recent years we have all grown accustomed to all kinds of virtualization – VMware, Solaris Zones, Solaris Domains, etc. While the mechanics may differ, one common trait remains – try to pile more applications and databases into a common set of resources.

One flavor of virtualization is storage arrays. Multiple databases and applications get their slices of a big storage array in the form of LUNs. As happens when big companies, in pursuit of efficiency, turn every process into a conveyor, storage provisioning planning is frequently reduced to LUN sizes. In the end you get mysterious storage slowdowns when one or more of the “other” apps quietly ramps up IO utilization to the point where the storage array cache becomes saturated. Because you have no visibility into what else is running on the same storage, you are left just wondering. That’s the downside of the “virtualize everything” trend.

I will have another post about practical ways to detect present and past storage slowdowns.

Here I wanted to share a remarkable IO improvement we got after moving to a Flash storage array.

Read more…

Using EM 12c repository to find ASM growth rate

January 9, 2013 1 comment

ASM disk group growth from EM 12c repository

Our company collocates multiple databases on Linux RAC grids to reduce hardware, OS, and HA license expenses. All the databases on a grid share a few ASM disk groups.

Sharing disk groups helps reduce the storage overallocation commonly seen when each database runs on its own dedicated storage, be it file system-based or ASM-based. Of course, the databases continue to grow, but the hope is that with shared storage the overallocation can be reduced, with the extremes canceling each other out.

In this environment, when an ASM disk group usage does grow, the question becomes how to track which database is causing most of the growth.

Unfortunately, the EM 12c ASM disk group page shows only the last 7 days of individual database sizes in a particular disk group. For capacity planning purposes you need more than that.

One solution is to query the EM 12c repository directly.
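As a sketch of the idea: the repository exposes rolled-up metric history through views such as MGMT$METRIC_DAILY, and a query along these lines returns a daily usage history per disk group. Note that the metric_name and metric_column values below are assumptions – check what your targets actually collect before relying on them.

```sql
-- Sketch only: daily ASM disk group usage history from the EM 12c repository.
-- MGMT$METRIC_DAILY is a documented repository view; the metric_name and
-- metric_column values are assumptions -- verify them in your own repository.
SELECT target_name,                 -- ASM instance
       key_value,                   -- disk group name
       rollup_timestamp,            -- day of the rollup
       average                      -- average used space for that day
FROM   sysman.mgmt$metric_daily
WHERE  target_type   = 'osm_instance'
AND    metric_name   = 'DiskGroup_Usage'      -- assumed metric name
AND    metric_column = 'used_mb'              -- assumed metric column
ORDER  BY target_name, key_value, rollup_timestamp;
```

Plotting the averages over a few months gives the growth rate per disk group; joining in the per-database size metrics (where collected) shows which database is driving it.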

Read more…

System Throughput Comparison

April 1, 2012 2 comments

CPU Throughput Comparison

Recently I compared single-process speeds of systems with different CPUs by running a simple test SQL on an Oracle database. The SQL was simple and universal in that it produced the same execution plans and execution statistics on both 10g and 11g and on every architecture I was able to get my hands on – Intel Xeon, IBM Power7, HP Itanium2, Sun UltraSPARC. This makes it possible to quickly compare relative CPU speed for a single database process.

The result of the comparison was surprisingly slow UltraSPARC T2 performance. These processors implement Chip Multi Threading, where a few physical cores run a massive number of threads, with each thread presented to the OS as a virtual CPU. Sun’s assumption was that with CPU frequencies in the gigahertz range, the CPU spends most of its time waiting on RAM access, and that this wait could be better utilized by running virtual threads. This sounds good in theory, but it seems the UltraSPARC T2 went overboard with virtualization.

For example, one of my T5240 boxes has 2 physical CPUs, 12 cores, and 96 virtual CPUs (threads). When I make a connection to a database on this server, run something continuously, and observe CPU utilization, the utilization never goes above 1-2%. It takes extraordinary effort to get all the virtual CPUs working. For example, you have to run an RMAN backup with a parallel degree in the hundreds to get into the 30-50% CPU utilization range. The whole system acts as one with a huge number of slow CPUs.
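For reference, driving RMAN with that many channels can be done through the disk parallelism setting. A minimal sketch (the parallelism value here is illustrative, not a recommendation – size it for your own system):

```sql
-- RMAN commands (run in the RMAN client, not SQL*Plus).
-- PARALLELISM controls how many disk channels RMAN allocates;
-- on the T2 box it took a value in the hundreds to load the CPUs.
CONFIGURE DEVICE TYPE DISK PARALLELISM 128 BACKUP TYPE TO BACKUPSET;
BACKUP DATABASE;
```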

When I brought these observations to our hardware planners, their counter-argument was: OK, a single process may be somewhat slower, but with hundreds of database connections the total system throughput should be great.

This got me thinking about how I could run the same SQL test with multiple parallel processes.

Read more…

Relative CPU speed

April 1, 2012 1 comment

On my job I often have to move a database to newer hardware.
Sometimes the company decides to do a tech refresh, sometimes it is a new company spin-off or a data center consolidation. When you get to work on new hardware, it is always interesting to see what it is capable of and how it compares to the old system.

Back when I worked for Oracle as an instructor, we would frequently go to a customer location and install the database software for the class on the customer’s hardware. We would use a “quick and dirty” method to get a feel for just how fast the system’s CPU was. The method was to run the same simple SQL everywhere and measure its execution time.
The SQL was:

select count(*) from emp,emp,emp,emp,emp

The beauty of this SQL was that it required almost no IO.
There were only 14 records in the EMP table, and it took only 1 IO to read the block into the buffer cache.
After that, the 5-way Cartesian join execution time would depend only on the CPU speed.
Every database we installed had the SCOTT schema with the EMP table, which made this simple SQL a universally available CPU speed test tool.

These days I work for a company that does not install the sample SCOTT schema.
So I decided to do something similar without the EMP table.

The SQL I came up with was this:

with t as ( SELECT rownum FROM dual CONNECT BY LEVEL <= 100 )
select /*+ ALL_ROWS */ count(*) from t,t,t,t
/
COUNT(*)
----------
100000000

This SQL does not need any table and therefore will run on any Oracle database.
It generates 100 records by selecting from dual and then joins the result set to itself 4 times, producing a count of 100 million. Because it selects from dual, this SQL needs minuscule IO, and its elapsed time mostly depends on CPU speed. Since the SQL does not use parallel query, its execution time can serve as a measure of single-process CPU speed.

Read more…