
Interview: Spotlight on Tuning With Guy Harrison


Guy Harrison is the author of the newly released Oracle Performance Survival Guide, MySQL Stored Procedure Programming (with Steven Feuerstein, 2006), Oracle SQL High Performance Tuning (2001), and Oracle Desk Reference (1999). He is currently a director of development at Quest Software, where he created the popular Spotlight family of products. He leads a software development team in Melbourne that is responsible for the Spotlight core technologies and for SQL Navigator, Spotlight on Oracle, Spotlight on Oracle RAC, Spotlight on MySQL, and Spotlight on SQL Server Enterprise. He can be found on the Internet at www.guyharrison.net.

We spend a lot of time tuning databases. Why does it have to be that way? Is Oracle difficult to tune? Are we building the databases or applications wrong?

We dream of having a database that requires no tuning, but that probably won’t happen—at least not until computers reach human levels of intelligence. We have to express our wishes to the database in terms of SQL and PL/SQL: poorly expressed or unnecessary requests are always going to put more demand on the DBMS, and so there’ll always be the need to look at those requests and make sure that they are optimal. Also, there’ll always be a variety of ways we can set up and configure the database to meet our anticipated demand. So it seems like we’ll always have to tune our code and optimize our database configuration.

Although every release of the database automates some aspect of tuning, every release also introduces more functionality that you need to tweak if you want to get the best performance. For example, in 11g, Oracle can automatically allocate memory between the buffer cache, shared pool, and PGA—which is a Good Thing. However, 11g also introduced new caches: the database flash cache and the result set cache for instance. These new areas are not automatically managed, so if you want to use them, you need to manually determine the best configuration.
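To make that concrete, here is a minimal sketch of the kind of manual configuration involved. The parameter names are standard 11g initialization parameters, but the sizes and the flash device path are purely illustrative and would need to be chosen for your own system (the flash cache also has platform restrictions).

    -- Automatic memory management covers the buffer cache, shared pool, and PGA
    ALTER SYSTEM SET memory_target = 8G SCOPE=SPFILE;

    -- The database flash cache (11g Release 2) must be placed and sized by hand
    ALTER SYSTEM SET db_flash_cache_file = '/flash/oracle_flash_cache.dat' SCOPE=SPFILE;
    ALTER SYSTEM SET db_flash_cache_size = 32G SCOPE=SPFILE;

    -- The result cache likewise needs explicit sizing if you intend to use it
    ALTER SYSTEM SET result_cache_max_size = 256M SCOPE=SPFILE;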

Oracle has the most advanced features of any RDBMS, and this creates more tuning challenges and opportunities. On the other hand, Oracle gives you more visibility into internal state than MySQL or SQL Server do. The wait interface and the time model in particular let you easily diagnose issues that would be totally obscure in other databases.
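For example, a couple of simple queries against the standard V$ views expose exactly that visibility; the queries below are just one of many ways to slice the time model and the wait interface.

    -- Time model: where is the database spending its time?
    SELECT stat_name, value
      FROM v$sys_time_model
     ORDER BY value DESC;

    -- Wait interface: what is it waiting for (excluding idle waits)?
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE wait_class <> 'Idle'
     ORDER BY time_waited DESC;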


Your book talks about “tuning by layers.” At first glance, it sounds like the Compulsive Tuning Disorder that we are often warned about. We are now told to focus on response times and wait times (Method R, YAPP, etc.). Why should we tune by layers?

You are always better off tuning to response times and wait times, but your tuning efforts will often be more effective if you focus on the higher levels in the software stack first. Tuning by layers attempts to guide your tuning efforts toward causes rather than symptoms.

It’s the higher levels of the stack—Application SQL for instance—that create the demand on the lower levels such as the I/O subsystem. It makes sense to minimize demand first: if you reduce the demand from the higher levels, the symptoms you are seeing in the lower levels may disappear.

For example, it is better to reduce I/O demand by adding an index or otherwise tuning a SQL statement than to increase I/O capacity by adding spindles to a disk array. Both approaches can reduce your I/O wait times, but the first focuses on the cause, while the second focuses on a symptom.
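As a sketch (the table, column, and index names here are hypothetical), the effect can be verified with EXPLAIN PLAN before anyone goes near the disk array:

    -- A query that currently full-scans a hypothetical SALES table
    EXPLAIN PLAN FOR
      SELECT * FROM sales WHERE customer_id = :cust_id;
    SELECT * FROM TABLE(dbms_xplan.display);

    -- Address the cause: add an index to cut the logical and physical reads
    CREATE INDEX sales_cust_ix ON sales (customer_id);

    -- Re-check the plan; with suitable selectivity, an index range scan
    -- should replace the full table scan
    EXPLAIN PLAN FOR
      SELECT * FROM sales WHERE customer_id = :cust_id;
    SELECT * FROM TABLE(dbms_xplan.display);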

Because problems in one database layer can be caused or cured by configuration in the higher layer, the most efficient and effective way to optimize an Oracle database is to tune higher layers before tuning lower layers:

  1. Reduce application demand to its logical minimum by tuning SQL, optimizing physical design (partitioning, indexing), and tuning PL/SQL.
  2. Maximize concurrency by minimizing contention for locks, latches, buffers, and other resources in the Oracle code layer.
  3. Having normalized logical I/O demand by the preceding steps, minimize the resulting physical I/O by optimizing Oracle memory (a sketch of the relevant memory advisory queries appears after this list).
  4. Now that the physical I/O demand is realistic, configure the I/O subsystem to meet that demand by providing adequate bandwidth and evenly distributing the resulting load.
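For step 3, Oracle’s built-in memory advisories are a reasonable starting point. The queries below are a minimal sketch against the standard advisory views; interpret the estimates rather than following them blindly.

    -- Estimated physical reads at various buffer cache sizes
    SELECT size_for_estimate AS cache_mb,
           estd_physical_reads
      FROM v$db_cache_advice
     WHERE name = 'DEFAULT'
     ORDER BY size_for_estimate;

    -- Estimated PGA cache hit percentage at various PGA targets
    SELECT ROUND(pga_target_for_estimate / 1048576) AS pga_target_mb,
           estd_pga_cache_hit_percentage
      FROM v$pga_target_advice
     ORDER BY pga_target_for_estimate;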


Cars need regular maintenance to keep them running smoothly. Don’t databases need maintenance too? What maintenance would you suggest, if any?

When a car gets out of tune, it probably still gets you where you want to go, but it will cost you more in fuel and generate more pollution. That’s essentially true for databases as well. As well as tuning databases to realize a response time or throughput goal, we should tune to reduce CPU and disk I/O, which in turn translates to lower power costs and environmental impact. I read recently that IT accounts for between 2% and 6% of global power consumption, and that share is growing exponentially: I think we’ll see increasing pressure to contain that growth, and performance optimization is an important part of the solution.

The other reason to perform proactive tuning is to avoid the “brick wall” that you can hit when you run out of system resources. It’s a lot easier to create an index or add some memory when system performance is adequate than to have to do so in crisis mode when a performance issue is bringing the application to its knees.

Routine, proactive tuning is a good candidate for the tuning-by-layers approach. Start by seeing if the SQLs with the highest cost can be tuned, reduce contention points that might be masking demand, optimize memory to avoid I/O, and then optimize the I/O subsystem to meet the resulting realistic demand.
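A simple way to find candidates for that first step is to query V$SQL; the variant below (one of many possible) lists the ten statements with the most elapsed time since instance startup.

    -- Top 10 SQL statements by total elapsed time
    SELECT *
      FROM (SELECT sql_id,
                   executions,
                   buffer_gets,
                   disk_reads,
                   ROUND(elapsed_time / 1000000) AS elapsed_secs,
                   SUBSTR(sql_text, 1, 60)       AS sql_text
              FROM v$sql
             ORDER BY elapsed_time DESC)
     WHERE ROWNUM <= 10;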


STATSPACK seems to be out of fashion, but it’s the only system-level diagnostic tool available to most DBAs. What good is STATSPACK really?

Well, I work on third-party tuning tools for Oracle (Spotlight on Oracle/RAC for instance), so I have a slight conflict of interest here! However, my view is that while advanced tools can massively improve the efficiency of the DBA and allow DBAs with limited experience to quickly determine bottlenecks and tuning opportunities, there’s enough in the base Oracle product to solve most problems given enough time. If all I had were SQL*Plus and access to V$ views, I’d feel confident that I could eventually get to the bottom of most tuning issues.

I think STATSPACK is a fine set of scripts and a framework for keeping history. I haven’t used it all that much, though, since I’ve always had access to Quest tools such as Spotlight.
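For readers who have never tried it, the basic STATSPACK workflow is only a few commands; the scripts are the standard ones shipped under $ORACLE_HOME/rdbms/admin.

    -- One-time install: creates the PERFSTAT schema and its tables (connect as SYSDBA)
    @?/rdbms/admin/spcreate.sql

    -- Take a snapshot at the start and end of the period you want to analyze
    EXECUTE statspack.snap;

    -- Produce a report comparing any two snapshots
    @?/rdbms/admin/spreport.sql

    -- Optionally schedule hourly snapshots via DBMS_JOB
    @?/rdbms/admin/spauto.sql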

Why bother to tune at all? Is big iron the cheaper answer to our tuning problems? My time is not cheap, but memory and RAM disks get cheaper every day, and CPUs keep getting faster. In his latest book, Oracle on VMware, Dr. Bert Scalzo makes a case for “solving” performance problems with hardware upgrades (Footnote 1). What is your opinion about this approach?

As I said before, datacenter power consumption costs and environmental impact are likely to become increasingly important. Bert is correct in that the economics of tuning are such that hardware upgrades are often more viable than lengthy tuning cycles. However, companies are going to be increasingly challenged to reduce power consumption from their data centers, and this may become at least as important as optimizing DBA time.

The other issue is scalability. Poorly tuned SQL often increases in demand faster than you can add hardware—especially if the data volumes and transaction rates are growing at the same time. If you have a transaction that is taking longer for each execution, and the execution rate is also increasing, then you may see exponential cost increases and you definitely can’t add hardware at an exponential rate.

All that having been said, it’s true that for certain workloads—data warehousing being the prime example—there’s often no better solution to the performance challenge than to parallelize the workload across multiple RAC nodes and disk spindles and use a brute force solution.

One of the very unusual recommendations in your book is to consider adjusting _SPIN_COUNT in certain circumstances. Most Oracle performance tuning specialists are against the idea. Why do you recommend it?

Gulp…I didn’t really mean to recommend altering an undocumented parameter. Oracle support will have me excommunicated!

What I do argue is that when latch contention can’t be resolved by any other measure, then changing the spin count might be worth trying. After all, _SPIN_COUNT has stayed at the same default value for decades while CPUs have kept getting faster: that default can’t possibly be exactly right for all circumstances, and it is probably below optimal for some databases.

I’ve experimented with adjusting spin count on a couple of occasions, and in both cases I found that for the right workload, you could reduce latch contention and improve throughput if you increased _SPIN_COUNT. You can see the results of my most recent tests at tinyurl.com/spincount.

Although I think adjusting _SPIN_COUNT is a valid strategy to try when all else fails, I definitely advise that other mechanisms for reducing latch contention be attempted first.
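For the curious, a rough sketch of that sequence follows. The v$latch query confirms whether latch sleeps are really significant; the spin count value shown is purely illustrative (the long-standing default is 2000), and as noted above, this is an undocumented parameter that should not be changed casually.

    -- First, confirm that latch sleeps are a significant problem
    SELECT name, gets, misses, sleeps
      FROM v$latch
     ORDER BY sleeps DESC;

    -- Only if every other remedy has failed: try a higher spin count
    -- (undocumented parameter; illustrative value; assumes an spfile)
    ALTER SYSTEM SET "_spin_count" = 4000 SCOPE=BOTH;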

You’re the architect of Spotlight on Oracle and similar products. Would you like to tell our members something about them while you have their attention?

Sure. Quest has been doing database tools for about 15 years now, and we have a huge user base in the Oracle community. Spotlight on Oracle and Spotlight on Oracle RAC are my “babies” at Quest—I designed them and I run the teams that work on them. My aim for Spotlight is to create the most sophisticated yet easy-to-use tuning tool for Oracle DBAs. The great thing about working at Quest is that I can build my own dream tool. Spotlight has the diagnostic capabilities that I most desire in a tuning tool.

The good news for Oracle DBAs—especially if you use TOAD—is that the prices of these tools have really dropped over the past few years. You can get Spotlight on Oracle (and soon Spotlight on RAC) in the TOAD DBA Suite now, at a fraction of what it would have cost you a few years ago. TOAD users should definitely check out the DBA Suite, which offers, I think, an almost irresistible value.

It costs us about $14,000 per year to produce and distribute the NoCOUG Journal and considerably more than that to organize four conferences. We have about 500 members and about 250 attendees at each conference; we’ve stagnated at those levels for many years. Has Google made us obsolete? I begged my previous company to buy a corporate membership but, except for my manager, not a single person from that company ever showed up at our conferences. Should we close shop? What could we do better to attract more members and conference attendees?

Google is great for quick answers to sudden questions, but it doesn’t provide a structured way to improve your overall expertise. It also doesn’t offer the opportunities for networking that user groups provide.

We’re clearly going through a transition in how we obtain and consume information. As an author I’m all too aware of how much the Web and web searches have changed the value proposition for periodicals, conferences, and books. I’m confident, though, that the value of the community in user groups will stand the test of time.

As far as the Journal goes, I think anyone who is publishing information needs to keep thinking about how best to get the information to the user. I’d be thinking about how to exploit the new distribution media—Kindle, iPhone, podcasts, RSS aggregation, Twitter, etc. It’s not going to be easy: just look at the challenges faced by the newspaper industry.

The other thing to remember is that the IT industry in general and the database market in particular are reaching a level of maturity that might feel like stagnation given the crazy growth we had during the dot-com bubble and Y2K, but it’s probably a natural transition for an industry entering adulthood.

You and Steven Feuerstein are Oracle experts, yet you collaborated on a book on MySQL. Should we hedge our career bets by learning MySQL?

That’s a good question. When we wrote the MySQL book, a lot of people thought MySQL might start to compete more directly with Oracle in the enterprise market. It didn’t turn out that way: MySQL and Oracle still exist in pretty distinct segments of the overall database market. I don’t think Oracle professionals should feel their career is threatened by MySQL.

On the other hand, MySQL might become an Oracle product any day now if the Sun acquisition completes, and if that happens, knowing both MySQL and Oracle could be an advantage.

I think that all IT professionals should try hard to keep a diverse skill set. Learning new technologies helps to keep your mind active, puts your primary skills in context, and helps you to avoid being on the wrong side of a paradigm shift. If I were a PL/SQL programmer, I’d be making sure I knew at least one other language pretty well—maybe Java or Ruby. As a DBA, I’d be keen to get some experience with any other RDBMS—especially SQL Server or MySQL.

At the moment I’m very interested in some of the non-relational technologies that may enter the mainstream over the next few years. Hadoop and the “NoSQL” databases (such as Cassandra and Voldemort) are really interesting. It’s too early to tell if they will become disruptive to the relational database, but they’re definitely worth keeping an eye on.

Thanks for an excellent book and thanks for spending time with us today. KahPlah! I have your book now, but I’m always interested in buying good Oracle books. Do you have some personal favorites that you can recommend?

Thanks, Iggy!

There are so many good books out there! I’m sure I’m going to miss some worthy contributions, but here goes:

Tom Kyte’s Expert Oracle Database Architecture is the best overall book on Oracle architecture that I know of. It’s highly readable, comprehensive, and accurate. Despite the word “Expert” in the title, I think it’s suitable for people at almost any level of expertise. If I were only going to give someone a single book on Oracle, it would probably be this one.

In terms of introductory books, I thought your recent book (Beginning Oracle Database 11g Administration) was an excellent introduction to Oracle database administration. Julian Dyke’s Pro Oracle Database 10g RAC on Linux is a good book to get up to speed if you’re using RAC (with or without Linux).

For more advanced readers, the best book on Oracle SQL tuning is without doubt Jonathan Lewis’s Cost-Based Oracle Fundamentals. Other recent advanced tuning books include Craig Shallahamer’s Oracle Performance Firefighting and Christian Antognini’s Troubleshooting Oracle Performance.

There are lots of other books. I recently set up an online bibliography at guyharrison.net/readingList, where I list the books that I’ve found particularly useful.

Live long and prosper!

Interview conducted by Iggy Fernandez for the February 2010 issue of the NoCOUG Journal.

Footnote 1: Here’s the full quote from Dr. Scalzo’s book: “Person hours cost so much more now than computer hardware even with inexpensive offshore outsourcing. It is now considered a sound business decision these days to throw cheap hardware at problems. It is at least, if not more, cost effective than having the staff [sic] tuned and optimized for the same net effect. Besides, a failed tuning and optimization effort leaves you exactly where you started. At least the hardware upgrade approach results in a faster/better server experiencing the same problem that may still have future value to the business once the fundamental problem is eventually corrected. And, if nothing else, the hardware can be depreciated, whereas the time spent tuning is always just a cost taken off the bottom line. So, with such cheap hardware, it might be a wiser business bet to throw hardware at some solutions sooner than was done in the past. One might go so far as to make an economic principle claim that the opportunity cost of tuning is foregoing cheap upgrades that might fix the issue and also possess intrinsic value. Stated this way, it is a safe bet that is where the business people would vote to spend.”
