
Lesson 2 of 40: Physical Database Design for Oracle Databases: Non-Necessity?


In most people’s vocabularies, design means veneer. It’s interior decorating. It’s the fabric of the curtains and the sofa. But to me, nothing could be further from the meaning of design. Design is the fundamental soul of a man-made creation that ends up expressing itself in successive outer layers of the product or service.—Steve Jobs, “Apple’s One-Dollar-a-Year Man,” Fortune, January 24, 2000.

Physical database design improvements are a cost-effective solution to a performance and scalability challenge.

Or are they?

Economic principles can equally well be used to make the case for faster hardware. In his book Oracle on VMware, Dr. Bert Scalzo says:

Person hours cost so much more now than computer hardware even with inexpensive offshore outsourcing. It is now considered a sound business decision these days to throw cheap hardware at problems. It is at least, if not more, cost effective than having the staff [sic] tuned and optimized for the same net effect.

Besides, a failed tuning and optimization effort leaves you exactly where you started. At least the hardware upgrade approach results in a faster/better server experiencing the same problem that may still have future value to the business once the fundamental problem is eventually corrected. And, if nothing else, the hardware can be depreciated, whereas the time spent tuning is always just a cost taken off the bottom line. So, with such cheap hardware, it might be a wiser business bet to throw hardware at some solutions sooner than was done in the past. One might go so far as to make an economic principle claim that the opportunity cost of tuning is foregoing cheap upgrades that might fix the issue and also possess intrinsic value. Stated this way, it is a safe bet that is where the business people would vote to spend.

When I interviewed Jonathan Lewis for the November 2008 issue of the NoCOUG Journal, I asked him for a second opinion. He homed in on a key element of Dr. Scalzo’s argument.

I think the key sentence in Dr. Scalzo’s case is “So, with such cheap hardware, it might be a wiser business bet to throw hardware at some solutions sooner than was done in the past.” And in this sentence, the most important word is “sooner.”

Throwing hardware at a problem has always been an option—and sometimes the correct option—but 20 years ago hardware was much more expensive than it is now, took up more space, used more power, and generated more heat so, in the standard analysis of cost/benefit/risk, correcting the bad code was usually seen to be more cost effective than using more hardware to live with bad code.

Nothing has really changed in the analysis—except the pricing. If you know that more hardware will give you the scalability you need for as long as you need; the installation process is fast enough and safe enough; and the alternative is going to take too long, need too much testing, or introduce too much risk, then the hardware is the right choice.

It has to be said, though, that the analysis is not always carried out correctly. Issues of scalability can be deceptive—especially when you are talking about highly concurrent systems—and installing new hardware isn’t always risk-free or fast. The main drawback to the hardware solution (if the analysis isn’t done properly) is that you can still end up having to fix the code, except the fix is now an emergency that appears unexpectedly at some future time.

When I interviewed Dr. Scalzo for the next issue of the Journal, I asked him to clarify his remarks. He focused on the cost of tuning:

No—Oracle is a great database with extensive flexibility. Instead, it’s the cost of tuning (whether for database, operating system, and/or application) that has become somewhat cost ineffective. Let me give an example. A decade ago, tuning the SGA memory allocation and usage was critical—and it still is today. But with dirt-cheap memory, servers that can accommodate tons of memory, and Oracle automatic memory management—just buy more memory. It’s often far cheaper to add 16 GB RAM than pay two days’ expert database consultation fees. Yes, there are some scenarios where simply adding hardware might make a database bottleneck situation worse. But the odds and cost are now such that it’s a worthwhile first step—since you can always tune if that fails. I’m just preaching to try cheap and easy solutions first.
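To put Dr. Scalzo’s memory example in concrete terms, here is a minimal sketch, assuming Oracle Database 11g Automatic Memory Management and purely illustrative sizes, of what “just buy more memory” looks like in practice:

-- Minimal sketch (illustrative values): after installing more RAM,
-- raise the Automatic Memory Management targets and let Oracle
-- apportion the memory between the SGA and the PGA automatically.
ALTER SYSTEM SET memory_max_target = 16G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 16G SCOPE = SPFILE;
-- MEMORY_MAX_TARGET is a static parameter, so the instance must be
-- restarted for the new settings to take effect.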

In the next issue of the Journal, I interviewed Karen Morton and discussed the same topic with her. She emphasized the need for diagnosis before attempting any fix:

If you have performance problems that have been proven, through careful analysis, to have their root cause in substandard hardware, then the solution may be to upgrade. The investment made to make sure personnel are adequately trained to properly diagnose and repair performance problems can benefit your company in the long run much more so than taking what appears to be a quick fix by throwing hardware at the problem in the short run.

I think that when personnel are properly trained and knowledgeable, they will not only be able to quickly, efficiently, and cost effectively find and fix problems when they arise, but they will not make the mistakes that cause the problems in the first place.

I believe the truth is that if hardware isn’t your real problem, you’ll eventually have no other choice than to address the actual root cause, because hardware upgrades can only be a Band-aid on the wound for so long. I’ve seen too many people try this approach only to pay hundreds of thousands of dollars for hardware they don’t really even need and still have performance problems. When they finally do address the real root cause and fix it, they often find that they could have bypassed the entire expenditure on hardware and saved more time and money if they’d just done it right in the first place.

To summarize, while physical database design improvements may be the elegant and efficient solution to a performance and scalability challenge, more powerful hardware can sometimes be the simplest, quickest, and most cost-effective solution. For example, GridIron Systems claims that its “TurboCharger” caching solution “boosts Oracle database performance by providing up to 1000x faster access to the data blocks containing database tables and indexes.” However, diligence and diagnosis are necessary because a hardware upgrade may not improve the performance of your task and can sometimes make it worse. As Cary Millsap points out in Why “System” Is a Four-Letter Word, “performance of your task gets worse if the ‘improvement’ intensifies competition for the resource that is the bottleneck for your task.”
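A simple queueing formula makes Millsap’s caveat concrete. Under textbook single-server queueing assumptions (an illustrative approximation, not Millsap’s own analysis), the response time R of a request at a resource with service time S and utilization ρ is:

R = S / (1 − ρ)

If a hardware upgrade elsewhere allows more work to reach a disk that is already 90% busy, pushing its utilization from 0.90 to 0.95 doubles the response time at that disk, from 10S to 20S, even though the disk itself is unchanged.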

April Fools’ Day Joke

I started writing my book on April 1, 2011, when enthusiasm for the Oracle Exadata Database Machine was at its height. The strengths of Exadata are Smart Scans, Storage Indexes, and Hybrid Columnar Compression, which dramatically reduce the cost of full table scans. However, it was clear to me that powerful hardware did not eliminate the need for physical database design. I also noted that Oracle’s most recent attempt at the TPC-H benchmark did not use Exadata. I wrote the following tongue-in-cheek posting in honor of the day.

SECRET OF ORACLE DATABASE PERFORMANCE FOUND IN 1897 SEARS ROEBUCK CATALOG

Redwood Shores, April 1, 2011 – In between mouthfuls of raw tofu slices delicately dressed with chili oil and shredded seaweed, Oracle CEO Larry Ellison related his game-changing epiphany about database performance.

“My personal ivory buyer was showing me a picture of ivory chopsticks in the 1897 Sears Roebuck catalog when I noticed the following words at the back of the catalog: IF YOU DON’T FIND IT IN THE INDEX, LOOK VERY CAREFULLY THROUGH THE ENTIRE CATALOG.

“That’s when the truth hit me like a ton of bricks. Physical database design is hard. There are few good books about it. Why should our loyal customers spend a lot of money on database consultants when they can spend an equal amount of money on Oracle software and hardware instead? We could tell ’em to stop worrying about indexes, clustering, partitioning, and materialization and sell ’em a big honkin’ piece of hardware that can simply scan entire tables every time.”

Instructions in the 1897 Sears Roebuck catalog

Mr. Ellison went on: “It’s not that we haven’t tried the traditional route. Consider Multidimensional Clustering, for example. We’ve offered it from day one, but the only places it is ever used are the Oracle data dictionary and our own TPC-C benchmarks. It’s time to admit that our customers are right and to give them what they want, not what they need.”

When asked how he came up with the “Smart Scan” moniker for what was formerly called a full table scan, Mr. Ellison was equally candid. “When other companies sell fish, they bill it as dead fish. We call it sushi.”

When told that a certain database maverick named Iggy Fernandez had started writing a book with the ridiculously long title of “Expert Oracle Indexing, Clustering, Partitioning, and Materialization for Performance and Scalability in Oracle Database 11g Release 2” that he hopes to release in time for OpenWorld 2011, Mr. Ellison let out a derisive snort and predicted that Mr. Fernandez was unlikely to get rich from it. When contacted for comment, Mr. Fernandez reluctantly agreed with Mr. Ellison’s prediction and quoted American poet Robert Frost: “Two roads diverged in a wood, and I—I took the one less traveled by.”

Load-and-Go

I myself have encountered at least one database application that did not use any indexes whatsoever but performed adequately even though every query required full table scans. In fact, a database vendor named ParAccel recently performed the 1 TB TPC-H data warehouse benchmark without any indexes at all; it calls this approach “Load-and-Go.” You may wonder how a database can perform joins—inner, outer, semi, and anti joins—without indexes. Oracle Database has two methods at its disposal—the “hash join” and the “sort merge join.” Please refer to the links for more information.
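To illustrate, here is a minimal sketch, using hypothetical table and column names, of joins that need no indexes at all; the hints merely make the join method explicit, since the optimizer will choose a hash join or a sort merge join on its own when no indexes exist:

-- Hypothetical schema: no indexes exist on either table.

-- Hash join: build an in-memory hash table on one input and
-- probe it with rows from the other input.
SELECT /*+ USE_HASH(o) */ c.cust_name, o.order_total
FROM customers c, orders o
WHERE o.cust_id = c.cust_id;

-- Sort merge join: sort both inputs on the join key, then merge.
SELECT /*+ USE_MERGE(o) */ c.cust_name, o.order_total
FROM customers c, orders o
WHERE o.cust_id = c.cust_id;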

Summary

  • A hardware upgrade can sometimes be the simplest, quickest, and most cost-effective solution to a performance and scalability problem.
  • Performance of your task gets worse if a hardware upgrade intensifies competition for the resource that is the bottleneck for your task.
  • Oracle Database can use the “hash join” and “sort merge join” techniques to perform joins of all kinds without the use of indexes.


Exercises

  • Use Statspack or AWR to investigate whether your database application will benefit from faster I/O (a starter query appears after this list).
  • Evaluate SSD products from vendors such as Kaminario, Fusion-io, Texas Memory Systems, and GridIron Systems.
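As a starting point for the first exercise, the following sketch, which assumes nothing beyond the standard V$SYSTEM_EVENT view, lists the top User I/O wait events accumulated since instance startup; if these events dominate total wait time, faster I/O may help, and if they do not, a storage upgrade is unlikely to improve your task:

-- Minimal sketch: top ten cumulative User I/O wait events since startup.
SELECT *
FROM (SELECT event,
             total_waits,
             ROUND(time_waited_micro / 1e6) AS seconds_waited
      FROM v$system_event
      WHERE wait_class = 'User I/O'
      ORDER BY time_waited_micro DESC)
WHERE ROWNUM <= 10;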

Comments

  1. Michael Fontana
     September 13, 2011 at 8:35 am

    Thought-provoking and topical. Keep ’em coming, Iggy!

  2. Iggy Fernandez
    September 13, 2011 at 4:03 pm

    Michael Fontana:

    Thought-provoking and topical. Keep ’em coming, Iggy!

    Thanks, Michael. I hope I reach the finish line someday.

  3. Noons
     September 13, 2011 at 4:43 pm

    Good post, Iggy. The problem I have with the “Dr”’s pronouncements – and so many others of the same calibre over 35 years of IT (yes, it’s nothing new, the “throw hardware” thing) – is this:
    “It’s often far cheaper to add 16 GB RAM than pay two days’ expert database consultation fees. Yes, there are some scenarios where simply adding hardware might make a database bottleneck situation worse.”

    It is very often NOT cheaper to add 16 GB of memory!
    “Dr” Scalzo completely forgets that in most server cases, additional memory can’t be sourced or installed without major hardware upgrades, which then cascade into software upgrades and so on.
    In this day and age of “new hardware generations” every 3 years, it is only too easy to find oneself with hardware that simply cannot take more memory without first upgrading the existing memory to a different chip architecture, or simply getting a new motherboard/processor card.
    The problem with all these pronouncements is that they are made by folks without the foggiest clue what it is like to run a modern data centre and where the real costs are…

  4. Iggy Fernandez
    September 14, 2011 at 1:31 pm

    Thanks for the feedback, Nuno. I’m glad that I’m serializing my book on my blog first. One of the problems of working with a publisher is that one publishes first and gets feedback later. 🙂 It ought to be the other way around.

  5. September 15, 2011 at 12:41 pm

    Noons makes a good point here, but I think Bert’s comment should be prefaced with “all things being equal,” because that’s always the goal. And I don’t agree with the Scalzo comment; it’s just that, in context, you can’t really blame management, if it’s pitched this way, for choosing the hardware. It’s probably been recommended by their own experts, right?

