Why You Should Look at TPC-C First
An Interview between TPC-C Subcommittee members and Kim Shanley, TPC Chief Operating Officer

[This interview was conducted in early October, 1996 with the following TPC members: Ed Whalen of Compaq (Chair of the TPC-C Subcommittee); David Simons, TPC Representative from Santa Cruz Operations; and Damien Lindauer and Charles Levine, TPC Representatives from Microsoft. It should be noted that while the views of these TPC representatives generally reflect a consensus among TPC-C Subcommittee members, the views expressed are their own. This interview has been abbreviated for the TPC Quarterly Report. The full interview can be found on the TPC web site.]

Shanley: The TPC first issued the TPC-C benchmark a little over four years ago. Over 20 companies worldwide have published results on it, and there is now a large body of TPC-C results for the public to analyze. Despite this, we still see TPC and TPC-C benchmarks criticized as not "real-world" enough, or too expensive, or too much influenced by the companies that run them. This criticism has led some people in the industry to dismiss TPC-C and propose that companies run some new type of benchmark.

We're here today to address these criticisms, and to try to examine whether TPC-C remains a solid, relevant benchmark.

The TPC says that TPC-C represents a broad range of OLTP environments. In designing the TPC-C Benchmark, the TPC chose to build the benchmark around just five transactions among thousands of possible OLTP transactions. How, then, does TPC-C represent a broad range of OLTP applications?

Lindauer: If you look at the five TPC-C transactions, they really model the basic types of operations that a typical OLTP system might use--posting an order, posting a payment, checking the status of an order, processing the delivery of an order and so forth. You could literally take a fully implemented TPC-C benchmark and use it as the basis for an order entry OLTP system. Granted, in a large organization there are more rules and more complexities that would have to evolve from it, but I think the beauty of TPC-C is that the basic transactions do most of the operations that a typical order entry system would do.
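The five transaction types Lindauer describes map to the TPC-C mix of New-Order, Payment, Order-Status, Delivery, and Stock-Level. As a rough illustration (not part of any official TPC kit), a benchmark driver picks the next transaction for each simulated terminal according to weighted percentages; the weights below follow the minimum mix percentages in the TPC-C specification, with New-Order filling the remainder, and the function and variable names are purely illustrative:

```python
import random

# The five TPC-C transaction types, with illustrative mix weights based on
# the specification's minimum percentages (New-Order takes the remainder).
TRANSACTION_MIX = [
    ("new_order", 45),     # post a new customer order
    ("payment", 43),       # post a payment against an order
    ("order_status", 4),   # check the status of an order
    ("delivery", 4),       # process delivery of queued orders
    ("stock_level", 4),    # check warehouse stock levels
]

def pick_transaction(rng=random):
    """Choose the next transaction type according to the weighted mix."""
    names, weights = zip(*TRANSACTION_MIX)
    return rng.choices(names, weights=weights, k=1)[0]

# A driver loop for one simulated terminal would repeatedly call
# pick_transaction() and dispatch to the matching transaction handler.
counts = {name: 0 for name, _ in TRANSACTION_MIX}
for _ in range(10_000):
    counts[pick_transaction()] += 1
```

In a full implementation each name would dispatch to a handler that issues the corresponding SQL against the warehouse/district/order tables, which is what makes the kit usable, as Lindauer notes, as the skeleton of a real order-entry system.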

Shanley: How is the TPC-C testing like a customer's environment and how different?

Levine: TPC-C is similar in terms of the diversity of OLTP functions, the diversity in the database, the number of tables, and the different types of data which are stored, dates and things like that. All of these things, plus the concept of entering orders and making payments, clearly relate to doing mainstream business functions. Probably the way in which TPC-C is different is that it's a more narrowly defined and controlled environment, which is an absolute necessity for a good benchmark. A benchmark is always a distillation of a real environment, and the success of the benchmark relates to how representative the distillation is of the thing that you distilled. There is no question that TPC-C eliminates a lot of secondary and tangential functions and activities that occur in a real business environment. I think it's our assertion that the things that we leave out don't have a huge impact on what we're trying to measure, which is the mainstream steady-state performance.

Shanley: What advantages or disadvantages does a standard benchmark like TPC-C have over custom benchmarks?

Lindauer: One of the advantages of TPC-C over a custom benchmark is that there are very rigorous controls and specifications in place and the auditing process is fantastic in the sense that it really gets everybody to play by the same rules. Custom benchmarks, regardless of how well they're thought out, tend to be driven by very strict deadlines and somewhat limited resources. They seem to lend themselves to discrepancies between comparisons and yes, to games almost being played.

Simons: Another advantage is that the cost of running TPC-C is much less than running a custom benchmark.

Lindauer: An often overlooked part of this process is that standard benchmarks really help products evolve and improve. We've found over and over again that our product has become a better performer across all applications due to the work on understanding, implementing, and improving our performance on the standard TPC-C benchmark. With custom benchmarks it's usually a one-time shot, and very little information flows back to the company to help the product thrive and make it better in general.

Shanley: Why do companies often prefer running TPC benchmarks to others?

Lindauer: What has actually happened in the last few years is that there has been a tremendous increase in complexity and new requirements to support larger workloads. To really measure a system properly today takes a tremendous amount of understanding of how all these new and better components interact to get a number which really makes sense. What ends up happening in some of these other benchmarks is that very crucial aspects of system performance are not adequately addressed, and therefore it's difficult for these benchmarks to reflect the true performance of a particular system.

Shanley: TPC benchmarks are run by the companies themselves. Doesn't that compromise objectivity?

Whalen: I actually believe that the TPC-C benchmark is very objective because the companies must submit a full disclosure report which they know will be closely reviewed by their competitors.

Simons: I think also, because there is an audit process, there's a need to much more carefully control exactly what happens to a system when it's being tested.

Shanley: What about critics that say that TPC benchmarks are too expensive? What's your reaction to that?

Lindauer: Good benchmarks are expensive to run because to be any good, the benchmark must be complex and robust. However, once vendors put out the initial outlay and have developed the expertise to run the benchmarks, they tend to be able to generate results much more frequently and cost effectively.

Shanley: Some critics of the TPC say we allow for too much vendor tuning, which produces results that a typical user, without the vendor's extensive knowledge of the product, wouldn't be able to produce. What's your response to this?

Simons: Well, my experience is that on the large systems we tend to run TPC-C on, users actually do spend a significant amount of time tuning the systems, looking at the performance of the system and seeing where they can improve it. So, I don't think it's necessarily unrepresentative to allow people to tune the system for the benchmark. In fact, this type of intensive tuning actually produces better performing systems for end-users.

Lindauer: There is nothing that we do to these systems that a customer couldn't understand by looking at the full disclosure reports and seeing what all was done.

Shanley: What can you learn from a TPC benchmark test other than the primary metric numbers?

Simons: The things I see end users learning from the TPC-C full disclosure report are, first, how the system was tuned; quite frequently they can pick up a set of tunables, which might even provide them with a good starting point for some of their own systems. Secondly, I think it can also help with capacity planning issues. Finally, the other thing that people can learn from looking at the Full Disclosure Reports is obviously pricing. Overall, for those running the benchmark, the more you run it, the more you actually learn about how the system as a whole is running and where the opportunities are for making improvements.

Shanley: What changes to the benchmark are being considered?

Whalen: Some of the things we are considering are reducing the disk capacity and/or the disk usage necessary to run this benchmark, so it might be easier and more accessible for people to run it with less disk space or fewer disks. We are also considering whether to increase the CPU usage per transaction, maybe making the transactions a little more complex. We would also like to see a Full Disclosure Report and an executive summary that might be of more value to the customer, the end user, by providing better information or perhaps just presenting it more graphically to make it easier to use. Essentially, we would like to make the benchmark easier to use but at the same time more valuable to vendors and end-users.

I would like to end by stating we're conducting an industry survey to see how people feel TPC-C should be changed in the future. People should contact the TPC office at: webmaster@tpc.org to provide feedback.
