TPC-D, Past, Present and Future
An Interview between Berni Schiefer, Chair of the TPC-D Subcommittee, and Kim Shanley, TPC Chief Operating Officer

[This interview was conducted on June 18, 1996 with Berni Schiefer, a senior IBM performance engineer and Chair of the TPC-D (Decision Support) Benchmark Subcommittee. It should be noted that while Mr. Schiefer's views generally reflect a consensus among TPC-D Subcommittee and TPC Council members, the views expressed are his own.]

Shanley: In the 100 and 300 GB range, there are now 7 results, all of them multiprocessing systems: five clustered and two non-clustered. What is it about TPC-D that would encourage multiprocessing systems and, among those, clustered implementations?

Schiefer: I think it is the ability to keep amassing more and more hardware without reaching some limit or major bottleneck. If we look at the recent 300 gigabyte result, the test sponsor configured a system where each node, each box, had eight processors in it, and they configured twenty of those. That is a grand total of 160 processors all simultaneously solving the 300 gigabyte queries. There is no SMP in the world that I am aware of today that has that many processors.

Shanley: Well, you addressed CPU utilization, but what are the other bottlenecks or critical performance points that enable a clustered system to do so well in terms of scalability? Is it their ability to handle the I/O and memory better?

Schiefer: I think in a one-to-one comparison I'm not sure there is any great distinction between the ability to perform I/O on an SMP system versus a clustered system. SMP systems, though, have fundamental constraints on just how much of that you can do.

Shanley: Is the TPC-D workload CPU-bound?

Schiefer: No, you can't say the workload is fundamentally CPU-bound. I would say that an initial cut at running the workload will, for almost everyone, show some queries that are highly CPU-bound and some queries that are highly I/O-bound. That's the starting point, and then you work from there.

Shanley: Do you expect, then, that there will be more and more clustered systems producing TPC-D results? Do you still expect to see some SMPs, maybe at the lower end, at the 100 gigabyte level?

Schiefer: I think the types of systems we'll see at the different volume points that exist for TPC-D will vary quite a bit based on the volume point. That is, I expect the 1 gig and 10 gig results to be mostly uniprocessors and some SMPs, with few or no clusters. I expect the 10-100 gig range to be primarily SMP systems, perhaps with a clustered system here and there. I expect the 300 gig and terabyte results to be primarily clustered systems.

Shanley: Have there been changes to the benchmark since it was first released in April, 1995?

Schiefer: We have had a small number of additional query variants approved. These were approved partly because, during the last year, and that can be a long time in the database development world, products have actually made progress in continuing to add SQL support to their database engines. As a result, they now offer additional functionality that brings them closer to the SQL-92 standard.

Shanley: But writing any SQL variants that take special advantage of knowledge of the TPC-D workload would be prohibited?

Schiefer: Yes. One of the biggest concerns of the subcommittee was to ban query variants or other hints that would essentially define the processing steps required to execute a query.

Shanley: Where do you see TPC-D evolving, from a major-revision perspective? For example, will more queries be added to maintain the complexity of the benchmark's workload against increasingly powerful systems?

Schiefer: Yes, I believe there is a need to perhaps drop some of the simplest queries and add some new queries in order to provide new measures for how robust and how broad a base of SQL queries an optimizer can process efficiently.

The second major area that requires, I think, revision is the underlying assumption in the schema, where we have a uniform distribution of data. In the real world, for example, there is a great disparity between items that are commonly purchased and items that are uncommonly purchased. The resulting non-uniform distribution of values in the database is something that we do not currently deal with in the TPC-D workload. So, that is the second major aspect that I see as a candidate for addition into the next major revision of the benchmark.
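
[As a rough illustration of the skew Schiefer describes, the sketch below contrasts a uniform item-popularity distribution with a Zipf-like one in which a few "commonly purchased" items dominate. The item counts, order counts, and weighting are invented for the example and are not part of the TPC-D specification.]

```python
import random
from collections import Counter

# Illustrative sketch only: contrasts the uniform value distribution TPC-D
# currently assumes with the skewed, "popular item" distribution seen in
# real order data. The sizes and the 1/rank weighting are arbitrary.
NUM_ITEMS = 1000
NUM_ORDERS = 100_000

uniform_picks = Counter(random.randint(1, NUM_ITEMS) for _ in range(NUM_ORDERS))

# Zipf-like skew: a handful of items account for most of the purchases.
weights = [1.0 / rank for rank in range(1, NUM_ITEMS + 1)]
skewed_picks = Counter(
    random.choices(range(1, NUM_ITEMS + 1), weights=weights, k=NUM_ORDERS)
)

print("most popular item, uniform:", uniform_picks.most_common(1))
print("most popular item, skewed: ", skewed_picks.most_common(1))
```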

Shanley: In the TPC-D FAQ, it says, "the intent is to focus on the strength of the data manager and its associated query optimizer...." Is the real secret to running TPC-D to have good query optimization tools or are there other critical areas?

Schiefer: Certainly having a good query optimizer at least for the 17 queries is very important to getting good TPC-D results. But there are a number of other areas that will need to be looked at in detail in order to get good execution time performance. Some of those areas are the ability to scan large volumes of data, to aggregate large volumes of data efficiently and also to parallelize a query so that a single query can take advantage of a large number of processes either in an SMP or in a cluster.
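
[A minimal sketch of the parallelization idea Schiefer mentions: one aggregation is split into partial scans that run on separate workers and are then merged, the way a parallel database engine spreads a query across the processors of an SMP or the nodes of a cluster. The "table", its columns, and the worker count are invented for the example; this is not how any particular database product implements it.]

```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor
import random

def scan_and_aggregate(partition):
    """Scan one partition of (status, price) rows and build partial sums."""
    partial = defaultdict(float)
    for status, price in partition:
        partial[status] += price
    return dict(partial)

def parallel_sum_by_status(rows, workers=4):
    # Split the rows into one partition per worker, scan them in parallel,
    # then merge the partial aggregates into the final result.
    partitions = [rows[i::workers] for i in range(workers)]
    totals = defaultdict(float)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(scan_and_aggregate, partitions):
            for status, subtotal in partial.items():
                totals[status] += subtotal
    return dict(totals)

if __name__ == "__main__":
    table = [(random.choice("FOP"), random.uniform(1, 100)) for _ in range(100_000)]
    print(parallel_sum_by_status(table))
```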

Shanley: Okay. During the development and prototyping of TPC-D, there was a very large range in how different systems handled the same query. It would seem common sense, then, to assume that to improve TPC-D performance companies would concentrate on optimizing their poor performers, that is, the subset of TPC-D queries that run slowly.

Schiefer: Focusing on the poor performing queries is a sound strategy and one which pays dividends, particularly in the Throughput metric because it does not weight each query equally but weights the long-running queries more heavily.
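
[The arithmetic behind that point, with invented numbers rather than real TPC-D timings: when a metric is driven by total elapsed time, halving a long-running query buys far more than halving a short one.]

```python
# Illustrative only: a throughput figure based on total elapsed time is
# dominated by the long-running queries. Timings are made up.
query_seconds = [10, 15, 20, 30, 1200]   # one long query dominates the stream

baseline = sum(query_seconds)                    # 1275 s
halve_short = baseline - query_seconds[0] / 2    # saves only 5 s
halve_long = baseline - query_seconds[-1] / 2    # saves 600 s

print(f"baseline elapsed time:        {baseline:.0f} s")
print(f"after halving a short query:  {halve_short:.0f} s")
print(f"after halving the long query: {halve_long:.0f} s")
```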

Shanley: These improvements in query optimization that we have been seeing, do you think it is fair to say that they reflect an overall improvement in decision support?

Schiefer: I think it's fair to say that any improvements you make are going to affect some segment of the broader picture of decision support queries that are run by customers.

Shanley: How important are indexing strategies to an implementation of TPC-D? Your TPC-D documentation says different strategies are allowed but the implementor must pay the price. Could you talk about this issue, and also address how severe these penalties are? Are they enough to discourage any extraordinary indexing strategies?

Schiefer: So far the evidence suggests that the penalties are sufficient. We have seen no evidence of extraordinary indexing strategies.

Shanley: Over the long run, would one way of breaking the benchmark be to have studied the workload long enough and intensely enough to have the optimizer select the right indexes?

Schiefer: You could, but it's a fairly tricky job, and success would come more by accident than by design. The 17 TPC-D queries are actually sufficiently diverse that it is a very tough job.

Shanley: Why did the subcommittee choose a 24 x 7 basis for the benchmark?

Schiefer: I think the primary reason was that the subcommittee was attempting to anticipate the direction in which we see the consumers of these databases moving. That is, while virtually no one has a real 24 x 7 requirement for decision support databases today, most customers will tell you that they wish it were available today. So we were really trying to set the direction by modeling, or attempting to model, a 24 x 7 environment rather than a read-only environment.

Shanley: Which I guess is the rationale for including updates rather than a read-only database?

Schiefer: I think we start with 24 x 7 in order to be available to the user community all the time. This means there is no refresh window available here. We recognize that, for practical reasons, the bulk of the work done today still uses at least a partial refresh mode, as opposed to completely online updates. As a result, while we essentially encourage vendors to use a trickle update model, it is permissible to use more of a batch update model where the updates do not run concurrently with the queries. It is important to point out that all of the results submitted so far use a batch update model, not a trickle update model.

Shanley: It wouldn't be sensible to use a trickle update model, would it? You are going to have some impact on performance then.

Schiefer: True, but the time to do the batch updates is added to the time to run the queries in the Throughput test, so if you were to be successful in trickling in the updates with little or no impact on the queries, the elapsed time used to compute the Throughput metric would be lower. So there is an advantage to be gained if one can successfully implement the trickle updates. I think that we will see an implementation of the trickle updates; it's just going to take a little bit longer for people to work on those implementations. By implementations I don't just mean making the database run efficiently with that model, but also the infrastructure that needs to be added to the benchmarking program to implement the trickle updates: how do you pace the trickle, and so on?
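
[One simple way the "pacing" question could be answered is sketched below: spread small update batches evenly across the query window instead of applying them all at once. The apply_batch function, batch size, and timings are invented for illustration; they are not part of the TPC-D specification or any sponsor's implementation.]

```python
import time

def apply_batch(rows):
    # Stand-in for inserting/deleting a small slice of rows in the database.
    print(f"applied {len(rows)} rows")

def trickle_updates(pending_rows, window_seconds, batch_size=100):
    # Break the pending updates into small batches and pace them so the
    # last batch lands roughly at the end of the query window.
    batches = [pending_rows[i:i + batch_size]
               for i in range(0, len(pending_rows), batch_size)]
    pause = window_seconds / max(len(batches), 1)
    for batch in batches:
        apply_batch(batch)
        time.sleep(pause)

if __name__ == "__main__":
    trickle_updates(list(range(1000)), window_seconds=5)
```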

Shanley: What would you consider the TPC-D Subcommittee's most controversial design decision?

Schiefer: I think the most controversial decisions were, first, precluding vendors from rewriting the SQL to suit their particular dialect of SQL and, second, the metric. We had endless discussions about what is the most appropriate, most correct, and most comprehensible way of rolling up this broad set of individual measurements, one for each of the queries, into one magic number that could be used for computing a price/performance metric.
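
[To make the rollup question concrete, the sketch below compares two candidate ways of collapsing a set of per-query timings into a single figure. It is illustrative only, with invented timings, and is not the TPC-D metric definition; it simply shows why the choice matters, since a geometric mean keeps one very long query from dominating the composite the way an arithmetic mean does.]

```python
import math

# Invented per-query timings, in seconds; not real TPC-D measurements.
timings = [2.0, 3.0, 5.0, 8.0, 400.0]

arithmetic_mean = sum(timings) / len(timings)
geometric_mean = math.exp(sum(math.log(t) for t in timings) / len(timings))

print(f"arithmetic mean: {arithmetic_mean:.1f} s")   # dominated by the 400 s query
print(f"geometric mean:  {geometric_mean:.1f} s")    # balances all five queries
```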

Shanley: Did TPC-D break any major new benchmarking ground?

Schiefer: In my mind, there are two major contributions that TPC-D has made, both to TPC benchmarking and to benchmarking as a whole. The first is the fact that we provide DBGEN, the actual database generation program. The second major step, and this was a controversial one, was the use of SQL. I think it was a very courageous step, and again one that many people have complimented the TPC on, in terms of taking the bold step of recognizing that the vast majority of users who use relational databases don't have the knowledge and skills to hand-tune an SQL statement, or are using some sort of fancy GUI front end which does not even give them that ability. I would expect that other benchmarks will evolve to also require some fixed specification of the query and not allow hand-tuning.
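
[A toy sketch of the idea behind providing a shared data generator: given the same seed and scale factor, every test sponsor produces identical rows. This is not the actual DBGEN code; the table layout, field names, and row counts are invented for the example.]

```python
import random

def generate_orders(scale_factor, seed=1):
    # Deterministic generation: the same seed and scale factor always yield
    # the same rows, so every sponsor populates an identical database.
    rng = random.Random(seed)
    rows = []
    for order_key in range(1, 10 * scale_factor + 1):
        customer_key = rng.randint(1, 1000 * scale_factor)
        total_price = round(rng.uniform(100.0, 10_000.0), 2)
        rows.append((order_key, customer_key, total_price))
    return rows

# Two independent runs with the same inputs yield identical data.
assert generate_orders(scale_factor=2) == generate_orders(scale_factor=2)
print(generate_orders(scale_factor=1)[:3])
```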

Shanley: Any last thoughts?

Schiefer: Just a few. First, we will continue to evolve TPC-D to address the needs of the decision support community. Secondly, I would like to thank the members of the subcommittee who have contributed so much and continue to do so as we evolve the workload, as well as all of the vendors who have sponsored results and made this a real benchmark, as opposed to just an academic exercise.
