Fifteenth TPC Technology Conference on Performance Evaluation & Benchmarking
in conjunction with VLDB 2023
August 28, 2023
All times are local times.
Multivariate Time Series Anomaly Detection: Fancy Algorithms and Flawed Evaluation Methodology
Mohamed El Amine Sehili and Zonghua Zhang
A Comprehensive Study on Benchmarking Permissioned Blockchains
Jeeta Ann Chacko, Ruben Mayer, Alan Fekete, Vincent Gramoli and Hans-Arno Jacobsen
Benchmarking Generative AI Performance Requires a Holistic Approach (for more information about this panel - see below)
Ajay Dholakia, Lenovo (Moderator), David Ellison, Lenovo (Chair of Lenovo Responsible AI), Miro Hodak, AMD (MLPerf Work Group Chair), Debojyoti Dutta, Nutanix (MLCommons Board Member), Carsten Binnig, TU Darmstadt (Research focus on Data and AI Systems)
Graph Stores with Application Level Query Result Caches (invited talk)
Hieu Nguyen, Jun Li and Shahram Ghandeharizadeh
Chaosity: Understanding Contemporary NUMA-architectures
Hamish Nicholson, Andreea Nica, Aunn Raza, Viktor Sanca and Anastasia Ailamaki
Benchmarking Large Language Models: Opportunities and Challenges
Miro Hodak, David Ellison, Chris Van Buren, Xiaotong Jiang and Ajay Dholakia
The Linked Data Benchmark Council (LDBC): Driving competition and collaboration in the graph data management space
Gabor Szarnyas, Brad Bebee, Altan Birler, Alin Deutsch, George Fletcher, Henry A. Gabb, Denise Gosnell, Alastair Green, Zhihui Guo, Keith W. Hare, Jan Hidders, Alexandru Iosup, Atanas Kiryakov, Tomas Kovatchev, Xinsheng Li, Leonid Libkin, Heng Lin, Xiaojian Luo, Arnau Prat-Perez, David Puroja, Shipeng Qi, Oskar van Rest, Benjamin A. Steer, David Szakallas, Bing Tong, Jack Waudby, Mingxi Wu, Bin Yang, Wenyuan Yu, Chen Zhang, Jason Zhang, Yan Zhou and Peter Boncz
The LDBC Social Network Benchmark Interactive Workload v2: A Transactional Graph Query Benchmark with Deep Delete Operations
David Puroja, Jack Waudby, Peter Boncz and Gabor Szarnyas
A Cloud-Native Adoption of Classical DBMS Performance Benchmarks and Tools
Patrick Erdelt
Panel Discussion (11:00-12:00)
The recent focus in AI on Large Language Models (LLMs) has brought the topic of trustworthy AI to the forefront. Along with the excitement of human-level performance,
Generative AI systems enabled by LLMs have raised many concerns about factual accuracy, bias along various dimensions, and the authenticity and quality of generated output.
Ultimately, these concerns directly affect the user’s trust in the AI systems that they interact with.
The AI research community has developed a variety of metrics for perplexity, similarity, bias, and accuracy that attempt to provide an objective comparison between different AI systems.
However, these are difficult concepts to encapsulate in metrics that are easy to compute. Furthermore, AI systems are advancing to multimodal foundation models, which makes creating simple metrics an even more challenging task.
This panel of experts from across industry and academia will discuss the recent trends in measuring the performance of foundation models like LLMs and multimodal models. The need for creating metrics and ultimately benchmarks that enable meaningful comparisons between different Generative AI system designs and implementations is getting stronger. The panel discussion will focus on distilling the current state of the art as well as future trends aimed at increasing trust in Generative AI systems.
Dr. Ajay Dholakia is Principal Engineer, Chief Technologist for Software Solutions Development and CTO, SAP Alliance in Lenovo Infrastructure Solutions Group. In this role, he is leading the development of customer solutions in the areas of AI, Big Data & Analytics, Cloud computing as well as emerging technologies including edge computing, Internet of Things (IoT) and Blockchain. In his career spanning over 30 years, he has led diverse projects in research, technology, product and solution development and business/technical strategy. Ajay holds more than 60 patents and has authored over 60 technical publications including a book.
David Ellison is the Senior AI Data Scientist for Lenovo ISG. Through Lenovo’s US and European Innovation Centers, he leads a team that uses cutting-edge AI techniques to deliver solutions for external customers while internally supporting the overall AI strategy for the World Wide Infrastructure Solutions Group. Before joining Lenovo, he ran an international scientific analysis and equipment company and worked as a Data Scientist for the US Postal Service. Before that, he received a PhD in Biomedical Engineering from Johns Hopkins University. He has numerous publications in top-tier journals, including two in the Proceedings of the National Academy of Sciences.
Miro Hodak is a Senior Member of Technical Staff, AI/ML Solutions Architecture at AMD. He is also the current chair of the MLPerf Inference Working Group.
Debojyoti Dutta (Debo) currently leads AI solutions and engineering at Nutanix. Prior to this, he led engineering for Nutanix’s cloud management portfolio, including AI-driven operations and cost governance. Before Nutanix, Debo was a visiting scholar at Stanford (Management Science and Engineering), a founding member of MLCommons, and a Distinguished Engineer at Cisco. He obtained his postdoctoral training in computational biology and a PhD in computer science from the University of Southern California, and a BTech in computer science and engineering from the Indian Institute of Technology, Kharagpur.
Carsten Binnig is a Full Professor in the Computer Science department at TU Darmstadt and an Adjunct Associate Professor in the Computer Science department at Brown University. Carsten received his PhD at the University of Heidelberg in 2008. Afterwards, he spent time as a postdoctoral researcher in the Systems Group at ETH Zurich and at SAP working on in-memory databases. Currently, his research focus is on the design of data management systems for modern hardware as well as modern workloads such as interactive data exploration and machine learning. He has recently been awarded a Google Faculty Award and a VLDB Best Demo Award for his research.
Call For Papers
The Transaction Processing Performance Council (TPC) is a non-profit organization established in August 1988. Over the past three decades, the TPC has had a significant impact on the computing industry’s use of industry-standard benchmarks.
Vendors use TPC benchmarks to illustrate performance competitiveness for their existing products, and to improve and monitor the performance of their products under development.
Many buyers use TPC benchmark results as points of comparison when purchasing new computing systems.
The information technology landscape is evolving at a rapid pace, challenging industry experts and researchers to develop innovative techniques for evaluation, measurement and characterization of complex systems.
The TPC remains committed to developing new benchmark standards to keep pace, and one vehicle for achieving this objective is the sponsorship of the Technology Conference on Performance Evaluation and Benchmarking (TPCTC).
Over the last fourteen years, we have held TPCTC successfully in conjunction with VLDB.
With the fifteenth TPC Technology Conference on Performance Evaluation and Benchmarking (TPCTC 2023), we strive to build on the success of previous workshops by encouraging researchers and industry experts to present and debate novel ideas and methodologies
in performance evaluation and benchmarking for emerging technology areas. Authors are invited to submit original, unpublished papers that are not currently under review for any other conference or journal. We also encourage the submission of extended abstracts,
position statement papers and lessons learned in practice. The accepted papers will be published in the workshop proceedings, and selected papers will be considered for future TPC benchmark developments.
Topics of interest include, but are not limited to:
- Artificial Intelligence
- Hyperscale Datacenter
- Big Data Analytics
- Cloud Computing
- Social Media Infrastructure
- Internet of Things
- Database Optimizations
- Lessons learned in practice using TPC workloads
- General enhancements to TPC workloads
- Disaggregated Data Center
- In-memory databases
- Complex event processing
- Hybrid workloads
The length of a paper should not exceed 16 pages. Papers should follow the LNCS format.
The title page must contain a short abstract.
All papers should be submitted electronically in PDF format in compliance with the VLDB 2023 Submission Guidelines.
- How To Benchmark Permissioned Blockchains by Jeeta Ann Chacko, Ruben Mayer, Alan Fekete, Vincent Gramoli and Hans-Arno Jacobsen
- Chaosity: Understanding Contemporary NUMA-architectures by Hamish Nicholson, Andreea Nica, Aunn Raza, Viktor Sanca and Anastasia Ailamaki
- Benchmarking Large Language Models: Opportunities and Challenges by Miro Hodak, David Ellison, Chris Van Buren, Xiaotong Jiang and Ajay Dholakia
- A Cloud-Native Adoption of Classical DBMS Performance Benchmarks and Tools by Patrick Erdelt
- The LDBC Social Network Benchmark Interactive Workload v2: A Transactional Graph Query Benchmark with Deep Delete Operations by David Puroja, Jack Waudby, Peter Boncz and Gabor Szarnyas
- Multivariate Time Series Anomaly Detection: Fancy Algorithms and Flawed Evaluation Methodology by Mohamed El Amine Sehili and Zonghua Zhang
- The Linked Data Benchmark Council (LDBC): Driving competition and collaboration in the graph data management space by Gabor Szarnyas, Brad Bebee, Altan Birler, Alin Deutsch, George Fletcher,
Henry A. Gabb, Denise Gosnell, Alastair Green, Zhihui Guo, Keith W. Hare, Jan Hidders, Alexandru Iosup, Atanas Kiryakov, Tomas Kovatchev, Xinsheng Li, Leonid Libkin, Heng Lin, Xiaojian Luo,
Arnau Prat-Perez, David Puroja, Shipeng Qi, Oskar van Rest, Benjamin A. Steer, David Szakallas, Bing Tong, Jack Waudby, Mingxi Wu, Bin Yang, Wenyuan Yu, Chen Zhang, Jason Zhang, Yan Zhou and Peter Boncz
June 14th, 2023
June 21st, 2023
Notification of acceptance: June 28th, 2023
Workshop: August 28th, 2023
Conference Venue and Registration
Please visit the VLDB 2023 conference web site at: http://vldb.org/2023
Proceedings will be published by Springer-Verlag as Lecture Notes in Computer Science (LNCS). Selected papers may be considered for future TPC benchmark developments.
TPCTC 2023 Organization
General Chairs and Contacts
Raghunath Nambiar, AMD, USA, firstname.lastname@example.org
Meikel Poess, Oracle, USA, email@example.com
Program Committee
Ajay Dholakia, Lenovo, USA
Andrew Bond, Red Hat, USA
Anil Rajput, AMD, USA
Hans-Arno Jacobsen, University of Toronto, Canada
Harry Le, University of Houston, USA
John Poelman, IBM, USA
Klaus-Dieter Lange, Hewlett Packard Enterprise, USA
Michael Brey, Oracle, USA
Miro Hodak, AMD, USA
Nicholas Wakou, Dell, USA
Paul Cao, Hewlett Packard Enterprise, USA
Rodrigo D. Escobar, University of Texas at San Antonio, USA
Shahram Ghandeharizadeh, University of Southern California, USA
Tariq Magdon-Ismail, VMware, USA
Tilmann Rabl, Hasso Plattner Institute, Germany
Steering Committee
Meikel Poess, Oracle, USA
Andrew Bond, Red Hat, USA
Paul Cao, HPE, USA
Gary Little, Nutanix, USA
Raghunath Nambiar, AMD, USA
Public Relations Committee
Michael Majdalany, L&M Management Group, USA
Forrest Carman, Owen Media, USA
Andreas Hotea, Hotea Solutions, USA
About the TPC
The Transaction Processing Performance Council (TPC) is a non-profit organization that defines transaction processing and database benchmarks and distributes vendor-neutral performance data to the industry.
Additional information is available at: tpc.org.