The Fifth International Workshop on Large-Scale Testing (LT 2016)

March 12, 2016
Delft, Netherlands
Co-located with ICPE 2016
The 7th International Conference on Performance Engineering
Important Dates
Intent to submit: Nov. 16, 2015
(Submission still open after the abstract deadline!)
Technical papers: Dec. 8, 2015 (extended from Nov. 23, 2015)
Paper notifications: Dec. 15, 2015
Camera ready: Jan. 18, 2016
Presentation track: Feb. 1, 2016
Talk notifications: Feb. 8, 2016
Workshop date: Mar. 12, 2016
Past LT Workshops
LT 2012 LT 2013 LT 2014 LT 2015

Automated Analysis of Load Test Results of Systems with Equilibrium or Transient Behaviour

André Bondi, Software Performance and Scalability Consulting LLC, New Jersey, USA

Talk Abstract:
Performance test data should be analyzed to determine if performance requirements are being met, to see if they reveal opportunities for performance improvement, and to see if they show signs of lurking performance issues or malfunctions. Automated analyses of the measurements can be useful when the number of resource usage measures and performance measures is large, when the number of nodes under test is large, or when the number of test cases is large. We shall examine this for cases in which the system under test is subjected to a constant load, as might be the case for an online transaction processing system, and for the case where the load is inherently bursty, as would be the case for an alarm or monitoring system that is receiving streams of notifications from many sensors at once.
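The distinction between equilibrium and transient behaviour can be made concrete with a minimal sketch (not from the talk; the function, the half-split heuristic, and the tolerance threshold are illustrative assumptions): an automated check might flag a measurement series as non-stationary when the means of its two halves drift apart by more than a fraction of the series' overall variability.

```python
# Illustrative sketch only: a crude automated check for whether a series
# of measurements (e.g. response times or utilizations) looks steady-state
# (equilibrium) or still trending (transient). The half-split comparison
# and the tolerance value are hypothetical choices for illustration.

def looks_steady_state(samples, tolerance=0.5):
    """Return True if the two halves of `samples` have similar means.

    `tolerance` caps the allowed difference between half-means as a
    fraction of the sample standard deviation of the whole series.
    """
    n = len(samples)
    if n < 4:
        raise ValueError("need at least 4 samples")
    mid = n // 2
    first, second = samples[:mid], samples[mid:]
    mean = sum(samples) / n
    # Sample variance and standard deviation of the whole series.
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    std = var ** 0.5
    if std == 0:
        return True  # a perfectly flat series is trivially steady
    drift = abs(sum(first) / len(first) - sum(second) / len(second))
    return drift <= tolerance * std

# A flat, noisy series should pass; a steadily climbing one should not.
steady = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.2, 10.0]
ramp = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0, 22.0, 24.0]
print(looks_steady_state(steady))  # True
print(looks_steady_state(ramp))    # False
```

In a real test harness one would use a proper stationarity test over many metrics and nodes at once, which is precisely where the automation the abstract describes pays off.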

Short Biography of the Speaker:
André Bondi is a highly experienced software performance engineering consultant and researcher. He initiated the performance engineering practice at Siemens Corporate Research (now Siemens Corporate Technology US) in Princeton, New Jersey, where he worked for nearly twelve years. He has worked on performance issues in several domains, including telecommunications, conveyor systems, finance systems, building surveillance, railways, and network management systems. Prior to joining Siemens, he held senior performance positions at two start-up companies. Before that, he spent more than ten years working on a variety of performance and operational issues at AT&T Labs and its predecessor, Bell Labs. He has taught courses in performance, simulation, operating systems principles, and computer architecture at the University of California, Santa Barbara. Dr. Bondi holds a Ph.D. in computer science from Purdue University, an M.Sc. in statistics from University College London, and a B.Sc. in mathematics from the University of Exeter.

Performance Testing in Software Development: Getting the Developers on Board

Lubomír Bulej, Charles University, Prague, Czech Republic

Talk Abstract:
Society is increasingly dependent on software-intensive systems that are required to interact with a huge number of users and respond in a timely manner. Failing that often results in users and consumers being unable to access the advertised services or products, or worse, citizens being unable to access vital services provided by the state. In such systems, failures are more likely to be caused by performance issues than by a faulty implementation of some feature.

Modern development processes for general-purpose software systems typically focus on managing complexity to deliver correctly functioning software on time, and best software development practices frown upon premature optimization. With other aspects of software design and construction put above performance concerns, performance becomes a secondary concern that only needs to be addressed if the system's performance turns out to be unsatisfactory. This contrasts with real-time systems, where meeting real-time performance requirements is essential, and performance is a primary design concern that permeates the development process and the resulting system as a whole. Consequently, the overall performance of the system is a concern that cannot be addressed locally: it must be designed into the system and strictly controlled throughout its construction.

Simply adopting the process of real-time system development for the development of general-purpose systems is not possible. The size, complexity, and depth of the software stack used to build general-purpose software-intensive systems typically dwarf those of special-purpose mission- or safety-critical real-time systems. The level of control that can be exerted over individual elements of real-time systems either does not scale or is not possible at all; in addition, performance requirements are usually much less precise and not easily expressed in terms of latencies or deadlines.
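One lightweight way to make performance a developer-visible concern, in the spirit of the abstract, is to put a micro-benchmark guard directly into the ordinary test suite. The sketch below is illustrative only (not from the talk); the workload, the median-of-repeats measurement, and the time budget are all hypothetical assumptions, and in practice the budget would come from a recorded baseline rather than a constant.

```python
# Illustrative sketch: a performance "guard" test that runs alongside
# functional unit tests, so a gross slowdown fails the build just like
# a functional bug would. Workload and budget are hypothetical.
import time


def measure_median_runtime(fn, repeats=5):
    """Run `fn` several times and return the median wall-clock seconds."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    timings.sort()
    return timings[len(timings) // 2]


def test_lookup_stays_within_budget():
    # Hypothetical workload: 10,000 dictionary lookups.
    data = {i: str(i) for i in range(10_000)}
    median = measure_median_runtime(lambda: [data[i] for i in range(10_000)])
    # Hypothetical budget; a real guard would compare against a baseline
    # measured on the same hardware, with an agreed tolerance.
    assert median < 0.5, f"lookup regressed: median {median:.4f}s"


test_lookup_stays_within_budget()
print("performance guard passed")
```

Using the median of several repeats, rather than a single run, damps scheduler noise; the trade-off against full load testing is that such guards catch only gross, locally reproducible regressions.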

Short Biography of the Speaker:
Lubomír Bulej is an assistant professor at the Department of Distributed and Dependable Systems, Charles University, Prague, Czech Republic. His primary research interests include performance-related topics, focusing on performance evaluation, testing, and monitoring. His research interests also include dynamic program analysis, with a specific focus on making programs running on the Java (and Dalvik) virtual machines more observable. He holds an MSc from the Czech Technical University in Prague and a PhD from Charles University in Prague. He is a member of the ACM.