Lecture
This Week’s Discoveries | 19 February 2019
- Date: Tuesday 19 February 2019
- Time:
- Location: Huygens, Niels Bohrweg 2, 2333 CA Leiden
- Room: De Sitterzaal
First Lecture
Title
Quantum computation with small quantum computers
Speaker
Vedran Dunjko is an Assistant Professor at the Leiden Institute of Advanced Computer Science (LIACS). His research lies at the intersection of computer science and quantum physics. In recent years he has focused mainly on the interplay between quantum computing, machine learning, and artificial intelligence.
Abstract
Theory shows that arbitrarily sized quantum computers may offer computational advantages for many problems. However, the quantum computers we are likely to have in the foreseeable future will be restricted in many ways, including in size. Can a small quantum computer genuinely speed up interesting algorithms? In this talk we will show that the answer is positive, even when the problem size is much larger than the computer itself.
Second Lecture
Title
Competitions – a healthy way to improve verification technology
Speaker
Jaco van de Pol (Aarhus University and University of Twente) is a professor of Computer Science at Aarhus University and the University of Twente. His research interests are model checking, theorem proving, and testing techniques for the analysis of safety, dependability, and security aspects of software-intensive computer systems. Together with Fabrice Kordon and Martina Seidl, he is organizing the workshop "Advancing Verification Competitions as a Scientific Method", which is being held at the Lorentz Center from 18 to 22 February.
Abstract
Software and hardware verification is a complex task, supported by automated reasoning tools. Tremendous progress has led to a multitude of algorithms and tools that provide automated and scalable solutions for specific verification instances. The research field has organized several series of competitions as a means of objectively evaluating and comparing verification tools on a common set of benchmarks. These competitions provide insight into the best solutions for a particular task, but they also motivate researchers to push the boundaries of their tools, improving the state of the art. As a result, these events have had a significant impact on the communities involved. What are the success factors for such competitions? And how do we optimize the learning outcome of verification competitions?
We discuss:
- how to set up the rules and select the benchmarks of a competition
- how to execute the competition itself
- how to evaluate and communicate the end results