A promising technique for the formal verification of
embedded and cyber-physical systems is flow-pipe construction, which
creates a sequence of regions covering all reachable states over time.
Flow-pipe construction methods can check whether specifications are
met for all states, rather than just testing using a finite and
incomplete set of simulation traces. A fundamental challenge when
using flow-pipe construction on high-dimensional systems is the cost
of geometric operations, such as intersection and convex hull. In this
talk, we address this challenge by showing that it is often possible
to remove the need to perform high-dimensional geometric operations by
combining two model transformations, direct time-triggered conversion
and dynamics scaling. Further, we discuss how to make the
overapproximation error in the conversion arbitrarily small. Finally,
we show that our transformation-based approach enables the analysis of
a drivetrain system with up to 51 dimensions.
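The core idea behind flow-pipe construction can be sketched in a few lines for a one-dimensional linear system x'(t) = a*x(t): each time step contributes a region (here, an interval) that covers the flow between consecutive sample times, and a safety specification is then checked against every region. This is a minimal illustration under my own simplifying assumptions (1-D linear dynamics, interval sets); real tools use far richer set representations and the names below are illustrative:

```python
import math

def flowpipe(a, x0_lo, x0_hi, t_max, dt):
    """Cover the states reachable by x'(t) = a*x(t) from the initial
    interval [x0_lo, x0_hi] over [0, t_max] with one interval per time
    step.  Since the flow x0 * exp(a*t) is monotone in t, the hull of
    the initial interval's images at both step endpoints covers the
    whole step."""
    pipe = []
    for k in range(int(round(t_max / dt))):
        f0 = math.exp(a * k * dt)        # flow map at step start
        f1 = math.exp(a * (k + 1) * dt)  # flow map at step end
        corners = [x0_lo * f0, x0_lo * f1, x0_hi * f0, x0_hi * f1]
        pipe.append((min(corners), max(corners)))
    return pipe

def satisfies(pipe, unsafe_lo):
    """Check a simple safety spec: no covered state reaches unsafe_lo."""
    return all(hi < unsafe_lo for _, hi in pipe)
```

For a stable system (a < 0) starting in [0.5, 1.0], every segment stays below 1.5, so the specification holds for all states rather than just for finitely many simulated trajectories.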
Bio: Sergiy Bogomolov is on the faculty of the Australian National
University, where he leads the Dependable Cyber Physical Systems
Lab. Prior to joining ANU, Sergiy was a postdoctoral researcher at
the Institute of Science and Technology Austria. Sergiy is broadly
interested in algorithms and techniques that support the design and
development of trustworthy and resilient autonomous systems. For
this purpose, he uses and develops techniques at the interface of
cyber-physical systems verification and AI planning. He received
his Ph.D. and M.Sc. degrees from the University of Freiburg,
Germany.
Increasingly, autonomous systems’ behavior is driven by machine learning, and in particular by (deep) neural networks. One crucial question then is how to effectively test such systems given that they don’t have (complete) specifications or even source code corresponding to some of their critical behavior. Further, adding even more complexity to an already difficult problem, many of these autonomous systems are usually integrated into larger systems, thus possibly leading to undesirable feature interactions. This talk will report on recent work done to address these intricate problems and will attempt to better define the challenges ahead. Examples from the automotive domain will be used to illustrate the main points.
Lionel C. Briand is professor in software verification and validation at the SnT centre for Security, Reliability, and Trust, University of Luxembourg, where he is also vice-director of the centre. He is currently running multiple collaborative research projects with companies in the automotive, satellite, financial, and legal domains. Prior to that, Lionel held various engineering, academic, and research leadership positions in five other countries.
Lionel was elevated to the grade of IEEE Fellow in 2010 for his work on the testing of object-oriented systems. He was granted the IEEE Computer Society Harlan Mills award and the IEEE Reliability Society engineer-of-the-year award for his work on model-based verification and testing, in 2012 and 2013 respectively. He received an ERC Advanced grant in 2016, on the topic of modelling and testing cyber-physical systems, the most prestigious individual research grant in the European Union. His research interests include software testing and verification, model-driven software development, search-based software engineering, and empirical software engineering.
Automata learning is a technique that has successfully been applied in verification, with the automaton type
varying depending on the application domain. Adaptations of automata learning algorithms for increasingly
complex types of automata have so far been developed from scratch, because
no abstract theory was available to offer guidelines. This makes it hard
to devise such algorithms, and it obscures their correctness proofs.
We introduce a simple category-theoretic formalism that provides an appropriately abstract foundation for
studying automata learning. Furthermore, our framework establishes formal relations between algorithms for
learning, testing, and minimization. We illustrate its generality with two examples: deterministic and weighted
automata.
The notion of comparison between system runs is fundamental in formal verification. This concept is implicitly present in the verification of qualitative systems, and is more pronounced in the verification of quantitative systems. In this work, we identify a novel mode of comparison in quantitative systems: the online comparison of the aggregate values of two sequences of quantitative weights. This notion is embodied by comparator automata (comparators, in short), a new class of automata that read two infinite sequences of weights synchronously and relate their aggregate values.
We show that comparators that are finite-state and accept by the Büchi condition lead to generic algorithms for a number of well-studied problems, including the quantitative inclusion and winning strategies in quantitative graph games with incomplete information, as well as related non-decision problems, such as obtaining a finite representation of all counterexamples in the quantitative inclusion problem.
We study comparators for two aggregate functions: discounted-sum and limit-average. We prove that the discounted-sum comparator is omega-regular for all integral discount factors.
Not every aggregate function, however, has an omega-regular comparator. Specifically, we show that the language of sequence-pairs for which limit-average aggregates exist is neither omega-regular nor omega-context-free. Given this result, we introduce the notion of prefix-average as a relaxation of limit-average aggregation, and show that it admits omega-context-free comparators.
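To make the discounted-sum case concrete, here is a small Python sketch (my own illustration, not the construction from the work itself) of the key finite-state idea: for an integer discount factor d, an online comparator only needs to track the normalized gap between the two prefix sums, and can commit as soon as the gap leaves a fixed bound beyond which its sign can no longer change:

```python
def ds_compare(seq_a, seq_b, d, mu):
    """Online comparison of discounted sums DS(x) = sum_i x[i] / d**i
    for an integer discount factor d > 1, assuming every per-letter
    difference |a_i - b_i| is at most mu.  The only state is the
    normalized gap g = d**(n-1) * (DS_n(a) - DS_n(b)); once |g| exceeds
    mu / (d - 1), the remaining weights can never flip its sign, which
    is why a finite-state (saturating) comparator suffices."""
    bound = mu / (d - 1)
    g = 0
    for a, b in zip(seq_a, seq_b):
        g = d * g + (a - b)   # fold the next weight pair into the gap
        if g > bound:
            return '>'
        if g < -bound:
            return '<'
    # finite prefixes only: fall back to the sign of the current gap
    return '>' if g > 0 else ('<' if g < 0 else '=')
```

On infinite words, the two saturated states become accepting/rejecting sinks of a Büchi comparator; this sketch handles finite prefixes only.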
In this talk, I will investigate the expressive power and the algorithmic properties of weighted expressions, which define functions from finite words to integers. First, I will consider a slight extension of an expression formalism introduced by Chatterjee et al. in the context of infinite words (called mean-payoff expressions), which combines values given by unambiguous (max,+)-automata using Presburger arithmetic. Important decision problems such as emptiness, universality, and comparison are PSpace-complete for these expressions. I will then investigate the extension of these expressions with the Kleene star. This allows an expression to be iterated over smaller fragments of the input word, with the results combined by iterated sum. Unfortunately, the decision problems then turn out to be undecidable. The goal of this talk is therefore to highlight a still expressive class of expressions and to sketch its decidability, in particular by considering a new class of automata: weighted chop automata.
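As background for this formalism, the value a (max,+)-automaton assigns to a finite word is the maximum, over runs, of the summed transition weights, i.e. a matrix product in the (max,+) semiring. A minimal evaluator (my own illustrative encoding, not tied to any particular tool) looks like this:

```python
NEG_INF = float('-inf')  # semiring zero: "no transition"

def maxplus_value(n, delta, init, final, word):
    """Evaluate a (max,+)-automaton with n states on `word`.
    delta[letter] is an n x n weight matrix (NEG_INF = no transition);
    init and final are weight vectors.  The loop is a vector-matrix
    product where + is max and * is +."""
    vec = list(init)
    for letter in word:
        m = delta[letter]
        vec = [max(vec[i] + m[i][j] for i in range(n)) for j in range(n)]
    return max(v + f for v, f in zip(vec, final))
```

For example, a two-state automaton whose first state adds 1 per 'a' and whose second adds 1 per 'b' (with a nondeterministic choice of state at the start) computes max(#a, #b) of the input word.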
In this talk we present token flow based synthesis of Petri nets from labelled prime event structures (LPES). For this purpose we use unfolding semantics based on token flows.
First, given a finite LPES, it is shown how to synthesize a Petri net with acyclic behavior, such that the unfolding of the synthesized net preserves common prefixes and concurrency of runs of the LPES. The partial language of this unfolding is the minimal partial language of an unfolding of a Petri net that includes the partial language of the LPES. This result extends the class of non-sequential behaviour for which Petri nets can be synthesized, because, in comparison to a partial language, an LPES makes it possible to define which common history of runs should be preserved in the synthesized net.
Second, given an infinite LPES represented by some finite prefix equipped with a cutting context and cut-off events, it is shown how to synthesize a bounded Petri net, such that the unfolding of the synthesized net preserves common prefixes and concurrency of runs of the LPES. Again, the partial language of this unfolding is the minimal partial language of an unfolding of a Petri net that includes the partial language of the LPES. This result extends the class of non-sequential behaviour for which Petri nets can be synthesized, because finite representations of infinite LPES by a finite prefix equipped with a cutting context and cut-off events are more expressive than finite representations of infinite partial languages by terms.
Parity games are deceptively simple two-player games on directed graphs
labeled with numbers.
Parity games have important practical applications in formal
verification and synthesis, especially to solve the model-checking problem
of the modal mu-calculus. They are also interesting from the theory
perspective, because they are widely believed to admit a polynomial
solution, but so far no such algorithm is known. In recent years, a number
of new algorithms and improvements to existing algorithms have been proposed.
In this talk, we introduce parity games in an accessible way and discuss
why they are so interesting. We present various solutions that have been
proposed over the years. We also present a comprehensive empirical evaluation
of modern parity game algorithms and solvers, both on real world benchmarks
and randomly generated games.
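One classical solution, Zielonka's recursive algorithm, fits in a few lines of Python. This is a didactic sketch assuming every node has at least one outgoing edge; the game encoding is my own:

```python
def attractor(game, player, target):
    """All nodes from which `player` can force the play into `target`.
    game: {node: (owner, priority, successors)} with owners 0 and 1."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v, (owner, _, succs) in game.items():
            if v in attr:
                continue
            succs = [w for w in succs if w in game]
            if (owner == player and any(w in attr for w in succs)) or \
               (owner != player and succs and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def subgame(game, keep):
    """Restrict the game to `keep`, dropping edges that leave it."""
    return {v: (o, p, [w for w in s if w in keep])
            for v, (o, p, s) in game.items() if v in keep}

def zielonka(game):
    """Return (W0, W1): the winning regions of players 0 (even) and 1."""
    if not game:
        return set(), set()
    p = max(pr for _, pr, _ in game.values())
    i = p % 2  # the player who likes the highest priority p
    a = attractor(game, i, {v for v, (_, pr, _) in game.items() if pr == p})
    w = zielonka(subgame(game, set(game) - a))
    if not w[1 - i]:  # opponent wins nothing in the subgame
        return (set(game), set()) if i == 0 else (set(), set(game))
    b = attractor(game, 1 - i, w[1 - i])
    w2 = list(zielonka(subgame(game, set(game) - b)))
    w2[1 - i] |= b
    return tuple(w2)
```

For instance, in the two-node cycle a→b→a with priorities 1 and 2, the highest priority seen infinitely often is even, so player 0 wins from both nodes.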
Symbolic execution (and its recent variant, dynamic symbolic
execution) is an important technique in automated testing. Instead
of analysing only concrete executions of a program, one treats
executions symbolically (i.e. with some variables not bound to
concrete values) and uses constraint solvers to determine the
feasibility of the resulting (symbolic) paths, so as to guide the
path exploration of the system under test; combined with dynamic
analysis, this gives the best possible path coverage. For string-manipulating
programs, solvers need to be able to handle constraints over the
string domain. This gives rise to the following natural question: what
is an ideal decidable logic over strings for reasoning about path
feasibility in a program with strings? This is a question that is
connected to a number of difficult results in theoretical computer
science (decidability of the theory of strings with concatenations,
a.k.a. word equations) and long-standing open problems (e.g.
decidability of word equations with length constraints). Worse yet,
recent examples from cross-site scripting vulnerabilities suggest that
more powerful string operations (e.g. finite-state transducers) might
be required as first class citizens in string constraints. Even though
putting all these string operations in a string logic leads to
undecidability, recent results show that there might be a way to
recover decidability while retaining expressivity for applications in
symbolic execution. In this talk, I will present one such result from
my POPL’16 paper (with P. Barcelo). The string logic admits
concatenations, regular constraints, finite-state transductions,
letter-counting and length constraints (which can consequently express
the charAt operator and string disequality). I will provide a number of
examples from the cross-site scripting literature that show how such a
string logic can, for the first time, be used to discover bugs in, or
prove the correctness of, these programs. I will conclude by commenting on a
new decision procedure for the logic that leads to an efficient
implementation (POPL’18 with L. Holik, P. Janku, P. Ruemmer, and T.
Vojnar) and a recent attempt to incorporate the fully-fledged
replace-all operator into a string logic (POPL’18 with T. Chen, Y.
Chen, M. Hague, and Z. Wu).
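To illustrate how path feasibility drives the exploration, here is a toy Python sketch (the solver is replaced by brute-force search over short strings, purely for illustration; the program and predicates are my own): each program path contributes a conjunction of string predicates, and a path is explored only if some input satisfies them all:

```python
import itertools

def feasible(path_condition, alphabet="ab", max_len=4):
    """Return a shortest witness string satisfying every predicate in
    the path condition, or None if none exists up to max_len.  A real
    symbolic executor would ask a string solver instead of enumerating
    candidate inputs."""
    for n in range(max_len + 1):
        for chars in itertools.product(alphabet, repeat=n):
            s = "".join(chars)
            if all(pred(s) for pred in path_condition):
                return s
    return None

# Path conditions of:  if s.startswith("a"):
#                          if "b" not in s: bug()
bug_path = [lambda s: s.startswith("a"), lambda s: "b" not in s]
dead_path = [lambda s: s.startswith("a"), lambda s: len(s) == 0]
```

Here feasible(bug_path) yields the witness "a", steering the executor toward the bug, while feasible(dead_path) yields None, so that path is pruned.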
Buggy and insecure software can have serious consequences, including
the loss of human lives, financial losses, and the leakage of
confidential information. Program analysis is a field that
concerns the problem of analysing the behaviour of programs especially
with respect to the issue of correctness. Over the years computational
logic has played an important role in program analysis particularly in
the development of precise automatic methods for verifying the
correctness and optimising the performance of programs. In this talk I
will illustrate how logic can help program analysis, drawing examples
from my own research inspired by challenges in web security (e.g. how
to detect/prevent cross-site scripting vulnerabilities), web
performance optimisation (e.g. how to remove code redundancies in web
pages), and verification of distributed protocols. A theme that will
emerge during the talk is that there is often a tight connection
between logic and automata that can be exploited when designing (often
theoretically optimal) algorithms.