Distributed protocols such as Paxos play an important role in many computer systems.

Therefore, a bug in a distributed protocol may have tremendous effects.

Accordingly, a lot of effort has been invested in verifying such protocols.

However, due to the infinite state space (e.g., an unbounded number of nodes and messages) and the complexity of the protocols, verification is both undecidable in theory and hard in practice.

I will describe a deductive approach to the verification of distributed protocols, based on first-order logic, inductive invariants, and user interaction.

The use of first-order logic and a decidable fragment of universally quantified invariants makes it possible to completely automate some verification tasks.

Tasks that remain undecidable (e.g., finding inductive invariants) are solved with user interaction, based on graphically displayed counterexamples.

I will also describe the application of these techniques to verify the safety of several variants of Paxos, and a way to extend the approach to verify liveness and temporal properties.
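
The core check behind this approach, that a candidate invariant is inductive (it holds initially and is preserved by every transition), is discharged in the talk's setting by an SMT solver over first-order formulas. The same check can be sketched on a finite toy protocol by explicit enumeration; the token-ring protocol and invariant below are illustrative assumptions, not examples from the talk.

```python
from itertools import product

# Toy protocol: N nodes in a ring; a node that holds the token may pass it
# to its right neighbour. A state assigns each node a boolean "holds token".
N = 4

def initial_states():
    # Initially exactly node 0 holds the token.
    return [tuple(i == 0 for i in range(N))]

def successors(state):
    # Each enabled transition: node i passes the token to node (i+1) % N.
    succs = []
    for i in range(N):
        if state[i]:
            nxt = list(state)
            nxt[i] = False
            nxt[(i + 1) % N] = True
            succs.append(tuple(nxt))
    return succs

def invariant(state):
    # Candidate invariant: exactly one node holds the token.
    return sum(state) == 1

def is_inductive(inv):
    # Initiation: every initial state satisfies inv.
    if not all(inv(s) for s in initial_states()):
        return False
    # Consecution: from any state satisfying inv, every successor satisfies inv.
    for state in product([False, True], repeat=N):
        if inv(state) and not all(inv(t) for t in successors(state)):
            return False
    return True

print(is_inductive(invariant))  # True: the invariant is inductive
```

The deductive approach replaces this enumeration with validity queries over universally quantified first-order formulas, which is what allows an unbounded number of nodes.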

Bio:

Oded Padon is a fourth-year PhD student at Tel Aviv University, under the supervision of Prof. Mooly Sagiv.

His research focuses on verification of distributed protocols using first-order logic.

He is a recipient of the 2017 Google PhD fellowship in programming languages.

In a variety of applications that involve adversarial behavior, there is a clear interest in deceiving an adversary about one’s objectives or, alternatively, making it difficult for the adversary to predict one’s strategy for achieving those objectives. In this talk, I will outline recent work on formalizing the notions of deception and unpredictability within control systems. Namely, I will discuss an approach that encodes deception and deceptive strategies by introducing a belief space for an adversary, as well as a belief-induced reward objective. Such a framework makes it possible to consider the design of optimal deceptive strategies within the setting of optimal control, where lack of knowledge about the adversary translates into a need to develop robust optimal control policies, or policies based on partial observations. On the other hand, we relate the unpredictability of an agent to the total Shannon entropy of the paths that the agent may take to reach its objective, and show that, within the context of Markov decision processes, maximal unpredictability is achieved by following a policy that results in a maximal total entropy of the induced Markov chain. In parallel with the development of the theory of deception and unpredictability, I will illustrate the introduced notions using a variety of situations that naturally involve adversarial behavior, and show that the policies that are deceptive or maximally unpredictable in the sense of the theoretical definitions indeed also follow the natural intuition behind these notions.
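
The central quantity in the unpredictability part, the total Shannon entropy of an agent's paths, can be computed for an absorbing Markov chain as the sum, over transient states, of the expected number of visits times the local transition entropy. The two-state chain below is an illustrative assumption used only to check this identity numerically, not a model from the talk.

```python
import math

# Induced Markov chain: transient states s0, s1; absorbing goal state g.
# From either transient state, the agent moves to the other transient
# state or to g, each with probability 1/2.
P = {
    "s0": {"s1": 0.5, "g": 0.5},
    "s1": {"s0": 0.5, "g": 0.5},
}

def local_entropy(dist):
    # Shannon entropy (in bits) of one state's transition distribution.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_visits(start, steps=200):
    # visits[s] = expected number of visits to transient state s,
    # computed by propagating the occupancy distribution step by step.
    occ = {s: 0.0 for s in P}
    occ[start] = 1.0
    visits = {s: 0.0 for s in P}
    for _ in range(steps):
        for s in P:
            visits[s] += occ[s]
        nxt = {s: 0.0 for s in P}
        for s in P:
            for t, p in P[s].items():
                if t in P:  # mass reaching the absorbing state leaves
                    nxt[t] += occ[s] * p
        occ = nxt
    return visits

# Total path entropy = sum_s E[visits to s] * H(P(s, .))
nu = expected_visits("s0")
H = sum(nu[s] * local_entropy(P[s]) for s in P)
print(round(H, 6))  # 2.0 (bits), matching direct enumeration of path probabilities
```

For this chain the paths from s0 to g have probabilities 1/2, 1/4, 1/8, ..., whose entropy is exactly 2 bits, agreeing with the expected-visits formula.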

A promising technique for the formal verification of embedded and cyber-physical systems is flow-pipe construction, which creates a sequence of regions covering all reachable states over time. Flow-pipe construction methods can check whether specifications are met for all states, rather than just testing using a finite and incomplete set of simulation traces. A fundamental challenge when using flow-pipe construction on high-dimensional systems is the cost of geometric operations, such as intersection and convex hull. In this talk, we address this challenge by showing that it is often possible to remove the need to perform high-dimensional geometric operations by combining two model transformations, direct time-triggered conversion and dynamics scaling. Further, we discuss how to make the overapproximation error in the conversion arbitrarily small. Finally, we show that our transformation-based approach enables the analysis of a drivetrain system with up to 51 dimensions.
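
The basic object in flow-pipe construction, a sequence of regions covering the reachable states over time, can be illustrated for a two-dimensional linear system using axis-aligned boxes. The rotation dynamics and step size below are illustrative assumptions; real tools use richer set representations precisely because re-boxing at every step over-approximates (the "wrapping effect").

```python
import math

# Harmonic oscillator x' = y, y' = -x: the exact one-step map for step DT
# is a rotation, so we can propagate sets without numerical integration.
DT = 0.1
COS, SIN = math.cos(DT), math.sin(DT)

def step_point(p):
    x, y = p
    return (COS * x + SIN * y, -SIN * x + COS * y)

def step_box(box):
    # The image of a box under a linear map is attained at the images of
    # its corners; re-boxing the image each step over-approximates.
    (xl, xu), (yl, yu) = box
    corners = [step_point((x, y)) for x in (xl, xu) for y in (yl, yu)]
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return ((min(xs), max(xs)), (min(ys), max(ys)))

def flowpipe(init_box, steps):
    # The flow-pipe: a sequence of boxes covering the reachable states.
    boxes = [init_box]
    for _ in range(steps):
        boxes.append(step_box(boxes[-1]))
    return boxes

def contains(box, p):
    (xl, xu), (yl, yu) = box
    return xl <= p[0] <= xu and yl <= p[1] <= yu

# Soundness check: a concrete trajectory stays inside the flow-pipe.
boxes = flowpipe(((0.9, 1.1), (-0.1, 0.1)), 50)
p = (1.0, 0.0)
for box in boxes:
    assert contains(box, p)
    p = step_point(p)
```

Running this shows the boxes growing steadily even though the true reachable set does not, which is exactly the cost of cheap geometric operations that the talk's model transformations aim to avoid.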

Bio: Sergiy Bogomolov is on the faculty of the Australian National University, where he leads the Dependable Cyber Physical Systems Lab. Prior to joining ANU, Sergiy was a postdoctoral researcher at the Institute of Science and Technology Austria. Sergiy is broadly interested in algorithms and techniques to support the design and development of trustworthy and resilient autonomous systems. For this purpose, he uses and develops techniques at the interface of cyber-physical systems verification and AI planning. His Ph.D. and M.Sc. degrees are from the University of Freiburg, Germany.

Increasingly, autonomous systems’ behavior is driven by machine learning, and in particular by (deep) neural networks. One crucial question then is how to effectively test such systems given that they don’t have (complete) specifications or even source code corresponding to some of their critical behavior. Further, adding even more complexity to an already difficult problem, many of these autonomous systems are usually integrated into larger systems, thus possibly leading to undesirable feature interactions. This talk will report on recent work done to address these intricate problems and will attempt to better define the challenges ahead. Examples from the automotive domain will be used to illustrate the main points.
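
One practical answer to the missing-specification problem raised here is metamorphic testing: instead of checking outputs against a ground truth, one checks relations between outputs on transformed inputs. The steering "model" below is a toy linear stand-in for a neural network, and the mirror relation is an illustrative assumption; only the testing pattern itself is the point.

```python
# Metamorphic relation for a lane-keeping model: mirroring the camera
# image horizontally should (approximately) negate the steering angle.

def toy_steering_model(image):
    # Toy stand-in for a neural network: steer toward the horizontal
    # centroid of bright pixels (image is a 2D list of floats in [0, 1]).
    width = len(image[0])
    total = weighted = 0.0
    for row in image:
        for x, v in enumerate(row):
            total += v
            weighted += v * x
    if total == 0:
        return 0.0
    centroid = weighted / total
    return (centroid - (width - 1) / 2) / width  # negative = steer left

def mirror(image):
    return [row[::-1] for row in image]

def check_mirror_relation(model, image, tol=1e-9):
    # The metamorphic check: no ground-truth steering angle is needed.
    return abs(model(mirror(image)) + model(image)) <= tol

# A bright blob on the right half of a 4x6 image.
image = [[0, 0, 0, 0, 1, 1],
         [0, 0, 0, 0, 1, 1],
         [0, 0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0]]
assert toy_steering_model(image) > 0          # steers right toward the blob
assert check_mirror_relation(toy_steering_model, image)
```

The same pattern scales to real networks: the relation is checked on many inputs, and violations become test failures even though no per-input expected output exists.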

Speaker’s bio:

Lionel C. Briand is professor of software verification and validation at the SnT centre for Security, Reliability, and Trust, University of Luxembourg, where he is also vice-director of the centre. He is currently running multiple collaborative research projects with companies in the automotive, satellite, financial, and legal domains. Before that, Lionel held various engineering, academic, and research leadership positions in five other countries.

Lionel was elevated to the grade of IEEE Fellow in 2010 for his work on the testing of object-oriented systems. He was granted the IEEE Computer Society Harlan Mills award and the IEEE Reliability Society engineer-of-the-year award, in 2012 and 2013 respectively, for his work on model-based verification and testing. In 2016 he received an ERC Advanced Grant, the most prestigious individual research grant in the European Union, on the topic of modelling and testing cyber-physical systems. His research interests include software testing and verification, model-driven software development, search-based software engineering, and empirical software engineering.

The notion of comparison between system runs is fundamental in formal verification. This concept is implicitly present in the verification of qualitative systems, and is more pronounced in the verification of quantitative systems. In this work, we identify a novel mode of comparison in quantitative systems: the online comparison of the aggregate values of two sequences of quantitative weights. This notion is embodied by comparator automata (comparators, in short), a new class of automata that read two infinite sequences of weights synchronously and relate their aggregate values.

We show that comparators that are finite-state and accept by the Büchi condition lead to generic algorithms for a number of well-studied problems, including quantitative inclusion and computing winning strategies in quantitative graph games with incomplete information, as well as related non-decision problems, such as obtaining a finite representation of all counterexamples in the quantitative inclusion problem.

We study comparators for two aggregate functions: discounted-sum and limit-average. We prove that the discounted-sum comparator is omega-regular for all integral discount factors.

Not every aggregate function, however, has an omega-regular comparator. Specifically, we show that the language of sequence-pairs for which limit-average aggregates exist is neither omega-regular nor omega-context-free. Given this result, we introduce the notion of prefix-average as a relaxation of limit-average aggregation, and show that it admits omega-context-free comparators.
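
The intuition behind omega-regularity of the discounted-sum comparator for integral discount factors can be sketched concretely: with weights bounded by W and integer discount factor d, it suffices to track the running difference of the two discounted sums rescaled by d at each step, because any infinite tail can shift this rescaled gap by at most 2W/(d-1), so once the gap strictly escapes that bound the comparison is decided. The bound and interface below are an illustrative reconstruction, not the talk's exact construction.

```python
def compare_discounted_sums(a, b, d, W):
    """Online sketch of a discounted-sum comparator.

    a, b: finite prefixes of two weight sequences with |weights| <= W.
    d: integer discount factor > 1 (discounted sum = sum_i w_i / d**i).
    Tracks gap_t = d**t * (DS(a[:t+1]) - DS(b[:t+1])); any tail can change
    the rescaled difference by at most 2*W/(d-1), so a gap strictly beyond
    that bound decides the comparison regardless of the remaining weights.
    """
    bound = 2 * W / (d - 1)
    gap = 0
    for x, y in zip(a, b):
        gap = d * gap + (x - y)
        if gap > bound:
            return "DS(a) > DS(b)"
        if gap < -bound:
            return "DS(a) < DS(b)"
    return "undecided on this prefix"

# a = 1,0,0,...  vs  b = 0,0,0,...  with d = 2, W = 1:
# DS(a) = 1 > DS(b) = 0, and the gap escapes after three letters.
print(compare_discounted_sums([1, 0, 0], [0, 0, 0], d=2, W=1))
# DS(a) > DS(b)
```

Because the undecided gaps all lie in a fixed finite interval of rationals with bounded denominators, only finitely many gap values are reachable, which is what makes the comparator finite-state.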

In this talk, I will investigate the expressive power and the algorithmic properties of weighted expressions, which define functions from finite words to integers. First, I will consider a slight extension of an expression formalism introduced by Chatterjee et al. in the context of infinite words (called mean-payoff expressions), which combines values given by unambiguous (max,+)-automata using Presburger arithmetic. Important decision problems such as emptiness, universality, and comparison are PSpace-complete for these expressions. I will then investigate the extension of these expressions with the Kleene star. This makes it possible to iterate an expression over smaller fragments of the input word and to combine the results by taking their iterated sum. Unfortunately, the decision problems then turn out to be undecidable. The goal of this talk is therefore to highlight a still expressive class of expressions and sketch its decidability, in particular by considering a new class of automata: weighted chop automata.
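
The building blocks here, (max,+)-automata, assign each word the maximal weight of a run, which can be computed by matrix products over the (max,+) semiring. The three-state automaton below, computing the length of the longest block of consecutive a's, is a standard textbook-style example rather than one from the talk (and it is nondeterministic, whereas the expressions above use unambiguous automata, where at most one accepting run exists per word).

```python
NEG_INF = float("-inf")

def maxplus_vec_mat(v, M):
    # (max,+) product of a row vector with a matrix:
    # ordinary (+, *) is replaced by (max, +).
    n = len(M[0])
    return [max(v[i] + M[i][j] for i in range(len(v))) for j in range(n)]

# States: 0 = before the chosen a-block, 1 = inside it, 2 = after it.
# Entry M[i][j] is the transition weight from i to j (NEG_INF = absent).
MATRICES = {
    "a": [[0, 1, NEG_INF],        # stay outside, or enter the block (+1)
          [NEG_INF, 1, 0],        # extend the block (+1), or leave it
          [NEG_INF, NEG_INF, 0]],
    "b": [[0, NEG_INF, NEG_INF],  # a 'b' cannot occur inside the block
          [NEG_INF, NEG_INF, 0],
          [NEG_INF, NEG_INF, 0]],
}
INIT = [0, NEG_INF, NEG_INF]      # runs start in state 0
FINAL = [0, 0, 0]                 # every state accepts with weight 0

def value(word):
    # Max over runs = initial vector (max,+)-multiplied through the word.
    v = INIT
    for letter in word:
        v = maxplus_vec_mat(v, MATRICES[letter])
    return max(vi + fi for vi, fi in zip(v, FINAL))

print(value("aabaaab"))  # 3: the longest block of consecutive a's
```

Mean-payoff expressions then combine several such automaton values with Presburger-definable operations, and the Kleene-star extension iterates an expression over chopped-up fragments of the word.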

In this talk we present token-flow-based synthesis of Petri nets from labelled prime event structures (LPES). For this purpose we use unfolding semantics based on token flows.

First, given a finite LPES, we show how to synthesize a Petri net with acyclic behavior, such that the unfolding of the synthesized net preserves common prefixes and concurrency of runs of the LPES. The partial language of this unfolding is the minimal partial language of an unfolding of a Petri net that includes the partial language of the LPES. This result extends the class of non-sequential behaviour for which Petri nets can be synthesized, because, in comparison to a partial language, an LPES makes it possible to define which common history of runs should be preserved in the synthesized net.

Second, given an infinite LPES represented by some finite prefix equipped with a cutting context and cut-off events, we show how to synthesize a bounded Petri net, such that the unfolding of the synthesized net preserves common prefixes and concurrency of runs of the LPES. Again, the partial language of this unfolding is the minimal partial language of an unfolding of a Petri net that includes the partial language of the LPES. This result extends the class of non-sequential behaviour for which Petri nets can be synthesized, because finite representations of infinite LPES by a finite prefix equipped with a cutting context and cut-off events are more expressive than finite representations of infinite partial languages by terms.
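
The firing rule underlying these unfolding semantics can be sketched directly. The tiny net below is an illustrative assumption, not one of the synthesized nets; it shows the behaviour the synthesis aims to preserve: two transitions with disjoint presets are concurrent, so both interleavings are firable and reach the same marking.

```python
# A Petri net as pre/post multisets over places; a marking maps places
# to token counts.
PRE  = {"t1": {"p1": 1}, "t2": {"p2": 1}}
POST = {"t1": {"p3": 1}, "t2": {"p4": 1}}

def enabled(marking, t):
    # t is enabled iff every place in its preset holds enough tokens.
    return all(marking.get(p, 0) >= n for p, n in PRE[t].items())

def fire(marking, t):
    # Firing consumes the preset tokens and produces the postset tokens.
    assert enabled(marking, t), f"{t} is not enabled"
    m = dict(marking)
    for p, n in PRE[t].items():
        m[p] -= n
    for p, n in POST[t].items():
        m[p] = m.get(p, 0) + n
    return m

def run(marking, transitions):
    for t in transitions:
        marking = fire(marking, t)
    return marking

m0 = {"p1": 1, "p2": 1}
# t1 and t2 consume disjoint places, so they are concurrent: both
# interleavings of the same concurrent step reach the same marking.
assert run(m0, ["t1", "t2"]) == run(m0, ["t2", "t1"])
```

An unfolding records such concurrent steps as a partial order rather than as separate interleavings, which is exactly the structure an LPES specifies and the synthesized net must reproduce.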