Distributed protocols such as Paxos play an important role in many computer systems.

Therefore, a bug in a distributed protocol may have tremendous effects.

Accordingly, a lot of effort has been invested in verifying such protocols.

However, due to the infinite state space (e.g., an unbounded number of nodes and messages) and the complexity of the protocols, verification is both undecidable in theory and hard in practice.

I will describe a deductive approach for verification of distributed protocols, based on first-order logic, inductive invariants and user interaction.

The use of first-order logic and a decidable fragment of universally quantified invariants makes it possible to completely automate some verification tasks.

Tasks that remain undecidable (e.g., finding inductive invariants) are solved with user interaction, based on graphically displayed counterexamples.

I will also describe the application of these techniques to verify safety of several variants of Paxos, and a way to extend the approach to verify liveness and temporal properties.
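
As a toy illustration of the inductive-invariant checking step (not the first-order method of the talk, which handles unbounded instances), the sketch below checks a candidate invariant for a hypothetical two-process lock protocol by explicit enumeration of a finite state space. All names are made up for illustration; the point is the two proof obligations, initiation and consecution.

```python
from itertools import product

# Hypothetical two-process lock protocol.
# State: (pc0, pc1, lock) with pc in {"idle", "trying", "crit"}.
PCS = ("idle", "trying", "crit")

def initial(s):
    pc0, pc1, lock = s
    return pc0 == "idle" and pc1 == "idle" and not lock

def successors(s):
    pc0, pc1, lock = s
    pcs = [pc0, pc1]
    for i in (0, 1):
        if pcs[i] == "idle":
            t = list(pcs); t[i] = "trying"
            yield (t[0], t[1], lock)
        elif pcs[i] == "trying" and not lock:
            t = list(pcs); t[i] = "crit"
            yield (t[0], t[1], True)   # acquire the lock
        elif pcs[i] == "crit":
            t = list(pcs); t[i] = "idle"
            yield (t[0], t[1], False)  # release the lock

def safety(s):
    pc0, pc1, _ = s
    return not (pc0 == "crit" and pc1 == "crit")

def invariant(s):
    # Candidate inductive strengthening of safety: at most one process
    # is critical, and a process is critical only if the lock is held.
    pc0, pc1, lock = s
    crit = (pc0 == "crit") + (pc1 == "crit")
    return crit <= 1 and (crit == 0 or lock)

states = [(a, b, l) for a, b in product(PCS, PCS) for l in (False, True)]

# Inductiveness = initiation + consecution; the invariant must imply safety.
initiation = all(invariant(s) for s in states if initial(s))
consecution = all(invariant(t) for s in states if invariant(s)
                  for t in successors(s))
implies_safety = all(safety(s) for s in states if invariant(s))
print(initiation, consecution, implies_safety)  # → True True True
```

Note that the bare safety property alone is not inductive here; the conjunct tying "critical" to "lock held" is exactly the kind of strengthening the talk's user interaction helps discover.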

Bio:

Oded Padon is a fourth-year PhD student at Tel Aviv University, under the supervision of Prof. Mooly Sagiv.

His research focuses on verification of distributed protocols using first-order logic.

He is a recipient of the 2017 Google PhD fellowship in programming languages.

In a variety of applications that involve adversarial behavior, there is clear interest in deceiving an adversary about one’s objectives or, alternatively, in making it difficult for the adversary to predict one’s strategy for achieving those objectives. In this talk, I will outline recent work on formalizing the notions of deception and unpredictability within control systems. Namely, I will discuss an approach that encodes deception and deceptive strategies by introducing a belief space for an adversary, as well as a belief-induced reward objective. Such a framework makes it possible to consider the design of optimal deceptive strategies within the setting of optimal control, where lack of knowledge about the adversary translates into a need to develop robust optimal control policies, or policies based on partial observations. On the other hand, we relate the unpredictability of an agent to the total Shannon entropy of the paths that the agent may take to reach its objective, and show that, within the context of Markov decision processes, maximal unpredictability is achieved by following a policy that results in maximal total entropy of the induced Markov chain. In parallel with the development of the theory of deception and unpredictability, I will illustrate the introduced notions using a variety of situations that naturally involve adversarial behavior, and show that the policies which are deceptive or maximally unpredictable in the sense of the theoretical definitions indeed also follow the natural intuition behind these notions.
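
The entropy-based notion of unpredictability can be illustrated on a toy example. The sketch below (a hypothetical chain, not from the talk) enumerates the paths of a small acyclic absorbing Markov chain and computes the total Shannon entropy of its path distribution; the entropy peaks when the branching is uniform, i.e., when the agent is hardest to predict.

```python
import math

# Hypothetical 4-state absorbing chain: s0 branches with probability p,
# both branches lead to the goal g. P[s] maps successor -> probability.
def make_chain(p):
    return {
        "s0": {"a": p, "b": 1.0 - p},
        "a":  {"g": 1.0},
        "b":  {"g": 1.0},
        "g":  {},
    }

def path_entropy(P, start="s0", goal="g"):
    # The chain is acyclic, so we can enumerate every start-to-goal path
    # and sum -Pr(path) * log2 Pr(path) directly.
    def paths(state, prob):
        if state == goal:
            yield prob
            return
        for nxt, q in P[state].items():
            if q > 0:
                yield from paths(nxt, prob * q)
    return -sum(pr * math.log2(pr) for pr in paths(start, 1.0))

# Maximal unpredictability at the uniform branching p = 0.5 (1 bit).
for p in (0.1, 0.5, 0.9):
    print(p, round(path_entropy(make_chain(p)), 4))
```

For larger chains one would use the standard identity that the total path entropy equals the sum, over states, of the expected number of visits times the local entropy of that state's transition distribution, rather than enumerating paths.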

Abstract:

Recursive algebraic data types (term algebras, ADTs) are one of the most well-studied theories in logic, and find application in contexts including functional programming, modelling languages, proof assistants, and verification. At this point, several state-of-the-art theorem provers and SMT solvers include tailor-made decision procedures for ADTs, and version 2.6 of the SMT-LIB standard includes support for ADTs. We study a relatively simple approach to decide satisfiability of ADT constraints, the reduction of ADT constraints to equisatisfiable constraints over uninterpreted functions (EUF) and linear integer arithmetic (LIA). We show that the reduction approach gives rise to both decision and Craig interpolation procedures in ADTs. As an extension, we then consider ADTs with size constraints, and give a precise characterisation of the ADTs for which reduction combined with incremental unfolding is a decision procedure.
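
For intuition about term-algebra constraints (this is plain syntactic unification, not the EUF+LIA reduction studied in the talk), the sketch below decides conjunctions of equations over a term algebra. The occurs-check is what enforces the defining property of ADTs that no term is cyclic: x = cons(0, x) has no solution.

```python
# Terms: variables are strings; constructor applications are tuples
# (name, arg1, ..., argN). A minimal illustration only.

def walk(t, subst):
    # Follow variable bindings to a representative term.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(var, t, subst):
    t = walk(t, subst)
    if t == var:
        return True
    return isinstance(t, tuple) and any(occurs(var, a, subst) for a in t[1:])

def unify(t1, t2, subst):
    """Return an extended substitution, or None if unsatisfiable."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        # Occurs-check: term algebras contain no cyclic terms.
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None  # distinct constructors are never equal
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

nil = ("nil",)
# cons(x, nil) = cons(0, y) is satisfiable: x = 0, y = nil.
print(unify(("cons", "x", nil), ("cons", ("0",), "y"), {}))
# x = cons(0, x) fails the occurs-check: no finite list is its own tail.
print(unify("x", ("cons", ("0",), "x"), {}))  # → None
```

Unification only covers conjunctions of equations; handling disequalities, selectors, and size constraints is where the heavier machinery of the talk comes in.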

A promising technique for the formal verification of embedded and cyber-physical systems is flow-pipe construction, which creates a sequence of regions covering all reachable states over time. Flow-pipe construction methods can check whether specifications are met for all states, rather than just testing using a finite and incomplete set of simulation traces. A fundamental challenge when using flow-pipe construction on high-dimensional systems is the cost of geometric operations, such as intersection and convex hull. In this talk, we address this challenge by showing that it is often possible to remove the need to perform high-dimensional geometric operations by combining two model transformations, direct time-triggered conversion and dynamics scaling. Further, we discuss how to make the overapproximation error in the conversion arbitrarily small. Finally, we show that our transformation-based approach enables the analysis of a drivetrain system with up to 51 dimensions.
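
As a much-simplified, hypothetical illustration of the flow-pipe idea (one-dimensional, far from the high-dimensional setting the talk addresses), the sketch below covers all trajectories of the scalar system x' = a*x over a time horizon with a sequence of enclosing intervals, then checks a specification against every covered state rather than against sampled traces.

```python
import math

# Flow-pipe sketch for x' = a*x (a < 0), exact solution x(t) = x0*exp(a*t).
# Each segment encloses all states reachable during one time interval.

def flowpipe(x_lo, x_hi, a, h, steps):
    segs = []
    for k in range(steps):
        t0, t1 = k * h, (k + 1) * h
        # For a < 0 and 0 < x_lo <= x_hi the flow is monotone, so the
        # interval hull over [t0, t1] is attained at the corners.
        lo = x_lo * math.exp(a * t1)
        hi = x_hi * math.exp(a * t0)
        segs.append((lo, hi))
    return segs

# Initial set [1, 2], dynamics x' = -x, horizon 2.0 in 8 steps.
segs = flowpipe(1.0, 2.0, a=-1.0, h=0.25, steps=8)

# Check the specification 0 < x <= 2 on every covered state.
print(all(0.0 < lo and hi <= 2.0 for lo, hi in segs))  # → True
```

Real flow-pipe tools replace these closed-form interval hulls with set representations (zonotopes, support functions, etc.) whose geometric operations are exactly the high-dimensional cost the talk's model transformations avoid.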

Bio: Sergiy Bogomolov is on the faculty of the Australian National University, where he leads the Dependable Cyber Physical Systems Lab. Prior to joining ANU, Sergiy was a postdoctoral researcher at the Institute of Science and Technology Austria. Sergiy is broadly interested in algorithms and techniques to support the design and development workflow of trustworthy and resilient autonomous systems. For this purpose, he uses and develops techniques at the interface of cyber-physical systems verification and AI planning. His Ph.D. and M.Sc. degrees are from the University of Freiburg, Germany.

Increasingly, autonomous systems’ behavior is driven by machine learning, and in particular by (deep) neural networks. One crucial question, then, is how to effectively test such systems, given that they have neither (complete) specifications nor even source code corresponding to some of their critical behavior. Further, compounding an already difficult problem, many of these autonomous systems are integrated into larger systems, possibly leading to undesirable feature interactions. This talk will report on recent work done to address these intricate problems and will attempt to better define the challenges ahead. Examples from the automotive domain will be used to illustrate the main points.

Speaker’s bio:

Lionel C. Briand is professor in software verification and validation at the SnT Centre for Security, Reliability, and Trust, University of Luxembourg, where he is also the vice-director of the centre. He is currently running multiple collaborative research projects with companies in the automotive, satellite, financial, and legal domains. Before that, Lionel held various engineering, academic, and research leadership positions in five other countries.

Lionel was elevated to the grade of IEEE Fellow in 2010 for his work on the testing of object-oriented systems. He was granted the IEEE Computer Society Harlan Mills Award and the IEEE Reliability Society Engineer of the Year Award for his work on model-based verification and testing, in 2012 and 2013, respectively. In 2016 he received an ERC Advanced Grant, the most prestigious individual research grant in the European Union, for a project on modelling and testing cyber-physical systems. His research interests include software testing and verification, model-driven software development, search-based software engineering, and empirical software engineering.

Abstract:

Automata learning is a technique that has successfully been applied in verification, with the automaton type varying depending on the application domain. Adaptations of automata learning algorithms for increasingly complex types of automata have had to be developed from scratch, because there has been no abstract theory offering guidelines. This makes it hard to devise such algorithms, and it obscures their correctness proofs.

We introduce a simple category-theoretic formalism that provides an appropriately abstract foundation for studying automata learning. Furthermore, our framework establishes formal relations between algorithms for learning, testing, and minimization. We illustrate its generality with two examples: deterministic and weighted automata.
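
As a concrete, much-simplified instance of automata learning (a minimal Angluin-style L* variant that adds counterexample suffixes to the table, with the equivalence oracle approximated by bounded testing; all names hypothetical), the sketch below learns a DFA for the language of words with an even number of 'a's from membership queries alone.

```python
from itertools import product

ALPHABET = ("a", "b")

def member(w):
    # Target language (hypothetical example): even number of 'a's.
    return w.count("a") % 2 == 0

def lstar(member, alphabet, max_test_len=6):
    S, E = [""], [""]  # access prefixes and distinguishing suffixes

    def row(s):
        return tuple(member(s + e) for e in E)

    while True:
        # Close the table: every one-letter extension of a row in S
        # must itself match some row in S.
        closed = False
        while not closed:
            closed = True
            rows = {row(s) for s in S}
            for s, c in product(S, alphabet):
                if row(s + c) not in rows:
                    S.append(s + c)
                    closed = False
                    break

        # Hypothesis DFA: states are the distinct rows.
        state_of = {row(s): s for s in S}

        def accepts(w):
            cur = ""
            for ch in w:
                cur = state_of[row(cur + ch)]
            return member(cur)  # acceptance of the state's access string

        # Equivalence oracle approximated by testing all short words.
        cex = None
        for n in range(max_test_len + 1):
            for tup in product(alphabet, repeat=n):
                w = "".join(tup)
                if accepts(w) != member(w):
                    cex = w
                    break
            if cex is not None:
                break
        if cex is None:
            return S, E, accepts
        # On a counterexample, add all its suffixes as new columns.
        for i in range(len(cex) + 1):
            if cex[i:] not in E:
                E.append(cex[i:])

S, E, accepts = lstar(member, ALPHABET)
print(len({tuple(member(s + e) for e in E) for s in S}))  # → 2 states
```

The abstract theory in the talk is precisely about what this table-based recipe looks like once "DFA" is replaced by other automaton types, such as weighted automata, without redesigning the algorithm from scratch.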