Piotrek Hofman

Data nets are yet another extension of Petri Nets, in which the relations between consumed and produced tokens are heavily restricted. Their subclasses, such as Unordered Data Petri Nets (UDPN), are natural and easy to define from a theoretical perspective. Furthermore, similarly to Petri Nets, they have a lot of structure to explore. During the talk, we will start by defining Data Nets and formulating the state equation, a generalization of one of the simplest and most important equations for Petri Nets.
Next, I will present a sketch of the correctness proof of an NP algorithm for solving the equation in the case of Unordered Data Petri Nets. Finally, I will mention some novel results for other classes of Data Nets and open problems that we are working on.
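For context, the classical Petri Net state equation generalized in the talk can be written M = M0 + C·x, where C is the incidence matrix and x counts how often each transition fires. The following minimal Python sketch (the two-place net is invented for illustration; the equation is only a necessary condition for reachability) shows the equation in action:

```python
# Minimal sketch of the Petri Net state equation M = M0 + C*x,
# where C is the incidence matrix (tokens produced minus consumed
# per transition) and x counts how often each transition fires.
# The net below is a hypothetical two-place, two-transition example.

def apply_state_equation(m0, C, x):
    """Return the marking predicted by the state equation."""
    places = len(m0)
    return [m0[p] + sum(C[p][t] * x[t] for t in range(len(x)))
            for p in range(places)]

# Incidence matrix: t0 moves a token from p0 to p1, t1 moves it back.
C = [[-1, +1],
     [+1, -1]]
m0 = [2, 0]          # initial marking: two tokens on p0
x = [3, 1]           # fire t0 three times, t1 once

print(apply_state_equation(m0, C, x))  # [0, 2]
```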

The talk is based on joint work with Patrick Totzke and Jérôme Leroux, “Linear Combinations of Unordered Data Vectors”, published at LICS 2017, and on unpublished results with Sławomir Lasota.

Goran Frehse

Reachability analysis is a useful tool for checking whether a cyber-physical system satisfies a given safety property. For instance, one could ask whether an electro-magnetic braking system brings a car to a standstill within a given time frame. In set-based reachability, one takes a given set of initial states (ranges for the position and speed of the car) and computes the image of the set of states as it evolves over time. Even for simple types of systems, this so-called reach set can only be computed approximately, and accuracy comes at an extremely steep cost. A highly scalable way to approximate the reach set is known for the special case of linear dynamics. It is based on template polyhedra, which are polyhedra (sets bounded by linear constraints) with normal vectors from a given finite set. Simple instances of template polyhedra are boxes or octagons. A template instance that tightly bounds the reach set is found by solving a set of optimization problems. The accuracy of the approximation can be improved by adding more normal vectors to the template.
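To illustrate the template idea in the linear case, the following self-contained sketch (my own, not from the talk) bounds the one-step image of a box under linear dynamics x_{k+1} = A x_k using support functions. Each template bound is one optimization problem; for a box of initial states it has a closed form, and the identity ρ_{AX}(d) = ρ_X(Aᵀd) transfers the optimization to the initial set.

```python
# Hedged sketch: bounding the one-step image of a box under linear
# dynamics x_{k+1} = A x_k, using support functions and a box
# ("template") of outward normal directions. The matrix and the
# template below are illustrative, not from the talk.

def support_box(center, radius, d):
    """Support function of the box {x : |x_i - c_i| <= r_i} in direction d."""
    return sum(d[i] * center[i] + abs(d[i]) * radius[i] for i in range(len(d)))

def mat_T_vec(A, d):
    """Compute A^T d."""
    n = len(A)
    return [sum(A[i][j] * d[i] for i in range(n)) for j in range(len(A[0]))]

def support_image(A, center, radius, d):
    # rho_{A X}(d) = rho_X(A^T d): the image's support in direction d
    # equals the initial set's support in the back-transformed direction.
    return support_box(center, radius, mat_T_vec(A, d))

A = [[0.0, 1.0],
     [-1.0, 0.0]]                                # a 90-degree rotation
template = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # box template normals
bounds = [support_image(A, [1.0, 0.0], [0.1, 0.1], d) for d in template]
print(bounds)   # tight box around the rotated set, centered at (0, -1)
```

Adding more normal vectors to `template` (e.g. the octagon directions) tightens the resulting polyhedron, exactly as described above.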
In this talk, we propose an approach that extends this idea from linear to nonlinear dynamics. We linearize the system around a reference trajectory and solve ODEs to obtain templates that bound the reach set. The ODEs are particular in that they involve an optimization problem constrained by the template itself. We show how, similarly to the linear case, the template can be adapted over time to match the dynamics of the system. For both static and dynamic templates, we identify conditions that guarantee convergence. The potential of the approach is discussed on several benchmarks.

Johannes Kloos

Asynchronous concurrency is a model of concurrency based on the concept of tasks. It executes tasks one at a time, choosing the next task to run from a set called the task buffer, adding tasks to the buffer using a “post” primitive, and scheduling a new task only when the currently running task gives up control. Tasks can depend on each other using explicit “wait” operations. This model and its variants are used widely in languages such as JavaScript, C# (the async and await primitives), and the monadic concurrency libraries of Haskell and OCaml. Its more restricted scheduling distinguishes it from the more commonly considered multi-threaded programming model.
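The task-buffer model can be sketched in a few lines using Python generators (my own simplification: “wait” dependencies are omitted, and all names are invented). A task runs until it yields control, “post” adds a task to the buffer, and the scheduler picks the next task only when the current one gives up control:

```python
# Illustrative sketch of the task-buffer model: tasks are generators,
# post() adds them to the buffer, and the scheduler runs them one at a
# time, switching only at explicit yield points.
from collections import deque

buffer = deque()          # the task buffer
log = []

def post(task):
    """Add a task (a generator) to the task buffer."""
    buffer.append(task)

def worker(name):
    log.append(f"{name}: start")
    yield                 # give up control; the scheduler picks the next task
    log.append(f"{name}: resume")

def scheduler():
    """Run tasks one at a time until the buffer is empty."""
    while buffer:
        task = buffer.popleft()
        try:
            next(task)            # run until the task yields or finishes
            buffer.append(task)   # task yielded: requeue it
        except StopIteration:
            pass                  # task finished

post(worker("a"))
post(worker("b"))
scheduler()
print(log)   # tasks a and b interleave only at explicit yield points
```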

In this talk, I will talk about two projects. The first project deals with the question of how we can reason about asynchronous programs. We present a program logic and a corresponding type system that allow us to reason locally about programs with asynchronous concurrency and mutable state; we instantiate this model for OCaml using the Lwt library. The key innovation is the concept of a “wait permission”, which describes the resources that a given task will yield on termination. An earlier version of this work was published at ECOOP ’15. The second project deals with the question of how we can perform a kind of “asynchronous parallelization” optimization, where we are given a (more or less) sequential program and rewrite it to make use of asynchronous concurrency. We use a set of program rewriting rules, most notably replacing synchronous I/O operations with asynchronous counterparts in a safe way, and pushing wait statements as far back as possible. As it turns out, proving the soundness of these rewriting rules is surprisingly tricky; I will sketch a reasoning approach that allows us to show refinement, in the following sense: Let $e$ be a program, and $e'$ the result of rewriting $e$ using the given rules. For every terminating execution of $e'$, there is a corresponding terminating execution of $e$ that ends in an equivalent state.
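The “push waits back” rewriting can be illustrated by analogy in Python's asyncio (the talk targets OCaml/Lwt; this is only a flavour of the transformation, not the talk's system): synchronous calls become posted asynchronous operations, and the wait moves to the last point before a result is actually used.

```python
# Hedged analogy of the "asynchronous parallelization" rewriting:
# start both I/O operations immediately, wait as late as possible.
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)      # stand-in for asynchronous I/O
    return f"data:{name}"

async def rewritten():
    # The rewriting posts both I/O operations up front...
    t1 = asyncio.create_task(fetch("a", 0.01))
    t2 = asyncio.create_task(fetch("b", 0.01))
    # ...unrelated computation could run here, overlapping the I/O...
    # ...and the waits are pushed back to where the results are used.
    return (await t1, await t2)

print(asyncio.run(rewritten()))   # ('data:a', 'data:b')
```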

Dana Drachsler Cohen

Interactive program synthesizers enable a user to communicate their intent via input-output examples. Unfortunately, such synthesizers only guarantee that the synthesized program is correct on the provided examples. A user who wishes to guarantee correctness for all possible inputs has to inspect the synthesized program manually, an error-prone and challenging task.
In this talk, I will present a novel synthesis framework that communicates only through (abstract) examples and guarantees that the synthesized program is correct on all inputs.
The main idea is to use abstract examples—a new form of examples that represent a potentially unbounded set of concrete examples. An abstract example captures how part of the input space is mapped to corresponding outputs by the synthesized program. Our framework uses a generalization algorithm to compute abstract examples which are then presented to the user. The user can accept an abstract example, or provide a counterexample in which case the synthesizer will explore a different program. When the user accepts a set of abstract examples that covers the entire input space, the synthesis process is completed.
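As a concrete (and much simplified) illustration of what an abstract example could look like, consider a synthesized program that extracts the part of an email address before the “@”. The representation below, a symbolic input form plus an output template, is my own sketch of the idea, not the framework's actual format:

```python
# Illustrative sketch: one abstract example standing for unboundedly
# many concrete input-output examples of a synthesized program.

def synthesized(s):
    """A synthesized program: take the text before the first '@'."""
    return s.split("@")[0]

# Abstract example: "for every input of the form w + '@' + d,
# where w contains no '@', the output is w".
abstract_example = {
    "input_form": "w@d  (w contains no '@')",
    "output": "w",
}

def check_instance(w, d):
    """Check one concrete instance covered by the abstract example."""
    return synthesized(w + "@" + d) == w

# The user can accept the abstract example, or supply a counterexample,
# in which case the synthesizer explores a different program.
print(all(check_instance(w, d)
          for w in ["alice", "bob"] for d in ["x.org", "y.com"]))  # True
```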
I will also show some experimental results to demonstrate that our synthesizer communicates with the user effectively.
Joint work with Sharon Shoham and Eran Yahav.

Miriam García Soto

The central problem addressed in this talk is the following: given the design of a hybrid system, decide whether the system is stable. Hybrid systems are apt to model cyber-physical systems, since they capture the mixed discrete-continuous behaviour which appears naturally in cyber-physical systems. Stability is a fundamental property in control system design, which demands that small perturbations in the input to the system lead to small variations in its future behaviour. The classical approach to establishing stability involves finding a particular kind of function, called a Lyapunov function. There has been some effort towards automating this method, consisting of a template-based search that uses sum-of-squares solvers. However, the shortcomings of this template-based method are that the choice of a correct template requires the designer's ingenuity, and that a template failure provides neither a reason for instability nor guidance for the choice of the next template.
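For reference, the classical Lyapunov conditions for asymptotic stability of a system ẋ = f(x) with equilibrium f(0) = 0 (the standard textbook formulation, stated here for context) are:

```latex
% Classical Lyapunov conditions for \dot{x} = f(x) with f(0) = 0:
% find a continuously differentiable V such that
\begin{align*}
  V(0) &= 0, \\
  V(x) &> 0 \quad \text{for all } x \neq 0, \\
  \dot{V}(x) = \nabla V(x) \cdot f(x) &< 0 \quad \text{for all } x \neq 0.
\end{align*}
```

The template-based search mentioned above fixes a parametric form for V (e.g. a polynomial of bounded degree) and asks a sum-of-squares solver for parameter values making these conditions hold.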

We propose an alternative algorithmic approach for the stability verification of hybrid systems: a counterexample-guided abstraction refinement (CEGAR) framework. The key idea is that, given a hybrid system, its stability can be determined by constructing an abstract weighted graph which over-approximates the behaviour of the hybrid system. The relation between the abstraction and the hybrid system is such that if every cycle in the graph has weight below 1, then the hybrid system is stable. If the graph has a cycle with weight greater than 1, this cycle can be evaluated either to infer instability of the hybrid system or to guide the refinement of the abstraction. This CEGAR approach addresses the shortcomings of the template-based search.
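The core graph check above, whether some cycle has weight product greater than 1, can be sketched in a few lines (my own illustration, not AVERIST's implementation): taking logarithms turns the product condition into a sum condition, which Bellman-Ford detects as a negative cycle on negated log-weights.

```python
# Hedged sketch of the abstraction check: decide whether a weighted
# graph has a cycle whose weight product exceeds 1. The graphs below
# are invented for illustration.
import math

def has_cycle_with_product_above_one(edges, n):
    """edges: list of (u, v, weight) with weight > 0; n: node count."""
    # Negate logs so a product > 1 cycle becomes a negative-sum cycle.
    arcs = [(u, v, -math.log(w)) for u, v, w in edges]
    dist = [0.0] * n                      # virtual source reaching all nodes
    for _ in range(n - 1):
        for u, v, w in arcs:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # An extra relaxation round still succeeds iff a negative cycle
    # exists, i.e. iff some cycle's weight product exceeds 1.
    return any(dist[u] + w < dist[v] - 1e-12 for u, v, w in arcs)

stable = [(0, 1, 0.5), (1, 0, 0.9)]       # cycle product 0.45 < 1
unstable = [(0, 1, 2.0), (1, 0, 0.8)]     # cycle product 1.6  > 1
print(has_cycle_with_product_above_one(stable, 2),
      has_cycle_with_product_above_one(unstable, 2))  # False True
```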

We have implemented the CEGAR algorithm in the tool AVERIST, and we report experimental evaluations on several examples demonstrating the feasibility of the approach.

Jean-Pierre Talpin

Abstract:

We consider distributed timed systems that implement leader election
protocols which are at the heart of clock synchronization protocols. We
develop abstraction techniques for parameterized model checking of such
protocols under arbitrary network topologies, where nodes have independently
evolving clocks. We apply our technique to model checking the root election
part of the flooding time synchronisation protocol (FTSP), and obtain
improved results compared to previous work. We model check the protocol for
all topologies in which the distance to the node to be elected leader is
bounded by a given parameter.

Veselin Raychev

In this talk I will discuss a new generation of software tools based on probabilistic models learned from large codebases, a.k.a. “Big Code”. By leveraging the massive effort already spent by thousands of programmers, these tools make useful predictions about new, unseen programs, thus helping to solve important and difficult software tasks. As examples, I will present our systems for statistical code completion, deobfuscation, and defect prediction. Two of these systems (jsnice.org and apk-deguard.com) are freely available and already have thousands of users. In the talk, I will present some of the core machine learning and program analysis techniques behind these learning tools.
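To convey the flavour of the “Big Code” idea, here is a toy statistical code-completion model that learns next-token probabilities from a tiny, invented corpus of token sequences. The systems in the talk use far richer probabilistic models; this sketch only shows the underlying principle of predicting unseen code from observed code.

```python
# Toy bigram model for statistical code completion, trained on a
# tiny invented corpus of tokenized code lines.
from collections import Counter, defaultdict

corpus = [
    ["for", "i", "in", "range", "(", "n", ")", ":"],
    ["for", "x", "in", "items", ":"],
    ["for", "i", "in", "range", "(", "10", ")", ":"],
]

bigrams = defaultdict(Counter)
for tokens in corpus:
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

def complete(prev_token):
    """Predict the most likely next token after prev_token."""
    counts = bigrams[prev_token]
    return counts.most_common(1)[0][0] if counts else None

print(complete("in"))   # 'range' (seen twice, vs 'items' once)
```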

Short Bio: Veselin Raychev obtained his PhD from ETH Zürich in 2016, and his dissertation “Learning from Large Codebases” received an Honorable Mention for the 2016 ACM Doctoral Dissertation Award. Before his PhD, he worked as a software engineer at Google on the public transportation routing algorithm of Google Maps, as well as on several other projects. Currently he is a co-founder and CTO of DeepCode GmbH, a company developing “Big Code” programming tools.

Zvonimir Rakamaric

Abstract:
Tool prototyping is an essential step in developing novel software verification algorithms and techniques. However, implementing a verifier prototype that can handle real-world programs is a huge endeavor, which hinders researchers by forcing them to spend more time engineering tools, and less time innovating. In this talk, I will present the SMACK software verification toolchain. The toolchain provides a modular and extensible software verification ecosystem that decouples the front-end source language details from back-end verification algorithms. It achieves that by translating from the LLVM compiler intermediate representation into the Boogie intermediate verification language. SMACK benefits the software verification community in several ways: (i) it can be used as an off-the-shelf software verifier in an applied software verification project, (ii) it enables researchers to rapidly develop and release new verification algorithms, (iii) it allows for adding support for new languages in its front-end. We have used SMACK to verify numerous C/C++ programs, including industry examples, showing it is mature and competitive. Likewise, SMACK is already being used in several existing verification research prototypes.

František Blahoudek

The talk will have three parts, in which I will discuss results obtained with my colleagues on nondeterministic, deterministic, and semi-deterministic omega-automata. For nondeterministic automata, I will show which aspects of property automata can influence the performance of explicit model checking, and what improvements can be made in LTL-to-automata translation if we have some knowledge about the verified system. Then I will present our efficient translation of a fragment of LTL to deterministic automata. Finally, I will explore the jungle of Büchi automata classes that lie between deterministic and nondeterministic automata. I will present how to efficiently complement semi-deterministic automata and how to obtain them from nondeterministic generalized Büchi automata.

Andrei Voronkov

Abstract:

It is known that one can extract Craig interpolants from so-called local derivations. An interpolant extracted from such a derivation is a boolean combination of formulas occurring in the derivation. However, standard complete proof systems, such as superposition, for theories having the interpolation property are not necessarily complete for local proofs.
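For reference, the standard (refutation-style) statement of Craig interpolation, which the extraction procedures below compute witnesses for, is:

```latex
% Craig interpolation, refutation form: if A \wedge B is unsatisfiable,
% then there exists an interpolant I such that
\begin{align*}
  A &\models I, \\
  I \wedge B &\models \bot, \\
  \mathcal{L}(I) &\subseteq \mathcal{L}(A) \cap \mathcal{L}(B),
\end{align*}
% where \mathcal{L}(\cdot) denotes the set of non-logical symbols
% occurring in a formula.
```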
In this talk we investigate interpolant extraction from non-local proofs in the superposition calculus and prove a number of general results about interpolant extraction and the complexity of extracted interpolants. In particular, we prove that the number of quantifier alternations in first-order interpolants of formulas without quantifier alternations is unbounded. This result has far-reaching consequences for using local proofs as a foundation for interpolating proof systems – any such proof system has to deal with formulas of arbitrary quantifier complexity.
In search of alternatives for interpolating proof systems, we consider several variations on interpolation and local proofs. Namely, we give an algorithm for building interpolants from resolution refutations in logic without equality, and discuss additional constraints under which this approach can also be used for logic with equality. We finally propose a new direction related to interpolation via local proofs in first-order theories.