Somesh Jha


Machine learning (ML) models, e.g., deep neural networks
(DNNs), are vulnerable to adversarial examples: malicious inputs
modified to yield erroneous model outputs, while appearing unmodified
to human observers. Potential attacks include having malicious content
like malware identified as legitimate or controlling vehicle
behavior. Yet, most existing adversarial example attacks require
knowledge of either the model internals or its training data.  We will
describe a black-box attack on ML models that requires neither. The
attack yields adversarial examples misclassified by models hosted by
Amazon and Google at rates of 96.19% and 88.94%, respectively. We also
find that this black-box strategy is capable of evading defenses
previously found to make adversarial example crafting harder.
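To make the notion of an adversarial example concrete, here is a minimal gradient-sign (FGSM-style) perturbation of a toy logistic model. The abstract's attack is black-box and crafts examples against a locally trained substitute model rather than the target itself, but the crafting step looks like this; all parameters and numbers below are made up for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = [2.0, -3.0], 0.0   # toy model parameters (made up)
x = [1.0, 1.0]            # clean input, correctly classified as class 0

def predict(x):
    return int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5)

# For label 0, the gradient of the logistic loss w.r.t. x has the sign
# of w, so the FGSM perturbation is x + eps * sign(w).
eps = 0.5
x_adv = [xi + eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(predict(x))      # 0: clean input classified correctly
print(predict(x_adv))  # 1: small perturbation flips the prediction
```

The perturbation moves each coordinate only by 0.5, yet the model's output flips; in the image domain, such changes can be imperceptible to humans.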

Piotrek Hofman

Goran Frehse

Reachability analysis is a useful tool for checking whether a cyber-physical system satisfies a given safety property. For instance, one could ask whether an electro-magnetic braking system brings a car to a standstill within a given time frame. In set-based reachability, one takes a given set of initial states (ranges for the position and speed of the car) and computes the image of the set of states as it evolves over time. Even for simple types of systems, this so-called reach set can only be computed approximately, and accuracy comes at an extremely steep cost. A highly scalable way to approximate the reach set is known for the special case of linear dynamics. It is based on template polyhedra, which are polyhedra (sets bounded by linear constraints) with normal vectors from a given finite set. Simple instances of template polyhedra are boxes or octagons. A template instance that tightly bounds the reach set is found by solving a set of optimization problems. The accuracy of the approximation can be improved by adding more normal vectors to the template.
In this talk, we propose an approach that extends this idea from linear to nonlinear dynamics. We linearize the system around a reference trajectory and solve ODEs to obtain templates that bound the reach set. The ODEs are particular in that they involve an optimization problem constrained by the template itself. We show how, similarly to the linear case, the template can be adapted over time to match the dynamics of the system. For both static and dynamic templates, we identify conditions that guarantee convergence. The potential of the approach is discussed on several benchmarks.
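As a toy illustration of the template idea from the linear case above (not the reachability algorithm itself): given finitely many sampled states and a set of template directions, the tightest bound along each direction is the maximal dot product, i.e. the support function of the sample set. The points and directions below are made up.

```python
# Bound a set of sampled reachable states by constraints d·x <= b_d,
# one constraint per template direction d.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def template_bounds(points, directions):
    # For a finite point set, the tightest bound along each normal
    # vector d is simply max_p d·p (the support function of the set).
    return [max(dot(d, p) for p in points) for d in directions]

box = [(1, 0), (-1, 0), (0, 1), (0, -1)]              # box template
octagon = box + [(1, 1), (1, -1), (-1, 1), (-1, -1)]  # octagon template

reach_samples = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0)]  # made-up samples
print(template_bounds(reach_samples, box))  # [2.0, 0.0, 3.0, 0.0]
```

Adding the four diagonal directions of the octagon template tightens the enclosure of the same point set, mirroring how more normal vectors improve the approximation of the reach set.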

Johannes Kloos

Asynchronous concurrency is a model of concurrency based on the concept of tasks. It executes tasks one at a time, choosing the next task to run from a set called the task buffer, adding tasks to the buffer using a “post” primitive, and scheduling a new task only when the currently running task gives up control. Tasks can depend on each other using explicit “wait” operations. This model and its variants are used widely in languages such as JavaScript, C# (the async and await primitives), and the monadic concurrency libraries of Haskell and OCaml. The more restricted scheduling separates it from the more commonly considered multi-threaded programming model.
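A minimal sketch of the task-buffer model just described, using Python generators as tasks: `yield` models a task giving up control, and `post` adds a task to the buffer (explicit `wait` dependencies are omitted for brevity; all names are illustrative).

```python
from collections import deque

class Scheduler:
    def __init__(self):
        self.buffer = deque()  # the task buffer
        self.log = []

    def post(self, task):
        # "post" primitive: add a task to the buffer.
        self.buffer.append(task)

    def run(self):
        # Execute tasks one at a time until the buffer is empty.
        while self.buffer:
            task = self.buffer.popleft()
            try:
                next(task)                 # run until the task yields
                self.buffer.append(task)   # not finished: reschedule
            except StopIteration:
                pass                       # task terminated

def worker(sched, name, steps):
    for i in range(steps):
        sched.log.append((name, i))
        yield  # give up control; the scheduler picks the next task

sched = Scheduler()
sched.post(worker(sched, "a", 2))
sched.post(worker(sched, "b", 1))
sched.run()
print(sched.log)  # [('a', 0), ('b', 0), ('a', 1)]
```

Note that tasks interleave only at the explicit `yield` points, which is exactly the restriction that distinguishes this model from preemptive multi-threading.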

In this talk, I will talk about two projects. The first project deals with the question of how we can reason about asynchronous programs. We present a program logic and a corresponding type system that allow us to reason locally about programs with asynchronous concurrency and mutable state; we instantiate this model for OCaml using the Lwt library. The key innovation is to introduce the concept of a “wait permission”, which describes the resources that a given task will yield on termination. An earlier version of this work was published at ECOOP ’15. The second project deals with the question of how we can perform a kind of “asynchronous parallelization” optimization, where we are given a (more-or-less) sequential program and rewrite it to make use of asynchronous concurrency. We use a set of program rewriting rules, most notably replacing synchronous I/O operations with asynchronous counterparts in a safe way, and pushing wait statements as far back as possible. As it turns out, proving the soundness of these rewriting rules is surprisingly tricky; I will sketch a reasoning approach that allows us to show refinement, in the following sense: Let $e$ be a program, and $e'$ the result of rewriting $e$ using the given rules. For every terminating execution of $e'$, there is a corresponding terminating execution of $e$ that ends in an equivalent state.
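The “asynchronous parallelization” rewrite can be sketched in Python's asyncio (the project above targets OCaml/Lwt, so this is only an analogy, and `fetch` is a made-up stand-in for an I/O operation): the sequential version waits immediately after each operation, while the rewritten version posts both tasks first and pushes the waits as far back as possible, ending in an equivalent state.

```python
import asyncio

# Made-up stand-in for an asynchronous I/O operation.
async def fetch(name):
    await asyncio.sleep(0)
    return f"data:{name}"

# Sequential style: each wait happens immediately after its operation.
async def sequential():
    a = await fetch("a")
    b = await fetch("b")
    return a, b

# Rewritten: post both tasks first, then wait as late as possible,
# so the two operations can overlap.
async def rewritten():
    ta = asyncio.create_task(fetch("a"))
    tb = asyncio.create_task(fetch("b"))
    return await ta, await tb

# Refinement: the rewritten program ends in an equivalent state.
print(asyncio.run(sequential()) == asyncio.run(rewritten()))  # True
```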

Dana Drachsler-Cohen

Miriam García Soto

The central problem addressed in this talk is the following: given the design of a hybrid system, decide whether the system is stable. Hybrid systems are well suited to modelling cyber-physical systems, since they capture the mixed discrete-continuous behaviour which appears naturally in such systems. Stability is a fundamental property in control system design, which demands that small perturbations in the input to the system lead to small variations in its future behaviour. The classical approach to establishing stability involves finding a particular kind of function, called a Lyapunov function. There has been some effort towards automating this method, consisting of a template-based search that uses sum-of-squares solvers. However, the shortcomings of this template-based method are that choosing a correct template requires the designer’s ingenuity, and that a failed template provides neither reasons for instability nor guidance for the choice of the next template.

We propose an alternative algorithmic approach to the stability verification of hybrid systems, namely a counterexample-guided abstraction refinement (CEGAR) framework. The key idea is that, given a hybrid system, its stability can be determined by constructing an abstract weighted graph which over-approximates the behaviour of the hybrid system. The relation between the abstraction and the hybrid system is such that if every cycle in the graph has weight below 1, then the hybrid system is stable. If the graph has a cycle with weight greater than 1, this cycle can be evaluated to infer instability of the hybrid system or to guide the choice of subsequent abstractions. This CEGAR approach addresses the shortcomings of the template-based search.
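The cycle-weight condition on the abstract graph can be checked mechanically. The following sketch (not the AVERIST algorithm itself; the graphs are made up) tests whether some cycle has a weight product above 1 by taking the negated logarithm of each edge weight, which turns the question into Bellman-Ford negative-cycle detection.

```python
import math

def has_cycle_with_product_above_one(n, edges):
    # edges: list of (u, v, w) with weight w > 0; nodes are 0..n-1.
    # A cycle with product > 1 has log-weight sum > 0, i.e. it is a
    # negative cycle under lengths -log(w).
    lengths = [(u, v, -math.log(w)) for u, v, w in edges]
    dist = [0.0] * n  # as if a virtual source reached every node
    for _ in range(n - 1):
        for u, v, l in lengths:
            if dist[u] + l < dist[v]:
                dist[v] = dist[u] + l
    # One extra relaxation round: any improvement means a negative cycle.
    return any(dist[u] + l < dist[v] - 1e-12 for u, v, l in lengths)

# Cycle 0 -> 1 -> 0 has weight 0.5 * 0.9 = 0.45 < 1: looks stable.
stable = [(0, 1, 0.5), (1, 0, 0.9)]
# Cycle 0 -> 1 -> 0 has weight 0.5 * 3.0 = 1.5 > 1: potential instability.
unstable = [(0, 1, 0.5), (1, 0, 3.0)]
print(has_cycle_with_product_above_one(2, stable))    # False
print(has_cycle_with_product_above_one(2, unstable))  # True
```

In the CEGAR loop, a discovered heavy cycle plays the role of the counterexample that is then analyzed or used to refine the abstraction.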

We have implemented the CEGAR algorithm in the tool AVERIST, and we report an experimental evaluation on several examples demonstrating the feasibility of the approach.

Jean-Pierre Talpin


We consider distributed timed systems that implement leader election
protocols which are at the heart of clock synchronization protocols. We
develop abstraction techniques for parameterized model checking of such
protocols under arbitrary network topologies, where nodes have independently
evolving clocks. We apply our technique for model checking the root election
part of the flooding time synchronisation protocol (FTSP), and obtain
improved results compared to previous work. We model check the protocol for
all topologies in which the distance to the node to be elected leader is
bounded by a given parameter.
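As a toy model of the root-election part (heavily simplified, with no clocks, timing, or message loss): each node repeatedly adopts the smallest node id heard from its neighbours, so after a number of rounds bounded by the distance to the eventual leader, all nodes agree. The topology below is made up.

```python
def elect_root(neighbors, rounds):
    # neighbors: dict mapping each node to its list of adjacent nodes.
    root = {v: v for v in neighbors}  # each node initially claims itself
    for _ in range(rounds):
        new_root = dict(root)
        for v in neighbors:
            for u in neighbors[v]:
                # Flooding step: adopt the smallest id heard so far.
                new_root[v] = min(new_root[v], root[u])
        root = new_root
    return root

# Line topology 3 - 1 - 2 - 0: node 0 has the smallest id, and the
# farthest node (3) is at distance 3, so 3 rounds suffice; this mirrors
# bounding the distance to the elected leader by a parameter.
line = {3: [1], 1: [3, 2], 2: [1, 0], 0: [2]}
print(elect_root(line, rounds=3))  # every node ends up with root 0
```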

Veselin Raychev

In this talk I will discuss a new generation of software tools based on probabilistic models learned from large codebases, a.k.a. “Big Code”. By leveraging the massive effort already spent by thousands of programmers, these tools make useful predictions about new, unseen programs, thus helping to solve important and difficult software tasks. As examples, I will illustrate our systems for statistical code completion, deobfuscation, and defect prediction. Two of these systems are freely available and already have thousands of users. In the talk, I will present some of the core machine learning and program analysis techniques behind these learning tools.
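At a toy scale, the “Big Code” idea can be sketched as a bigram model over code tokens: count which token follows which in a corpus and suggest the most frequent continuation, as a statistical code-completion engine would at much larger scale. The corpus below is made up and real systems use far richer probabilistic models.

```python
from collections import Counter, defaultdict

# A tiny, made-up "codebase" tokenized into whitespace-separated tokens.
corpus = [
    "for i in range ( n )".split(),
    "for x in xs :".split(),
    "if x in xs :".split(),
]

# Learn bigram counts: how often each token follows each other token.
counts = defaultdict(Counter)
for tokens in corpus:
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

def complete(prev_token):
    # Suggest the most frequent continuation observed after prev_token.
    return counts[prev_token].most_common(1)[0][0]

print(complete("in"))  # 'xs' (seen twice, vs 'range' seen once)
```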

Short Bio: Veselin Raychev obtained his PhD from ETH Zürich in 2016, and his dissertation “Learning from Large Codebases” received an Honorable Mention for the 2016 ACM Doctoral Dissertation Award. Before his PhD, he worked as a software engineer at Google on the public transportation routing algorithm of Google Maps, as well as several other projects. Currently he is a co-founder and CTO of DeepCode GmbH – a company developing “Big Code” programming tools.

Zvonimir Rakamaric

Tool prototyping is an essential step in developing novel software verification algorithms and techniques. However, implementing a verifier prototype that can handle real-world programs is a huge endeavor, which hinders researchers by forcing them to spend more time engineering tools and less time innovating. In this talk, I will present the SMACK software verification toolchain. The toolchain provides a modular and extensible software verification ecosystem that decouples the front-end source language details from back-end verification algorithms. It achieves this by translating from the LLVM compiler intermediate representation into the Boogie intermediate verification language. SMACK benefits the software verification community in several ways: (i) it can be used as an off-the-shelf software verifier in an applied software verification project, (ii) it enables researchers to rapidly develop and release new verification algorithms, and (iii) it allows for adding support for new languages in its front-end. We have used SMACK to verify numerous C/C++ programs, including industry examples, showing it is mature and competitive. Moreover, SMACK is already being used in several existing verification research prototypes.

František Blahoudek

The talk will have three parts in which I will discuss results achieved with my colleagues on nondeterministic, deterministic, and semi-deterministic omega-automata. For nondeterministic automata, I will show which aspects of property automata can influence the performance of explicit model checking and what improvements can be made in LTL-to-automata translation if we have some knowledge about the verified system. Then I will present our efficient translation of a fragment of LTL to deterministic automata. Finally, I will explore the jungle of Büchi automata classes that lie between deterministic and nondeterministic automata. I will present how to efficiently complement semi-deterministic automata and how to obtain them from nondeterministic generalized Büchi automata.