Aleksandar Nanevski

Abstract:

Verification of concurrent software is a notoriously difficult
subject, whose complexities stem from the inability of the
existing verification methods to modularize, and thus divide and
conquer the verification problem.

Dependent types are a formal method well-known for their ability
to modularize and scale complex mathematical proofs. But, when it
comes to programming, dependent types are considered limited to
the purely functional programming model.

In this talk I will present my recent work towards reconciling
dependent types with shared memory concurrency, with the goal of
achieving modular verification for the latter. Applying the
type-theoretic paradigm to concurrency has led to interesting
reformulations of some classical verification ideas, and to the
discovery of novel and useful abstractions for modularizing the
proofs.

RiSE Invited Lecture: Joost-Pieter Katoen

Abstract:
In this talk, I will give a perspective on inference in Bayesian networks
(BNs) using program verification. I will argue that weakest precondition
reasoning à la Dijkstra can be used for exact inference (and more). As
exact inference is NP-complete, inference is typically done by means of
simulation. I will show how, by means of wp-reasoning, exact expected
sampling times of BNs can be obtained in a fully automated fashion. An
experimental evaluation on BN benchmarks demonstrates that very large
expected sampling times (on the order of millions of years) can be
inferred in less than a second. This provides a means to decide
whether sampling-based methods are appropriate for a given BN. The key
ingredient is to reason at the level of program code in a compositional manner.
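
To give a flavour of wp-style inference (a minimal worked example of my own, not from the talk), consider a two-node network encoded as the probabilistic program P:

    x :~ bernoulli(0.3);
    if (x) { y :~ bernoulli(0.9) } else { y :~ bernoulli(0.2) };
    observe(y = 1)

Pushing the post-expectation [x = 1] backwards through P, one statement at a time, yields the conditional weakest preexpectation

    cwp(P, [x = 1]) = wp(P, [x = 1]) / wlp(P, 1)
                    = (0.3 * 0.9) / (0.3 * 0.9 + 0.7 * 0.2)
                    = 0.27 / 0.41 ~ 0.66,

i.e. Bayes' rule for P(x = 1 | y = 1), recovered compositionally from the program text.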

RiSE Invited Lecture: Joel Ouaknine

Abstract:

Automated invariant generation is a fundamental challenge in program analysis and verification, going back many decades, and remains a topic of active research. In this talk I will present a selective overview of work on this problem, and discuss unexpected connections to other fields, including quantum computing, group theory, and algebraic geometry. (No previous knowledge of these fields will be assumed.)
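
As a small taste of the problem (my own toy sketch, not from the talk): one classical data-driven trick is to collect program states along sample runs and compute the null space of a matrix of monomial values; every null vector is then a candidate polynomial equality invariant, to be verified by other means afterwards.

    from sympy import Matrix

    # Toy sketch: collect states along a run of the loop
    #     a, b := 0, 0;  while *:  a := a + 1;  b := b + a
    # and compute the exact null space of the matrix of monomial values.

    def trace(steps=20):
        a = b = 0
        states = [(a, b)]
        for _ in range(steps):
            a += 1
            b += a
            states.append((a, b))
        return states

    # monomials of total degree <= 2 in (a, b)
    rows = [[1, a, b, a * a, a * b, b * b] for a, b in trace()]
    print(Matrix(rows).nullspace())
    # one basis vector, proportional to (0, 1, -2, 1, 0, 0):
    #     a - 2*b + a^2 = 0,  i.e. the invariant  2b = a^2 + a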

Stefan Schmid

Abstract:

The operation of traditional computer networks is known to be a difficult, manual, and error-prone task. In recent years, even tech-savvy companies have reported major issues with their networks due to misconfigurations, leading to disruptive downtimes. In response to the difficulty of maintaining policy compliance, and given the critical role that computer networks (including the Internet, datacenter networks, and enterprise networks) play today, researchers have started developing more principled approaches to networking and specification. Over the last years, we have witnessed great advances in the development of mathematical foundations for computer networks and the emergence of high-level network programming languages such as NetKAT. While powerful, however, existing formal frameworks often come with potentially high (super-polynomial) running times, even without considering failure scenarios.
 
This talk first gives an overview of the “softwarization” trends in communication networks and motivates why formal methods are currently a “hot topic” in this area. I will then present a what-if analysis framework which allows us to verify important properties such as policy compliance and reachability in communication networks in polynomial time, even in the presence of (multiple) failures. Our framework relies on an automata-theoretic approach and applies both to widely deployed MPLS networks and to emerging Segment Routing networks. In addition to the theory underlying our approach (presented at INFOCOM 2018 together with Jiri Srba, patent pending), I will also report on our query language, the tool we are developing at Aalborg University, and our first evaluation results.
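
To give a feel for the underlying model (a hypothetical toy of my own, not the query language or tool from the talk): MPLS forwarding applies push/pop/swap operations to label stacks, so reachability becomes a search over (router, label-stack) configurations. The real framework uses pushdown automata to handle unbounded stacks in polynomial time; this sketch simply bounds the stack depth.

    from collections import deque

    # Hypothetical forwarding entries: (router, top label) -> next hop + op
    RULES = {
        ('v1', 10): [('v2', ('swap', 20))],
        ('v2', 20): [('v4', ('push', 99))],   # fast-reroute: push protection label
        ('v4', 99): [('v3', ('pop',))],       # detour endpoint: pop it again
    }

    def reachable(start, stack, target, max_depth=4):
        seen, todo = set(), deque([(start, tuple(stack))])
        while todo:
            node, st = todo.popleft()
            if node == target:
                return True
            if (node, st) in seen or not st or len(st) > max_depth:
                continue
            seen.add((node, st))
            for nxt, op in RULES.get((node, st[-1]), []):
                if op[0] == 'swap':
                    todo.append((nxt, st[:-1] + (op[1],)))
                elif op[0] == 'push':
                    todo.append((nxt, st + (op[1],)))
                else:                          # pop
                    todo.append((nxt, st[:-1]))
        return False

    print(reachable('v1', [10], 'v3'))         # True, via the protection tunnel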
 
I would also like to use the opportunity of this talk to provide a brief overview of our other research activities, especially those related to network security and the design of demand-aware and self-adjusting networks. We are currently eager to establish connections and collaborations within Vienna and Austria in general, related to all the presented topics and beyond. More details about our research activities can be found at https://net.t-labs.tu-berlin.de/~stefan/ and, increasingly, at http://ct.cs.univie.ac.at/ (under construction).

Alexey Bakhirkin

We work on a new algorithm for parametric identification in signal temporal logic. Given a real-valued signal and a parameterized temporal logic formula, we want to find the set of parameter values that make the formula satisfied by the signal. Such a procedure has multiple uses. For example, as part of the process of deriving a model from simulation traces, it can be used to find the values of model parameters. It can also be used to evaluate formulas containing universal and existential quantifiers. We have a working algorithm for the identification of spatial parameters in piecewise-constant signals, which performs well in our experiments. Identification of time parameters is ongoing work.
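
To make the problem concrete, here is a minimal sketch (my own toy signal and code, not the authors' algorithm): for a piecewise-constant signal and the simplest parametric formulas, the validity domain of a magnitude parameter p can be read off directly; time parameters interact with the signal's switching times, which is part of what makes them harder.

    # toy piecewise-constant signal as (time, value) pairs
    signal = [(0.0, 2.5), (1.0, 1.8), (2.5, 3.1), (4.0, 2.2)]

    def domain_globally(sig):
        # G (x >= p) holds iff p is below every sample value
        return min(v for _, v in sig)

    def domain_finally(sig):
        # F (x >= p) holds iff p is below some sample value
        return max(v for _, v in sig)

    print('G (x >= p) holds iff p <=', domain_globally(signal))   # 1.8
    print('F (x >= p) holds iff p <=', domain_finally(signal))    # 3.1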

Stefan Ratschan

A common approach to solving problems in rigorous systems engineering is
to reduce them to constraint solving problems with a quantifier prefix
exists-forall. Here, the existential quantifier ranges over a
proof/certificate/program/controller to be found, and the universal
quantifier is used for specifying a property that the found object
should fulfill.

Recently, there has been a lot of work on algorithms for solving such
problems by iteratively learning the object to be found from concrete
counter-examples to the property. Many of those algorithms follow a
general scheme, often called counter-example guided inductive synthesis
(CEGIS). In the talk, we will present an algorithm of this type that
synthesizes certificates for safety of ordinary differential equations,
so-called barrier certificates.  We will draw general conclusions
regarding the usage of counter-example guided inductive synthesis in
continuous versus discrete structures.
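
The following is a toy CEGIS sketch of my own (the system, sets, and template are invented for illustration, and the grid-based "verifier" is only a stand-in): a learner proposes template coefficients consistent with the counterexamples seen so far, and a verifier searches for a point where the barrier conditions fail.

    import numpy as np
    from scipy.optimize import linprog

    # Toy system: dx/dt = f(x) = -1 on the domain [-2, 2];
    # initial set [-2, -1]; unsafe set [1, 2].
    # Barrier conditions for the template B(x) = a*x^2 + b*x + c:
    #   B(x) <= 0 on Init,  B(x) >= 1 on Unsafe,  B'(x)*f(x) <= 0 on the domain.

    def f(x):
        return -1.0

    SETS = {'init': (-2, -1), 'unsafe': (1, 2), 'flow': (-2, 2)}

    def row(kind, x):
        # linear constraint  r . (a, b, c) <= rhs  at sample point x
        if kind == 'init':
            return [x * x, x, 1.0], 0.0            # B(x) <= 0
        if kind == 'unsafe':
            return [-x * x, -x, -1.0], -1.0        # B(x) >= 1
        return [2 * x * f(x), f(x), 0.0], 0.0      # B'(x)*f(x) <= 0

    def learner(points):
        # propose coefficients consistent with all samples seen so far
        A, rhs = zip(*(row(k, x) for k, x in points))
        res = linprog(np.zeros(3), A_ub=np.array(A), b_ub=np.array(rhs),
                      bounds=[(-10, 10)] * 3)
        return res.x if res.status == 0 else None

    def verifier(coeffs, eps=1e-6):
        # sampling-based stand-in for a real verifier: scan a fine grid
        for kind, (lo, hi) in SETS.items():
            for x in np.linspace(lo, hi, 401):
                r, b = row(kind, x)
                if np.dot(r, coeffs) > b + eps:
                    return kind, x                 # concrete counterexample
        return None

    points = [('init', -1.5), ('unsafe', 1.5), ('flow', 0.0)]
    for _ in range(2000):
        coeffs = learner(points)
        assert coeffs is not None, 'template not expressive enough'
        cex = verifier(coeffs)
        if cex is None:
            a, b, c = coeffs
            print(f'barrier: B(x) = {a:.2f}*x^2 + {b:.2f}*x + {c:.2f}')
            break
        points.append(cex)

In place of the grid scan, a rigorous verifier must establish the universally quantified conditions over the continuum, e.g. by interval constraint propagation or SMT reasoning, which is exactly where the continuous-versus-discrete distinction from the talk enters.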

Bio:

Stefan Ratschan is a researcher at the Institute of Computer Science of
the Czech Academy of Sciences in Prague. He received his Ph.D. at the
Research Institute for Symbolic Computation at Johannes Kepler
University Linz, Austria, and has since then also been affiliated with
the University of Girona, Spain, and the Max-Planck-Institute for
Informatics, Saarbrücken, Germany. He currently heads the Department of
Computational Mathematics at the Institute of Computer Science of the
Czech Academy of Sciences.  The main scientific interests of Stefan
Ratschan are in the areas of formal verification of cyber-physical
systems and of constraint solving.

RiSE Invited Lecture: Warren Hunt

Abstract:
The ACL2 theorem-proving system has seen sustained industrial use since
the mid 1990s.  Companies that have used or are using ACL2 include AMD, ARM,
Centaur Technology, General Electric, IBM, Intel, Kestrel Institute,
Motorola/Freescale, Oracle, and Rockwell Collins.  ACL2 has been
accepted for industrial application because it is an integrated
programming/proof environment supporting a subset of the ANSI standard
Common Lisp programming language.  Software and hardware systems have
been modeled and analyzed with the ACL2 theorem-proving system.

The ACL2 programming language can be used to develop efficient and
robust programs.  The ACL2 analysis machinery provides many features
permitting domain-specific, human-supplied guidance at various levels
of abstraction.  ACL2 specifications often serve as efficient execution
engines for the modeled artifacts while permitting formal analysis and
proof of properties.  ACL2 provides support for the development and
verification of other formal analysis tools.  ACL2 did not find its way
into industrial use merely because of its technical features.  The ACL2
user/development community has a shared vision of making formal
specification and mechanized verification routine — we have been
committed to this vision for the quarter century since the Computational
Logic, Inc., Verified Stack.

Oded Padon

Distributed protocols such as Paxos play an important role in many computer systems.
Therefore, a bug in a distributed protocol may have tremendous effects.
Accordingly, a lot of effort has been invested in verifying such protocols.
However, due to the infinite state space (e.g., unbounded numbers of nodes and messages) and the complexity of the protocols, verification is both undecidable and hard in practice.
I will describe a deductive approach for verification of distributed protocols, based on first-order logic, inductive invariants and user interaction.

The use of first-order logic and a decidable fragment of universally quantified invariants makes it possible to completely automate some verification tasks.
Tasks that remain undecidable (e.g., finding inductive invariants) are solved with user interaction, based on graphically displayed counterexamples.

I will also describe the application of these techniques to verify safety of several variants of Paxos, and a way to extend the approach to verify liveness and temporal properties.
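
As a minimal illustration of the deductive recipe (my own toy example using Z3's Python bindings; the talk's work targets universally quantified invariants in a decidable fragment of uninterpreted first-order logic, not arithmetic): an invariant is inductive iff it holds initially and is preserved by every transition, and each obligation is one validity query.

    from z3 import Ints, Solver, And, Implies, Not, sat

    # Toy system: a counter that starts at 0 and increases by 2.
    x, x1 = Ints('x x1')                # pre- and post-state
    init = x == 0
    tr = x1 == x + 2                    # transition relation
    inv = lambda v: v % 2 == 0          # candidate invariant: evenness

    def valid(formula):
        s = Solver()
        s.add(Not(formula))             # valid iff the negation is unsat
        return s.check() != sat

    print(valid(Implies(init, inv(x))))               # initiation: True
    print(valid(Implies(And(inv(x), tr), inv(x1))))   # consecution: True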

Bio:
Oded Padon is a fourth-year PhD student at Tel Aviv University, under the supervision of Prof. Mooly Sagiv.
His research focuses on verification of distributed protocols using first-order logic.
He is a recipient of the 2017 Google PhD fellowship in programming languages.

Melkior Ornik

In a variety of applications that involve adversarial behavior, there is clear interest in deceiving an adversary about one’s objectives or, alternatively, in making it difficult for the adversary to predict one’s strategy for achieving those objectives. In this talk, I will outline recent work on formalizing the notions of deception and unpredictability within control systems.

Namely, I will discuss an approach which encodes deception and deceptive strategies by introducing a belief space for an adversary, as well as a belief-induced reward objective. Such a framework makes it possible to consider the design of optimal deceptive strategies within the setting of optimal control, where lack of knowledge about the adversary translates into a need to develop robust optimal control policies, or policies based on partial observations. On the other hand, we relate the unpredictability of an agent to the total Shannon entropy of the paths that the agent may take to reach its objective, and show that, within the context of Markov decision processes, maximal unpredictability is achieved by following a policy that results in maximal total entropy of the induced Markov chain.

In parallel with the development of the theory of deception and unpredictability, I will illustrate the introduced notions using a variety of situations that naturally involve adversarial behavior, and show that the policies which are deceptive or maximally unpredictable in the sense of the theoretical definitions indeed also follow the natural intuition behind these notions.
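
To make the entropy characterization concrete, here is a small sketch (my own example, not the talk's construction) computing the total path entropy of the Markov chain induced by a fixed policy, using the identity that it equals the sum, over transient states, of the expected number of visits times the entropy of the one-step distribution.

    import numpy as np

    P = np.array([[0.0, 0.5, 0.5],     # state 0: fair coin between 1 and 2
                  [0.0, 0.0, 1.0],     # state 1: moves on to the goal
                  [0.0, 0.0, 1.0]])    # state 2: goal (absorbing)

    transient = [0, 1]
    Q = P[np.ix_(transient, transient)]
    start = np.array([1.0, 0.0])                       # start in state 0
    # expected visit counts xi solve  xi = start + Q^T xi
    xi = np.linalg.solve(np.eye(len(transient)) - Q.T, start)

    def H(row):
        p = row[row > 0]
        return -(p * np.log2(p)).sum()

    total = sum(xi[i] * H(P[s]) for i, s in enumerate(transient))
    print(f'total path entropy: {total:.2f} bits')     # 1.00: one coin flip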

Philipp Rümmer

Abstract:

Recursive algebraic data types (term algebras, ADTs) are one of the
most well-studied theories in logic, and find application in
contexts including functional programming, modelling languages,
proof assistants, and verification. At this point, several
state-of-the-art theorem provers and SMT solvers include tailor-made
decision procedures for ADTs, and version 2.6 of the SMT-LIB
standard includes support for ADTs. We study a relatively simple
approach to decide satisfiability of ADT constraints, the reduction
of ADT constraints to equisatisfiable constraints over uninterpreted
functions (EUF) and linear integer arithmetic (LIA). We show that
the reduction approach gives rise to both decision and Craig
interpolation procedures in ADTs. As an extension, we then consider
ADTs with size constraints, and give a precise characterisation of
the ADTs for which reduction combined with incremental unfolding is
a decision procedure.