Verification of concurrent software is a notoriously difficult subject, whose complexities stem from the inability of existing verification methods to modularize, and thus divide and conquer, the verification problem. Dependent types are a formal method well known for its ability to modularize and scale complex mathematical proofs. But when it comes to programming, dependent types are generally considered limited to the purely functional programming model.

In this talk I will present my recent work towards reconciling dependent types with shared-memory concurrency, with the goal of achieving modular verification for the latter. Applying the type-theoretic paradigm to concurrency has led to interesting reformulations of some classical verification ideas, and to the discovery of novel and useful abstractions for modularizing the proofs.

In this talk, I will give a perspective on inference in Bayesian networks (BNs) using program verification. I will argue how weakest-precondition reasoning à la Dijkstra can be used for exact inference (and more). As exact inference is NP-complete, inference is typically done by means of simulation. I will show how, by means of wp-reasoning, exact expected sampling times of BNs can be obtained in a fully automated fashion. An experimental evaluation on BN benchmarks demonstrates that very large expected sampling times (on the order of millions of years) can be inferred in less than a second. This provides a means to decide whether sampling-based methods are appropriate for a given BN. The key ingredient is to reason about program code in a compositional manner.
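The connection between evidence probability and sampling time can be seen in a toy calculation (an illustration only, not the talk's wp calculus): for rejection sampling, the number of trials until the evidence holds is geometrically distributed, so its expectation is 1/P(evidence). The two-node network and all probabilities below are hypothetical.

```python
# Hypothetical two-node BN: Rain -> WetGrass (all numbers are made up).
p_rain = 1e-6                      # P(Rain = true), assumed
p_wet_given_rain = 0.9             # P(WetGrass | Rain), assumed
p_wet_given_no_rain = 1e-7         # P(WetGrass | no Rain), assumed

# Exact inference by enumeration: P(WetGrass = true).
p_evidence = (p_rain * p_wet_given_rain
              + (1 - p_rain) * p_wet_given_no_rain)

# Geometric distribution: expected number of rejection-sampling trials
# until the evidence WetGrass = true is observed is 1 / P(evidence).
expected_trials = 1 / p_evidence
print(f"P(evidence) = {p_evidence:.3e}")
print(f"expected rejection-sampling trials = {expected_trials:.3e}")
```

A tiny evidence probability thus yields a huge expected sampling time, which is exactly the kind of quantity the talk's wp-based analysis computes exactly and automatically.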

Automated invariant generation is a fundamental challenge in program analysis and verification, going back many decades, and remains a topic of active research. In this talk I’ll present a select overview and survey of work on this problem, and discuss unexpected connections to other fields including quantum computing, group theory, and algebraic geometry. (No previous knowledge of these fields will be assumed.)
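A minimal instance of the problem (a toy example of my own, not taken from the talk): for the summation loop below, an invariant generator would have to discover the polynomial relation 2*s == i*(i+1), which holds at every loop head and implies the postcondition.

```python
# Toy illustration of what an invariant generator must find: the candidate
# polynomial invariant 2*s == i*(i+1) is checked at every loop head.
def sum_to(n):
    i, s = 0, 0
    while i < n:
        assert 2 * s == i * (i + 1)   # candidate invariant at loop head
        i += 1
        s += i
    assert 2 * s == i * (i + 1)       # the invariant also holds on exit
    return s                          # equals n*(n+1)//2

print(sum_to(10))  # 55
```

Polynomial invariants of this shape are where the advertised connection to algebraic geometry enters: sets of polynomial invariants form ideals, which can be manipulated with Gröbner-basis techniques.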

The operation of traditional computer networks is known to be a difficult, manual, and error-prone task. In recent years, even tech-savvy companies have reported major issues with their networks due to misconfigurations, leading to disruptive downtimes. As a response to the difficulty of maintaining policy compliance, and given the critical role that computer networks (including the Internet, datacenter networks, and enterprise networks) play today, researchers have started developing more principled approaches to networking and specification. Over the last years, we have witnessed great advances in the development of mathematical foundations for computer networks and the emergence of high-level network programming languages such as NetKAT. While powerful, however, existing formal frameworks often come with potentially high (super-polynomial) running times, even without considering failure scenarios.

This talk first gives an overview of the “softwarization” trends in communication networks and motivates why formal methods are currently the “hot topic” in this area. I will then present a what-if analysis framework which allows us to verify important properties such as policy compliance and reachability in communication networks in polynomial time, even in the presence of (multiple) failures. Our framework relies on an automata-theoretic approach, and applies both to the widely deployed MPLS networks as well as to the emerging Segment Routing networks. In addition to the theory underlying our approach (presented at INFOCOM 2018 together with Jiri Srba, patent pending), I will also report on our query language, the tool we are developing at Aalborg University, as well as on our first evaluation results.
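The kind of query such a framework answers can be illustrated by brute force (illustrative only: the talk's automata-theoretic approach answers these queries symbolically, in polynomial time, rather than enumerating failure scenarios; the four-node topology is hypothetical).

```python
# What-if query on a hypothetical topology: is D reachable from A under
# every single-link failure?  Answered here by naive enumeration + DFS.
links = {("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")}

def reachable(src, dst, up_links):
    """DFS reachability over the undirected links that are still up."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        for a, b in up_links:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return False

ok = all(reachable("A", "D", links - {f}) for f in links)
print(ok)  # True: the disjoint paths A-B-D and A-C-D survive any one failure
```

With k possible failures the number of scenarios grows as "links choose k", which is why avoiding explicit enumeration matters.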

I would also like to use the opportunity of this talk to provide a brief overview of our other research activities, especially those related to network security and the design of demand-aware and self-adjusting networks. We are currently eager to establish connections and collaborations within Vienna and Austria in general, related to all the presented topics and beyond. More details about our research activities can be found at https://net.t-labs.tu-berlin.de/~stefan/ and, increasingly, at http://ct.cs.univie.ac.at/ (under construction).

A common approach to verification and synthesis problems is to reduce them to constraint-solving problems with a quantifier prefix exists-forall. Here, the existential quantifier ranges over a proof/certificate/program/controller, while the universal quantifier is used for specifying a property that the found object should fulfill.

Recently, there has been a lot of work on algorithms for solving such problems by iteratively learning the object to be found from concrete counterexamples to the property. Many of those algorithms follow a general scheme, often called counterexample-guided inductive synthesis (CEGIS). In the talk, we will present an algorithm of this type that synthesizes certificates for safety of ordinary differential equations, so-called barrier certificates. We will draw general conclusions regarding the usage of counterexample-guided inductive synthesis in continuous versus discrete structures.
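The CEGIS scheme itself fits in a few lines. Below is a minimal sketch (my own toy problem, not the talk's barrier-certificate procedure): synthesize an integer c such that x*x + c >= 2*x holds for all x in a finite domain. The learner proposes the smallest c consistent with the counterexamples seen so far; the verifier searches for a violating x.

```python
# Minimal CEGIS loop: learner proposes a candidate, verifier refutes it with
# a concrete counterexample, and the learner generalizes from all
# counterexamples collected so far.
DOMAIN = range(-10, 11)

def verify(c):
    """Return a counterexample x violating  x*x + c >= 2*x,  or None."""
    for x in DOMAIN:
        if not (x * x + c >= 2 * x):
            return x
    return None

def cegis():
    counterexamples = []
    c = 0                                   # initial candidate
    while True:
        cex = verify(c)
        if cex is None:
            return c                        # candidate verified
        counterexamples.append(cex)
        # Learner: smallest c satisfying the property on every counterexample.
        c = max(2 * x - x * x for x in counterexamples)

print(cegis())  # 1, since x*x + 1 >= 2*x is (x-1)**2 >= 0
```

In the continuous setting of barrier certificates, the verifier cannot simply enumerate the domain, which is one source of the continuous-versus-discrete contrast the talk addresses.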

Bio:

Stefan Ratschan is a researcher at the Institute of Computer Science of the Czech Academy of Sciences in Prague. He received his Ph.D. from the Research Institute for Symbolic Computation at Johannes Kepler University Linz, Austria, and has since also been affiliated with the University of Girona, Spain, and the Max Planck Institute for Informatics, Saarbrücken, Germany. He currently heads the Department of Computational Mathematics at the Institute of Computer Science of the Czech Academy of Sciences. His main scientific interests are in the areas of formal verification of cyber-physical systems and constraint solving.

The ACL2 theorem-proving system has seen sustained industrial use since the mid-1990s. Companies that have used, or are using, ACL2 include AMD, ARM, Centaur Technology, General Electric, IBM, Intel, Kestrel Institute, Motorola/Freescale, Oracle, and Rockwell Collins. ACL2 has been accepted for industrial application because it is an integrated programming/proof environment supporting a subset of the ANSI standard Common Lisp programming language. Software and hardware systems have been modeled and analyzed with the ACL2 theorem-proving system.

The ACL2 programming language can be used to develop efficient and robust programs. The ACL2 analysis machinery provides many features permitting domain-specific, human-supplied guidance at various levels of abstraction. ACL2 specifications often serve as efficient execution engines for the modeled artifacts while permitting formal analysis and proof of properties. ACL2 provides support for the development and verification of other formal analysis tools. ACL2 did not find its way into industrial use merely because of its technical features: the ACL2 user/development community has a shared vision of making formal specification and mechanized verification routine, a vision we have been committed to for the quarter century since the Computational Logic, Inc., Verified Stack.

A bug in a distributed protocol may have tremendous effects, and accordingly a lot of effort has been invested in verifying such protocols. However, due to the infinite state space (e.g., an unbounded number of nodes and messages) and the complexity of the protocols, verification is both undecidable and hard in practice.

I will describe a deductive approach for the verification of distributed protocols, based on first-order logic, inductive invariants, and user interaction. The use of first-order logic and a decidable fragment of universally quantified invariants makes it possible to completely automate some verification tasks. Tasks that remain undecidable (e.g., finding inductive invariants) are solved with user interaction, based on graphically displayed counterexamples. I will also describe the application of these techniques to verify safety of several variants of Paxos, and a way to extend the approach to verify liveness and temporal properties.
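What "inductive invariant" means can be shown on a finite toy system (my own illustration; the talk's technique checks inductiveness symbolically, in first-order logic, for unbounded systems). Here two hypothetical processes are each either idle or in a critical section, and a process may enter only when the other is not critical.

```python
# Explicit-state check that an invariant is *inductive*: it holds in the
# initial state and is preserved by every transition from every state that
# satisfies it.
from itertools import product

STATES = list(product(["idle", "crit"], repeat=2))
INIT = ("idle", "idle")

def successors(state):
    for i in (0, 1):
        other = state[1 - i]
        if state[i] == "idle" and other != "crit":   # enter critical section
            yield tuple("crit" if j == i else state[j] for j in (0, 1))
        if state[i] == "crit":                        # leave critical section
            yield tuple("idle" if j == i else state[j] for j in (0, 1))

def invariant(state):
    """Mutual exclusion: both processes are never critical simultaneously."""
    return state != ("crit", "crit")

inductive = invariant(INIT) and all(
    invariant(t) for s in STATES if invariant(s) for t in successors(s)
)
print(inductive)  # True
```

For unbounded protocols such as Paxos this enumeration is impossible, which is where the first-order encoding and the user-guided search for invariants come in.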

Bio:

Oded Padon is a fourth-year PhD student at Tel Aviv University, under the supervision of Prof. Mooly Sagiv. His research focuses on verification of distributed protocols using first-order logic. He is a recipient of the 2017 Google PhD fellowship in programming languages.

Recursive algebraic data types (term algebras, ADTs) are one of the most well-studied theories in logic, and find application in contexts including functional programming, modelling languages, proof assistants, and verification. At this point, several state-of-the-art theorem provers and SMT solvers include tailor-made decision procedures for ADTs, and version 2.6 of the SMT-LIB standard includes support for ADTs. We study a relatively simple approach to deciding satisfiability of ADT constraints: the reduction of ADT constraints to equisatisfiable constraints over uninterpreted functions (EUF) and linear integer arithmetic (LIA). We show that the reduction approach gives rise to both decision and Craig interpolation procedures for ADTs. As an extension, we then consider ADTs with size constraints, and give a precise characterisation of the ADTs for which reduction combined with incremental unfolding is a decision procedure.
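For the special case of pure equations between ADT terms, satisfiability can already be decided by syntactic unification with an occurs check. The sketch below (illustrative only; the approach in the talk handles the full theory, including disequalities and size constraints, via the EUF+LIA reduction) encodes variables as strings and constructor applications as tuples.

```python
# Deciding conjunctions of ADT *equations* by unification with occurs check.
def walk(t, subst):
    """Follow variable bindings to the representative term."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Does variable v occur in term t (under subst)?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst):
    """Return an extended substitution, or None if unsatisfiable."""
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):
        return None                       # constructor clash: unsatisfiable
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# cons(x, nil) = cons(0, y) is satisfiable; x = cons(0, x) is not (occurs check).
print(unify(("cons", "x", ("nil",)), ("cons", ("0",), "y"), {}) is not None)  # True
print(unify("x", ("cons", ("0",), "x"), {}) is not None)                      # False
```

The occurs check is what enforces the "no cyclic terms" axiom of term algebras; the reduction to EUF+LIA must capture the same acyclicity, which LIA-encoded size or depth arguments make possible.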