Many quantitative dynamical models in computational systems biology are represented in the form of ordinary differential equations. Most of them are based on known first principles and are highly parametrized. Suitable wet-lab measurements and existing hypotheses about the modelled phenomena can be represented in a temporal logic and treated as constraints on the admissible parameters. Model checking and monitoring techniques known from formal verification can then be used to synthesise model parameters that guarantee a given set of temporal properties. In this talk we present a scalable coloured model checking approach for parameter synthesis from LTL and CTL specifications. The method will be presented in the context of a formal biochemical space that aims to bridge the gap between formal methods and biology. On the practical side, we will demonstrate applications of the method to several biological problems.
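The basic parameter-synthesis idea can be illustrated with a minimal sketch (not the talk's coloured model checking, and not one of its case studies): a hypothetical one-parameter production-degradation model dx/dt = k - d*x is simulated for candidate values of k, and each trace is checked against a simple LTL-style property, keeping only the parameters for which the property holds.

```python
# Toy parameter synthesis by sampling (all model names and values invented):
# for dx/dt = k - d*x, find values of k for which the finite-trace property
# "eventually x > 1.5 and it stays above 1.5" (an F G-style property) holds.

def simulate(k, d=1.0, x0=0.0, dt=0.01, steps=2000):
    """Forward-Euler trace of dx/dt = k - d*x."""
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x += dt * (k - d * x)
    return xs

def eventually_always_above(trace, theta):
    """Finite-trace check of F G (x > theta): some suffix stays above theta."""
    for i, v in enumerate(trace):
        if v > theta and all(u > theta for u in trace[i:]):
            return True
    return False

# Scan the candidate parameter space; keep the values satisfying the property.
admissible = [k for k in (0.5, 1.0, 1.5, 2.0, 2.5)
              if eventually_always_above(simulate(k), theta=1.5)]
print(admissible)
```

A real synthesis procedure works symbolically over parameter regions rather than sampled points, but the admissible-set view is the same.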
Model-Based Diagnosis (MBD) finds a growing number of uses in
different settings, which include software fault localization,
debugging of spreadsheets, web services, and hardware designs, but
also the analysis of biological systems, among many others. Motivated
by these different uses, there have been significant improvements made
to MBD algorithms in recent years. Nevertheless, the analysis of
larger and more complex systems motivates further improvements to
existing approaches. The talk will briefly describe a work on this
topic done at INESC-ID Lisboa, which proposes a novel encoding of MBD
into maximum satisfiability (MaxSAT). The new encoding builds on
recent work on using Propositional Satisfiability (SAT) for MBD, but
identifies a number of key optimizations that are very effective in
practice. Experimental results, obtained on existing and on new MBD
problem instances, show conclusive performance gains over the current
state of the art.
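The core of an MBD-as-MaxSAT encoding can be sketched on a toy circuit (a brute-force stand-in for a MaxSAT solver, not the talk's encoding or optimizations): hard constraints capture gate semantics for healthy components plus the observation, each "component is healthy" assumption is a unit-weight soft constraint, and a maximum-weight consistent assignment yields a minimum-cardinality diagnosis.

```python
# Toy MBD instance: g1 computes c = a AND b, g2 computes d = NOT c.
# Observation a=1, b=1, d=1 is inconsistent with both gates healthy.
from itertools import product

def consistent(h1, h2):
    """Is some valuation of the internal wire c consistent with the
    observation, given the health assignment (h1, h2)?"""
    a, b, d = 1, 1, 1
    for c in (0, 1):
        ok1 = (c == (a & b)) if h1 else True   # faulty gate: unconstrained
        ok2 = (d == (1 - c)) if h2 else True
        if ok1 and ok2:
            return True
    return False

# Brute-force "MaxSAT": maximise the number of satisfied soft (health) units.
best = max((h for h in product((0, 1), repeat=2) if consistent(*h)), key=sum)
diagnosis = [name for name, h in zip(("g1", "g2"), best) if not h]
print(diagnosis)   # a minimum-cardinality (here: single-fault) diagnosis
```

In a real encoding the brute-force search is replaced by a MaxSAT solver over CNF clauses, which is where the optimizations mentioned above pay off.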
Algorithmic deduction and abstract interpretation are two widely used
and successful approaches to implementing program verifiers. A major
impediment to combining these approaches is that their mathematical
foundations and implementation approaches are fundamentally different.
This paper presents a new, logical perspective on abstract
interpreters that perform reachability analysis using non-relational
domains. We encode reachability of a location in a control-flow graph
as satisfiability in a monadic, second-order logic parameterized by a
first-order theory. We show that three components of an abstract
interpreter, the lattice, transformers and iteration algorithm,
represent a first-order, substructural theory, parametric deduction
and abduction in that theory, and second-order constraint propagation.
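The three components named above can be made concrete with a minimal non-relational abstract interpreter (a generic interval analysis, not the paper's logical encoding): the interval lattice with its join, abstract transformers for guards and assignments, and a Kleene iteration that computes an inductive loop invariant.

```python
# Interval reachability analysis of the loop
#     x := 0; while (x < 10) x := x + 1;
# lattice join, transformers, and iteration algorithm, as in the text above.

def join(i, j):                       # lattice join (interval hull)
    if i is None: return j            # None is bottom (unreachable)
    if j is None: return i
    return (min(i[0], j[0]), max(i[1], j[1]))

def assume_lt(i, k):                  # transformer for guard x < k
    if i is None or i[0] >= k: return None
    return (i[0], min(i[1], k - 1))

def assume_ge(i, k):                  # transformer for guard x >= k
    if i is None or i[1] < k: return None
    return (max(i[0], k), i[1])

def add_const(i, c):                  # transformer for x := x + c
    return None if i is None else (i[0] + c, i[1] + c)

# Kleene iteration: invariant = entry ⊔ (body ∘ guard)(invariant).
inv = (0, 0)                          # x := 0 on entry
while True:
    nxt = join(inv, add_const(assume_lt(inv, 10), 1))
    if nxt == inv:
        break
    inv = nxt

print(inv, assume_ge(inv, 10))        # loop invariant and exit state
```

The iteration converges to the invariant x ∈ [0, 10], and the exit transformer refines it to x = 10; in the paper's terms, the transformers play the role of deduction in the substructural theory.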
At the heart of an automobile are its engine and powertrain. The operation of these components is controlled by embedded software on an electronic control unit (ECU). The paradigm of model-based development (MBD) has become the de facto standard for designing such control software. MBD designs of control software range from feature-level models to application-level and even entire system-level models. On the other hand, models of the plant (e.g. the engine) can range from simple physics-based models to high-fidelity models incorporating test data. The advantage of MBD is its ability to design, validate, and analyze the closed-loop model of the plant and the controller, often well before the actual hardware components become available. Unfortunately, even the simplest closed-loop model of an automotive powertrain subsystem is a complex cyber-physical system with highly nonlinear and hybrid dynamics, and reasoning about the correctness of such closed-loop models is a formidable task. In this talk, we introduce two challenges in reasoning about industrial-scale closed-loop control models: (1) scaling verification and bug-finding techniques to engine control software, and (2) formalisms to express correctness and performance requirements for such models. We survey some of the existing work done to address such questions, and present some promising directions for future work.
In a world where we increasingly depend on computers, the need for tools that can prove the correctness of software is well established. Termination analysis is central to detecting regions of code that could lead to unresponsiveness of the system. However, proving that a given program always terminates is hard, especially if the program is large and complex. One way of tackling this problem is to reason about different parts of the program separately and to find conditions under which each of them terminates. This talk presents a method for finding such conditions on the termination of a code fragment, implemented as a module in the temporal prover T2. The method is based on finding a potential ranking function for the code and then turning it into a valid ranking function by constraining the possible values of the program variables. The inferred preconditions can be used to perform termination analysis of large and complex programs, including interprocedural and distributed termination proving.
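The potential-to-valid ranking function step can be sketched on a tiny example (an illustration, not T2's algorithm): for the loop `while x > 0: x = x - y`, the candidate ranking function f(x) = x is bounded below by the loop guard but only decreases when y > 0, so constraining the variables with the precondition y >= 1 makes it valid. We confirm this on sampled states.

```python
# Toy precondition inference for termination of "while x > 0: x = x - y".

def terminates(x, y, fuel=10_000):
    while x > 0:
        if fuel == 0:
            return False              # treated as (potential) divergence
        x, fuel = x - y, fuel - 1
    return True

def ranking_decreases(x, y):
    """Candidate f(x) = x decreases strictly on the loop transition;
    the guard x > 0 already bounds it from below."""
    return (x - y) < x

precondition = lambda x, y: y >= 1    # inferred constraint on the variables

# Under the precondition, f is a valid ranking function and the loop halts.
for x in range(-3, 20):
    for y in range(-3, 8):
        if precondition(x, y):
            assert ranking_decreases(x, y) and terminates(x, y)

print(terminates(5, 1), terminates(5, 0))   # diverges without the precondition
```

The conditional-termination view is what makes the analysis compositional: each fragment exports a precondition under which it terminates, and callers discharge it.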
We study several classical graph problems, such as computing all-pairs shortest paths, as well as the related problems of computing the diameter, center, and girth of a network, in a distributed setting. The model of distributed computation we consider is: in each synchronous round, each node can transmit a different (but short) message to each of its neighbors. For the above-mentioned problems, the talk will cover algorithms running in time O(n) in the unweighted case, as well as lower bounds showing that this is essentially optimal. After extending these results to approximation algorithms and corresponding lower bounds, the talk will provide insights into distributed verification problems. That is, we study problems such as verifying that a subgraph H of a graph G is a minimum spanning tree. It will turn out that in our setting, e.g., verifying a spanning tree can take much more time than actually computing a spanning tree of G. As an application of these results we derive strong unconditional time lower bounds on the hardness of distributed approximation for many classical optimization problems, including minimum spanning tree, shortest paths, and minimum cut. Many of these results are the first non-trivial lower bounds for both exact and approximate distributed computation, and they resolve previous open questions. Our result implies that there can be no distributed approximation algorithm for minimum spanning tree that is significantly faster than the current exact algorithm, for any approximation factor.
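The synchronous message-passing model can be illustrated with a minimal round-by-round BFS simulation (a sketch of the model, not of the talk's algorithms): in each round every node sends its current distance estimate to each neighbour, and unweighted all-pairs shortest paths follow by running (pipelined) BFS from every source, matching the O(n)-round regime mentioned above.

```python
# Synchronous-round BFS: each round, every informed node sends one short
# message (its distance estimate) to each of its neighbours.
from collections import defaultdict

def synchronous_bfs(adj, source):
    dist = {v: (0 if v == source else None) for v in adj}
    rounds = 0
    while True:
        inbox = defaultdict(list)
        for v in adj:                          # send step
            if dist[v] is not None:
                for u in adj[v]:
                    inbox[u].append(dist[v] + 1)
        updated = False
        for u, msgs in inbox.items():          # receive step: keep the best
            best = min(msgs)
            if dist[u] is None or best < dist[u]:
                dist[u], updated = best, True
        if not updated:
            return dist, rounds
        rounds += 1

# A path 0 - 1 - 2 - 3 plus a chord 0 - 2 (hypothetical example graph).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
dist, rounds = synchronous_bfs(adj, 0)
print(dist, rounds)
```

The number of rounds is governed by the eccentricity of the source, which is why diameter-type quantities are exactly what the lower bounds target.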
The cardinality operator is indispensable when specifying and
reasoning about parameterized concurrent/distributed systems. It
provides a level of abstraction and conciseness that makes (manual)
proofs go through smoothly. Unfortunately, the automation support for
such proofs remains elusive due to challenges in counting sets of
unbounded program states. In this talk, I will present #Π, a tool that
can automatically synthesize cardinality-based proofs of parameterized
systems. #Π crucially relies on a sound and precise axiomatization of
cardinality for the combined theory of linear arithmetic and
arrays. This axiomatization is the key technical contribution of this
work. We show that various parameterized systems, including mutual
exclusion and consensus protocols, can be efficiently verified using
#Π. Many of them were automatically verified for the first time. This
is joint work with Nikolaj Bjørner and Andrey Rybalchenko.
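The flavour of cardinality-based reasoning can be conveyed with a finite counter abstraction (a stand-in for the tool's symbolic treatment of unbounded N, and not its axiomatization): a global state records the cardinalities |idle|, |waiting|, |critical|, and the mutual-exclusion invariant |critical| <= 1 is checked by exhaustive search for a fixed number of processes.

```python
# Counter abstraction of a simple mutual-exclusion protocol (invented toy):
# processes move idle -> waiting -> critical -> idle, entry to critical
# being guarded by |critical| == 0.
from collections import deque

def explore(n):
    init = (n, 0, 0)                  # all n processes idle
    seen, frontier = {init}, deque([init])
    while frontier:
        i, w, c = frontier.popleft()
        succs = []
        if i > 0:
            succs.append((i - 1, w + 1, c))    # idle -> waiting
        if w > 0 and c == 0:
            succs.append((i, w - 1, c + 1))    # waiting -> critical (guarded)
        if c > 0:
            succs.append((i + 1, w, c - 1))    # critical -> idle
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return seen

states = explore(5)
print(all(c <= 1 for _, _, c in states))       # mutual exclusion holds
```

The cardinality operator lets such invariants be stated and proved for all N at once, which is exactly where the axiomatization over linear arithmetic and arrays is needed.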
In set-based reachability, a cover of the reachable states of a hybrid
system is obtained by repeatedly computing one-step successor states.
It can be used to show safety or to obtain quantitative information,
e.g., for measuring the jitter in an oscillator circuit. In general,
one-step successors can only be computed approximately and are
difficult to scale in the number of continuous variables. The
approximation error requires particular attention since it can
accumulate rapidly, leading to a coarse cover, prohibitive state
explosion, or preventing termination. We present an approach with
precise control over the balance between approximation error and
scalability. By lazy evaluation of abstract set representations, the
precision can be increased in a targeted manner, e.g., to show that a
particular transition is spurious. Each evaluation step scales well in
the number of continuous variables. The set representations are
particularly suited for clustering and containment checking, which are
essential for reducing the state explosion. This provides the building
blocks for refining the cover of the reachable set just enough to show
a property of interest. The approach is illustrated on several examples.
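The roles of one-step successors and containment checking can be sketched with the simplest set representation, interval boxes (a generic illustration, not the lazy abstract set representations described above): the image of a box under a contractive linear map is over-approximated with interval arithmetic, and a containment check shows the cover is closed under one more step.

```python
# One-step successor and containment check for boxes under x' = A x
# (A is an invented contractive matrix).
A = [[0.5, -0.2],
     [0.2,  0.5]]

def box_successor(box):
    """Interval-arithmetic image of a box under x' = A x (over-approximation)."""
    new = []
    for row in A:
        lo = sum(min(a * l, a * u) for a, (l, u) in zip(row, box))
        hi = sum(max(a * l, a * u) for a, (l, u) in zip(row, box))
        new.append((lo, hi))
    return new

def contains(outer, inner):
    return all(ol <= il and iu <= ou for (ol, ou), (il, iu) in zip(outer, inner))

init = [(-1.0, 1.0), (-1.0, 1.0)]
succ = box_successor(init)
# The one-step successor lies inside the initial box, so the cover {init}
# is closed under post: all reachable states stay within it.
print(succ, contains(init, succ))
```

The approximation error of interval boxes is exactly what richer, lazily evaluated set representations are designed to control.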
In this talk, we consider empirical metrics for software source code that can predict the performance of verification tools on specific types of software. Our metrics comprise variable usage patterns, loop patterns, and indicators of control-flow complexity, and are extracted by simple data-flow analyses. We demonstrate that our metrics are powerful enough to devise a machine-learning-based portfolio solver for software verification. Our experiments show that this portfolio solver would be the (hypothetical) overall winner of both the 2014 and 2015 International Competition on Software Verification (SV-COMP), which gives strong empirical evidence for the predictive power of our metrics.
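The portfolio idea can be sketched schematically (invented feature vectors, tool names, and training data, not the paper's metrics or learner): each program is summarised by a small feature vector, e.g. (#loops, #pointer variables, #branches), and a nearest-neighbour rule picks the verifier that did best on the most similar training program.

```python
# Schematic metric-based portfolio selection (all data hypothetical).
import math

training = [                          # (features, best verifier on it)
    ((12, 0, 30), "bounded-model-checker"),
    ((2, 25, 10), "shape-analyser"),
    ((40, 3, 90), "abstract-interpreter"),
]

def choose_verifier(features):
    """Pick the verifier attached to the nearest training feature vector."""
    _, best = min(((math.dist(features, f), tool) for f, tool in training),
                  key=lambda pair: pair[0])
    return best

print(choose_verifier((3, 20, 12)))   # closest to the pointer-heavy program
```

Real portfolio solvers train a proper classifier over many features and programs, but the prediction step has this shape.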
I will discuss the Continuous Skolem Problem, a reachability problem for linear dynamical systems whose decidability is open, and which lies at the heart of many natural verification questions for cyber-physical systems (modelled as hybrid automata) and continuous-time Markov chains. This problem is also equivalent to deciding the existence of zeros for single-variable, linear ordinary differential equations with rational (or algebraic) coefficients.
I will describe some recent work (joint with Ventsi Chonev and James Worrell) using results in transcendence theory, Diophantine geometry, real algebraic geometry, and model theory to obtain decidability for certain variants of the problem, partially answering open questions (among others) of Bell, Delvenne, Jungers, and Blondel. In particular, we consider a bounded version of the Continuous Skolem Problem (corresponding to time-bounded reachability), and show decidability of the bounded problem assuming Schanuel’s Conjecture, a central and unifying conjecture in transcendence theory. I will also describe some partial decidability results in the unbounded case and discuss mathematical obstacles to proving decidability of the Continuous Skolem Problem in full generality.
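For concreteness, the problem described above can be stated as follows (a standard formulation, not verbatim from the talk):

```latex
% Continuous Skolem Problem.
\textbf{Given:} a linear ODE with rational coefficients and rational
initial conditions,
\[
  y^{(n)}(t) + a_{n-1}\, y^{(n-1)}(t) + \dots + a_0\, y(t) = 0,
  \qquad a_i,\; y^{(i)}(0) \in \mathbb{Q},
\]
\textbf{Decide:} does there exist $t \ge 0$ with $y(t) = 0$?
Every solution is an exponential polynomial
\[
  y(t) \;=\; \sum_{k=1}^{m} P_k(t)\, e^{\lambda_k t},
\]
where the $\lambda_k$ are the roots of the characteristic polynomial and
the $P_k$ are polynomials with algebraic coefficients; the bounded variant
asks for a zero in a fixed interval $[0, T]$.
```

The transcendence-theoretic difficulty enters through the exponentials $e^{\lambda_k t}$ with algebraic $\lambda_k$, which is where Schanuel's Conjecture is invoked.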