Abstract:

Automata learning is a technique that has successfully been applied in verification, with the automaton type varying depending on the application domain. Adaptations of automata learning algorithms for increasingly complex types of automata have had to be developed from scratch, because no abstract theory offered guidelines. This makes such algorithms hard to devise and obscures their correctness proofs. We introduce a simple category-theoretic formalism that provides an appropriately abstract foundation for studying automata learning. Furthermore, our framework establishes formal relations between algorithms for learning, testing, and minimization. We illustrate its generality with two examples: deterministic and weighted automata.
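The classical starting point that such abstract frameworks generalize is Angluin-style learning from membership queries. The sketch below is a toy illustration of the observation-table idea only (not the categorical framework of the abstract); the target language, suffix set, and helper names are chosen for the demo.

```python
# Toy illustration of the observation-table idea behind Angluin-style
# automata learning.  Target language (an assumption for this demo):
# strings over {a, b} with an even number of a's.
from itertools import product

def member(word):
    """Membership oracle: is `word` in the target language?"""
    return word.count("a") % 2 == 0

# Distinguishing suffixes (the column labels of the observation table).
suffixes = ["", "a"]

def row(prefix):
    """A prefix's row: its membership pattern under each suffix."""
    return tuple(member(prefix + e) for e in suffixes)

# Enumerate all prefixes up to length 2; distinct rows correspond to
# states of the minimal DFA (Myhill-Nerode equivalence classes).
prefixes = [""] + ["".join(p) for n in (1, 2) for p in product("ab", repeat=n)]
states = {row(p) for p in prefixes}
print(len(states))  # the minimal DFA for this language has 2 states
```

Here the suffix "a" suffices to separate the two Nerode classes; a full learner would grow the suffix set on demand from counterexamples.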

Abstract:

Parity games are deceptively simple two-player games on directed graphs whose vertices are labeled with numbers. They have important practical applications in formal verification and synthesis, especially in solving the model-checking problem for the modal mu-calculus. They are also interesting from a theoretical perspective: they are widely believed to admit a polynomial-time solution, but so far no such algorithm is known. In recent years, a number of new algorithms and improvements to existing algorithms have been proposed. In this talk, we introduce parity games in an accessible way and discuss why they are so interesting. We present various solutions that have been proposed over the years, as well as a comprehensive empirical evaluation of modern parity game algorithms and solvers, on both real-world benchmarks and randomly generated games.
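One of the classical solvers such an evaluation covers is Zielonka's recursive algorithm. The sketch below is a compact, unoptimized rendering of it; the game encoding (a dict mapping each vertex to owner, priority, and successors) is my own, and every vertex is assumed to have at least one successor.

```python
# Sketch of Zielonka's recursive algorithm for solving parity games.
# A game maps each vertex to (owner, priority, successors); vertices
# are assumed to have at least one successor (total game).

def attractor(game, player, target):
    """Vertices from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v, (owner, _, succs) in game.items():
            if v in attr:
                continue
            succs_in = [w for w in succs if w in attr]
            if (owner == player and succs_in) or \
               (owner != player and len(succs_in) == len(succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(game):
    """Return (W0, W1): the winning regions of players 0 and 1."""
    if not game:
        return set(), set()
    p = max(pr for (_, pr, _) in game.values())
    i = p % 2                            # player who likes priority p
    U = {v for v, (_, pr, _) in game.items() if pr == p}
    A = attractor(game, i, U)
    sub = {v: (o, pr, [w for w in s if w not in A])
           for v, (o, pr, s) in game.items() if v not in A}
    W = list(zielonka(sub))
    if not W[1 - i]:
        W[i] = set(game)                 # player i wins everywhere
        W[1 - i] = set()
    else:
        B = attractor(game, 1 - i, W[1 - i])
        sub2 = {v: (o, pr, [w for w in s if w not in B])
                for v, (o, pr, s) in game.items() if v not in B}
        W2 = list(zielonka(sub2))
        W[1 - i] = W2[1 - i] | B
        W[i] = W2[i]
    return W[0], W[1]

# Two vertices cycling through priorities 2 and 1: the maximal priority
# seen infinitely often is 2 (even), so player 0 wins both vertices.
game = {"u": (0, 2, ["v"]), "v": (1, 1, ["u"])}
W0, W1 = zielonka(game)
print(sorted(W0), sorted(W1))  # ['u', 'v'] []
```

The algorithm is exponential in the worst case, which is exactly why the newer quasi-polynomial algorithms mentioned in the talk are interesting.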

Abstract:

Symbolic execution (and its recent variant, dynamic symbolic execution) is an important technique in automated testing. Instead of analysing only concrete executions of a program, one treats executions symbolically (i.e. with some variables not bound to concrete values) and uses constraint solvers to determine the feasibility of symbolic paths, so as to guide the path exploration of the system under test; in combination with dynamic analysis, this gives the best possible path coverage. For string-manipulating programs, solvers need to be able to handle constraints over the string domain. This gives rise to a natural question: what is an ideal decidable logic over strings for reasoning about path feasibility in a program with strings? The question is connected to a number of difficult results in theoretical computer science (e.g. decidability of the theory of strings with concatenation, a.k.a. word equations) and to long-standing open problems (e.g. decidability of word equations with length constraints). Worse yet, recent examples of cross-site scripting vulnerabilities suggest that more powerful string operations (e.g. finite-state transducers) might be required as first-class citizens in string constraints. Even though putting all these string operations into one string logic leads to undecidability, recent results show that there might be a way to recover decidability while retaining expressivity for applications in symbolic execution. In this talk, I will present one such result from my POPL’16 paper (with P. Barcelo). The string logic admits concatenation, regular constraints, finite-state transductions, and letter-counting and length constraints (and can consequently express the charAt operator and string disequality). I will give a number of examples from the cross-site scripting literature showing how a string logic can, for the first time, be used to discover bugs in, or prove correctness of, such programs. I will conclude by commenting on a new decision procedure for the logic that leads to an efficient implementation (POPL’18 with L. Holik, P. Janku, P. Ruemmer, and T. Vojnar) and on a recent attempt to incorporate the fully-fledged replace-all operator into a string logic (POPL’18 with T. Chen, Y. Chen, M. Hague, and Z. Wu).
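To make the notion of path feasibility concrete, here is a toy sketch: a hypothetical two-branch program, its path condition, and a bounded brute-force search standing in for a real string solver (the program, alphabet, and bounds are all assumptions for the demo, not the POPL procedures themselves).

```python
# Toy path-feasibility check for a string-manipulating program.  A real
# tool would hand the path condition to a string solver; here a bounded
# brute-force search over {a, b}* stands in for the solver.
from itertools import product

# Hypothetical program under test:
#     if s.startswith("ab"):
#         if len(s) == 3:
#             bug()
# The path reaching bug() has this path condition:
path_condition = [
    lambda s: s.startswith("ab"),
    lambda s: len(s) == 3,
]

def feasible(conds, alphabet="ab", max_len=4):
    """Search for a witness string satisfying every branch condition."""
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if all(c(s) for c in conds):
                return s
    return None

witness = feasible(path_condition)
print(witness)  # aba
```

Brute force obviously does not scale and cannot prove infeasibility over unbounded strings; that is precisely the gap a decidable string logic with concatenation, transductions, and length constraints is meant to fill.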

Abstract:

Buggy and insecure software can have serious consequences, including the loss of human lives, financial losses, and leakage of confidential information, to name a few. Program analysis is the field concerned with analysing the behaviour of programs, especially with respect to correctness. Over the years, computational logic has played an important role in program analysis, particularly in the development of precise automatic methods for verifying the correctness and optimising the performance of programs. In this talk I will illustrate how logic can help program analysis, drawing examples from my own research inspired by challenges in web security (e.g. how to detect/prevent cross-site scripting vulnerabilities), web performance optimisation (e.g. how to remove code redundancies in web pages), and verification of distributed protocols. A theme that will emerge during the talk is that there is often a tight connection between logic and automata that can be exploited when designing (often theoretically optimal) algorithms.

Abstract:

The rapid growth in the size and scope of datasets in science and technology has created a need for novel foundational perspectives on data analysis that blend the inferential and computational sciences. That classical perspectives from these fields are not adequate to address emerging problems in Data Science is apparent from their sharply divergent nature at an elementary level—in computer science, the growth of the number of data points is a source of “complexity” that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of “simplicity” in that inferences are generally stronger and asymptotic results can be invoked. On a formal level, the gap is made evident by the lack of a role for computational concepts such as “runtime” in core statistical theory and the lack of a role for statistical concepts such as “risk” in core computational theory. I present several research vignettes aimed at bridging computation and statistics, including the problem of inference under privacy and communication constraints, and including a surprising cameo role for symplectic geometry.

Abstract:

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs while appearing unmodified to human observers. Potential attacks include having malicious content such as malware identified as legitimate, or controlling vehicle behavior. Yet most existing adversarial example attacks require knowledge of either the model internals or its training data. We will describe a black-box attack on ML models that yields adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%, respectively. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.
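The black-box attack described here trains a local substitute model and transfers examples crafted against it; the white-box crafting primitive it builds on is the fast gradient sign method (FGSM). Below is a minimal sketch of that primitive on a toy logistic model, with weights, inputs, and the perturbation budget chosen purely for the demo.

```python
# Sketch of the fast gradient sign method (FGSM) on a toy logistic
# model.  A black-box attack would train a substitute model first and
# transfer such examples; only the white-box crafting step is shown.
import math

w, b = [2.0, -1.0], 0.0            # toy model: predict 1 iff w.x + b > 0

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def fgsm(x, y, eps):
    """Perturb x by eps in the direction of the loss gradient's sign.

    For logistic loss with true label y in {0, 1}, the gradient of the
    loss w.r.t. x is (sigmoid(w.x + b) - y) * w; only its sign matters.
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    sigma = 1.0 / (1.0 + math.exp(-score))
    grad = [(sigma - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

x = [1.0, 0.5]                     # correctly classified as 1 (score 1.5)
x_adv = fgsm(x, y=1, eps=0.6)      # per-coordinate perturbation of 0.6
print(predict(x), predict(x_adv))  # 1 0
```

The same sign-of-gradient step, computed on a locally trained substitute rather than the target, is what makes the attack viable without access to the target's internals.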

Abstract:

We consider distributed timed systems that implement leader election protocols, which are at the heart of clock synchronization protocols. We develop abstraction techniques for parameterized model checking of such protocols under arbitrary network topologies, where nodes have independently evolving clocks. We apply our technique to model checking the root election part of the flooding time synchronisation protocol (FTSP), and obtain improved results compared to previous work. We model check the protocol for all topologies in which the distance to the node to be elected leader is bounded by a given parameter.

Abstract:

Tool prototyping is an essential step in developing novel software verification algorithms and techniques. However, implementing a verifier prototype that can handle real-world programs is a huge endeavor, which hinders researchers by forcing them to spend more time engineering tools and less time innovating. In this talk, I will present the SMACK software verification toolchain. The toolchain provides a modular and extensible software verification ecosystem that decouples the front-end source language details from back-end verification algorithms. It achieves this by translating from the LLVM compiler intermediate representation into the Boogie intermediate verification language. SMACK benefits the software verification community in several ways: (i) it can be used as an off-the-shelf software verifier in an applied software verification project, (ii) it enables researchers to rapidly develop and release new verification algorithms, and (iii) it allows support for new languages to be added in its front-end. We have used SMACK to verify numerous C/C++ programs, including industry examples, showing it is mature and competitive. Likewise, SMACK is already being used in several existing verification research prototypes.

Abstract:

It is known that one can extract Craig interpolants from so-called local derivations. An interpolant extracted from such a derivation is a Boolean combination of formulas occurring in the derivation. However, standard complete proof systems, such as superposition, for theories having the interpolation property are not necessarily complete for local proofs.

In this paper we investigate interpolant extraction from non-local proofs in the superposition calculus and prove a number of general results about interpolant extraction and complexity of extracted interpolants. In particular, we prove that the number of quantifier alternations in first-order interpolants of formulas without quantifier alternations is unbounded. This result has far-reaching consequences for using local proofs as a foundation for interpolating proof systems – any such proof system should deal with formulas of arbitrary quantifier complexity.

To search for alternatives for interpolating proof systems, we consider several variations on interpolation and local proofs. Namely, we give an algorithm for building interpolants from resolution refutations in logic without equality and discuss additional constraints when this approach can be also used for logic with equality. We finally propose a new direction related to interpolation via local proofs in first-order theories.
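For readers unfamiliar with interpolants, here is a minimal propositional example, checked by brute force over all truth assignments; the formulas A, B, and the candidate I are chosen purely for illustration.

```python
# A concrete Craig interpolant, verified by brute force.  For
# A = p & q and B = ~q & r (jointly unsatisfiable), I = q is an
# interpolant: A implies I, I & B is unsatisfiable, and I mentions
# only the variable q shared between A and B.
from itertools import product

A = lambda p, q, r: p and q
B = lambda p, q, r: (not q) and r
I = lambda p, q, r: q            # candidate interpolant, shared vars only

assignments = list(product([False, True], repeat=3))
a_implies_i = all(I(*v) for v in assignments if A(*v))
i_and_b_unsat = not any(I(*v) and B(*v) for v in assignments)
print(a_implies_i, i_and_b_unsat)  # True True
```

The paper's point is that at first order the picture is far harder: even when A and B have no quantifier alternations, interpolants extracted from local proofs may need unboundedly many.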

Abstract:

Analyzing and reasoning about safety properties of software systems is especially challenging for programs with complex control flow and, in particular, with loops or recursion. For such programs one needs additional information, for example in the form of loop invariants, expressing properties that hold at intermediate program points. We study program loops with non-trivial arithmetic, implementing addition and multiplication among numeric program variables. In this talk, we present a new approach, based on techniques from computer algebra, for automatically generating all polynomial invariants of a class of such programs; the approach will be explained thoroughly and intuitively.
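As a flavor of what such techniques derive, here is a hand-checked polynomial invariant for a classic loop with addition; the loop and the identity are a standard textbook example chosen for the demo, not output of the talk's method.

```python
# Checking a polynomial loop invariant of the kind such techniques
# generate automatically.  For the loop  a := a + b; b := b + 1
# starting from a = b = 0, the identity 2*a == b*b - b holds at
# every iteration (a encodes the running sum 0 + 1 + ... + (b-1)).
a, b = 0, 0
for _ in range(100):
    assert 2 * a == b * b - b    # invariant holds before each step
    a, b = a + b, b + 1
assert 2 * a == b * b - b        # ...and after the loop
print("invariant 2a = b^2 - b holds for 100 iterations")
```

A testing loop like this only checks finitely many states; the point of the computer-algebra approach is to prove such polynomial identities for all iterations, and to generate a basis of all of them.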