Kuldeep S. Meel

Constrained counting and sampling are two fundamental problems in computer science with numerous applications, including network reliability, privacy, probabilistic reasoning, and constrained-random verification. In constrained counting, the task is to compute the total weight, with respect to a given weighting function, of the set of solutions of a given set of constraints. In constrained sampling, the task is to sample randomly, with probability proportional to a given weighting function, from the set of solutions to a given set of constraints.

In this talk, I will introduce a novel algorithmic framework for constrained sampling and counting that combines the classical algorithmic technique of universal hashing with the dramatic progress made in Boolean reasoning over the past two decades. This has allowed us to obtain breakthrough results in constrained sampling and counting, providing a new algorithmic toolbox for machine learning, probabilistic reasoning, privacy, and design verification. I will demonstrate the utility of these techniques on various real-world applications, including probabilistic inference, design verification, and our ongoing collaboration on estimating the reliability of critical infrastructure networks during natural disasters.
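For intuition, the hashing idea at the heart of such frameworks can be sketched in a few lines of Python. This is my own toy illustration over an explicitly enumerated solution set, not the speaker's tools: random XOR (parity) constraints carve out a small cell of the solution space, and counting that cell and scaling by the number of cells estimates the total count.

```python
import random

def random_xor(n, rng):
    """A random parity constraint over n Boolean variables: each variable
    is included with probability 1/2, plus a random target parity bit."""
    return [rng.random() < 0.5 for _ in range(n)], rng.random() < 0.5

def satisfies(assignment, xor):
    coeffs, target = xor
    parity = False
    for bit, used in zip(assignment, coeffs):
        parity ^= (bit and used)
    return parity == target

def hashed_estimate(solutions, n, m, rng):
    """Estimate |solutions| by counting the cell carved out by m random
    XORs and scaling by 2**m (each XOR keeps a solution with prob. 1/2)."""
    xors = [random_xor(n, rng) for _ in range(m)]
    cell = [a for a in solutions if all(satisfies(a, x) for x in xors)]
    return len(cell) * 2 ** m
```

In the actual algorithms, the enumeration is replaced by a SAT solver queried on the constraints conjoined with the XORs, and the number of XORs is chosen so that the remaining cell is small enough to count exactly.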

Bio:

Kuldeep Meel is a final-year PhD candidate at Rice University, working with Prof. Moshe Vardi and Prof. Supratik Chakraborty. His research broadly lies at the intersection of artificial intelligence and formal methods. He is the recipient of a 2016-17 IBM PhD Fellowship, the 2016-17 Lodieska Stockbridge Vaughn Fellowship, and the 2013-14 Andrew Ladd Fellowship. His research won the Best Student Paper Award at the 2015 International Conference on Constraint Programming. He obtained a B.Tech. from IIT Bombay and an M.S. from Rice University in 2012 and 2014, respectively. He co-won the 2014 Vienna Center for Logic and Algorithms International Outstanding Master's Thesis Award.

Ondřej Kunčar

Abstract:
In our project we are working on a framework that provides holistic security guarantees for web-based systems in which information flows heavily but not all flows should be allowed. As a case study, we developed CoCon, a conference management system with verified document confidentiality. In my talk, I will start with a demo of CoCon, show which properties of the system we verified in the interactive theorem prover Isabelle, and explain how we technically capture the intuitive idea that an attacker cannot learn any secrets of the system. I will then discuss the limitations of our approach, summarize our experience with deploying CoCon for real-life conferences, and briefly mention future work.

Swen Jacobs

Guarded protocols, as introduced by Emerson and Kahlon (2000), describe concurrent systems where transitions of processes are enabled or disabled depending on the existence of other processes in certain local states. Cutoff results reduce reasoning about systems with an arbitrary number of processes to systems of a determined, fixed size. Our work is based on the observation that the existing cutoff results are i) of limited use for liveness properties because the reductions do not preserve fairness, and ii) in many cases give a prohibitively large cutoff. We provide new cutoff results that work under fairness assumptions, and prove tightness or asymptotic tightness for cutoffs that only depend on the size of the process templates. I will also report on ongoing work to obtain smaller cutoffs by considering additional static properties of the process templates, such as the number of different guards that are used in the template.

Filippo Bonchi

Abstract:

We introduce a graphical syntax for signal flow diagrams based on the language of symmetric monoidal categories. Using universal categorical constructions, we provide a stream semantics and a sound and complete axiomatisation. A certain class of diagrams captures the orthodox notion of signal flow graph used in control theory; we show that any diagram of our syntax can be realised, via rewriting in the equational theory, as a signal flow graph.
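For readers unfamiliar with signal flow graphs, their stream semantics can be illustrated with a toy example. This is my own sketch, not the categorical semantics of the talk: a feedback loop built from an adder and a one-step delay register computes running sums of the input stream.

```python
def sfg_accumulator(xs):
    """Stream semantics of a tiny signal flow graph: an adder whose second
    input is the output fed back through a one-step delay (a register
    initialized to 0), i.e. y[n] = x[n] + y[n-1]."""
    y_prev = 0            # the delay element's initial state
    for x in xs:
        y = x + y_prev    # adder
        y_prev = y        # delay: store the output for the next tick
        yield y
```

The diagrams of the talk give such graphs a compositional reading, where wiring components together corresponds to composition in a symmetric monoidal category.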

Matteo Sammartino

Automata learning, or regular inference, is a widely used technique for creating an automaton model from observations. In recent years, it has been successfully applied to the verification of security protocols, hardware, and software systems. The original algorithm, L*, works for deterministic finite automata and is only capable of learning control-flow models.
In this talk I will present an extension of L* to learn combined control/data-flow models in the form of nominal automata, which are acceptors of languages over infinite (structured) alphabets. After recalling L*, I will briefly present the theory of nominal sets. Then I will show how this theory enables extending L* to infinite alphabets in a seamless way, with almost no modifications to the original code. Finally, I will give a short demo of a tool based on this work, currently being developed at UCL.
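As background for the talk, the classical L* loop (observation table, closedness and consistency repairs, counterexample processing) can be sketched compactly. The code below is my own toy version: a bounded-exhaustive word search stands in for a genuine equivalence oracle, so the learned model is only guaranteed correct up to the given word length.

```python
from itertools import product

def lstar(alphabet, mem, max_len=6):
    """Toy L*: `mem` is the membership oracle; words up to `max_len`
    are checked exhaustively in place of an equivalence oracle."""
    S, E = [""], [""]                       # access prefixes and test suffixes

    def row(s):
        return tuple(mem(s + e) for e in E)

    def fix_table():
        while True:
            rows = {row(s) for s in S}
            # closedness: every one-letter extension must match a row of S
            unclosed = next((s + a for s in S for a in alphabet
                             if row(s + a) not in rows), None)
            if unclosed is not None:
                S.append(unclosed)
                continue
            # consistency: equal rows must stay equal after any letter
            incons = next((a + e for s1 in S for s2 in S if row(s1) == row(s2)
                           for a in alphabet for e in E
                           if mem(s1 + a + e) != mem(s2 + a + e)), None)
            if incons is not None:
                E.append(incons)
                continue
            return

    while True:
        fix_table()
        rep = {}                            # one representative prefix per row
        for s in S:
            rep.setdefault(row(s), s)
        delta = {(r, a): row(rep[r] + a) for r in rep for a in alphabet}

        def accepts(word):
            q = row("")
            for a in word:
                q = delta[(q, a)]
            return q[0]                     # E[0] == "" by construction

        words = ("".join(w) for n in range(max_len + 1)
                 for w in product(alphabet, repeat=n))
        cex = next((w for w in words if accepts(w) != mem(w)), None)
        if cex is None:
            return accepts
        S.extend(cex[:i] for i in range(1, len(cex) + 1)
                 if cex[:i] not in S)       # add all prefixes of the cex
```

The nominal extension described in the talk replaces the finiteness assumptions behind `rows` and `alphabet` with orbit-finiteness; the loop itself stays essentially the same.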

Markus N. Rabe

Many of the challenging problems in verification and synthesis are naturally phrased in terms of quantified formulas.  Our ability to solve these problems, however, is still unsatisfactory. Even when we take a look at the most basic logical theory with quantification, quantified Boolean formulas (QBF), the most effective approaches are CEGAR-style algorithms. But CEGAR is a generic algorithmic scheme, which means that we are not yet able to exploit the structure of quantified problems.
In this talk I will discuss some of the fundamental limits of CEGAR-style algorithms and present a novel approach to solving quantified Boolean formulas (currently restricted to one quantifier alternation, i.e., 2QBF). The algorithm adds new constraints to the formula until the constraints describe a unique Skolem function – or until the absence of a Skolem function is detected. Backtracking is required if the absence of Skolem functions depends on the newly introduced constraints. The algorithm is best understood in comparison to search algorithms for SAT. We will discuss how to lift propagation, decisions, and conflicts from values (SAT) to Skolem functions (QBF). The algorithm significantly improves over the state of the art in terms of the number of solved instances, solving time, and the size of certificates.
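For contrast, the CEGAR baseline the talk argues against can be sketched for 2QBF in a few lines. This is my own toy version, in which brute-force enumeration plays the role of the SAT solver on both levels:

```python
from itertools import product

def cegar_2qbf(n_x, n_y, phi):
    """Decide the 2QBF sentence exists-X forall-Y. phi(X, Y) by CEGAR.
    Returns a witness assignment for X, or None if the sentence is false."""
    counterexamples = []               # universal assignments seen so far
    while True:
        # candidate step: find an X consistent with all counterexamples
        cand = next((x for x in product([False, True], repeat=n_x)
                     if all(phi(x, y) for y in counterexamples)), None)
        if cand is None:
            return None                # no candidate survives: false
        # verification step: look for a Y refuting the candidate
        cex = next((y for y in product([False, True], repeat=n_y)
                    if not phi(cand, y)), None)
        if cex is None:
            return cand                # candidate works for every Y: true
        counterexamples.append(cex)    # refine and try again
```

Each refinement eliminates at least the current candidate, so the loop terminates; the generic scheme never inspects the structure of phi, which is exactly the limitation the talk addresses.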

Radu Grigore

Although not the most popular feature of Java's generics,
  bounded wildcards have their uses.
On the negative side, bounded wildcards render type checking undecidable.
On the positive side, bounded wildcards let us encode any computation at compile time;
  so, Java's type checker can recognize any recursive language.

The first part of the talk will review how bounded wildcards
  are used in the implementation of Java's standard library.
The second part of the talk will review the proof that bounded wildcards
  render subtype checking undecidable.


Radu Grigore is
  a lecturer at University of Kent
  and an anagram of Argued Rigor.

Filip Nikšić

In systematic testing of concurrent systems, if the goal is to expose
bugs of “small depth,” one only needs to test a hitting family of
schedules. More precisely, in a system given as a partially ordered set
of events, we say a bug has depth $d$ if it is exposed by ordering $d$
specific events in a particular way, irrespective of how the other
events are ordered. A set of schedules is called a $d$-hitting family if
for every admissible ordering of $d$ events, there is a schedule in the
family that schedules these events in this order. We showed previously
that when the partial order forms a tree, we can explicitly construct
hitting families of size polynomial in the height of the tree, and
therefore polylogarithmic in the number of events when the tree is balanced.
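The definition of a $d$-hitting family can be made concrete with a brute-force checker. The sketch below is my own illustration of the definition; the contribution described above is constructing small families, not checking them:

```python
from itertools import combinations, permutations

def is_hitting_family(events, leq, schedules, d):
    """`schedules` is a d-hitting family iff every admissible ordering of
    d events is realized by some schedule.  `leq(a, b)` is the partial
    order; every schedule must be a linearization of all events."""
    def admissible(tup):
        # an ordering is admissible iff it never places an event before
        # one of its predecessors in the partial order
        return all(not leq(tup[j], tup[i])
                   for i in range(d) for j in range(i + 1, d))

    def realizes(sched, tup):
        pos = {e: i for i, e in enumerate(sched)}
        return all(pos[tup[i]] < pos[tup[i + 1]] for i in range(d - 1))

    return all(any(realizes(s, tup) for s in schedules)
               for subset in combinations(events, d)
               for tup in permutations(subset) if admissible(tup))
```

For an antichain of $n$ events and $d = 2$, for example, two schedules (any linear order and its reverse) already form a 2-hitting family, while the checker rejects either schedule alone.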

In the follow-up work, which I will present in this talk, we consider an
extended setting where an event can be executed correctly or with a
fault. A fault does not necessarily entail a bug, as the developer may
have included fault-handling mechanisms. Faults merely mean that the
space of possible schedules is now much larger. Our preliminary results
show that even with faults, there are hitting families of size
polylogarithmic in the number of events.

In the talk I will also present COFIT—Concurrency and Fault Injection
Testing framework for .NET applications, which incorporates
constructions of hitting families.

Fabio Mogavero

Parity games are two-player infinite-duration games on numerically labeled
graphs, playing a crucial role in various fields of theoretical computer
science. Finding efficient algorithms to solve these games in practice is widely
acknowledged as a core problem in formal verification, as it leads to
efficient solutions of the model-checking and satisfiability problems for
expressive temporal logics, e.g., the modal µ-calculus. This problem also has an
intriguing status from a complexity-theoretic viewpoint, since it belongs to the
class UPTime ∩ CoUPTime, and whether it can be solved in polynomial time is
still an open question. The solution of a parity game can be reduced to the problem
of identifying sets of positions of the game, called dominions, in each of which
a player can force a win by remaining in the set forever. In this talk, we
describe a novel technique to compute dominions, based on the idea of promoting
positions to higher priorities during the search for winning regions. The
proposed approaches have nice computational properties, exhibiting the best
space complexity among the currently known solutions. Experimental results on
both random games and benchmark families show that the technique is also very
effective in practice.
The talk is based on joint work with Massimo Benerecetti and Daniele Dell’Erba
of the Università degli Studi di Napoli Federico II.
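For background, the classical recursive algorithm of Zielonka, against which newer techniques such as priority promotion are typically compared, can be sketched compactly; its key building block, the attractor, also underlies the dominion computations mentioned above. A minimal sketch of my own, encoding a game as a map from node to (owner, priority, successors) and assuming every node has a successor:

```python
def attractor(game, target, p):
    """Nodes from which player p can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v, (owner, _, succs) in game.items():
            if v in attr:
                continue
            if (owner == p and any(s in attr for s in succs)) or \
               (owner != p and all(s in attr for s in succs)):
                attr.add(v)
                changed = True
    return attr

def restrict(game, keep):
    return {v: (o, pr, [s for s in succs if s in keep])
            for v, (o, pr, succs) in game.items() if v in keep}

def solve(game):
    """Zielonka's recursive algorithm: returns [winning region of player 0,
    winning region of player 1]."""
    if not game:
        return [set(), set()]
    m = max(pr for (_, pr, _) in game.values())
    p = m % 2                                  # player favored by priority m
    A = attractor(game, {v for v, (_, pr, _) in game.items() if pr == m}, p)
    W = solve(restrict(game, set(game) - A))
    if not W[1 - p]:
        W[p] |= A                              # p wins the whole game
        return W
    B = attractor(game, W[1 - p], 1 - p)       # attractor of an opponent dominion
    W2 = solve(restrict(game, set(game) - B))
    W2[1 - p] |= B
    return W2
```

The worst-case behavior of this recursion is exponential; the space-efficient dominion search via priority promotion described above is one way to do better in practice.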

Mitra Tabaei

Abstract:
Communication and synchronization are two main challenges in
designing and implementing concurrent programs, and consequently the main sources of
concurrency bugs. Concurrency bugs may lead to unintended program behavior as the
result of a particular ordering of actions in different threads at runtime.
Communication mechanisms for concurrent programs are generally based on either shared
memory or message passing. With shared memory, concurrency bugs may manifest
at runtime as an unexpected order of accesses to shared data. Similarly, with
message passing, an unexpected order of sent and received messages
may result in a failure such as a deadlock.
We present a general framework for understanding the cause of
failure in concurrent programs by isolating from concurrent traces the
problematic or unexpected order of events which may cause a failure.
In devising our dynamic techniques for bug explanation,
we follow two different approaches, namely anomaly detection and slicing.
Our anomaly detection approach is based on statistical analysis and pattern mining.
In a completely different approach, we use a proof-based technique to construct
semantics-aware slices from failing traces.
We evaluate the efficiency and effectiveness of our proposed techniques
on benchmarks covering a broad range of real-world concurrency bugs. Moreover, we
compare the strengths and limitations of the anomaly detection technique with
the proof-based slicing technique.