Parity games are two-player infinite-duration games played on numerically labeled graphs, and they play a crucial role in various fields of theoretical computer science. Finding efficient algorithms to solve these games in practice is widely acknowledged as a core problem in formal verification, as it leads to efficient solutions of the model-checking and satisfiability problems for expressive temporal logics such as the modal µ-calculus. The problem also has an intriguing status from a complexity-theoretic viewpoint: it belongs to the class UPTime ∩ CoUPTime, and whether it can be solved in polynomial time is still open. Solving a parity game can be reduced to the problem of identifying sets of positions of the game, called dominions, in each of which a player can force a win by remaining in the set forever. In this talk, we describe a novel technique to compute dominions, based on the idea of promoting positions to higher priorities during the search for winning regions. The proposed approaches have nice computational properties, exhibiting the best space complexity among the currently known solutions. Experimental results on both random games and benchmark families show that the technique is also very effective in practice.
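To ground the notion of forcing the play into a set, the sketch below (hypothetical Python, not code from the talk) computes the attractor, the basic backward-fixpoint operator on which priority promotion, like most parity game algorithms, is built:

```python
def attractor(owner, edges, player, target):
    """Positions from which `player` can force the play into `target`
    in finitely many steps (the classic backward fixpoint)."""
    preds = {v: set() for v in edges}
    for u, succs in edges.items():
        for v in succs:
            preds[v].add(u)
    attr = set(target)
    out = {v: len(edges[v]) for v in edges}  # escape edges of opponent nodes
    frontier = list(attr)
    while frontier:
        v = frontier.pop()
        for u in preds[v]:
            if u in attr:
                continue
            if owner[u] == player:
                attr.add(u)            # player can move into the attractor
                frontier.append(u)
            else:
                out[u] -= 1
                if out[u] == 0:        # opponent cannot avoid the attractor
                    attr.add(u)
                    frontier.append(u)
    return attr

owner = {1: 0, 2: 1, 3: 0}             # toy game: who owns each position
edges = {1: [2], 2: [3], 3: [3]}
print(sorted(attractor(owner, edges, player=0, target={3})))  # [1, 2, 3]
```

Dominion search repeatedly combines such attractors with priority information; the promotion step of the talk's technique operates on top of this primitive.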

The talk is based on joint work with Massimo Benerecetti and Daniele Dell'Erba of the Università degli Studi di Napoli Federico II.

Abstract:

Communication and synchronization are two main challenges in designing and implementing concurrent programs, and consequently the main sources of concurrency bugs. Concurrency bugs may lead to unintended program behavior as the result of a particular ordering of actions in different threads at runtime. Communication mechanisms for concurrent programs are generally based on either shared memory or message passing. With shared memory, concurrency bugs may manifest at runtime as an unexpected order of accesses to shared data. Similarly, with message passing, an unexpected order of sends and receives may result in a failure such as a deadlock.
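As a minimal illustration (hypothetical code, not from the talk), the two event schedules below replay the same two increment transactions on a shared counter; only the event ordering differs, yet one schedule loses an update:

```python
def run(schedule):
    """Deterministically replay a schedule of (thread, action) events
    on a shared counter; each thread reads x, then writes back x + 1."""
    shared = {"x": 0}
    local = {}
    for thread, action in schedule:
        if action == "read":
            local[thread] = shared["x"]
        else:                              # "write": store local value + 1
            shared["x"] = local[thread] + 1
    return shared["x"]

serial      = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
interleaved = [("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")]
print(run(serial))       # 2: both increments take effect
print(run(interleaved))  # 1: B read before A wrote, so A's update is lost
```

Isolating the pair of events whose order flipped (B's read versus A's write) is exactly the kind of explanation the framework below aims to produce.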

We present a general framework for understanding the cause of failure in concurrent programs by isolating, from concurrent traces, the problematic or unexpected orderings of events that may cause a failure. In devising our dynamic techniques for bug explanation, we follow two different approaches: anomaly detection and slicing. Our anomaly detection approach is based on statistical analysis and pattern mining. In a completely different approach, we use a proof-based technique to construct semantics-aware slices from failing traces. We evaluate the efficiency and effectiveness of the proposed techniques on benchmarks covering a broad range of real-world concurrency bugs, and we compare the strengths and limitations of the anomaly detection technique with those of the proof-based slicing technique.

We consider the following problem: given a d×d matrix A and two polytopes P and Q, is there a natural number n such that A^n P intersects Q? We show that the problem can be decided in PSPACE for dimension at most three. As in previous works, decidability in higher dimensions is left open, since the problem is known to be at least as hard as long-standing number-theoretic open problems.
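To make the statement concrete, here is a hypothetical bounded search in dimension two; it samples P by finitely many points and caps n, so it illustrates the question but decides nothing in general:

```python
def mat_vec(A, v):
    """Multiply matrix A by vector v (plain lists, no dependencies)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def reaches(A, points, lo, hi, max_n=1000):
    """Bounded search: does A^n p land in the box [lo, hi] for some
    sampled point p of P and some n <= max_n?  Illustration only:
    sampling P and capping n cannot decide the general problem."""
    pts = [list(map(float, p)) for p in points]
    for n in range(max_n + 1):
        for p in pts:
            if all(l <= x <= h for x, l, h in zip(p, lo, hi)):
                return n
        pts = [mat_vec(A, p) for p in pts]
    return None

A = [[0, -1], [1, 0]]                                 # rotation by 90 degrees
print(reaches(A, [[2, 0]], [-0.5, 1.5], [0.5, 2.5]))  # 1: A(2, 0) = (0, 2)
```

The PSPACE result of the talk replaces such enumeration with symbolic reasoning over algebraic numbers and first-order formulas over the reals.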

The talk is self-contained and assumes no background from the audience. As such, it will include short introductions to various mathematical tools, such as the representation of algebraic numbers, the theory of real-closed fields, and quantifier elimination.

Abstract:

Scheduling tasks on clusters in large-scale datacenters is challenging: thousands of tasks must be cleverly and quickly placed to achieve good application-level performance and high resource utilization. Centralized datacenter schedulers can make high-quality placement decisions, but today these high-quality placements come at the cost of high latency at scale, which degrades response time for interactive tasks and reduces resource utilization.

In this talk I present Firmament, a centralized scheduler that scales to over ten thousand machines, even though it performs a computationally expensive min-cost max-flow (MCMF) optimization that continuously reschedules all tasks. To achieve this, Firmament automatically chooses between different MCMF algorithms, solves the optimization problem incrementally when possible, and applies problem-specific optimizations.

Experiments with a Google workload trace from a 12,500-machine cluster show that Firmament places tasks in hundreds of milliseconds, and that it improves placement latency by 20x over Quincy, a prior centralized scheduler using the same MCMF optimization. Moreover, even though Firmament is centralized, it matches the placement latency of distributed schedulers for workloads of short tasks. Finally, Firmament exceeds the placement quality of four widely used centralized and distributed schedulers on a real-world cluster, improving batch task response time by 6x.

Reference:

I. Gog, M. Schwarzkopf, A. Gleave, R. N. M. Watson, S. Hand. “Firmament: Fast, Centralized Cluster Scheduling at Scale”. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), Savannah, GA, USA, November 2016, pp. 99–115.
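To make the underlying optimization concrete: Firmament solves a min-cost max-flow problem; the hypothetical sketch below brute-forces the same objective (cheapest capacity-respecting placement) on a tiny instance, with invented tasks, machines, and costs:

```python
from itertools import product

tasks = ["t1", "t2", "t3"]
slots = {"m1": 2, "m2": 1}                   # machine slot capacities
cost = {("t1", "m1"): 1, ("t1", "m2"): 4,    # hypothetical placement costs
        ("t2", "m1"): 3, ("t2", "m2"): 1,    # (e.g., data-locality penalties)
        ("t3", "m1"): 2, ("t3", "m2"): 5}

best = None
for assignment in product(slots, repeat=len(tasks)):
    # skip assignments that overload a machine
    if any(assignment.count(m) > slots[m] for m in slots):
        continue
    total = sum(cost[t, m] for t, m in zip(tasks, assignment))
    if best is None or total < best[0]:
        best = (total, dict(zip(tasks, assignment)))

print(best)  # (4, {'t1': 'm1', 't2': 'm2', 't3': 'm1'})
```

At datacenter scale this search space is astronomically large, which is why Firmament encodes it as a flow network and relies on fast (and incremental) MCMF algorithms instead.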

**[Software Testing] Hitting families of schedules**

We consider the following basic task in the testing of concurrent systems. The input to the task is a partial order of events, which models actions performed on or by the system and specifies ordering constraints between them. The task is to determine whether some scheduling of these events can result in a bug. The number of schedules to be explored can, in general, be exponential.

Empirically, many bugs in concurrent programs have been observed to have small bug depth; that is, these bugs are exposed by every schedule that orders some d specific events in a particular way, irrespective of how the other events are ordered, and d is small compared to the total number of events. To find all bugs of depth d, one only needs to test a d-hitting family of schedules: we call a set of schedules a d-hitting family if, for each set of d events and each allowed ordering of these events, there is some schedule in the family that executes these events in this ordering. The size of a d-hitting family may be much smaller than the number of all possible schedules, and a natural question is whether one can find d-hitting families of schedules of small size.

In general, finding the size of optimal d-hitting families is hard, even for d = 2. We show, however, that when the partial order is a tree, one can explicitly construct d-hitting families of schedules of small size. When the tree is balanced, our constructions are polylogarithmic in the number of events.
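As a hypothetical illustration of the definition (not a construction from the talk): for n pairwise-independent events, i.e. an empty partial order, just two schedules form a 2-hitting family, since every ordered pair of events appears in one of them:

```python
from itertools import permutations

def is_d_hitting(family, events, d):
    """Check the definition: every ordering of every d events must
    occur as a subsequence of some schedule in the family."""
    for tup in permutations(events, d):
        if not any(all(s.index(a) < s.index(b)
                       for a, b in zip(tup, tup[1:]))
                   for s in family):
            return False
    return True

events = list(range(8))
family = [events, list(reversed(events))]   # forward + reverse order
print(is_d_hitting(family, events, 2))      # True: size 2 suffices for d = 2
print(is_d_hitting(family, events, 3))      # False: depth 3 needs more schedules
```

Already for d = 3 the forward/reverse pair misses orderings such as (0, 2, 1), which is why the size of small hitting families grows with d and with the shape of the partial order.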

Correctness Witnesses: Exchanging Verification Results between Verifiers

Dirk Beyer

Paper: http://dx.doi.org/10.1145/2950290.2950351

Proc. SIGSOFT FSE 2016.

Standard verification tools provide a counterexample to witness a specification violation, and for some years now such a witness can be validated by an independent validator using an exchangeable witness format. This way, information about the violation can be shared across verification tools, and the user can use standard tools to visualize and explore witnesses.

This technique is not yet established for the correctness case, where a program fulfills a specification. Even for simple programs, it is often difficult for users to comprehend why a given program is correct, and there is no way to independently check the verification result.

We close this gap by complementing our earlier work on violation witnesses with correctness witnesses. While we use an extension of the established common exchange format for violation witnesses to represent correctness witnesses, the techniques for producing and validating correctness witnesses are different. The overall goal of making proofs available to engineers is probably as old as programming itself, and proof-carrying code was proposed two decades ago; our goal is to make it practical. We consider witnesses as first-class exchangeable objects, stored independently from the source code and checked independently from the verifier that produced them, respecting the important principle of separation of concerns.

At any time, the invariants from the correctness witness can be used to reconstruct a correctness proof and establish trust. We extended two state-of-the-art verifiers, CPAchecker and Ultimate Automizer, to produce and validate witnesses, and report that the approach is promising on a large set of verification tasks.
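A hypothetical toy example of the key ingredient: for the loop below, a correctness witness would essentially carry the loop-head invariant, which a validator can re-establish inductively to reconstruct the proof (the program and invariant are invented for illustration):

```python
# Toy verification task: the final assertion is the specification.
i = total = 0
while i < 10:
    # The invariant a correctness witness would attach at this loop head;
    # a validator re-proves it holds initially and is preserved.
    assert total == i * (i + 1) // 2 and 0 <= i <= 10
    i += 1
    total += i
assert total == 55  # the property the original verifier proved
print(total)
```

The witness stores only the invariant annotations, not the proof itself, so different validators can rebuild the proof with their own techniques.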

Abstract:

We consider data automata in the form of finite automata equipped with variables (counters or registers) ranging over infinite data domains. A trace of such a data automaton is an alternating sequence of alphabet symbols and values taken by the counters during an execution of the automaton. We address the problem of checking inclusion between the sets of traces (data languages) recognized by such automata. Since the problem is undecidable in general, we give a semi-algorithm based on abstraction refinement, which is proved to be sound and complete modulo termination; consequently, the procedure is not guaranteed to terminate. We have implemented our technique in a prototype tool and show promising results on several non-trivial examples.
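To fix intuitions, here is a hypothetical one-counter data automaton and the trace it produces, alternating counter values and input symbols as described above:

```python
def trace_of(word):
    """A toy data automaton over one counter x:
    'a' increments x, any other symbol resets x to 0.
    The trace alternates counter values and alphabet symbols."""
    x = 0
    trace = [x]                     # initial counter value
    for sym in word:
        x = x + 1 if sym == "a" else 0
        trace += [sym, x]
    return trace

print(trace_of("aab"))  # [0, 'a', 1, 'a', 2, 'b', 0]
```

Trace inclusion asks whether every such value-annotated sequence of one automaton is also a trace of another, which is what makes the problem harder than classical language inclusion.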

Abstract:

Unsatisfiability proofs in the DRAT format have become the de facto standard for increasing the reliability of contemporary SAT solvers. We consider the problem of generating proofs for the XOR reasoning component in SAT solvers and propose two methods: direct translation transforms every XOR constraint addition inference into a DRAT proof, whereas T-translation avoids the exponential blow-up of direct translation by using fresh variables. T-translation produces, from Gaussian elimination records, DRAT proofs that are polynomial in the size of the input CNF formula. Experiments show that a combination of both approaches with a simple prediction method outperforms the BDD-based method.
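The exponential blow-up of direct translation comes from the CNF encoding of XOR constraints itself, sketched below with DIMACS-style integer literals (this shows only the underlying clause encoding, not the DRAT bookkeeping of either method):

```python
from itertools import product

def xor_to_cnf(variables, parity):
    """Encode  v1 xor ... xor vk = parity  as CNF: one blocking clause
    per falsifying assignment, i.e. 2^(k-1) clauses for k variables,
    which is the exponential blow-up that T-translation avoids."""
    clauses = []
    for bits in product([0, 1], repeat=len(variables)):
        if sum(bits) % 2 != parity:             # assignment violates the XOR
            clauses.append([v if b == 0 else -v
                            for v, b in zip(variables, bits)])
    return clauses

print(xor_to_cnf([1, 2], 1))           # [[1, 2], [-1, -2]]
print(len(xor_to_cnf([1, 2, 3], 0)))   # 4 clauses for 3 variables
```

T-translation sidesteps this growth by chaining binary XORs through fresh variables, keeping each introduced constraint small.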

Abstract:

Security vulnerabilities plague modern systems because writing secure systems code is hard. Promising approaches can retrofit security automatically via runtime checks that implement the desired security policy; these checks guard critical operations, like memory accesses. Alas, the induced slowdown usually exceeds by a wide margin what system users are willing to tolerate in production, so these tools are hardly ever used. As a result, the insecurity of real-world systems persists.

We present an approach in which developers/operators can specify what level of overhead they find acceptable for a given workload (e.g., 5%); our proposed tool ASAP then automatically instruments the program to maximize its security while staying within the specified “overhead budget.” Two insights make this approach effective: most overhead in existing tools is due to only a few “hot” checks, whereas the checks most useful to security are typically “cold” and cheap.

We evaluate ASAP on programs from the Phoronix and SPEC benchmark suites. It can precisely select the best points in the security-performance spectrum. Moreover, we analyzed existing bugs and security vulnerabilities in RIPE, OpenSSL, and the Python interpreter, and found that the protection level offered by the ASAP approach is sufficient to protect against all of them.

Joint work with Jonas Wagner, Volodymyr Kuznetsov, and George Candea (EPFL), presented at IEEE S&P (Oakland) 2015.
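The budget idea can be caricatured as a greedy selection over measured per-check overheads (a hypothetical sketch; ASAP's actual profiling and cost model are more sophisticated):

```python
def select_checks(check_cost, budget):
    """Keep cheap 'cold' checks first and drop expensive 'hot' ones
    once the cumulative overhead would exceed the budget fraction."""
    kept, spent = [], 0.0
    for name, cost in sorted(check_cost.items(), key=lambda kv: kv[1]):
        if spent + cost <= budget:
            kept.append(name)
            spent += cost
    return kept, spent

# invented overhead fractions measured per check
checks = {"bounds_cold_a": 0.001, "bounds_cold_b": 0.002, "hot_loop": 0.080}
print(select_checks(checks, budget=0.05))
# (['bounds_cold_a', 'bounds_cold_b'], 0.003)
```

The two insights from the abstract show up directly: removing the single hot check recovers almost all the overhead, while the cheap cold checks, which guard the code most likely to harbor bugs, all stay.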

Abstract:

Difference constraints have been used for termination analysis in the literature, where they denote relational inequalities of the form x ≤ y + c and describe that the value of x in the current state is at most the value of y in the previous state plus some integer constant c. We believe that difference constraints are also a good choice for complexity and resource bound analysis, because the complexity of imperative programs typically arises from counter increments and resets, which can be modeled naturally by difference constraints. In this article we propose a bound analysis based on difference constraints. We make the following contributions:

(1) Our analysis handles bound analysis problems of high practical relevance which current approaches cannot handle: We extend the range of bound analysis to a class of challenging but natural loop iteration patterns which typically appear in parsing and string-matching routines.

(2) We advocate the idea of using bound analysis to infer invariants: our algorithm, which is proven sound, obtains invariants through bound analysis, and the inferred invariants are in turn used for obtaining bounds. Our bound analysis therefore does not rely on external techniques for invariant generation.

(3) We demonstrate that difference constraints are a suitable abstract program model for automatic complexity and resource bound analysis: We provide efficient abstraction techniques for obtaining difference constraint programs from imperative code.

(4) We report on a thorough experimental comparison of state-of-the-art bound analysis tools.
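To illustrate the loop pattern the contributions allude to, here is a hypothetical parsing-style loop whose nested structure is nonetheless linear-time; difference constraints such as j' ≤ j + 1 (increment) and i' ≤ j (i is "reset" to the inner counter) capture exactly why:

```python
def count_runs_ops(s):
    """Count inner-loop iterations of a typical string-matching loop:
    the inner counter j only ever advances, and i is reset to j,
    so total work is O(len(s)) despite the nesting."""
    i = ops = 0
    n = len(s)
    while i < n:                        # difference constraint: i' <= j
        j = i
        while j < n and s[j] == s[i]:   # difference constraint: j' <= j + 1
            j += 1
            ops += 1
        i = j
    return ops

print(count_runs_ops("aaabbc"))  # 6 == len("aaabbc"): amortized linear
```

A naive bound analysis that treats the loops independently would report a quadratic bound; tracking the transfer of j into i via difference constraints recovers the linear one.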