Ras Bodik

To explain causality in biological systems, we want our models to be mechanistic, i.e., to reproduce the behavior by simulating the underlying signaling mechanisms. Because most mechanistic models are executable programs, we can in principle infer them from experimental data using program synthesis. Unfortunately, the data is typically too under-constrained to induce a model on its own. I will show how to phrase synthesis so that experimental data can be complemented with diverse prior information, including an expert’s assumptions, and how to compute experiments that would rule out some plausible models. I will draw lessons from two case studies: cell fate determination in C. elegans VPC cells and EGF signaling. I will conclude by outlining some open problems for the future of modeling, where inference will be made from a union of qualitatively different experiments.

Lenore D. Zuck

This talk will provide background on the automatic generation of invariants and other constructs used in the verification of
parameterized systems (for safety proofs, this method is sometimes referred to as “Invisible Invariants”). Two
“real-life” case studies will be described: one in which the theory sufficed to verify, and find a flaw (“bug”) in, a
hurricane prediction system, and one in which the theory had to be augmented by a compositional construct
to verify the correctness of a satellite system.

Pavithra Prabhakar

Hybrid systems refer to systems exhibiting mixed discrete-continuous behaviors and arise as a natural byproduct of the interaction of a network of embedded processors with physical systems. In this talk, we focus on the verification of stability of hybrid control systems. Stability is a fundamental property in control system design and captures the notion that small perturbations to the initial state or input to the system result in only small variations in the eventual behavior of the system. We present foundations for approximation based analysis of stability, and discuss an algorithmic approach for stability analysis based on predicate abstraction and counter-example guided abstraction refinement. In contrast to the well-known methods for automated verification of stability based on Lyapunov functions, which are deductive, our approach is algorithmic.
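
The perturbation property described above can be stated precisely; a standard formalization (Lyapunov stability of an equilibrium x* of dx/dt = f(x), given here only for illustration) is:

```latex
\forall \varepsilon > 0 \;\exists \delta > 0 :\quad
\|x(0) - x^*\| < \delta \;\Longrightarrow\; \|x(t) - x^*\| < \varepsilon
\quad \text{for all } t \ge 0 .
```

Asymptotic stability additionally requires that x(t) converge to x* as t goes to infinity.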

Nathan Wasser

Ivan Radicek

Providing feedback on programming assignments manually is a tedious, error-prone, and time-consuming task. In this paper, we motivate and address the problem of generating feedback on performance aspects in introductory programming assignments. We studied a large number of functionally correct student solutions to introductory programming assignments and observed: (1) There are different algorithmic strategies, with varying levels of efficiency, for solving a given problem. These different strategies merit different feedback. (2) The same algorithmic strategy can be implemented in countless different ways, which are not relevant for reporting feedback on the student program.

We propose a lightweight programming language extension that allows a teacher to define an algorithmic strategy by specifying certain key values that should occur during the execution of an implementation. We describe a dynamic-analysis-based approach to test whether a student's program matches a teacher's specification. Our experimental results illustrate the effectiveness of both our specification language and our dynamic analysis. On one of our benchmarks, consisting of 2316 functionally correct implementations of 3 programming problems, we identified 16 strategies that we were able to describe using our specification language (in 95 minutes, after inspecting 66, i.e., around 3%, of the implementations). Our dynamic analysis correctly matched each implementation with its corresponding specification, thereby automatically producing the intended feedback.
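
To make the approach concrete, here is a minimal sketch, not the paper's actual specification language: a teacher's strategy is modeled simply as the ordered sequence of key values that should occur, a student's run is traced (here via an explicit `observe` callback, a hypothetical stand-in for the paper's dynamic analysis), and matching is a subsequence test.

```python
# Minimal sketch (not the paper's actual language): a strategy is the sequence
# of key values that must appear, in order, during execution.

def trace_of(fn, *args):
    """Run a student function and collect the values it reports via observe()."""
    trace = []
    fn(trace.append, *args)          # the student program calls observe(value)
    return trace

def matches(spec, trace):
    """Check that the teacher's key values occur, in order, within the trace."""
    it = iter(trace)
    return all(any(v == s for v in it) for s in spec)

# Example: two strategies for summing 1..n.
def student_loop(observe, n):        # iterative strategy: running partial sums
    total = 0
    for i in range(1, n + 1):
        total += i
        observe(total)
    return total

def student_formula(observe, n):     # closed-form strategy: only the result
    result = n * (n + 1) // 2
    observe(result)
    return result

loop_spec = [1, 3, 6, 10]            # partial sums expected from the loop strategy
print(matches(loop_spec, trace_of(student_loop, 4)))      # True
print(matches(loop_spec, trace_of(student_formula, 4)))   # False
```

Both students compute the correct answer, but only the first matches the loop strategy's specification, so the two would receive different feedback.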


Johann A. Makowsky

Many courses on Sets and/or Logic teach the old narrative
of paradoxes, compactness, (in)completeness of logical systems, etc.
In this talk we examine what we think should be the topics of Sets and Logic,
so as to form a basis and toolbox for other courses such as automata
and formal languages, databases, verification, formal methods, and AI.

Martin Rinard

I present two techniques for eliminating software defects: Horizontal
Code Transfer (HCT) and Condition Synthesis/Compound Mutations (CSCM).

HCT works with multiple applications that process the same input file
format. It first finds a defect in one of the applications, then
locates code in one or more of the other applications that eliminates
the defect. It then extracts and transfers this code from the donor
application into the recipient application to eliminate the defect in
the recipient. Our implementation generates source-level patches, but
can extract and transfer code from stripped binary donor applications
with no need for donor source code. It can therefore support a wide
range of use cases.

CSCM works with a test suite that exposes a defect. It automatically
generates a search space of potential patches, then searches this
space to find patches that enable the application to successfully
process all test cases in the test suite. Novel techniques include
condition synthesis, which automatically synthesizes expressions that
patch incorrect conditions in if statements, and a set of compound
mutations that build on condition synthesis to generate a rich and
efficiently searchable space of candidate patches.
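
As a toy illustration of the condition-synthesis idea (hedged: the actual search space and patch generation are far richer), one can enumerate candidate conditions from a small grammar of comparisons and keep those under which the whole test suite passes:

```python
# Toy sketch of condition synthesis: enumerate candidate if-conditions over a
# program's variables and keep those that make every test in the suite pass.
import itertools
import operator

# Buggy absolute value: the if-condition should be x < 0, not something else.
def abs_with(cond):
    def f(x):
        return -x if cond(x) else x
    return f

tests = [(3, 3), (0, 0), (-2, 2), (1, 1)]   # (input, expected output) pairs

# Candidate space: comparisons of the variable against small constants.
ops = [operator.lt, operator.le, operator.gt, operator.ge]
consts = range(-2, 3)

patches = []
for op, c in itertools.product(ops, consts):
    f = abs_with(lambda x, op=op, c=c: op(x, c))
    if all(f(x) == want for x, want in tests):
        patches.append((op.__name__, c))

print(patches)   # conditions that repair the defect, e.g. ('lt', 0)
```

The search returns several semantically adequate conditions (e.g. `x < 0` and `x <= 0`, which agree on all tests); a real system must rank and validate candidates, which is part of what makes the compound-mutation space design interesting.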

Christian Scheideler

Many P2P systems have already been proposed in the literature, but only
very few of them are truly self-organizing in the sense that they can
recover from any state. While certain degenerate states might be very
rare, and may require major attacks to occur, it would nevertheless be
much safer to use these systems if we knew that they would be able to
recover from any bad event without human intervention. In theory, a
system that can recover from any initial state is called
self-stabilizing. While many self-stabilizing algorithms have already
been proposed for static distributed systems, not many constructions for
self-stabilizing dynamic distributed systems are known. In my
presentation, I will lay out a rigorous foundation for the design of
such systems, based on the insights we have gained from our various
self-stabilizing constructions for P2P systems.

Richard Mayr

Consider nondeterministic finite automata (NFA) 
and nondeterministic Büchi automata (NBA), which describe regular
languages and omega-regular languages, respectively.
Central problems for these are computationally hard (PSPACE-complete), e.g.,
checking language inclusion, equivalence and universality, 
as well as finding automata of minimal size for a given language.

In spite of the high worst-case complexity, recent algorithms and
software tools can solve many instances of nontrivial size.
Here we give an overview of these techniques. They include antichain techniques
exploiting logical subsumption, the construction of congruence bases,
and automata minimization methods based on transition pruning 
and state-space quotienting w.r.t. generalized simulation preorders.
In particular, multipebble simulations and the more practical
lookahead simulations can in polynomial time compute very good
under-approximations of the PSPACE-complete relations of 
(trace-)language inclusion.
A collection of current papers and tools is available at www.languageinclusion.org.
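
As an illustration of the antichain idea on the simplest of these problems, NFA universality, here is a small sketch (the dict-based automaton encoding is invented for this example): since S ⊆ T implies that every word rejected from macrostate T is also rejected from S, the subset-construction exploration need only keep ⊆-minimal macrostates.

```python
# Sketch of antichain-based NFA universality checking: explore determinized
# macrostates, but keep only subset-minimal ones (logical subsumption).
from collections import deque

def post(nfa, states, a):
    """Successor macrostate of 'states' under letter a."""
    return frozenset(q2 for q in states for q2 in nfa.get((q, a), ()))

def is_universal(nfa, init, final, alphabet):
    """True iff the NFA accepts every word over the alphabet."""
    start = frozenset(init)
    if not (start & final):                 # empty word rejected
        return False
    antichain = {start}                     # subset-minimal macrostates seen
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for a in alphabet:
            t = post(nfa, s, a)
            if not (t & final):             # rejecting macrostate reachable
                return False
            if any(u <= t for u in antichain):
                continue                    # t subsumed by a smaller set
            antichain = {u for u in antichain if not (t <= u)} | {t}
            queue.append(t)
    return True

# Example over {a, b}: a universal automaton vs. one accepting only a*.
univ = {(0, 'a'): {0}, (0, 'b'): {0}}
astar = {(0, 'a'): {0}}
print(is_universal(univ, {0}, {0}, 'ab'))    # True
print(is_universal(astar, {0}, {0}, 'ab'))   # False
```

The pruning step is what distinguishes the antichain method from plain determinization: subsumed macrostates are never stored or expanded, which is where the large practical savings come from.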

Michael Elberfeld

In the network orientation problem one is given a mixed graph,
consisting of directed and undirected edges, and a set of
source-target vertex pairs. The goal is to orient the undirected edges
so that a maximum number of pairs admit a directed path from the
source to the target. This NP-complete problem arises in the context
of analyzing physical networks of protein-protein and protein-DNA
interactions. While the latter are naturally directed from a
transcription factor to a gene, the direction of signal flow in
protein-protein interactions is often unknown or cannot be measured en
masse. One then tries to infer this information by using causality
data on pairs of genes such that the perturbation of one gene changes
the expression level of the other gene.
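
For intuition, here is a tiny brute-force sketch of the orientation problem (illustrative only: it is exponential in the number of undirected edges, whereas the work discussed in the talk uses integer linear programming and approximation algorithms):

```python
# Brute-force network orientation: try every orientation of the undirected
# edges and count how many (source, target) pairs get a directed path.
from itertools import product

def reachable(edges, s, t):
    """Directed reachability from s to t via depth-first search."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for x, y in edges:
            if x == u and y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def best_orientation(directed, undirected, pairs):
    """Maximize the number of pairs connected by a directed path."""
    best, best_edges = -1, None
    for choice in product((0, 1), repeat=len(undirected)):
        edges = list(directed) + [
            (u, v) if c == 0 else (v, u)
            for (u, v), c in zip(undirected, choice)
        ]
        score = sum(reachable(edges, s, t) for s, t in pairs)
        if score > best:
            best, best_edges = score, edges
    return best, best_edges

# Tiny mixed graph: one directed edge and two undirected ones.
directed = [('a', 'b')]
undirected = [('b', 'c'), ('c', 'd')]
pairs = [('a', 'c'), ('d', 'c')]       # both satisfied by orienting b->c, d->c
score, _ = best_orientation(directed, undirected, pairs)
print(score)                            # 2
```

In the biological setting, the directed edges play the role of protein-DNA interactions, the undirected edges the protein-protein interactions, and the pairs the cause-effect gene pairs from perturbation data.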

After introducing physical networks of protein interactions and the
related orientation problem, the talk reports on two studies related
to its solution: First, we have a look at optimal solutions to the
problem that are based on an integer linear programming formulation;
this formulation is applied to orient protein-protein interactions in
yeast (this part of the talk is based on [1]). Second, we consider
approximate solutions and their algorithmic aspects (based on [2] and
[3]).

[1] Silverbush, D., Elberfeld, M., and Sharan, R. (2011).
Optimally orienting physical networks.
Journal of Computational Biology, 18(11):1437–1448.

[2] Elberfeld, M., Bafna, V., Gamzu, I., Medvedovsky, A., Segev, D.,
Silverbush, D., Zwick, U., and Sharan, R. (2011).
On the approximability of reachability-preserving network orientations.
Internet Mathematics, 7(4):209–232.

[3] Elberfeld, M., Segev, D., Davidson, C. R., Silverbush, D., and Sharan, R. (2013).
Approximation algorithms for orienting mixed graphs.
Theoretical Computer Science, 483:96–103.