Idit Keidar

We introduce transactions into libraries of concurrent data structures;
such transactions can be used to ensure atomicity of sequences of data
structure operations. By focusing on transactional access to a
well-defined set of data structure operations, we strike a balance
between the ease-of-programming of transactions and the efficiency of
custom-tailored data structures. We exemplify this concept by designing
and implementing a library supporting transactions on any number of
maps, sets (implemented as skiplists), and queues. Our library offers
efficient and scalable transactions, which are an order of magnitude
faster than state-of-the-art transactional memory toolkits. Moreover,
our approach treats stand-alone data structure operations (like put and
enqueue) as first-class citizens, and allows them to execute with
virtually no overhead, at the speed of the original data structure
library.
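
As a rough illustration of the programming model only (a single-threaded Python toy; the Transaction class and its methods below are invented for exposition and are not the library's actual API, nor its concurrency control):

# Toy sketch of composing operations on several structures into one atomic
# step. All names here are hypothetical; the real library provides
# transactional maps, skiplist-based sets, and queues with their own API.

class Transaction:
    """Buffers updates and applies them all at once on commit."""
    def __init__(self):
        self._log = []                       # deferred updates, applied on commit

    def defer(self, apply_update):
        self._log.append(apply_update)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            for apply_update in self._log:   # commit: publish every effect
                apply_update()
        # on an exception the log is discarded, i.e. the transaction aborts

inventory = {}                               # stand-in for a transactional map
orders = []                                  # stand-in for a transactional queue

with Transaction() as tx:
    tx.defer(lambda: inventory.update({"item-42": 3}))   # put
    tx.defer(lambda: orders.append("ship item-42"))      # enqueue
# Afterwards either both the put and the enqueue are visible, or neither is.

Stand-alone operations (a plain put or enqueue outside any transaction) simply bypass such machinery, which is what lets them run at the speed of the underlying data structure.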

Joint work with Alexander Spiegelman and Guy Golan-Gueta, to appear in
PLDI 2016.

Bio:
Idit Keidar is a Professor and the Associate Dean for Graduate Studies
at the Viterbi Department of Electrical Engineering at the Technion. She
also heads the Networked Software Systems Laboratory (NSSL). She
received her BSc (summa cum laude), MSc (summa cum laude) and PhD from
the Hebrew University of Jerusalem in 1992, 1994, and 1998, respectively. Prof.
Keidar is an expert in distributed computing and concurrency, having
developed numerous distributed and networked systems, including for
storage, multicast, group communication, group membership, distributed
transactions and atomic commit, cloud services, trusted access to data
in cloud storage, and concurrent programming. Her awards include the
Henry Taub Prize for Academic Excellence, the Yanai Award for Excellence
in Academic Education, the Muriel and David Jackow Award for Excellence
in Teaching, the David Dudi Ben-Aharon Research Award, the Allon
Fellowship, the Rothschild Yad-Hanadiv fellowship for postdoctoral
studies, and a Wolf Foundation Prize for Ph.D. students.

Erika Abraham

Satisfiability Checking aims to develop algorithms and tools for checking the satisfiability of existentially quantified logical formulas. For propositional logic, in the late ’90s impressive progress was made towards practically applicable solutions, resulting in powerful SAT solvers. Driven by this success, a new line of research started to enrich propositional SAT solving with solver modules for different theories. Nowadays, sophisticated SAT-modulo-theories (SMT) solvers are available for, e.g., equality logic with uninterpreted functions, bit-vector arithmetic, array theory, floating point arithmetic, and real and integer arithmetic. SAT and SMT solvers are now at the heart of many techniques for the analysis of programs and probabilistic, timed, hybrid and cyber-physical systems, for test-case generation, for solving large combinatorial problems and complex scheduling tasks, for product design optimisation, planning and controller synthesis, just to mention a few well-known areas.
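
The basic interplay between the SAT core and a theory module can be sketched as a "lazy" loop (a simplified Python sketch; sat_solve and theory_consistent are placeholder oracles, and real solvers integrate the two far more tightly, e.g. with incremental checks and theory propagation):

def smt_solve(clauses, atom_of_var, sat_solve, theory_consistent):
    """Lazy SMT loop: enumerate propositional models, check them in the theory.

    clauses           -- CNF over integer variables abstracting theory atoms
    atom_of_var       -- maps each variable to the theory atom it stands for
    sat_solve         -- returns a {var: bool} model, or None if unsatisfiable
    theory_consistent -- checks a set of (atom, polarity) literals in the theory
    """
    clauses = list(clauses)
    while True:
        model = sat_solve(clauses)
        if model is None:
            return "unsat"                        # no Boolean model remains
        literals = [(atom_of_var[v], val) for v, val in model.items()]
        if theory_consistent(literals):
            return "sat"                          # theory-consistent model found
        # Otherwise block this spurious model and ask the SAT solver again.
        clauses.append([-v if val else v for v, val in model.items()])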

In this talk we first give an introduction to the theoretical foundations of SMT solving, and mention some examples of the successful employment of SMT solvers in different software technologies. Focussing on non-linear arithmetic theories, we discuss the connection between the areas of Satisfiability Checking and Symbolic Computation, and illustrate the embedding of arithmetic decision procedures in SMT solvers using the example of the Cylindrical Algebraic Decomposition method. Besides SMT-solving approaches for non-linear real arithmetic, we also discuss opportunities for moving from the real to the integer domain.
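
As a concrete taste of non-linear real arithmetic in an SMT solver, a query might look as follows (this snippet uses the z3 solver's Python bindings purely as an example; the talk is not tied to that particular solver):

# A small non-linear real-arithmetic query: is there a point strictly inside
# the unit circle whose coordinates have a product greater than 1/4?
from z3 import Real, Solver, sat

x, y = Real('x'), Real('y')
s = Solver()
s.add(x * x + y * y < 1)          # non-linear constraint: inside the unit circle
s.add(x * y > 0.25)               # non-linear side condition
if s.check() == sat:
    print(s.model())              # e.g. x = y = 3/5 satisfies both constraints
else:
    print("unsatisfiable")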

Stanley Bak

Abstract:

Hybrid automata formally model the evolution of a continuous, time-evolving system that undergoes discrete changes, such as interactions with an embedded controller. Examining which states a hybrid automaton can reach, in order to verify formal properties, is challenging. The research community, however, is quite innovative, and continues to propose a large number of promising approaches.
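
To make the ingredients concrete, a two-mode thermostat is a standard textbook hybrid automaton (the encoding below is an illustrative Python data structure, not the input format of any particular tool):

# A classic toy hybrid automaton: a thermostat that heats or cools the
# temperature T. Each mode has continuous dynamics (a flow) and an invariant;
# discrete jumps switch modes when their guards hold.
thermostat = {
    "modes": {
        "heating": {"flow": "T' = 5",  "invariant": "T <= 22"},
        "cooling": {"flow": "T' = -2", "invariant": "T >= 18"},
    },
    "jumps": [
        {"from": "heating", "to": "cooling", "guard": "T >= 22"},
        {"from": "cooling", "to": "heating", "guard": "T <= 18"},
    ],
    "init": {"mode": "heating", "T": 20.0},
}
# Reachability analysis asks which (mode, T) pairs can occur when these flows
# and jumps are interleaved over time; answering this exactly is undecidable
# in general, hence the many approximate approaches.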

In order to verify high-level properties of hybrid automata, multiple techniques, or multiple iterations of a technique, may need to be used. In particular, the recent tools Hyst and Hypy enable this type of heterogeneous and multi-iteration analysis. Hyst is used to encode individual model transformations (tactics), whereas Hypy can guide the use of these techniques (strategies) by running reachability tools and interpreting the tools’ results. Finally, verifying general cyber-physical systems inevitably involves the software domain, which has its own domain-specific set of analysis techniques. Those reasoning approaches also need to be integrated in order to perform high-level formal reasoning.
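
The flavour of such a strategy can be sketched as follows (a hypothetical Python loop; run_transformation and run_reach_tool are invented placeholders, not the actual Hyst or Hypy interfaces):

def verify(model, run_transformation, run_reach_tool):
    """Hypothetical multi-iteration strategy: transform, run, interpret, retry."""
    error_bound = 1.0
    for _ in range(10):                            # a fixed refinement budget
        # Tactic: apply a model transformation at the current precision.
        transformed = run_transformation(model, error_bound)
        # Run a reachability tool on the transformed model and read its result.
        result = run_reach_tool(transformed)
        if not result["reach_set_intersects_unsafe"]:
            return "safe"                          # property holds at this precision
        if result["concrete_counterexample"] is not None:
            return "unsafe"                        # a real violation was found
        error_bound /= 2                           # strategy: retry more precisely
    return "unknown"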

Bio:

Stanley Bak is a computer science researcher in the field of formal verification of cyber-physical systems using hybrid automata models. His recent research aims to develop tools and techniques for automatic and scalable formal reasoning about complex system models.

Stanley Bak received a Bachelor’s degree in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007 (summa cum laude), and a Master’s degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2009. He completed his PhD at the Department of Computer Science at UIUC in 2013. He received the Founders Award of Excellence for his undergraduate research at RPI in 2004, the Debra and Ira Cohen Graduate Fellowship from UIUC twice, in 2008 and 2009, and was awarded the Science, Mathematics and Research for Transformation (SMART) Scholarship from 2009 to 2013. He is currently a research computer scientist at the United States Air Force Research Laboratory (AFRL), working in the Verification of Autonomous Systems Branch and the Aerospace Systems Directorate.

Joseph Sifakis

Slides [pdf]


Joseph Sifakis (RiSD laboratory, EPFL; Turing Award Winner 2007)
Rigorous System Design in BIP

Today, the development costs of high-confidence systems explode with
their size. We are far from solving the so-called software crisis. In
fact, the latter hides another, much bigger one: the system crisis.

In my talk I will discuss rigorous system design as a formal and
accountable process leading from requirements to
correct-by-construction implementations. I will also discuss current
limitations of the state of the art and advocate a coherent scientific
foundation for system design based on four principles: 1) separation
of concerns; 2) component-based construction; 3) semantic coherency;
4) correctness-by-construction. The combined application of these
principles allows the definition of a methodology clearly identifying
where human intervention and ingenuity are needed to resolve design
choices, as well as activities that can be supported by tools to
automate tedious and error-prone tasks.

The presented view of rigorous system design has been amply
implemented in the BIP (Behavior, Interaction, Priority) component
framework and substantiated by numerous experimental results showing
both its relevance and feasibility. I will conclude with a discussion
advocating a system-centric vision for computing, and a deeper
interaction and cross-fertilization with other more mature scientific
disciplines.
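
For readers unfamiliar with BIP, its three layers can be illustrated with a toy producer/consumer example (a Python sketch for exposition only; it is neither BIP syntax nor the BIP toolset):

# Behaviour: each component is a small labelled transition system over ports.
# Interaction: sets of ports that must synchronise together (a rendezvous).
# Priority: among simultaneously enabled interactions, which one may fire.
components = {
    "producer": {"state": "full",  "transitions": {("full", "put"): "empty"}},
    "consumer": {"state": "empty", "transitions": {("empty", "get"): "full"}},
}
interactions = [("put",), ("put", "get")]          # a lone action vs. a rendezvous
priority = {("put",): 0, ("put", "get"): 1}        # prefer the rendezvous

def enabled(interaction):
    # Every port of the interaction must label an enabled transition somewhere.
    return all(any((c["state"], port) in c["transitions"]
                   for c in components.values())
               for port in interaction)

candidates = [ia for ia in interactions if enabled(ia)]
if candidates:
    print("firing:", max(candidates, key=lambda ia: priority[ia]))   # ('put', 'get')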

Tom van Dijk

A well-known technique in formal methods to deal with large state spaces is symbolic model checking using decision diagrams. Decision diagrams conveniently and concisely store functions, especially Boolean functions that represent state spaces and transition relations, but also other functions, for example rational functions to encode transition rates. In the last decade, increases in the processing power of computers no longer come from increased processor speed, but mainly from increased parallelism. This has inspired research into scalable parallel techniques, for example using multi-core machines, but also using networks of computers or many-core GPGPUs. Efficient parallelism is far from trivial. The application of parallelization to symbolic model checking has been problematic and prone to significant overheads in the past, but recent advances in efficient fine-grained task-based parallelism and scalable lock-free data structures enable effective scalable parallel (multi-core) implementations of algorithms on decision diagrams. This talk discusses the implementation of Sylvan, a parallel decision diagram library implemented in C, and its application to symbolic state space exploration and symbolic bisimulation minimisation.
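
At its core, symbolic state space exploration is a fixed-point loop over sets of states; in a library such as Sylvan those sets and the image step are decision diagram operations executed as fine-grained parallel tasks, but the shape of the algorithm is simply the following (a Python sketch with plain sets standing in for decision diagrams):

def reachable(initial_states, image):
    """image(S) returns the successors of the state set S in one step."""
    reached = set(initial_states)
    frontier = set(initial_states)
    while frontier:                           # iterate until a fixed point
        frontier = image(frontier) - reached  # keep only genuinely new states
        reached |= frontier
    return reached

# Tiny example: a counter modulo 4 that may also reset to 0 at any time.
step = lambda states: {(s + 1) % 4 for s in states} | {0}
print(reachable({0}, step))                   # {0, 1, 2, 3}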

Tony Hoare

A theory of programming has many presentations, each suited for a particular purpose. (1) A model defines the details of the traces of execution of a program, in a form useful for discovering, locating and correcting program errors. (2) An algebra gives laws as equations and inequalities between programs. It defines the valid optimisations to which a program may be subjected. (3) A logic gives a method of establishing correctness of programs in terms of preconditions and postconditions. It is used in program analysers and checkers to find which parts of a program are free from certain generic errors. (4) An operational semantics describes the individual steps in the execution of the program in terms of the state before and after each step. It is used in the design of compilers and interpreters for a programming language.

A Unified Theory is one that establishes the mutual consistency of these four presentations. I give a simple example of how this is done, concentrating primarily on features of concurrency and distribution. The model is simpler than school geometry, the laws are simpler than school algebra, the logic uses Hoare triples, and the semantics uses Milner transitions. It is hoped that the general audience will have some recollection of some of these ideas.
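
One published algebraic formulation of such a unification, given here only as an illustration (it may differ in detail from the laws used in the talk), defines both the Hoare triple and the Milner transition in terms of sequential composition and refinement, and captures concurrency with an interchange law:

\[ \{p\}\; q\; \{r\} \;\iff\; p \,;\, q \;\sqsubseteq\; r \]
\[ p \xrightarrow{\;q\;} r \;\iff\; q \,;\, r \;\sqsubseteq\; p \]
\[ (p \parallel q)\,;\,(p' \parallel q') \;\sqsubseteq\; (p \,;\, p') \parallel (q \,;\, q') \]

Here \sqsubseteq is refinement: the first line reads "running q after a behaviour allowed by p stays within r", and the second reads "p may perform q and then behave as r".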

Tony Hoare has a first degree in classical languages, literature, history and philosophy. He has professional qualifications as a Statistician and as a Russian Interpreter. But his primary interest is in Philosophy, which he has sought to apply to computer programming languages throughout his fifty-year academic and industrial career.

Cezara Dragoi

Fault-tolerant distributed algorithms play an important role in many critical/high-availability applications. These algorithms are notoriously difficult to implement correctly, due to asynchronous communication and the occurrence of faults, such as the network dropping messages or computers crashing. We introduce PSYNC, a domain-specific language based on the Heard-Of model, which views asynchronous faulty systems as synchronous ones with an adversarial environment that simulates asynchrony and faults by dropping messages. We define a runtime system for PSYNC that efficiently executes on asynchronous networks. We formalize the relation between the runtime system and PSYNC in terms of observational refinement. The high-level lockstep abstraction introduced by PSYNC simplifies the design and implementation of fault-tolerant distributed algorithms and enables automated formal verification. We have implemented an embedding of PSYNC in the Scala programming language with a runtime system for asynchronous networks. We show the applicability of PSYNC by implementing several important fault-tolerant distributed algorithms and we compare the implementation of consensus algorithms in PSYNC against implementations in other languages in terms of code size, runtime efficiency, and verification.
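
To convey the flavour of the lockstep abstraction (a Python sketch, not PSYNC syntax; the round below is modelled loosely on the One-Third Rule consensus algorithm from the Heard-Of literature):

def round_update(value, heard_of, n):
    """One lockstep round of a single process: every process sends `value` to all;
    `heard_of` maps each sender whose message arrived to the value it sent
    (dropped messages simply do not appear); n is the total number of processes."""
    decision = None
    if 3 * len(heard_of) > 2 * n:                    # heard from more than 2n/3
        received = list(heard_of.values())
        # Adopt the smallest most frequently received value.
        value = min(received, key=lambda v: (-received.count(v), v))
        if 3 * received.count(value) > 2 * n:        # overwhelming agreement
            decision = value
    return value, decision

# The runtime system executes such rounds over an asynchronous network while
# preserving the illusion that all processes move in lockstep.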