**Date:** 17:00, Thursday, March 19, 2015

**Speaker:** Mahsa Shirmohammadi

**Venue:** IST Austria

Markov decision processes (MDPs) are finite-state probabilistic systems with both strategic and random choices, and are therefore well suited to model the interactions between a controller and its randomly responding environment. An MDP can be viewed as a 1½-player stochastic game played in rounds, where the controller chooses an action and the environment chooses a successor state according to a fixed probability distribution.

Recently, MDPs have also been viewed as generators of sequences of probability distributions over states, called distribution-outcomes. We introduce synchronizing conditions defined on distribution-outcomes, which intuitively require that some state (or group of states) tends to accumulate all the probability mass. We consider the following modes of synchronization: the always mode, where all distributions of the distribution-outcome must be synchronized; the eventually mode, where some distribution must be synchronized; the weakly mode, where infinitely many distributions must be synchronized; and the strongly mode, where all but finitely many distributions must be synchronized.
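As a purely illustrative sketch (not an example from the talk), the distribution-outcome of a small hand-made MDP under a memoryless strategy can be computed by repeatedly pushing the probability mass through the induced Markov chain; the MDP, strategy, and state names below are invented for illustration:

```python
# Toy MDP as a dict mapping (state, action) -> distribution over successors.
# All names here are hypothetical, chosen only to illustrate the definitions.
TOY_MDP = {
    ("s0", "a"): {"s1": 0.5, "s2": 0.5},
    ("s1", "a"): {"s1": 1.0},
    ("s2", "a"): {"s1": 1.0},
}
STRATEGY = {"s0": "a", "s1": "a", "s2": "a"}  # a memoryless strategy

def step(dist, mdp, strategy):
    """One step of the distribution-outcome: push each state's mass
    through the transition selected by the strategy."""
    nxt = {}
    for state, mass in dist.items():
        for succ, p in mdp[(state, strategy[state])].items():
            nxt[succ] = nxt.get(succ, 0.0) + mass * p
    return nxt

def distribution_outcome(init, mdp, strategy, horizon):
    """The sequence d_0, d_1, ..., d_horizon of distributions over states."""
    dists = [init]
    for _ in range(horizon):
        dists.append(step(dists[-1], mdp, strategy))
    return dists

# Eventually synchronizing (checked up to a finite horizon): some
# distribution in the outcome puts all its mass on the target state s1.
outcome = distribution_outcome({"s0": 1.0}, TOY_MDP, STRATEGY, horizon=3)
eventually_sync = any(abs(d.get("s1", 0.0) - 1.0) < 1e-9 for d in outcome)
```

In this toy example the distribution at step 2 puts all its mass on s1 and keeps it there, so the outcome is eventually (and in fact strongly and weakly) synchronizing with respect to {s1}, but not always synchronizing.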

For each synchronizing mode, we study three winning modes: sure, almost-sure, and limit-sure. In this talk, we present the various synchronization modes and winning modes, as well as an overview of the key properties used in their analysis.

We also give hints on synchronization in probabilistic automata (PAs), which are a special kind of MDP in which the controller is restricted to word-strategies; that is, it cannot observe the history of the system execution, only the number of choices made so far.
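To illustrate the restriction to word-strategies, the sketch below (again an invented toy PA, not an example from the talk) drives the distribution-outcome with a fixed word of actions, so the choice at step i depends only on i and never on the states visited:

```python
# Toy probabilistic automaton: (state, action) -> distribution over successors.
# State and action names are hypothetical, for illustration only.
PA = {
    ("q0", "a"): {"q1": 1.0},
    ("q0", "b"): {"q0": 1.0},
    ("q1", "a"): {"q0": 0.5, "q1": 0.5},
    ("q1", "b"): {"q1": 1.0},
}

def run_word(init, pa, word):
    """Distribution-outcome under a word-strategy: the action at step i
    is word[i], independent of the execution history."""
    dist = dict(init)
    outcome = [dist]
    for action in word:
        nxt = {}
        for state, mass in dist.items():
            for succ, p in pa[(state, action)].items():
                nxt[succ] = nxt.get(succ, 0.0) + mass * p
        dist = nxt
        outcome.append(dist)
    return outcome

outcome = run_word({"q0": 1.0}, PA, word="ab")
```

Under the word "ab", all the probability mass reaches q1 after the first step and the action "b" then keeps it there, whereas a blind controller playing "a" at every step would split the mass again.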