In a variety of applications involving adversarial behavior, there is a clear interest in deceiving an adversary about one's objectives or, alternatively, in making it difficult for the adversary to predict one's strategy for achieving those objectives. In this talk, I will outline recent work on formalizing the notions of deception and unpredictability within control systems. Namely, I will discuss an approach that encodes deception and deceptive strategies by introducing a belief space for the adversary, together with a belief-induced reward objective. Such a framework makes it possible to design optimal deceptive strategies within the setting of optimal control, where lack of knowledge about the adversary translates into a need for robust optimal control policies, or for policies based on partial observations. On the other hand, we relate the unpredictability of an agent to the total Shannon entropy of the paths the agent may take to reach its objective, and show that, in the context of Markov decision processes, an agent achieves maximal unpredictability by following a policy that maximizes the total entropy of the induced Markov chain. In parallel with the development of the theory of deception and unpredictability, I will illustrate the introduced notions in a variety of scenarios that naturally involve adversarial behavior, and show that the policies that are deceptive or maximally unpredictable in the sense of the theoretical definitions indeed also match the natural intuition behind these notions.
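As a rough numerical sketch of the entropy-based notion of unpredictability (the states, transition probabilities, and absorbing-goal structure below are hypothetical, chosen only for illustration): for a policy-induced absorbing Markov chain, the total Shannon entropy of its paths can be computed as the sum, over transient states, of the expected number of visits to each state weighted by the local entropy of that state's transition distribution.

```python
import numpy as np

def row_entropy(p):
    # Shannon entropy (bits) of a distribution; 0 * log 0 is taken as 0.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical 3-state Markov chain induced by some policy:
# states 0 and 1 are transient, state 2 is the absorbing goal.
P = np.array([
    [0.0, 0.5, 0.5],   # from state 0: split between state 1 and the goal
    [0.2, 0.0, 0.8],   # from state 1: may loop back to state 0
    [0.0, 0.0, 1.0],   # goal is absorbing
])

transient = [0, 1]
Q = P[np.ix_(transient, transient)]

# Expected number of visits to each transient state, starting in state 0:
# xi solves (I - Q)^T xi = e_0.
e0 = np.eye(len(transient))[0]
xi = np.linalg.solve((np.eye(len(transient)) - Q).T, e0)

# Total path entropy: expected visits weighted by local transition entropy.
H_total = sum(xi[i] * row_entropy(P[s]) for i, s in enumerate(transient))
print(round(H_total, 4))  # → 1.5122
```

A policy that makes the induced chain more "spread out" (e.g., moving the rows of `P` closer to uniform over available successors) increases this total entropy, which is the sense in which the talk's maximally unpredictable policy maximizes the entropy of the induced chain.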