Robust Verification of Stochastic Systems: Guarantees in the Presence of Uncertainty
Thom Badings
Promotors: prof. dr. Nils Jansen (RU) and prof. dr. Mariëlle I.A. Stoelinga (RU)
Radboud University
Date: 27 March 2025
Thesis: PDF will follow
Summary
Verifying that systems are safe and reliable is crucial. For example, we want to prove that an autonomous drone will safely reach its target, or that a manufacturing system will not break down. Classical algorithms for verifying such properties often rely on a precise mathematical model of the system, for example, in the form of a Markov chain or a Markov decision process (MDP). Such Markov models are probabilistic transition systems and appear in many areas, including control theory, artificial intelligence (AI), formal methods, and operations research.
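To make this concrete, the following sketch (illustrative only; the model, state names, and probabilities are hypothetical, not taken from the thesis) encodes a small MDP in Python and computes the maximal probability of reaching a target state by value iteration, the kind of quantity classical verification algorithms establish:

```python
# Illustrative MDP: state -> action -> list of (successor, probability).
# All names and numbers are hypothetical.
mdp = {
    "s0":   {"a": [("s1", 0.8), ("s2", 0.2)],
             "b": [("s2", 1.0)]},
    "s1":   {"a": [("goal", 0.9), ("s0", 0.1)]},
    "s2":   {"a": [("s2", 1.0)]},       # sink: the target is unreachable from here
    "goal": {"a": [("goal", 1.0)]},     # absorbing target state
}

def max_reach_prob(mdp, target, iters=1000):
    """Value iteration for the maximal probability of eventually reaching `target`."""
    value = {s: (1.0 if s == target else 0.0) for s in mdp}
    for _ in range(iters):
        for s in mdp:
            if s == target:
                continue
            # Best action: maximize the expected value over successor states.
            value[s] = max(sum(p * value[t] for t, p in succs)
                           for succs in mdp[s].values())
    return value

print(max_reach_prob(mdp, "goal")["s0"])  # ~0.783 under the optimal policy
```

Algorithms like this one assume that every transition probability (0.8, 0.2, ...) is known exactly; the thesis addresses what happens when they are not.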
However, as systems become more complex and increasingly incorporate cyber-physical and AI components, uncertainty about their behavior is inevitable. As a result, the transition probabilities in Markov models are subject to uncertainty, rendering many existing analysis algorithms inapplicable.
In this thesis, we fill this gap by developing novel verification methods for Markov models in the presence of uncertainty. To capture this uncertainty, we use parametric Markov models, where probabilities are described as functions over parameters. We study two perspectives on rendering verification methods robust against uncertainty: (1) set-bounded uncertainty, where only the set of possible parameter values is known and one can be robust in a worst-case sense, and (2) stochastic uncertainty, where the parameter values are described by a probability distribution and one can be robust in a probabilistic sense. By combining techniques from formal methods, AI, and control theory, our contributions span the following general problem settings:
- We develop robust abstraction techniques for solving control problems for Markov models with continuous state and action spaces and with set-bounded uncertain parameters. Based on the notion of probabilistic simulation relations, we show that such continuous control problems can be solved by analyzing only a finite-state abstraction, formalized as an MDP with sets of transition probabilities (see the robust value-iteration sketch after this list).
- We present novel and scalable verification techniques for parametric Markov models with a prior distribution over the parameters. Our approaches are sampling-based, require no assumptions on the parameter distribution, and provide probably approximately correct (PAC) guarantees on the verification results (see the sampling sketch after this list).
- We show that parametric models can be used to improve the sample efficiency of data-driven learning methods. We leverage tools from convex optimization to perform a sensitivity analysis, measuring sensitivity in terms of the partial derivatives of the polynomial function that describes a measure of interest (see the derivative sketch after this list).
- We study continuous-time Markov chains where the initial state must be inferred from a set of (possibly uncertain) state observations. This setting is particularly relevant in runtime monitoring. We compute upper and lower bounds on reachability probabilities, conditioned on these observations. Our approach is based on a robust abstraction into an MDP with intervals of transition probabilities.
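The first and fourth contributions both hinge on MDPs whose transition probabilities are only known to lie in sets, such as intervals. The robust value-iteration sketch below (illustrative, not the thesis implementation; names and intervals are hypothetical) analyzes a single-action interval Markov chain: in every step, an adversary resolves each interval to the worst case, so the result is a reachability probability that is guaranteed for all transition probabilities within the intervals:

```python
# Illustrative interval Markov chain: state -> list of (successor, lower, upper).
# We assume the intervals admit at least one probability distribution, i.e.,
# the lower bounds sum to at most 1 and the upper bounds to at least 1.
imc = {
    "s0":   [("s1", 0.6, 0.8), ("bad", 0.2, 0.4)],
    "s1":   [("goal", 0.7, 0.9), ("bad", 0.1, 0.3)],
    "bad":  [("bad", 1.0, 1.0)],
    "goal": [("goal", 1.0, 1.0)],
}

def worst_case_expectation(succs, value):
    """Minimize sum(p_t * value[t]) over distributions p with lower <= p_t <= upper."""
    # Start every successor at its lower bound, then greedily assign the
    # remaining probability mass to the successors with the lowest values.
    probs = {t: lo for t, lo, hi in succs}
    remaining = 1.0 - sum(probs.values())
    for t, lo, hi in sorted(succs, key=lambda s: value[s[0]]):
        extra = min(hi - lo, remaining)
        probs[t] += extra
        remaining -= extra
    return sum(p * value[t] for t, p in probs.items())

def robust_reach(imc, target, iters=1000):
    """A reachability probability that holds for all probabilities in the intervals."""
    value = {s: (1.0 if s == target else 0.0) for s in imc}
    for _ in range(iters):
        for s in imc:
            if s != target:
                value[s] = worst_case_expectation(imc[s], value)
    return value

print(robust_reach(imc, "goal")["s0"])  # 0.42: a sound lower bound
```

The greedy inner step solves the adversary's optimization over a single interval-constrained distribution; replacing the minimization by a maximization yields the corresponding upper bound.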
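For the second contribution, the parameters follow a probability distribution. The flavor of the sampling-based approach can be sketched as follows: draw parameter values, check the property on each instantiated model, and wrap the empirical satisfaction rate in a PAC interval. The sketch below is only an illustration: it uses a one-line hypothetical solution function in place of a model checker, and a standard Hoeffding bound rather than the guarantees developed in the thesis:

```python
import math
import random

# Hypothetical parametric model: the target is reached with probability
# f(p, q) = p + (1 - p) * q. In the real setting, a model checker would
# verify each instantiated Markov model instead of this one-liner.
def satisfies(p, q, threshold=0.8):
    return p + (1 - p) * q >= threshold

def pac_estimate(n_samples=10_000, delta=1e-3):
    """Estimate Pr[property holds] over random parameters, with a PAC bound."""
    random.seed(0)
    hits = sum(satisfies(random.uniform(0.5, 1.0), random.uniform(0.0, 1.0))
               for _ in range(n_samples))
    estimate = hits / n_samples
    # Hoeffding's inequality: with probability >= 1 - delta, the true
    # satisfaction probability lies within eps of the estimate.
    eps = math.sqrt(math.log(2 / delta) / (2 * n_samples))
    return estimate, eps

est, eps = pac_estimate()
print(f"Pr[property] in [{est - eps:.3f}, {est + eps:.3f}] with confidence 0.999")
```

Note that nothing here assumes a particular shape of the parameter distribution, only the ability to sample from it, which matches the assumption-free setting described above.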
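The third contribution quantifies sensitivity via partial derivatives of the function that maps parameter values to the measure of interest. The thesis computes these at scale with tools from convex optimization; the sketch below instead differentiates a small hypothetical solution function symbolically with sympy, just to show what the derivatives mean:

```python
import sympy as sp

# Hypothetical solution function of a parametric Markov chain: the probability
# of reaching the target, as a polynomial in the parameters p and q.
p, q = sp.symbols("p q")
f = p + (1 - p) * q

# Partial derivatives measure how sensitive the measure is to each parameter.
df_dp, df_dq = sp.diff(f, p), sp.diff(f, q)
print(df_dp, "|", df_dq)  # 1 - q | 1 - p

# At a concrete instantiation, the larger derivative identifies the parameter
# whose accurate estimation improves the measure the most.
point = {p: sp.Rational(7, 10), q: sp.Rational(1, 2)}
print(df_dp.subs(point), df_dq.subs(point))  # 1/2 vs. 3/10: p matters more here
```

Parameters with large partial derivatives are the ones where collecting more data pays off most, which is how parametric models can improve the sample efficiency of data-driven learning.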
In conclusion, we develop verification methods that reason over uncertainty without sacrificing guarantees. While dealing with uncertainty can be computationally expensive, providing such guarantees is crucial for designing safe and reliable systems with cyber-physical and AI components. The aim of this thesis is to contribute to a better understanding of how to deal with uncertainty in stochastic verification problems.