Verification of Program Parallelization

Saeed Darabi

promotor: prof. dr. Marieke Huisman (UT)
University of Twente
Date: 2 March 2018
Thesis: PDF


This thesis presents techniques to improve the reliability and functional correctness of parallel programs. These requirements are especially crucial in critical systems, where system failures endanger human lives, cause substantial economic damage, or lead to security breaches. Today's critical systems are expected to deliver ever more complex and computationally intensive functions. In many cases these cannot be achieved without exploiting the computational power of multi- and even many-core processors via parallel programming. The use of parallelization in critical systems is especially challenging: on the one hand, the non-deterministic nature of parallel programs makes them highly error-prone; on the other hand, high levels of reliability have to be guaranteed.

We tackle this challenge by proposing novel formal techniques for the verification of parallel programs. We focus on the verification of data race freedom and functional correctness, i.e., that a program behaves as expected. For this purpose, we use axiomatic reasoning techniques based on permission-based separation logic.

Among the different parallel programming paradigms, deterministic parallel programming expresses parallelization over a sequential program using high-level parallel programming constructs (e.g. parallel loops) or parallelization annotations; a parallelizing compiler then generates the low-level parallel program (e.g. a multithreaded program or a GPGPU kernel).

First, we present a verification technique to reason about loop parallelizations. We introduce the notion of an iteration contract, which specifies the memory locations read or written by each iteration of the loop. These specifications can be extended with extra annotations that capture data dependencies among the loop iterations. A correctly written iteration contract can be used to draw conclusions about the safety of a loop parallelization; it can also indicate where synchronization is needed in the parallel loop. Iteration contracts can be further extended to specify the functional behaviour of the loop, such that the functional correctness of the loop can be verified together with its parallelization safety.
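To give a flavour of the idea, the following sketch shows a loop with an iteration contract written as a comment. The annotation style is modelled on VerCors-like permission specifications, but the function name, the loop body, and the exact contract syntax here are invented for illustration:

```c
#include <assert.h>

/* Hypothetical example of a loop with an iteration contract.
   The contract (in the comment below) states that iteration i
   writes only a[i] and reads only b[i]; since the footprints of
   different iterations are disjoint, there are no loop-carried
   dependences and the loop can safely be parallelized. */
void add_one(int *a, const int *b, int n) {
  /*@
    requires Perm(a[i], write) ** Perm(b[i], read);
    ensures  Perm(a[i], write) ** a[i] == b[i] + 1;
  @*/
  for (int i = 0; i < n; i++) {
    a[i] = b[i] + 1;  /* each iteration touches only its own index */
  }
}
```

Because every iteration's permissions are disjoint, the per-iteration contracts compose into a specification for the loop as a whole, which is what makes the parallelization argument go through.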

Second, we propose a novel technique to reason about deterministic parallel programs. We first formally define the Parallel Programming Language (PPL), a simple core language that captures the main forms of deterministic parallel programs. This language distinguishes three kinds of basic blocks: parallel, vectorized and sequential blocks, which can be combined using three composition operators: sequential, parallel and fusion composition. We show that contracts for the basic blocks suffice to prove the correctness of a PPL program; moreover, when a PPL program is data race free, the functional correctness of the sequential program implies the correctness of the parallelized program. We formally prove the correctness of our approach. In addition, we define a widely used subset of OpenMP that can be encoded into our core language, thus effectively enabling the verification of OpenMP compiler directives, and we discuss automated tool support for this verification process.
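As a concrete (hypothetical) instance of such an OpenMP fragment, consider a parallel block composed sequentially with a vectorized block. The function and values below are invented for illustration; since both loops are iteration-independent, the program is data race free and behaves exactly like its sequential elision, which is the property the block-contract argument exploits:

```c
#include <assert.h>

/* Sketch of a program in a PPL-like OpenMP subset: an
   `omp parallel for` loop (a parallel block) sequentially
   composed with an `omp simd` loop (a vectorized block).
   Both loops are free of loop-carried dependences, so the
   parallel program computes the same result as the
   sequential program obtained by ignoring the pragmas. */
void scale_then_shift(double *a, const double *b, int n, double c) {
  #pragma omp parallel for  /* parallel block */
  for (int i = 0; i < n; i++) {
    a[i] = c * b[i];
  }
  #pragma omp simd          /* vectorized block */
  for (int i = 0; i < n; i++) {
    a[i] = a[i] + 1.0;
  }
}
```

Compiled without OpenMP support the pragmas are ignored and the sequential elision runs instead, producing the same result; this observable equivalence is exactly what data race freedom guarantees here.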

Third, we propose a specification and verification technique to reason about data race freedom and functional correctness of GPGPU kernels that use atomic operations as a synchronization mechanism. We exploit the notion of resource invariant from Concurrent Separation Logic to specify the behaviour of atomic operations. To capture the GPGPU memory model, we adapt this notion as follows: group resource invariants capture the behaviour of atomic operations on locations in local memory, which are accessible only to the threads of a single work group, while kernel resource invariants capture the behaviour of atomic operations on locations in global memory, which are accessible to all threads in all work groups. We show the soundness of our approach and demonstrate its application in our toolset.
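The following plain-C fragment mimics the kind of kernel such a resource invariant describes (a hedged sketch: a sequential loop stands in for the parallel GPU threads, the names are invented, and the invariant in the comment only paraphrases the sort of property a kernel resource invariant would state):

```c
#include <assert.h>
#include <stdatomic.h>

/* Each "thread" tid deposits contrib[tid] into a shared counter
   with an atomic add. A kernel resource invariant for `sum`
   would state, for example, that `sum` always equals the sum of
   the contributions deposited so far; once all threads have
   finished, this yields the functional postcondition that `sum`
   is the total. The sequential loop below merely stands in for
   the parallel GPU threads of a real kernel. */
int kernel_sum(const int *contrib, int nthreads) {
  atomic_int sum = 0;
  for (int tid = 0; tid < nthreads; tid++) {
    atomic_fetch_add(&sum, contrib[tid]);  /* the atomic operation */
  }
  return atomic_load(&sum);
}
```

The point of the invariant is that each atomic step may assume and must re-establish it, so no interleaving of the threads' atomic adds can break the stated relationship between `sum` and the deposited contributions.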

This thesis presents a set of verification techniques based on permission-based separation logic to reason about the data race freedom and functional correctness of program parallelizations. Our reasoning techniques address different forms of high-level and low-level parallelization. For high-level parallel programs, we first formalize the main features of deterministic parallel programming in PPL and discuss how PPL programs, and consequently real-world deterministic parallel programs (e.g. OpenMP programs), are verified. For low-level parallel programs, we specifically focus on reasoning about GPGPU kernels. Finally, we discuss how the presented verification techniques can be chained together to reason about the semantic equivalence of high-level parallel programs and the low-level parallel programs that a parallelizing compiler automatically generates from them, thus effectively enabling a holistic verification solution for such parallelization frameworks.