Spring Days on Cyber-Physical Systems

Hotel De Zeven Heuvelen, Groesbeek,
May 13-17, 2013

The IPA Spring Days are an annual multi-day event, dedicated to a specific theme of current interest to the research community of IPA. This year’s Spring Days are dedicated to Cyber-Physical Systems, matching the research school’s focus area of the same name for the period 2013-2018.

The programme of the CPS Spring Days was put together by Pieter Cuijpers (TU/e), Michel Reniers (TU/e) and Mariëlle Stoelinga (UT).


Registration

Registration closed on Friday April 19.

As always, IPA PhD students can attend for free. For others, details on pricing will be made available.

To make maximal use of the available capacity, we process applications on the following basis: registrations are treated “first come, first served”. All PhD students (IPA and non-IPA) have to share a room. Others may also be asked to share if we run out of rooms. We will accept participants until we run out of space at the hotel!


Programme

Monday 13 May is reserved for the IPA PhD workshop. The keynote speaker of the PhD workshop is Cynthia Kop, winner of the IPA Dissertation Award.

The programme for 13-17 May is available below, including abstracts of the talks.

MONDAY 13 MAY: PHD WORKSHOP

11:30-12:30 Registration
12:30-13:30 Lunch
14:00-14:30 Maarten Manders, TU/e
A Framework to Work with DSL Execution Traces
When people work with executable DSLs, they are interested in the behavior of the models that they define in the DSL. Ideally, when DSL users reason about the behavior of their models, they only have to think about DSL-related concepts. Moreover, for a DSL to be successful, its users must have access to powerful, DSL-specific tooling. It is the job of DSL developers to implement these requirements. In this talk we present a framework that DSL developers can use to implement powerful, DSL-specific tools that operate on traces obtained by executing DSL models.
14:30-15:00 Ruud Koot, UU
Type-based program analysis
Type-driven program analysis uses non-standard type systems to statically approximate properties of computer programs; these properties can be used to drive program transformations or to provide correctness guarantees beyond what a conventional type system is able to offer. Advanced type system features, such as subtyping, polymorphism, polymorphic recursion and higher-ranked polymorphism, allow one to increase the precision of an analysis and thereby better assist the programmer in writing software that executes faster and crashes less often. In this talk we will look at how these features can be used to develop an exception analysis, i.e. a program analysis that accurately, but safely, estimates the exceptions that may be raised at run-time, and we will argue that they are in fact necessary for doing so.
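In standard type-and-effect notation (illustrative only, and not necessarily the notation of the analysis presented in this talk), the function arrow is annotated with the set of exceptions a call may raise; for instance, taking the head of a list is conventionally well-typed yet may fail:

```latex
% Annotated arrow: applying f may raise any exception in the set E.
%   f : \tau_1 \xrightarrow{E} \tau_2
\[
  \mathit{head} : \forall\alpha.\ [\alpha]
    \xrightarrow{\{\mathsf{PatternMatchFail}\}} \alpha
\]
```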
15:00-15:15 Coffee
15:15-15:45 Yaping Luo, TU/e
Can a car fly? -- Support for reusable safety assurance
As more and more complex software is deployed in safety-critical embedded systems, the challenge of assessing the safety of those systems according to the relevant standards is becoming greater. Due to the extensive manual work required, validating compliance of these systems with safety standards is an expensive and time-consuming activity; furthermore, as products evolve, re-assessment may become necessary. Therefore, obtaining reusable assurance data for safety assessment or re-assessment is very desirable. In the presentation, we propose a model-based approach for assuring compliance with safety standards to facilitate reuse in the assessment, qualification and certification processes, using the automotive safety standard ISO 26262 as a specific example. In the future, we plan to use meta-modeling techniques to support re-assessment across domains, such as the automotive and avionics domains.
15:45-16:15 Tom van Dijk, UT
Reachability with BDDs on 48+ cores
To facilitate the verification of ever-larger problems, we require a lot of processor power. One way to tackle this problem is by exploiting parallelism. In symbolic model checking, we use binary decision diagrams to represent states and transitions. The MaDriD project brings these two worlds of Multi-core computing and Decision Diagrams together. Using the verification framework LTSmin as a case study, we implemented Sylvan, a multi-core BDD implementation.
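The core of such symbolic reachability is a least-fixpoint computation. A minimal sketch in Python, using explicit sets where Sylvan would use BDDs for both the state sets and the image computation (the toy transition function is invented for illustration):

```python
def reachable(initial, successors):
    """Least-fixpoint reachability: keep adding successor states until
    nothing new appears. In symbolic model checking the sets and the
    image computation are represented with (multi-core) BDDs instead."""
    visited = set(initial)
    frontier = set(initial)
    while frontier:
        # Image computation: all successors of the current frontier.
        image = {t for s in frontier for t in successors(s)}
        frontier = image - visited   # keep only genuinely new states
        visited |= frontier
    return visited

# Toy example: states are integers; transitions double or increment mod 16.
succ = lambda s: {(2 * s) % 16, (s + 1) % 16}
print(sorted(reachable({0}, succ)))   # all 16 states are reachable
```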
16:15-16:30 Coffee
16:30-17:30 Cynthia Kop, University of Innsbruck
Termination of higher-order term rewriting
(Higher-order) term rewriting is a convenient model for program analysis, which also has applications in, for instance, equational reasoning and compiler specifications. In this talk, we will consider first-order and higher-order term rewriting systems, and various ways to prove termination of such systems.
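For a flavour of such proofs, here is a standard first-order textbook example (not taken from the talk): a polynomial interpretation showing termination of addition on unary numerals, since every rewrite step strictly decreases the interpreted value.

```latex
% Rewrite system:  add(0, y) -> y,   add(s(x), y) -> s(add(x, y))
% Interpretation over the naturals (all values >= 1):
%   [0] = 1,   [s](x) = x + 1,   [add](x, y) = 2x + y
% Both rules are strictly decreasing, so no infinite rewrite sequence exists:
\[
  [\mathit{add}(0, y)] = 2 + y > y = [y], \qquad
  [\mathit{add}(s(x), y)] = 2x + y + 2 > 2x + y + 1 = [s(\mathit{add}(x, y))]
\]
```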
18:30- Dinner

TUESDAY 14 MAY

09:00-10:00 Arjan Mooij, TNO-ESI
Experiences with model-driven development in healthcare applications
Techniques for model-driven development vary from light-weight modeling and simulation, via domain-specific languages and code generation, to formal methods and verification. In this talk, we report on our experiences in applying such techniques in an industrial context. In particular, we focus on our participation in several industrial development projects at Philips Healthcare. Abstraction levels and early fault detection are among the topics addressed.
10:00-10:30 Coffee
10:30-11:15 Johan Lukkien, TU/e
11:15-12:00 Milosh Stolikj, TU/e
Optimizing software updates of resource constrained devices
Software updates enable users to extend or correct the functionality of devices after deployment, preferably at a low cost. They are an essential feature of modern consumer electronic devices, from smart phones to sensor networks. Strategies for updating the software of large networks of devices depend on resource constraints such as available memory, processing power and battery life. In this work we investigate general techniques for optimizing the size of software updates, with the aim of reducing the energy and time required to perform an update. We start with commonly used algorithms for incremental software updates, used in combination with data compression algorithms. Then, we extend these methods with horizontal patching, a new method for generating one patch from another. We show that this method reduces the size of software updates in networks of multiple heterogeneous devices running multiple applications with a common software component. We validate our hypothesis on two test sets, consisting of software updates for sensor networks and smart phones.
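The starting point for such optimizations is delta encoding: ship only the bytes that changed. A naive, self-contained sketch (real incremental-update tools in the bsdiff family also exploit shifted code and compress the result):

```python
def make_patch(old: bytes, new: bytes):
    """Naive delta encoding: record (offset, replacement) runs where the
    two firmware images differ, instead of shipping the whole new image."""
    patch, i, n = [], 0, min(len(old), len(new))
    while i < n:
        if old[i] != new[i]:
            j = i
            while j < n and old[j] != new[j]:
                j += 1
            patch.append((i, new[i:j]))
            i = j
        else:
            i += 1
    if len(new) != len(old):            # trailing growth or truncation
        patch.append((n, new[n:]))
    return patch

def apply_patch(old: bytes, patch, new_len: int) -> bytes:
    out = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for offset, data in patch:
        out[offset:offset + len(data)] = data
    return bytes(out)

old = b"firmware v1: sensor driver, radio stack"
new = b"firmware v2: sensor driver, radio stack"
p = make_patch(old, new)
assert apply_patch(old, p, len(new)) == new
print(p)   # one tiny run instead of the full image
```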
12:30-13:30 Lunch
14:00-14:45 Mariëlle Stoelinga, UT
Risk management meets model checking
Dynamic fault trees (DFTs) are a versatile and common formalism to model and analyze the reliability of computer-based systems. This talk presents a formal semantics of DFTs in terms of input/output interactive Markov chains (I/O-IMCs), which extend continuous-time Markov chains with discrete input, output and internal actions. This semantics provides a rigorous basis for the analysis of DFTs. Our semantics is fully compositional, that is, the semantics of a DFT is expressed in terms of the semantics of its elements (i.e. basic events and gates). This enables an efficient analysis of DFTs through compositional aggregation, which helps to alleviate the state-space explosion problem, and yields a very flexible modeling and analysis framework by incrementally building the DFT state space. We have implemented our methodology in a tool, and have shown, through a number of case studies, the feasibility of our approach and its effectiveness in reducing the state space.
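For intuition about what such an analysis computes, here is a minimal Monte Carlo sketch for a static fault tree with exponentially distributed basic events (invented rates; the dynamic gates and the compositional I/O-IMC semantics of the talk go well beyond this):

```python
import random

def unreliability(mission_time, rates, system_failure, samples=100_000):
    """Estimate P(system fails before mission_time) for a static fault
    tree: sample exponential failure times for the basic events and
    propagate them through the gates (AND = max, OR = min)."""
    failures = 0
    for _ in range(samples):
        t = {e: random.expovariate(lam) for e, lam in rates.items()}
        if system_failure(t) <= mission_time:
            failures += 1
    return failures / samples

# Toy tree: power fails if both supplies fail (AND); the system fails
# if power fails or the controller fails (OR). Rates are invented.
rates = {"supply_a": 0.01, "supply_b": 0.01, "controller": 0.001}
top = lambda t: min(max(t["supply_a"], t["supply_b"]), t["controller"])
print(unreliability(100.0, rates, top))
```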
14:45-15:30 Dennis Guck, UT
Fault Tree Analysis for Railroad Maintenance Problems
Effective risk management is key to ensuring that our railroad infrastructure is dependable, and is often required by law. Fault Tree Analysis (FTA) is a widely used methodology, computing important measures like system reliability. We present powerful techniques for FTA providing (1) efficient fault tree modelling via compact representations for important features like spare management and common cause failures; (2) effective analysis, allowing a wide range of dependability properties to be analysed; (3) efficient analysis, by using state-of-the-art stochastic analysis techniques; and (4) a flexible and extensible framework catering for future needs, e.g. cost structures.
15:30-16:00 Coffee
16:00-16:45 Jeroen Keiren, TU/e
Analysing Hierarchical Control Software: application to CERN's detector control systems
Developing software that controls the operation of a large number of hardware components, so-called control software, is a complex task. One approach to reduce the complexity uses a hierarchical architecture for the software. At the leaves of the hierarchy we find the device drivers that directly control the hardware. Going up in the hierarchy, similar subsystems are grouped together into more coarse-grained subsystems. Every subsystem in the hierarchy is represented by a node, and every node runs its own software to receive commands from and send status updates to the layer above, and to send commands to and receive status information from the layer below. In this talk we address the reliability of hierarchical control software. We take a close look at the complexity of such systems, and describe the verification of local properties, such as livelock freedom and pairwise reachability, within this hierarchy.

The verification techniques that we describe have been applied to the detector control system of the CERN Compact Muon Solenoid experiment. The detector control system consists of 27,500 nodes, and the software of each of these nodes is implemented as a finite state machine. The sheer size of the system, estimated to consist of at least 10^27,500 states, makes it virtually impossible to fully understand the details of the system behaviour at the macro level. We have formally described the finite state machines, and implemented an automatic translation to the mCRL2 process algebra. Correctness of the translation was assessed by means of simulations and visualisations of individual finite state machines, and through formal verification of complete subsystems of the control software.
Based on the formalised semantics of the finite state machines, specific verification questions were translated to satisfiability problems to allow for efficient solving, enabling verification of the complete system in minutes. The IDE that is used for developing the FSMs has been extended with the verification technology.
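For intuition, a local property such as pairwise reachability can be checked on the product of two communicating finite state machines; a minimal explicit-state sketch with an invented parent/child pair (at CERN's scale such questions are encoded as satisfiability problems instead):

```python
def joint_states(initial, step):
    """Return all reachable product states of two communicating FSMs.
    `step` maps a pair (parent_state, child_state) to successor pairs."""
    seen, stack = {initial}, [initial]
    while stack:
        s = stack.pop()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

# Invented example: a parent node commands its child, which reports back.
def step(s):
    parent, child = s
    succ = set()
    if parent == "OFF":
        succ.add(("SWITCHING_ON", child))         # parent issues 'on'
    if parent == "SWITCHING_ON" and child == "OFF":
        succ.add((parent, "ON"))                  # child executes command
    if parent == "SWITCHING_ON" and child == "ON":
        succ.add(("ON", child))                   # status propagates up
    return succ

reach = joint_states(("OFF", "OFF"), step)
# Pairwise reachability question: can the parent be ON while the child
# is still OFF? In this toy model the answer is no.
print(("ON", "OFF") in reach)   # False
```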

18:30- Dinner

WEDNESDAY 15 MAY

09:00-10:00 Bert van Beek, TU/e
Simulation, visualization and real-time control of cyber physical systems using CIF
The Compositional Interchange Format (CIF) is based on hybrid automata with a formally defined operational semantics, and features differential algebraic equations, automata with discrete, continuous and algebraic variables, synchronization by means of shared events and shared variables, urgent events and urgent locations, a rich set of data types and associated operations, and concepts for controller synthesis. The presentation will discuss some of the recent developments, including a continuous background type checker that is integrated in the Eclipse CIF editor, a user-friendly way of creating Scalable Vector Graphics (SVG) based visualizations, and ongoing research on generation of real-time control code for different back ends. Some live demonstrations of the simulation tools, the speed of which has recently been increased by several orders of magnitude, will be given, and industrial applications will be discussed.
10:00-10:30 Coffee
10:30-11:15 Allan van Hulst, TU/e
Maximal Synthesis for Hennessy-Milner Logic
We present a solution for synthesis on Kripke structures with labelled transitions, with respect to Hennessy-Milner Logic (HML). This encompasses the definition of a theoretical framework that is able to express how such a transition system should be modified in order to satisfy a given HML formula. The transition system is mapped under bisimulation equivalence onto a recursive structure, thereby unfolding up to the applicable reach of a given HML formula. Operational rules define the required adaptations to ensure validity upon this structure. Synthesis may result in multiple valid adaptations, which are all related to the original transition system via simulation. The set of synthesized products contains an outcome which is maximal with respect to all deterministic simulants that satisfy the HML formula.
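For reference, Hennessy-Milner Logic is the modal logic with the following syntax:

```latex
\[
  \varphi \;::=\; \mathit{tt} \;\mid\; \mathit{ff}
      \;\mid\; \varphi \wedge \varphi \;\mid\; \varphi \vee \varphi
      \;\mid\; \langle a \rangle \varphi \;\mid\; [a]\,\varphi
\]
% <a>phi: some a-transition leads to a state satisfying phi;
% [a]phi:  every a-transition leads to a state satisfying phi.
```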
11:15-12:00 Rik Kamphuis
Design and real-time implementation of a supervisory controller for baggage handling at Veghel Airport
Current industrial practice is based on a separation between informal specifications developed by domain engineers, and control code developed by software engineers. This may lead to many different kinds of errors in the control code: errors due to ambiguous or inconsistent specifications, errors due to miscommunication between domain engineers and software engineers, and errors due to incorrect coding. The presentation discusses a project carried out in cooperation with Vanderlande Industries (VI), the world’s largest developer of baggage handling systems, based on formally specifying the behavioral requirements and a model of the controlled system, subsequently validating the system by means of interactive simulation and visualization, and finally generating the control code for different PLC (Programmable Logic Controller) back ends. The result is a PLC program according to the IEC 61131-3 Structured Text standard, and a specific version for Siemens PLCs, that has been obtained from the hybrid-automata-based CIF specification by partly manual and partly automatic execution of the various transformation steps. The PLC code has been tested in a real-time environment at “Veghel airport”, which is a small baggage handling system at the Vanderlande Innovation center. Current research is on fully automatic generation of the controllers.
12:30-13:30 Lunch
14:00-15:00 Cees Witteveen, TUD
Decomposition in scheduling
Sometimes a scheduling problem has to be solved by distributed solvers. In this talk, I consider the use of scheduling decomposition methods that ensure a globally feasible outcome (a global schedule) by purely local computations, i.e., individual schedules composed by distributed solvers. Examples range from scheduling of reconnaissance missions and distributed violation detection in P2P systems, to railway scheduling problems.
To study properties of these decomposition methods, we model scheduling problems as constraint problems. Distributed scheduling problems can then be modeled by constraint systems in which the set of variables is partitioned and each block of variables is controlled by a separate agent. Such an agent assigns values to its variables, and the aim is to provide distributed methods enabling a set of agents to come up with a global assignment (schedule) that satisfies all the constraints. We discuss some main approaches to such distributed scheduling, in particular the so-called temporal decoupling method and its generalizations. We then discuss some general results regarding optimality and finding decompositions, and we show that there exists a close connection between optimal decompositions and finding flexible schedules for certain classes of scheduling problems.
Finally, we discuss some applications of decoupling in scheduling.
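To make the temporal decoupling idea concrete, here is a minimal sketch on a Simple Temporal Network, a common constraint model for such problems (the network and its numbers are invented): after computing all-pairs shortest paths, fixing the single shared time point anywhere in its feasible window lets each agent schedule its own variables independently.

```python
INF = float("inf")

def floyd_warshall(d):
    """All-pairs shortest paths on an STN distance matrix, where
    d[i][j] = b encodes the constraint  t_j - t_i <= b."""
    n = len(d)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# Time points: 0 = zero reference, 1 = agent A's task,
# 2 = shared handover moment, 3 = agent B's task.
d = floyd_warshall([
    [0,    5,   INF, 10 ],  # A's task <= 5; overall deadline 10
    [-2,   0,   3,   INF],  # A's task >= 2; handover within 3 after it
    [INF,  0,   0,   4  ],  # handover after A's task; B within 4 after
    [INF,  INF, -1,  0  ],  # B's task at least 1 after the handover
])

lo, hi = -d[2][0], d[0][2]
print(f"feasible handover window: [{lo}, {hi}]")   # [2, 8]

# Simplest decoupling: both agents agree to fix the handover, say at t=4.
# Agent A now schedules its task in [2, 4] and agent B in [5, 8], each by
# purely local computation, and any two local solutions combine.
```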
15:00-15:30 Coffee
15:30-16:15 Marc Geilen, TU/e
Predictable Dynamic Embedded Data Processing
Cyber-physical systems interact with their physical environment. In this interaction, non-functional aspects, most notably timing, are essential to correct operation. In modern systems, dynamism is introduced in many different ways. The additional complexity threatens timely development and reliable operation. Applications often have different modes of operation with different resource requirements and different levels of required quality-of-service. Moreover, multiple applications in dynamically changing combinations share a platform and its resources. To preserve efficient development of such systems, dynamism needs to be taken into account as a primary concern, not as a verification or tuning effort after the design is done. This requires a model-driven design approach in which timing of interaction with the physical environment is taken into consideration; formal models capture applications and their platforms in the physical environment. Moreover, platforms with resources and resource arbitration are needed that allow for predictable and reliable behaviour to be realized. Run-time management is further required to deal with dynamic use-cases and dynamic trade-offs encountered at run-time. We present a model-driven approach that combines model-based design and synthesis with development of platforms that support predictable, repeatable, composable realizations and a run-time management approach to deal with dynamic use-cases at run-time. A formal, compositional model is used to exploit Pareto-optimal trade-offs in the system use. The approach is illustrated with dataflow models with dynamic application scenarios, a predictable platform architecture and run-time resource management that determines optimal trade-offs through an efficient knapsack heuristic. Joint work with Sander Stuijk and Twan Basten.
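As a tiny illustration of the run-time resource management step (operating points and numbers invented; the actual approach uses Pareto-algebraic models of scenarios), a greedy knapsack heuristic picks one quality/resource operating point per application under a shared budget:

```python
def choose_points(apps, budget):
    """Greedy knapsack heuristic: each application offers operating
    points (resource_cost, quality). Start every application at its
    cheapest point, then repeatedly buy the upgrade with the best
    quality gain per extra resource unit that still fits the budget."""
    points = {a: sorted(ps) for a, ps in apps.items()}   # by cost
    chosen = {a: 0 for a in points}                      # point index
    used = sum(points[a][0][0] for a in points)
    while True:
        best, best_ratio = None, 0.0
        for a, i in chosen.items():
            if i + 1 < len(points[a]):
                dc = points[a][i + 1][0] - points[a][i][0]
                dq = points[a][i + 1][1] - points[a][i][1]
                if dc > 0 and used + dc <= budget and dq / dc > best_ratio:
                    best, best_ratio = a, dq / dc
        if best is None:
            return {a: points[a][i] for a, i in chosen.items()}
        used += points[best][chosen[best] + 1][0] - points[best][chosen[best]][0]
        chosen[best] += 1

apps = {"video": [(2, 3), (4, 8), (7, 9)], "audio": [(1, 2), (3, 5)]}
print(choose_points(apps, budget=8))   # video -> (4, 8), audio -> (3, 5)
```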
16:15-17:00 Shreya Adyanthaya, TU/e and ASML
Design Space Exploration for the Mechatronic Control Domain
Over the last decades, embedded platform architectures for mechatronic systems have been composed of general-purpose components to benefit from their steady increase in average performance, most notably due to increased processor clock frequencies. This trend has come to an end due to IC power dissipation limits. To meet future application demands, a shift has to be made to lower-frequency but more parallel execution platforms offering more predictable timing guarantees. Given the control application specification, one has to dimension the platform (i.e. determine the required computational, storage and communication resources), and determine which application functionalities will run on which resources and in what sequence. This should be done in such a way that all timing requirements of all application tasks are met. In order to achieve this, one must be able to predict the performance of individual points in the design space and be able to traverse this design space in order to determine an optimal solution. However, the many elements of the control application, the platform, and the mapping (including binding and scheduling) of the application onto the platform can be configured in many different ways, implying that the exploration of a great number of alternative design points is a major challenge. The main impact of considering the mechatronic control domain is the focus on optimization of latency, as opposed to throughput, which is the main optimization objective in other domains such as media processing; the focus on throughput constitutes the basis of the majority of past work.

Our work is motivated by the scheduling challenges faced by ASML, the world’s leading provider of wafer scanners. ASML provides a setting in which the problem can be formulated accurately and in which solution strategies can be evaluated experimentally. A wafer scanner is a complex cyber-physical system that manipulates silicon wafers with extreme accuracy at high throughput. Typical control applications of the wafer scanner consist of thousands of precedence-constrained tasks with latency requirements. Machines are customized, so that the precise characteristics of the control applications to be scheduled and of the execution platform are only known during machine start-up. This results in large-scale scheduling problems that need to be solved during start-up of the machine, under a strict timing constraint on the schedule delivery time. In this presentation we discuss the design space exploration problem of ASML and a crucial aspect of it, namely the scheduling problem. We present a fast and scalable static-order scheduling approach for applications with stringent latency requirements and a fixed binding on multiprocessor platforms. The approach will be incorporated into ASML’s latest generation of wafer scanners.
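A minimal sketch of static-order list scheduling for precedence-constrained tasks with a fixed binding (the task set, the WCETs and the longest-task-first priority are invented for illustration; the approach presented in the talk is considerably more sophisticated):

```python
def list_schedule(tasks, deps, binding, wcet):
    """Static-order list scheduling with a fixed task-to-processor
    binding: repeatedly pick a ready task (all predecessors scheduled)
    and start it on its own processor as early as possible."""
    finish, proc_free, order = {}, {}, []
    pending = set(tasks)
    while pending:
        ready = [t for t in pending if all(p in finish for p in deps[t])]
        t = max(ready, key=lambda r: wcet[r])      # longest WCET first
        start = max([proc_free.get(binding[t], 0)] +
                    [finish[p] for p in deps[t]])
        finish[t] = start + wcet[t]
        proc_free[binding[t]] = finish[t]
        order.append((t, binding[t], start, finish[t]))
        pending.remove(t)
    return order

tasks = ["sense", "filter", "control", "actuate"]
deps = {"sense": [], "filter": ["sense"],
        "control": ["filter"], "actuate": ["control"]}
binding = {"sense": "P0", "filter": "P1", "control": "P1", "actuate": "P0"}
wcet = {"sense": 2, "filter": 3, "control": 4, "actuate": 1}
for row in list_schedule(tasks, deps, binding, wcet):
    print(row)   # the latency is the finish time of 'actuate'
```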
18:30- Dinner

THURSDAY 16 MAY

09:00-10:00 Anne Remke, UT
10:00-10:30 Coffee
10:30-11:15 Pieter Cuijpers, TU/e
Prefix Orders 4 CPS (or, bisimulation without a next step)
Twenty years of hybrid systems research have, in my opinion, only made a start on closing the gap between control engineering and computer science. I will give my views on why that is, and on what we can learn from it when embarking on a study of the even more complicated cyber-physical systems. I will focus on the way in which we have treated the syntax and semantics of hybrid systems so far, and on what is wrong with that. I give arguments why we should use a domain-specific syntax when solving problems, but a semantics that is as abstract as possible when defining concepts. For the study of cyber-physical systems I propose to embrace the semantic notion of ‘execution’ under its natural prefix order whenever we are dealing with dynamic properties of system behavior. As icing on the cake, I will show that using prefix orders allows us to define bisimulation equivalence without mentioning a ‘next’ step, thus making it applicable to any branching-time dynamical system, be it discrete, continuous, or hybrid.
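The central notion is light-weight enough to state here (following the definition of prefix order used in this line of work):

```latex
% A prefix order is a partially ordered set (E, <=) in which every
% element has a totally ordered past:
\[
  \forall e \in E:\quad \{\, e' \in E \mid e' \le e \,\}
  \ \text{is totally ordered by}\ \le .
\]
% Executions of discrete, continuous, and hybrid systems all carry such
% an order (one execution being a prefix of another), which is why
% definitions over prefix orders need no notion of a `next' step.
```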
11:15-12:00 Herman Geuvers, RU
Machine-checked proofs of safety, using over- and underapproximations
Abstract TBA
12:30-13:30 Lunch
14:00-15:00 Marco Bekooij, UT
15:00-15:45 Alok Lele, TU/e
Data Flow based Modeling and Temporal Analysis of Embedded Streaming Applications
Embedded streaming applications are typically implemented as a network of concurrent tasks that execute in an iterative, pipelined manner, where the execution of each task may be conditioned by intra- and inter-iteration data dependencies. A Heterogeneous Multi-Processor System-on-Chip (HMPSoC) is used to run multiple streaming applications, with each application having an independent hard real-time requirement in terms of minimum throughput and/or maximum end-to-end latency. The HMPSoC enforces a resource scheduling technique to manage the resource allocation for each application. However, to guarantee that the independent requirements of each application are satisfied, we must analyze the worst-case temporal behavior of each application scheduled on the HMPSoC.

We argue that data flow graphs can conveniently model the concurrent nature of embedded streaming applications. In data flow, one starts by modeling an application as a graph, where nodes (called actors) model tasks and edges model the data dependencies. For the temporal analysis of data flow graphs we present a graph transformation technique, called response modeling, where each actor in the application graph is replaced by a sub-graph that models the worst-case temporal behavior of the actor’s execution. We demonstrate that our technique obtains tight conservative guarantees on the minimum throughput and/or maximum latency of an application.
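As background on the model itself, a self-timed execution of a tiny homogeneous dataflow graph can be simulated in a few lines (graph and timings invented; the response-modeling transformation of the talk is not shown):

```python
def self_timed(exec_time, edges, tokens, iterations):
    """Self-timed execution of a homogeneous dataflow graph: an actor
    fires once a token is available on each input edge and its previous
    firing has finished. Returns the finish time of every firing."""
    finish = {}
    for k in range(iterations):
        for a in exec_time:                      # topological order
            start = finish.get((a, k - 1), 0)    # no auto-concurrency
            for (u, v) in edges:
                if v == a and k - tokens[(u, v)] >= 0:
                    start = max(start, finish[(u, k - tokens[(u, v)])])
            finish[(a, k)] = start + exec_time[a]
    return finish

# Two-actor pipeline; the feedback edge with 2 tokens models a 2-place buffer.
exec_time = {"src": 1, "proc": 3}
edges = [("src", "proc"), ("proc", "src")]
tokens = {("src", "proc"): 0, ("proc", "src"): 2}
f = self_timed(exec_time, edges, tokens, 4)
print([f[("proc", k)] for k in range(4)])   # [4, 7, 10, 13]: period 3
```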
15:45-16:15 Coffee
18:30- Dinner
20:30-22:00 Social Event

FRIDAY 17 MAY

09:00-09:45 Frits Vaandrager, RU
Applying Automata Learning to Embedded Control Software
We explore whether LearnLib, a state-of-the-art automata learning tool, is able to learn a model of the Engine Status Manager (ESM), a piece of control software that is used in many printers and copiers of Océ. Finding counterexamples for incorrect hypotheses for the ESM turns out to be challenging due to the large number of inputs. A first contribution of this paper consists of two novel heuristics for test selection, implemented on top of LearnLib, that allowed us to learn an almost complete model of the ESM. The first heuristic is based on the intuition that for many systems it is possible to reach the majority of the states using only a small number of different inputs. The idea is to divide the input alphabet into a number of subalphabets, and to divide the test phase into subphases for each subalphabet. The second heuristic is a randomized variant of the W-method of Chow and Vasilevskii. To the best of our knowledge, this is the first paper in which active automata learning is applied to industrial embedded control software. Using our new test selection heuristics, we succeeded in learning a model of the ESM with 3,326 states. More than 263 million test queries were not enough to find a counterexample for this model. We also constructed another model directly from the ESM software, with 3,410 states, indicating that we succeeded in learning 97.5% of the model using automata learning. Our work provides a challenging benchmark for the automata learning and model-based testing communities. Joint work with W. Smeenk and D.N. Jansen.
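The second heuristic can be pictured as follows (a schematic sketch of randomized W-method-style test generation, not the LearnLib implementation; all data is invented): each test word walks to a hypothesis state via its access sequence, appends a short random infix, and ends with a distinguishing suffix.

```python
import random

def random_w_test(access, suffixes, inputs, max_infix=5):
    """One randomized W-method-style test word: the access sequence of a
    random hypothesis state, a short random infix, and a distinguishing
    suffix. Running the word on both the hypothesis and the system under
    test and comparing outputs may expose a counterexample."""
    word = list(random.choice(list(access.values())))     # reach a state
    word += random.choices(inputs, k=random.randint(1, max_infix))
    word += random.choice(suffixes)                       # distinguish
    return word

# Invented toy data: access sequences for two states and two suffixes.
access = {"q0": [], "q1": ["a"]}
suffixes = [["a"], ["b", "a"]]
print(random_w_test(access, suffixes, inputs=["a", "b"]))
```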
09:45-10:30 Jan Tretmans, RU and TNO-ESI
Towards Test Selection for the ioco Framework
Since testing is an expensive process, automatic testing with smart test selection has been proposed as a way to reduce this expense. Such a selection of tests can be done using specification coverage functions. Model-based ioco theory, however, uses test suites which are not suitable for easy computation of coverage because of the interdependence of their test cases. We define a new test suite that overcomes these problems. Using such a test suite, we cast the test selection problem as a specification selection problem that aims at transforming the model to which a system under test must conform, in order to reduce the set of test cases. We give a canonical representation for the newly defined test suite.
10:30-11:00 Coffee
11:00-12:00 Joost-Pieter Katoen, UT and RWTH Aachen
Cyber-Physical Systems in Space
Building modern aerospace systems is highly demanding. They should be extremely dependable. They must offer service without interruption for a very long time, typically years or decades. Whereas ‘five nines’ dependability, i.e. a 99.999% availability, is satisfactory for most safety-critical systems, for on-board systems it is not. Faults are costly and may severely damage reputations. Dramatic examples such as Ariane 5 and Mars Pathfinder are known.

Rigorous design support and analysis techniques are called for. Bugs must be found as early as possible in the design process, while performance and reliability guarantees need to be checked whenever possible. The effect of Fault Diagnosis, Isolation and Recovery (FDIR) measures must be quantifiable.

During the last five years, we have been applying concurrency theory and computer-aided verification to the engineering of aerospace systems. In this talk, we will present:

  • a modeling formalism based on the AADL SAE standard
  • its formal semantics
  • a rich palette of formal analysis techniques
  • an application to a satellite design at the European Space Agency.
12:30-13:30 Lunch and Farewell