Modeling and Buffer Analysis of Real-time Streaming Radio Applications Scheduled on Heterogeneous Multiprocessors

Hrishikesh Salunkhe

promotor: prof.dr.ir. C.H. van Berkel (TU/e)
copromotor: dr.ir. O. Moreira (Intel Benelux BV Eindhoven)
Technische Universiteit Eindhoven
Date: 12 June 2017, 16:00

Summary

Current multi-functional embedded systems such as smartphones and tablets support multiple 2G/3G/4G radio standards, including Long Term Evolution (LTE) and LTE-Advanced, running simultaneously. The transceivers of these radio standards are real-time streaming applications: they have real-time requirements and run continuously, processing virtually infinite input sequences in an iterative manner. They are typically scheduled on a heterogeneous multiprocessor to satisfy real-time and low-power requirements. Timing and resource usage analysis of such applications, when mapped onto a hardware platform, is crucial to guarantee their correct functional behavior. The dataflow model of computation fits this application domain well, and several static variants offer design-time timing and resource usage analysis.

In dataflow, applications are modeled as graphs consisting of nodes, called actors, and edges. When mapped onto a hardware platform, actors communicate with each other by producing and consuming data values, called tokens, through finite First-In First-Out (FIFO) buffers that are mapped onto on-chip memory. Buffer allocation for such applications involves minimizing the total memory consumed by buffers while reserving sufficient space for each token production, without overwriting any live (i.e. not yet consumed) tokens and while guaranteeing that the real-time constraints are satisfied. Systems may prevent overwriting of live tokens by implementing a back-pressure mechanism, in which a producer task is suspended until sufficient space is available in its output buffers. In systems without back-pressure, a producer executes without checking for available space in its output buffers, which may result in buffer overflow. Such systems are not uncommon, since back-pressure incurs extra processing and synchronization overhead (a minimal sketch contrasting both behaviors follows the list of contributions below). We propose dataflow-based modeling and buffer allocation techniques for real-time streaming radio applications scheduled on heterogeneous multiprocessors without back-pressure. Our contributions are:

  1. Dataflow-based modeling techniques for LTE and LTE-Advanced receivers using:
    • Single-rate Dataflow (SRDF), which conservatively captures the practically relevant behavior of such applications.
    • Mode-controlled Dataflow (MCDF), which captures the dynamic, data-dependent behavior of such applications more accurately.
  2. Buffer allocation techniques for real-time streaming applications modeled as dataflow graphs running in a self-timed manner (i.e. actors are activated by data availability) on a hardware platform without back-pressure. We propose these techniques for applications modeled as:
    • SRDF graphs; these techniques provide a significant reduction in memory consumption compared to an existing industrial buffer allocation technique.
    • MCDF graphs; these techniques provide a further reduction in memory consumption compared to our SRDF-based buffer allocation, at the expense of a more complex timing analysis.
  3. Application of our techniques to an industrial case study that includes LTE and LTE-Advanced receivers. Our techniques provide significantly tighter timing analysis and buffer allocation.
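
As a concrete illustration of the back-pressure mechanism described above, the following minimal Python sketch (not part of the thesis; the buffer capacity and token values are illustrative assumptions) contrasts a bounded FIFO with and without back-pressure: with back-pressure the producer is suspended when the buffer is full, whereas without back-pressure it keeps firing and may overwrite a live token.

    from collections import deque

    def produce_with_backpressure(buf, capacity, token):
        # With back-pressure: the producer is suspended (here: the firing is
        # skipped) until space is available, so no live token is overwritten.
        if len(buf) < capacity:
            buf.append(token)
            return "stored"
        return "blocked"

    def produce_without_backpressure(buf, capacity, token):
        # Without back-pressure: the producer fires regardless of occupancy;
        # on a full buffer the oldest live token is lost (buffer overflow).
        overflow = len(buf) >= capacity
        if overflow:
            buf.popleft()
        buf.append(token)
        return "overflow" if overflow else "stored"

    capacity, fifo = 2, deque()
    for t in range(4):
        print("with back-pressure, token", t, "->",
              produce_with_backpressure(fifo, capacity, t))
    fifo.clear()
    for t in range(4):
        print("without back-pressure, token", t, "->",
              produce_without_backpressure(fifo, capacity, t))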

Static dataflow allows a rich set of analysis techniques, but it is too restrictive to conveniently model the dynamic, data-dependent behavior of many realistic applications. Dynamic dataflow allows more accurate modeling of such applications, but in general does not support rigorous real-time analysis. Mode-controlled Dataflow (MCDF) is a restricted form of dynamic dataflow that promises to capture realistic features of such dynamic applications while still allowing rigorous timing analysis. We show that MCDF can conveniently handle the dynamic behavior exhibited by complex industrial applications. We investigate the challenges involved in dataflow-based modeling of LTE and LTE-Advanced receivers, develop modeling techniques that address these challenges one by one, and stepwise construct complete, fine-grained MCDF models of the LTE and LTE-Advanced receivers.
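
To make the notion of mode control more concrete, the following hypothetical Python sketch (the actor names and the mode-selection rule are illustrative assumptions, not taken from the thesis) shows the basic idea: in every graph iteration a mode controller emits a control token that selects which mode-specific actors fire, while mode-independent actors fire in every iteration.

    def mode_controller(iteration):
        # Illustrative rule: decode the control channel in even iterations and
        # the data channel in odd iterations (e.g. when a grant was detected).
        return "data" if iteration % 2 else "control"

    def fire(actor, iteration):
        print(f"iteration {iteration}: firing {actor}")

    def run(iterations):
        for i in range(iterations):
            fire("demodulate", i)          # mode-independent actor
            mode = mode_controller(i)      # control token selects the mode
            if mode == "control":
                fire("decode_control", i)  # actors of the 'control' mode
            else:
                fire("decode_data", i)     # actors of the 'data' mode

    run(4)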

Real-time streaming applications running on embedded systems have to deal with severely constrained memory resources. We propose buffer allocation techniques for real-time streaming applications modeled as SRDF graphs running in a self-timed manner on a hardware platform without back-pressure. Systems without back-pressure lack blocking behavior on the producer side; therefore, buffer allocation for such systems requires both best-case and worst-case timing analysis. We extend the available dataflow techniques with best-case analysis. We introduce the closest common dominator-based and common predecessor-based lifetime analysis techniques, which significantly reduce the memory required for buffer allocation. We also introduce techniques to model initialization behavior and token reuse. Moreover, we compare systems with and without back-pressure in terms of buffer allocation.
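
The need for both best-case and worst-case analysis can be made concrete with a small sketch. Without back-pressure, a token must have buffer space reserved from its best-case (earliest) production time until its worst-case (latest) consumption time, so the buffer must be large enough for the maximum number of simultaneously live tokens. The following Python sketch is illustrative only; the timing values are assumptions and this is not the thesis algorithm.

    def buffer_bound(best_case_prod, worst_case_cons):
        # best_case_prod[k]  : earliest time at which token k can be written
        # worst_case_cons[k] : latest time at which token k is still live
        bound = 0
        for t in best_case_prod:  # the live count can only grow at a production
            live = sum(1 for p, c in zip(best_case_prod, worst_case_cons)
                       if p <= t < c)
            bound = max(bound, live)
        return bound

    prod = [0, 2, 4, 6]     # assumed best-case production times of tokens 0..3
    cons = [5, 7, 9, 11]    # assumed worst-case consumption times
    print(buffer_bound(prod, cons))  # -> 3: at most three tokens live at once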

We then extend our buffer allocation techniques to real-time streaming applications modeled as MCDF graphs. In combination with a pre-specified set of so-called mode sequences, rigorous analysis of MCDF models is possible. The dynamic (decision-making) behavior present in these applications makes buffer allocation challenging from the perspective of best-case and worst-case timing analysis. A set of mode sequences models the dynamic behavior of an application. We provide a conversion of an MCDF graph into an equivalent SRDF graph for a given mode sequence. We propose best-case and worst-case analyses based on merging the SRDF graphs of all mode sequences associated with the MCDF graph of an application. We also extend the closest common dominator-based lifetime analysis to MCDF-based buffer allocation.
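
As a hypothetical illustration of the merging step (the per-mode requirements, the mode sequences, and the aggregation rule below are simplifying assumptions, not the thesis algorithm), the allocated size of a buffer must be safe for the worst case over all mode sequences in the pre-specified set.

    def bound_for_sequence(sequence, per_mode_bound):
        # Simplifying assumption: the requirement of a sequence is the largest
        # per-mode requirement occurring in it.
        return max(per_mode_bound[m] for m in sequence)

    def merged_bound(mode_sequences, per_mode_bound):
        # The allocation must cover every admissible mode sequence.
        return max(bound_for_sequence(s, per_mode_bound) for s in mode_sequences)

    per_mode_bound = {"control": 2, "data": 5, "idle": 1}   # tokens per mode
    sequences = [("control", "idle"), ("control", "data"), ("data", "data")]
    print(merged_bound(sequences, per_mode_bound))          # -> 5 tokens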

Our techniques and tools can handle complex practical applications. We apply our SRDF-based and MCDF-based buffer allocation techniques to LTE and LTE-Advanced receivers running on an industrial hardware platform. For our benchmark set, our SRDF-based buffer allocation techniques provide up to 54% reduction in total memory consumption compared to our absolute lifetime analysis and an industrial buffer allocation technique. For the LTE-Advanced receiver case study, our buffer allocation techniques allow us to explore different scheduling choices for a given platform; this reveals interesting trade-offs among scheduling choices in terms of buffer allocation. For SRDF-based buffer allocation, our benchmark set includes two additional applications: 1) an MP3 decoder and 2) a WLAN receiver. For the LTE and LTE-Advanced receiver use cases, our MCDF-based buffer allocation achieves up to 15% reduction in total memory consumption compared to the SRDF-based buffer allocation that uses the closest common dominator-based lifetime analysis technique.