Antoniu Pop - A Data-flow Approach to Solving the Von Neumann Bottlenecks in the Manycore Era

Thursday 11 Apr 2013, 12:00
Organized by: Arnaud Legrand
Speaker: Antoniu Pop
As single-threaded performance has flattened, the prevailing trend in current hardware architectures is to provide an ever increasing number of processing units. Exploiting newer architectures poses tremendous challenges to application programmers and to compiler developers alike. Uncovering raw parallelism is insufficient in and of itself: improving performance requires changing the code structure to harness complex parallel hardware and memory hierarchies; translating more processing units into effective performance gains involves a never-ending combination of target-specific optimizations, subtle concurrency concepts and non-deterministic algorithms. As optimizing compilers and runtime libraries no longer shield programmers from the complexity of processor architectures, the gap to be filled by programmers increases with every processor generation.

Driven by these challenges, we designed and implemented OpenStream, a high-level data-flow programming model, with the pragmatic perspective of achieving a fair middle ground: programmers provide abstract information about their applications and leave the compiler and runtime system with the responsibility of lowering these abstractions to well-orchestrated threads and memory management. The expressiveness of such languages and the fine balance between the roles of programmers and compilers, as well as the static analysis and code generation techniques required to generate efficient code while abiding by the "write once, compile anywhere" rule, pose problems of great relevance from both theoretical and practical standpoints. The way forward still requires overcoming one of the greatest shortcomings of current compilers, their inability to understand concurrency, by developing new compiler intermediate representations for parallel programs. This is essential both to enable new compiler optimizations and to avoid the current obfuscation of program semantics resulting from the lack of integration of parallel constructs in current intermediate representations. Finally, we developed a theoretical framework, the Control-Driven Data Flow model of computation, to reason about computation and execution in this context, to enable verification techniques and provide determinism guarantees.