Efficient dynamic pinning of parallelized applications by distributed reinforcement learning
Georgios C. Chasparis
Venue: HLPGPU 2017 - High-Level Programming for Heterogeneous and Hierarchical Parallel Systems 2017 Workshop, Stockholm, Sweden, January 23, 2017, in conjunction with HiPEAC 2017, Stockholm, Sweden, January 23-25, 2017.
This paper introduces a resource allocation framework specifically tailored to the problem of dynamic placement (or pinning) of parallelized applications onto processing units. Under the proposed setup, each thread of the parallelized application constitutes an independent decision maker (or agent), which decides on which processing unit to run next based on its own prior performance measurements and its own prior CPU affinities. Decisions are updated recursively for each thread by a resource manager/scheduler that runs in parallel to the application’s threads, periodically records their performances, and assigns them new CPU affinities. To update the CPU affinities, the scheduler uses a distributed reinforcement-learning algorithm, each branch of which is responsible for assigning a new placement strategy to one thread. Under this algorithm, prior allocations are reinforced in proportion to their prior performance. The proposed resource allocation framework is flexible enough to accommodate alternative optimization criteria, such as maximum average processing speed and minimum speed variance among threads. We demonstrate analytically that convergence to locally optimal placements is achieved asymptotically. Finally, we validate these results through experiments on Linux platforms.
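To make the scheme concrete, the following is a minimal sketch of one scheduler round for a single thread: the thread's placement strategy (a probability distribution over CPUs) is reinforced in proportion to the measured performance of the CPU it last ran on, and the sampled placement could then be applied as a CPU affinity. The linear reward-inaction update used here, the step size, the reward values, and the helper names are illustrative assumptions, not the paper's exact algorithm; `os.sched_setaffinity` is the Linux mechanism Python exposes for pinning.

```python
import os
import random


def reinforce(strategy, chosen, reward, step_size=0.1):
    """Linear reward-inaction update (illustrative; the paper's exact
    rule may differ): shift probability mass toward the chosen CPU in
    proportion to the observed reward. Preserves a valid distribution
    as long as 0 <= step_size * reward <= 1."""
    assert 0.0 <= step_size * reward <= 1.0
    return [
        p + step_size * reward * ((1.0 if i == chosen else 0.0) - p)
        for i, p in enumerate(strategy)
    ]


def pin_to_cpu(pid, cpu):
    """Pin a process/thread to a single CPU (Linux only; no-op elsewhere)."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(pid, {cpu})


# Hypothetical scheduler loop for one thread over two CPUs.
random.seed(0)
n_cpus = 2
strategy = [1.0 / n_cpus] * n_cpus  # uniform initial placement strategy
for _ in range(50):
    cpu = random.choices(range(n_cpus), weights=strategy)[0]
    # Stand-in for a measured per-thread processing speed, normalised
    # to [0, 1]; here CPU 0 is artificially assumed to perform better.
    reward = 0.9 if cpu == 0 else 0.3
    strategy = reinforce(strategy, cpu, reward)
    # pin_to_cpu(0, cpu)  # would apply the sampled placement on Linux
```

Because better-performing placements receive larger reinforcement, the strategy concentrates over time on the locally best CPU, which mirrors the asymptotic convergence to locally optimal placements claimed in the abstract.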