Efficient dynamic pinning of parallelized applications by distributed reinforcement learning

G. Chasparis and M. Roßbory. Efficient dynamic pinning of parallelized applications by distributed reinforcement learning. arXiv:1606.08156 [cs.DC], June 2016. http://arxiv.org/abs/1606.08156

Authors
  • Georgios Chasparis
  • Michael Roßbory
Type: Other
Journal: cs.DC - Distributed, Parallel, and Cluster Computing, arXiv:1606.08156 [cs.DC]
Month: June
Year: 2016
URL: http://arxiv.org/abs/1606.08156
Abstract

This paper introduces a resource allocation framework specifically tailored to the problem of dynamic placement (or pinning) of parallelized applications to processing units. Under the proposed setup, each thread of the parallelized application constitutes an independent decision maker (or agent), which decides, based on its own prior performance measurements and prior CPU affinities, on which processing unit to run next. Decisions are updated recursively for each thread by a resource manager/scheduler that runs in parallel with the application's threads, periodically recording their performance and assigning new CPU affinities. To update the CPU affinities, the scheduler uses a distributed reinforcement-learning algorithm, each branch of which is responsible for assigning a new placement strategy to one thread. Under this algorithm, prior allocations are reinforced in proportion to their measured performance. The proposed resource allocation framework is flexible enough to accommodate alternative optimization criteria, such as maximum average processing speed and minimum speed variance among threads. We demonstrate analytically that convergence to locally optimal placements is achieved asymptotically. Finally, we validate these results through experiments on Linux platforms.
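
The abstract describes a scheme in which each thread maintains a placement strategy over the available processing units and prior allocations are reinforced in proportion to measured performance. The sketch below illustrates that kind of linear reward-based update on Linux; it is not the authors' implementation, and the step size EPS, the measure_performance callback, and the normalization of rewards to [0, 1] are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): each thread keeps a
# probability vector over CPUs; the scheduler samples a CPU, pins the thread,
# measures a normalized performance r in [0, 1], and reinforces the chosen CPU
# proportionally to r (a linear reward-style update with step size EPS).
import os
import random

NUM_CPUS = os.cpu_count()
EPS = 0.05  # learning step size (illustrative value, not from the paper)

def update_strategy(strategy, chosen_cpu, reward):
    """Reinforce the chosen CPU in proportion to the measured reward."""
    for cpu in range(len(strategy)):
        if cpu == chosen_cpu:
            strategy[cpu] += EPS * reward * (1.0 - strategy[cpu])
        else:
            strategy[cpu] -= EPS * reward * strategy[cpu]
    return strategy  # sum of probabilities is preserved

def schedule_step(tid, strategy, measure_performance):
    """One scheduler iteration for thread `tid` (hypothetical helper)."""
    cpu = random.choices(range(NUM_CPUS), weights=strategy)[0]
    os.sched_setaffinity(tid, {cpu})       # pin the thread to the sampled CPU
    reward = measure_performance(tid)      # assumed normalized speed in [0, 1]
    return update_strategy(strategy, cpu, reward), cpu
```

In such a reward scheme, a processing unit that yields higher normalized speed accumulates probability mass, so repeated scheduler iterations concentrate each thread's strategy on well-performing placements, consistent with the asymptotic convergence to locally optimal placements claimed in the abstract.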