Computationally Efficient DNN Mapping Search Heuristic Using Deep Reinforcement Learning

Date

2023-12

Abstract

In this dissertation, we present a computationally efficient Reinforcement Learning (RL) search heuristic for finding high-quality mappings of N perfectly nested loops, such as the loops in Convolutional Neural Networks (CNNs) operating on high-dimensional data sets, onto accelerators with multiple processing elements (PEs), each with its own memory hierarchy, and a memory shared by all PEs. Our RL search uses maximum potential operand reuse to guide the search process; this reward is computationally inexpensive compared to the RL reward functions used by state-of-the-art mapping search methods. The maximum potential operand reuse of a mapping is also used in an effective pruning strategy that contributes significantly to the overall computational efficiency of our RL search. We also present a search-space state representation, together with a parsing strategy for it, that produces only valid mappings. Unlike supervised learning methods, our RL search does not require training datasets and is therefore easily applicable to different loop nests and accelerators. Evaluated on 19 3D convolution layers, ten initial states, three 256-PE accelerator configurations, and two operand datatypes, our RL search heuristic required on average only 10% of the floating-point operations of Timeloop's random search, yet found mappings with on average 13% lower Energy-Delay-Product (EDP) for the same number of valid mappings. Further, the lowest-EDP mappings found with our method had on average 6.5x higher EDP than simple lower-bound EDP estimates, with the best case being only 1.6x higher.
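For readers unfamiliar with the metric, the Energy-Delay-Product of a mapping is simply its estimated energy multiplied by its estimated execution delay. The Python sketch below is a hypothetical illustration, not the dissertation's actual method, of two ideas the abstract highlights: pruning candidate mappings with a cheap potential-reuse proxy before paying for an expensive EDP evaluation, and ranking the surviving candidates by EDP. All function names, mapping fields, and cost models here are invented placeholders.

    # Hypothetical sketch: reuse-guided pruning with EDP as the quality metric.
    # The mapping fields (tile_h, tile_w, pe_count) and the cost models are
    # assumptions for illustration only; a real flow would query an
    # accelerator model such as Timeloop instead.
    import random

    def potential_reuse(mapping):
        # Assumed cheap proxy: larger inner tiles imply more operand reuse.
        return mapping["tile_h"] * mapping["tile_w"]

    def energy_delay_product(mapping):
        # Assumed stand-in for an expensive evaluation: energy and delay
        # both grow as reuse and parallelism shrink.
        reuse = potential_reuse(mapping)
        energy = 1e6 / reuse + mapping["pe_count"]
        delay = 1e4 / mapping["pe_count"] + 1e3 / reuse
        return energy * delay

    def random_mapping():
        return {
            "tile_h": random.choice([1, 2, 4, 8, 16]),
            "tile_w": random.choice([1, 2, 4, 8, 16]),
            "pe_count": random.choice([64, 128, 256]),
        }

    def search(num_candidates=1000, reuse_threshold=16):
        best, best_edp = None, float("inf")
        for _ in range(num_candidates):
            m = random_mapping()
            # Cheap pruning step: skip low-reuse mappings before the
            # expensive EDP evaluation.
            if potential_reuse(m) < reuse_threshold:
                continue
            edp = energy_delay_product(m)
            if edp < best_edp:
                best, best_edp = m, edp
        return best, best_edp

    if __name__ == "__main__":
        mapping, edp = search()
        print("best mapping:", mapping, "EDP:", edp)

The sketch samples candidates at random for simplicity; the dissertation's heuristic instead uses a deep RL agent whose reward is derived from the maximum potential operand reuse.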

Keywords

DNN Mapping Search, Reinforcement Learning, Energy efficiency

Citation

Portions of this document appear in: Suyash Bakshi and Lennart Johnsson. 2023. Computationally Efficient DNN Mapping Search Heuristic using Deep Reinforcement Learning. ACM Trans. Embed. Comput. Syst. 22, 5s, Article 115 (October 2023), 21 pages. https://doi.org/10.1145/3609110