Computationally Efficient DNN Mapping Search Heuristic Using Deep Reinforcement Learning

dc.contributor.advisor: Johnsson, Lennart
dc.contributor.committeeMember: Vilalta, Ricardo
dc.contributor.committeeMember: Subhlok, Jaspal
dc.contributor.committeeMember: Mattson, Timothy G.
dc.creator: Bakshi, Suyash
dc.date.accessioned: 2024-01-26T23:37:42Z
dc.date.created: December 2023
dc.date.issued: 2023-12
dc.date.updated: 2024-01-26T23:37:43Z
dc.description.abstract: In this dissertation, we present a computationally efficient Reinforcement Learning (RL) search heuristic for finding high-quality mappings of N perfectly nested loops, such as loops in Convolutional Neural Networks (CNNs) for high-dimensional data sets, to accelerators with multiple processing elements (PEs), each with a memory hierarchy and a shared memory for all PEs. Our RL search uses maximum potential operand reuse to guide the search process. It is computationally inexpensive compared to the RL reward functions used by state-of-the-art mapping search methods. The maximum potential operand reuse for mappings is also used for an effective mapping pruning strategy that significantly contributes to the overall computational effectiveness of our RL search method. We also present a search-space state representation and an associated parsing strategy that produces only valid mappings. Unlike supervised learning methods, our RL search does not require training datasets and is thus easily applicable to different loop nests and accelerators. We show that our RL search heuristic, evaluated for 19 3D convolution layers, ten initial states, three 256-PE accelerator configurations, and two different operand datatypes, required on average only 10% of Timeloop's random search floating-point operations, yet found mappings with on average 13% lower Energy-Delay-Product (EDP) for the same number of valid mappings. Further, the lowest-EDP mappings found using our method had on average a 6.5x higher EDP than simple lower-bound EDP estimates, with the best case being only 1.6x higher.
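For readers unfamiliar with the Energy-Delay-Product metric referenced in the abstract, EDP is conventionally the product of the energy consumed and the execution latency of a mapping, with lower values indicating better mappings. A minimal illustrative sketch (the function name and the numeric values below are hypothetical, not taken from the dissertation):

```python
def energy_delay_product(energy_joules, delay_seconds):
    """EDP = energy * delay; a lower EDP indicates a better mapping."""
    return energy_joules * delay_seconds

# Hypothetical candidate mappings: (energy in J, latency in s)
edp_a = energy_delay_product(0.5, 0.002)   # 0.001 J*s
edp_b = energy_delay_product(0.4, 0.003)   # 0.0012 J*s
best = min(edp_a, edp_b)                   # mapping A is preferred
```

Because EDP weights energy and latency equally, a mapping that halves latency while doubling energy (or vice versa) scores the same, which is why lower-bound EDP estimates provide a useful yardstick for search quality.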
dc.description.department: Computer Science, Department of
dc.format.digitalOrigin: born digital
dc.format.mimetype: application/pdf
dc.identifier.citation: Portions of this document appear in: Suyash Bakshi and Lennart Johnsson. 2023. Computationally Efficient DNN Mapping Search Heuristic using Deep Reinforcement Learning. ACM Trans. Embed. Comput. Syst. 22, 5s, Article 115 (October 2023), 21 pages. https://doi.org/10.1145/3609110
dc.identifier.uri: https://hdl.handle.net/10657/16216
dc.language.iso: eng
dc.rights: The author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. UH Libraries has secured permission to reproduce any and all previously published materials contained in the work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).
dc.subject: DNN Mapping Search
dc.subject: Reinforcement Learning
dc.subject: Energy efficiency
dc.title: Computationally Efficient DNN Mapping Search Heuristic Using Deep Reinforcement Learning
dc.type.dcmi: text
dc.type.genre: Thesis
dcterms.accessRights: The full text of this item is not available at this time because the student has placed this item under an embargo for a period of time. The Libraries are not authorized to provide a copy of this work during the embargo period.
local.embargo.lift: 2025-12-01
local.embargo.terms: 2025-12-01
thesis.degree.college: College of Natural Sciences and Mathematics
thesis.degree.department: Computer Science, Department of
thesis.degree.discipline: Computer Science
thesis.degree.grantor: University of Houston
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Files

License bundle

Name: PROQUEST_LICENSE.txt
Size: 4.43 KB
Format: Plain Text

Name: LICENSE.txt
Size: 1.81 KB
Format: Plain Text