Contributor: Fu, Xin
Date: 2023-08
Date available: 2024-01-24
Title: Achieving Low-Cost and High-Efficient Robust Inference and Training for Convolutional Neural Networks
Type: Thesis, born digital
Format: application/pdf
Language: eng
URI: https://hdl.handle.net/10657/16028
Subjects: Machine Learning; CNN; Robustness; Security; Accelerator

Portions of this document appear in: L. Wang, M. Sistla, M. Chen, and X. Fu, "BS-pFL: Enabling Low-Cost Personalized Federated Learning by Exploring Weight Gradient Sparsity," in 2022 International Joint Conference on Neural Networks (IJCNN), 2022, pp. 1-8; and in: L. Wang, Q. Wan, P. Ma, J. Wang, M. Chen, S. L. Song, and X. Fu, "Enabling High-Efficient ReRAM-Based CNN Training via Exploiting Crossbar-Level Insignificant Writing Elimination," IEEE Transactions on Computers, pp. 1-12, 2023.

Rights: The author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. UH Libraries has secured permission to reproduce any and all previously published materials contained in the work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).

Abstract: The popularity of Convolutional Neural Networks (CNNs) has skyrocketed in recent years, making them one of the most widely used and influential deep learning architectures in computer vision and image recognition. However, deploying CNNs in practice poses several notable challenges. The first challenge concerns training efficiency, as the computational demands of training CNNs can be substantial. The second arises in the domain of security, as CNNs are susceptible to a variety of adversarial attacks, including adversarial inputs and backdoor insertions. This dissertation addresses both of these challenges. To tackle training efficiency, two novel approaches are presented: WRR (Write Reduction on ReRAM) and BS-pFL (Bit Stream guided personalized Federated Learning). WRR introduces an in-memory CNN training accelerator architecture that leverages the emerging resistive random access memory (ReRAM). BS-pFL, on the other hand, is a lightweight pruning-based CNN training framework designed for edge devices; it improves training efficiency by using bit streams to predict and prune insignificant parameters, eliminating the need to compute them. To address the security concerns, two defense schemes are introduced: PV-NA (Process Variation Guided Neuron Aware Noise Injection) and LP-RFL (Label Guided Pruning for Robust Federated Learning). PV-NA is a noise-based defense scheme that specifically targets adversarial inputs; it uses undervolting and process variation to generate diverse hardware-based noise patterns, effectively mitigating the impact of misleading adversarial noise during inference. LP-RFL, in contrast, is a low-cost weight-gradient pruning defense developed to counter backdoor insertions during training. By efficiently pruning the significant malicious weight gradients contributed by backdoored training examples, LP-RFL prevents backdoor insertions and fortifies the integrity of the CNN model.
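
As a rough illustration of the gradient-pruning idea described in the abstract, the sketch below shows generic magnitude-based pruning of weight gradients in PyTorch. It is only a minimal sketch of the general technique; it is not the dissertation's bit-stream predictor (BS-pFL) or label-guided criterion (LP-RFL), and the function name and `keep_ratio` parameter are illustrative assumptions.

```python
# Minimal sketch: magnitude-based pruning of weight gradients (illustrative only,
# not the dissertation's BS-pFL/LP-RFL algorithms). Assumes gradients have already
# been populated by loss.backward().
import torch


def prune_insignificant_grads(model: torch.nn.Module, keep_ratio: float = 0.1) -> None:
    """Zero all but the largest `keep_ratio` fraction of each parameter's gradients."""
    for param in model.parameters():
        if param.grad is None:
            continue
        flat = param.grad.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        # Threshold = k-th largest gradient magnitude within this tensor.
        threshold = torch.topk(flat, k).values.min()
        mask = (param.grad.abs() >= threshold).to(param.grad.dtype)
        # Small-magnitude gradients are zeroed and thus skipped by the optimizer step.
        param.grad.mul_(mask)
```

In a training loop this would be called between `loss.backward()` and `optimizer.step()`, so the update only spends work on the retained gradients.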
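The noise-injection defense can likewise be pictured in software, although PV-NA itself derives its noise patterns from hardware undervolting and process variation rather than from a random number generator. The following sketch is a software emulation under that stated assumption; the function name, `noise_std`, and the seed-per-pattern convention are all hypothetical.

```python
# Software emulation of a fixed per-weight noise pattern at inference time
# (assumption: this only mimics the effect of hardware process-variation noise,
# it is not the PV-NA mechanism). Assumes a CPU-resident PyTorch model.
import copy
import torch


def make_noisy_copy(model: torch.nn.Module, noise_std: float = 0.01,
                    seed: int = 0) -> torch.nn.Module:
    """Return a copy of `model` whose weights carry one deterministic noise pattern."""
    noisy = copy.deepcopy(model)
    gen = torch.Generator().manual_seed(seed)  # one seed ~ one fixed "hardware" pattern
    with torch.no_grad():
        for param in noisy.parameters():
            noise = torch.randn(param.shape, generator=gen) * noise_std
            param.add_(noise.to(param.device, param.dtype))
    return noisy
```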