Techniques with Impact of Process Variation in CNN Neuron Computation Output

Date

2022-08-09

Abstract

A convolutional neural network (CNN) is a type of neural network commonly used to analyze visual images. Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake in prediction or to yield an incorrect classification. During an adversarial attack from an outside source, noise is injected into the original collected samples; this noise then propagates along different paths composing the neural network, often through minor neurons, and affects the final outcome of the model, an effect comparable to the impact of process variation on circuit outputs. Each input to a CNN activates a sequence of neurons. Inputs that are correctly predicted as the same class tend to activate a specific set of neurons distinct from those activated by other inputs; in other words, a small but distinctive portion of the CNN contributes to each predicted class. From this perspective, the way adversarial samples modify the inference result can be viewed as activating a sequence of neurons different from the canonical sequence associated with the predicted output. This paper investigates methods to analyze paths in CNN inference that allow the detection of adversarial attacks. The central idea is to distinguish important neurons, the set of neurons that contribute significantly to the inference output, from unimportant neurons, which are normally not activated or triggered. To mitigate the negative impact caused by adversarial noise, we approximate the paths of neurons during inference so that the important neurons can be kept intact while additional noise is added to offset, and therefore neutralize, the unimportant neurons that are activated or triggered by the attack. The paper also investigates methods to simulate such noise interference and to mitigate the impact of the activated insignificant neurons, thus correcting the output of adversarial examples, by overclocking and/or undervolting an FPGA and running neuron-like circuits under these conditions. Operated this way, the FPGA serves as an essentially energy-free source with which to introduce noise. With data on the correlation between baseline and incorrect values, the range of error can be controlled and used to neutralize the activated unimportant neurons.
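To illustrate the kind of path analysis described above, the sketch below shows one possible way to profile which neurons (here, channels of a convolutional layer) a model activates for each predicted class, using PyTorch forward hooks. This is an illustrative assumption of ours, not code from the paper: the function name profile_class_activations, the channel-level granularity, and the use of mean absolute activation as the importance score are all hypothetical choices.

    # Hypothetical sketch: profile per-class neuron (channel) activations of a CNN.
    import torch

    def profile_class_activations(model, loader, layer, num_classes, device="cpu"):
        """Return a (num_classes, channels) tensor of mean absolute activations
        of `layer`, grouped by the class the model predicts for each input.
        Channels with high values for a class act as that class's 'important'
        neurons; the rest are candidates for noise-based neutralization."""
        sums, counts, acts = None, torch.zeros(num_classes, device=device), {}

        def hook(_module, _inputs, output):
            # Collapse spatial dimensions: one activation value per channel per sample.
            acts["a"] = output.detach().abs().mean(dim=(2, 3))

        handle = layer.register_forward_hook(hook)
        model.eval()
        with torch.no_grad():
            for x, _ in loader:
                x = x.to(device)
                pred = model(x).argmax(dim=1)  # the hook fills acts["a"] during this call
                a = acts["a"]
                if sums is None:
                    sums = torch.zeros(num_classes, a.shape[1], device=device)
                for c in range(num_classes):
                    mask = pred == c
                    if mask.any():
                        sums[c] += a[mask].sum(dim=0)
                        counts[c] += mask.sum()
        handle.remove()
        return sums / counts.clamp(min=1).unsqueeze(1)

Thresholding each row of the returned matrix (for example, keeping the most active fraction of channels per class) yields a binary importance mask; inputs whose activated set deviates strongly from the mask of their predicted class can be flagged as potentially adversarial.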

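The mitigation step, keeping important neurons intact while adding offsetting noise to the unimportant ones, could be prototyped in software before moving to overclocked or undervolted FPGA hardware. The wrapper below is a minimal sketch under our own assumptions: the class name, the per-channel Gaussian noise model standing in for FPGA-induced timing errors, and the sigma value are illustrative, not parameters reported in the paper.

    # Hypothetical sketch: add zero-mean noise only to channels NOT marked important,
    # as a software stand-in for the controlled error an overclocked/undervolted
    # FPGA would introduce into unimportant neuron computations.
    import torch
    import torch.nn as nn

    class UnimportantChannelNoise(nn.Module):
        def __init__(self, layer, important_mask, sigma=0.05):
            super().__init__()
            self.layer = layer
            self.sigma = sigma  # noise scale; in hardware this maps to the clock/voltage margin
            # important_mask: bool tensor of shape (channels,); True = keep intact.
            self.register_buffer("noise_mask", (~important_mask).float().view(1, -1, 1, 1))

        def forward(self, x):
            out = self.layer(x)
            noise = torch.randn_like(out) * self.sigma
            # Important channels receive no noise; unimportant ones are perturbed.
            return out + noise * self.noise_mask

    # Illustrative usage: wrap one convolutional layer with a mask derived from
    # the activation profile, then run inference as usual.
    # model.conv2 = UnimportantChannelNoise(model.conv2, important_mask)

Measuring how the correlation between the unperturbed (baseline) outputs and the noisy outputs changes with sigma is one way to bound the error range before committing to specific overclocking or undervolting settings.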
Keywords

Convolutional neural networks, Adversarial attack, Process variation
