Deep neural networks (DNNs) have demonstrated the ability to surpass human performance in many areas. With the help of quantization, artificial intelligence (AI)-powered devices are ubiquitously deployed. Yet, these easily accessible AI-powered edge devices become targets of malicious users, who can compromise the privacy and integrity of the inference process. This paper proposes two adversarial attack scenarios, comprising three threat models, that cripple local binary pattern networks (LBPNet). These attacks maliciously flip a limited number of susceptible bits in the kernels held in the system's shared memory. The threat can be mounted through a Row-Hammer attack and significantly degrades the model's accuracy. Our preliminary simulation results demonstrate that flipping only the most significant bit in the first LBP layer decreases the accuracy from 99.51% to 18% on the MNIST dataset. We then briefly discuss potential hardware- and software-oriented defense mechanisms as countermeasures to such attacks.
Arman Roohi, Shaahin Angizi
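To illustrate the fault model the abstract describes, the following is a minimal sketch of flipping the most significant (sign) bit of quantized kernel weights resident in memory. It assumes int8-quantized kernels; the `flip_msb` helper and the sample values are illustrative assumptions, not taken from the paper, and a real Row-Hammer attack would induce such flips physically rather than in software.

```python
import numpy as np

def flip_msb(weights_q: np.ndarray) -> np.ndarray:
    """Emulate a Row-Hammer-style fault: XOR the most significant bit
    of each int8 weight, which flips its sign bit in two's complement."""
    raw = weights_q.view(np.uint8)     # reinterpret bytes without copying values
    return (raw ^ 0x80).view(np.int8)  # toggle bit 7, view back as signed

# Hypothetical int8 kernel values (illustrative only)
kernel = np.array([23, -77, 5, -1], dtype=np.int8)
faulty = flip_msb(kernel)
print(faulty.tolist())  # each weight's sign bit is inverted
```

Because the MSB is the sign bit in two's-complement encoding, a single flip moves each weight roughly half the representable range away, which is consistent with the large accuracy drop the abstract reports.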