Abstract
Existing deep neural networks (DNNs) are computationally expensive and memory intensive, which hinders their deployment in novel nanoscale devices and in applications with limited memory resources or strict latency requirements. In this paper, a novel approach to accelerating on-chip learning systems using memristive quantized neural networks (M-QNNs) is presented. A practical problem with multilevel memristive synaptic weights, caused by device-to-device (D2D) and cycle-to-cycle (C2C) variations, is considered: different levels of Gaussian noise are added to the memristive model during each weight adjustment. An alternative method of building M-QNNs from memristors with binary states is also presented, which suffers less from D2D and C2C variations than multilevel memristors. Furthermore, methods for mitigating sneak-path issues in memristive crossbar arrays are proposed. The M-QNN approach is evaluated on two image classification datasets: a ten-digit number dataset and the Modified National Institute of Standards and Technology (MNIST) handwritten-digit dataset. In addition, input images corrupted with different levels of zero-mean Gaussian noise are tested to verify the robustness of the proposed method. A further highlight of the proposed method is that it significantly reduces computation time and memory usage during image recognition.
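The variation model described above, in which quantized memristive weights are perturbed by Gaussian noise during each adjustment, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, noise magnitudes, and the binary {-1, +1} state mapping are illustrative assumptions, with D2D variation modeled as a fixed per-device offset and C2C variation as noise redrawn on every programming cycle.

```python
import numpy as np

def quantize_binary(w):
    # Map continuous weights to two memristor states, here assumed {-1, +1}
    # (sign quantization; the actual state encoding is an assumption).
    return np.where(w >= 0.0, 1.0, -1.0)

def program_with_variation(w_target, sigma_d2d=0.05, sigma_c2c=0.02, rng=None):
    # Simulate writing target weights to a memristor array:
    #   - D2D variation: a fixed zero-mean Gaussian offset per device
    #   - C2C variation: fresh zero-mean Gaussian noise each programming cycle
    # The sigma values are placeholder magnitudes, not measured device data.
    rng = np.random.default_rng() if rng is None else rng
    d2d = rng.normal(0.0, sigma_d2d, size=w_target.shape)
    c2c = rng.normal(0.0, sigma_c2c, size=w_target.shape)
    return w_target + d2d + c2c

# Example: quantize a small weight matrix, then add programming variation.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_bin = quantize_binary(w)
w_prog = program_with_variation(w_bin, rng=rng)
```

Binary states keep the written conductances far apart, so the same noise magnitudes are less likely to flip a stored value than with multilevel weights, which is the intuition behind the binary-memristor variant discussed above.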
Original language | English |
---|---|
Article number | 8705375 |
Pages (from-to) | 1875-1887 |
Number of pages | 13 |
Journal | IEEE Transactions on Cybernetics |
Volume | 51 |
Issue number | 4 |
DOIs | |
Publication status | Published - Apr 2021 |
Externally published | Yes |
Keywords
- Acceleration
- crossbar array
- image processing
- image recognition
- memristor
- quantized convolutional neural networks (CNNs)
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
- Electrical and Electronic Engineering