A Dual-Precision and Low-Power CNN Inference Engine Using a Heterogeneous Processing-in-Memory Architecture

Sangwoo Jung, Jaehyun Lee, Dahoon Park, Youngjoo Lee, Jong Hyeok Yoon, Jaeha Kung

Research output: Contribution to journal › Article › peer-review

Abstract

In this article, we present an energy-scalable CNN model that can adapt to different hardware resource constraints. Specifically, we propose a dual-precision network, named DualNet, that leverages two independent bit-precision paths (INT4 and ternary-binary). DualNet achieves both high accuracy and low complexity by balancing the ratio between the two paths. We also present an evolutionary algorithm that automatically searches for the optimal ratios. In addition to this novel CNN architecture, we develop heterogeneous processing-in-memory (PIM) hardware that integrates SRAM- and eDRAM-based PIMs to compute the two precision paths in parallel. To verify the energy efficiency of DualNet running on the heterogeneous PIM, we prototype a test chip in 28-nm CMOS technology. To maximize hardware efficiency, we employ an improved data mapping scheme that deploys DualNets effectively across multiple PIM arrays. With the proposed SW-HW co-optimization, we obtain the most energy-efficient DualNet model operating on the actual PIM hardware. Compared with quantized networks using a single bit precision, DualNet reduces energy consumption, memory footprint, and latency by 29.0%, 49.5%, and 47.3% on average, respectively, on the CIFAR-10/100 and ImageNet datasets.
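As a rough illustration of the dual-precision idea, the sketch below (PyTorch) splits a convolution's output channels between an INT4 path and a ternary path according to a ratio parameter. This is a minimal sketch under assumptions, not the authors' implementation: the channel-wise split, the quantizer details, and all names (DualPrecisionConv, quantize_int4, quantize_ternary, ratio) are illustrative, and the paper's ternary-binary path is simplified here to plain ternary weights.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def quantize_int4(w):
        # Symmetric 4-bit quantization: round to integer levels in [-8, 7],
        # then rescale back to the weight range.
        scale = w.abs().max() / 7.0
        return torch.clamp(torch.round(w / scale), -8, 7) * scale

    def quantize_ternary(w):
        # Ternary quantization: map weights to {-s, 0, +s} using a
        # magnitude threshold (a common heuristic, e.g. 0.7 * mean|w|).
        thresh = 0.7 * w.abs().mean()
        mask = (w.abs() > thresh).float()
        s = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
        return torch.sign(w) * mask * s

    class DualPrecisionConv(nn.Module):
        """Hypothetical dual-precision block: a fraction `ratio` of the
        output channels is computed with INT4 weights, the remainder with
        ternary weights; the two partial results are concatenated."""
        def __init__(self, in_ch, out_ch, ratio=0.5, k=3):
            super().__init__()
            self.int4_ch = int(round(out_ch * ratio))
            self.tern_ch = out_ch - self.int4_ch
            self.conv_int4 = nn.Conv2d(in_ch, self.int4_ch, k, padding=k // 2, bias=False)
            self.conv_tern = nn.Conv2d(in_ch, self.tern_ch, k, padding=k // 2, bias=False)

        def forward(self, x):
            # Quantize each path's weights at inference time and run both
            # convolutions; on the proposed hardware these would execute in
            # parallel on the SRAM- and eDRAM-based PIM arrays.
            y4 = F.conv2d(x, quantize_int4(self.conv_int4.weight), padding=self.conv_int4.padding)
            yt = F.conv2d(x, quantize_ternary(self.conv_tern.weight), padding=self.conv_tern.padding)
            return torch.cat([y4, yt], dim=1)

The per-layer ratio is the quantity the paper's evolutionary algorithm searches over, trading accuracy against energy; a simple fitness loop over candidate ratio vectors would be one plausible starting point.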

Original language: English
Pages (from-to): 5546-5559
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
Volume: 71
Issue number: 12
DOIs:
State: Published - 2024

Bibliographical note

Publisher Copyright:
© 2024 IEEE.

Keywords

  • Convolutional neural networks
  • SW-HW co-optimization
  • deep learning
  • mixed-precision quantization
  • processing-in-memory
