Revisiting HyperDimensional Learning for FPGA and Low-Power Architectures

  • Mohsen Imani
  • Zhuowen Zou
  • Samuel Bosch
  • Sanjay Anantha Rao
  • Sahand Salamat
  • Venkatesh Kumar
  • Yeseong Kim
  • Tajana Rosing

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

69 Scopus citations

Abstract

Today's applications use machine learning algorithms to analyze data collected from swarms of devices on the Internet of Things (IoT). However, most existing learning algorithms are too complex to enable real-time learning on IoT devices with limited resources and computing power. Recently, hyperdimensional computing (HDC) was introduced as an alternative computing paradigm for efficient and robust learning. HDC emulates cognitive tasks by representing values as patterns of neural activity in high-dimensional space. HDC first encodes all data points into high-dimensional vectors, then efficiently performs the learning task using a well-defined set of operations. Existing HDC solutions have two main issues that hinder their deployment on low-power embedded devices: (i) the encoding module is costly, dominating 80% of the entire training performance, and (ii) the HDC model size and computation cost grow significantly with the number of classes in online inference. In this paper, we propose a novel architecture, LookHD, which enables real-time HDC learning on low-power edge devices. LookHD exploits computation reuse to memoize the encoding module, reducing its computation to a single memory access. LookHD also addresses inference scalability by exploiting the mathematics governing HDC to compress the trained HDC model into a single hypervector. We show how the proposed architecture can be implemented on existing low-power architectures: an ARM processor and an FPGA design. We evaluate the efficiency of the proposed approach on a wide range of practical classification problems such as activity recognition, face recognition, and speech recognition. Our evaluations show that LookHD achieves, on average, 28.3× faster and 97.4× more energy-efficient training compared to the state-of-the-art HDC implementation on FPGA. Similarly, in inference, LookHD is 2.2× faster, 4.1× more energy-efficient, and has a 6.3× smaller model size than the same state-of-the-art algorithms.
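The HDC workflow the abstract describes — encode data points into high-dimensional vectors, bundle them into class hypervectors, and classify by similarity — can be sketched in a few lines. This is a minimal illustration of generic HDC with a precomputed level-hypervector lookup table (a standard technique that makes encoding a quantized value a single table access, in the spirit of LookHD's computation reuse); the dimensionality, level count, and all function names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000          # hypervector dimensionality (illustrative; HDC papers often use ~10,000)
N_FEATURES = 8
N_LEVELS = 16     # quantization levels for the value lookup table

# Random bipolar "base" hypervector per input feature (carries position information)
base = rng.choice([-1, 1], size=(N_FEATURES, D))
# Precomputed "level" hypervectors: encoding a quantized value is one table lookup
level = rng.choice([-1, 1], size=(N_LEVELS, D))

def encode(x):
    """Encode a feature vector (values in [0, 1]) into a bipolar hypervector."""
    idx = np.minimum((x * N_LEVELS).astype(int), N_LEVELS - 1)
    # bind each feature's level hypervector with its base vector, then bundle
    return np.sign(np.sum(base * level[idx], axis=0))

def train(samples, labels, n_classes):
    """Bundle encoded samples into one class hypervector per class."""
    model = np.zeros((n_classes, D))
    for x, y in zip(samples, labels):
        model[y] += encode(x)
    return model

def predict(model, x):
    """Classify by cosine similarity against each class hypervector."""
    h = encode(x)
    sims = model @ h / (np.linalg.norm(model, axis=1) * np.linalg.norm(h) + 1e-9)
    return int(np.argmax(sims))
```

Because `encode` reduces to table lookups and element-wise products, it maps naturally onto the low-power FPGA and ARM targets the paper considers; the model-compression step that folds the trained model into a single hypervector is specific to LookHD and is not reproduced here.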

Original language: English
Title of host publication: Proceedings - 27th IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Publisher: IEEE Computer Society
Pages: 221-234
Number of pages: 14
ISBN (Electronic): 9780738123370
DOIs
State: Published - Feb 2021
Event: 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021 - Virtual, Seoul, Korea, Republic of
Duration: 27 Feb 2021 → 1 Mar 2021

Publication series

Name: Proceedings - International Symposium on High-Performance Computer Architecture
Volume: 2021-February
ISSN (Print): 1530-0897

Conference

Conference: 27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021
Country/Territory: Korea, Republic of
City: Virtual, Seoul
Period: 27/02/21 → 1/03/21

Bibliographical note

Publisher Copyright:
© 2021 IEEE.

Keywords

  • Brain-inspired computing
  • FPGA
  • HyperDimensional computing
  • Machine learning
  • Real-Time learning
