Abstract
In this paper, we propose a novel storage architecture, called an Inference-Enabled SSD (IESSD), which employs FPGA-based DNN inference accelerators inside an SSD. IESSD is capable of performing DNN operations inside an SSD, avoiding frequent data movement between application servers and data storage and thereby boosting the analytics performance of DNN applications. Moreover, by placing accelerators near the data within an SSD, IESSD delivers scalable analytics performance that improves with the amount of data to analyze. To evaluate its effectiveness, we implement an FPGA-based proof-of-concept prototype of IESSD and carry out a case study with an image tagging (classification) application. Our preliminary results show that IESSD achieves 1.81× better performance and 5.31× lower power consumption than a conventional system with GPU accelerators.
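The data-movement argument in the abstract can be made concrete with a back-of-the-envelope sketch. The following is not from the paper: the quantities (`NUM_IMAGES`, `IMAGE_BYTES`, `TAG_BYTES`, `MODEL_BYTES`) and the helper functions are illustrative assumptions chosen only to show why shipping a model to the drive once and returning compact tags moves far less data than reading every raw image to the host.

```python
# Back-of-the-envelope comparison of host<->storage traffic for an image
# tagging workload. All sizes are illustrative assumptions, not figures
# reported in the paper.

NUM_IMAGES = 1_000_000
IMAGE_BYTES = 150 * 1024        # assumed average image size (~150 KiB)
TAG_BYTES = 16                  # assumed size of a returned tag record
MODEL_BYTES = 20 * 1024 * 1024  # assumed DNN model shipped to the drive once


def conventional_traffic(n_images: int) -> int:
    """Host reads every raw image from the SSD, then runs inference on a GPU."""
    return n_images * IMAGE_BYTES


def iessd_traffic(n_images: int) -> int:
    """Host sends the model to the drive once; only compact tags come back."""
    return MODEL_BYTES + n_images * TAG_BYTES


conv = conventional_traffic(NUM_IMAGES)
near = iessd_traffic(NUM_IMAGES)
print(f"conventional: {conv / 2**30:.1f} GiB over the host interface")
print(f"IESSD:        {near / 2**30:.3f} GiB over the host interface")
print(f"reduction:    {conv / near:.0f}x less data moved")
```

Under these assumed sizes, only the compact results cross the host interface, so the traffic saving grows with the number of images scanned; this is the same scalability argument the abstract makes for placing accelerators near the data.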
| Original language | English |
|---|---|
| Article number | 8770061 |
| Pages (from-to) | 13-17 |
| Number of pages | 5 |
| Journal | IEEE Computer Architecture Letters |
| Volume | 19 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1 Jan 2020 |
Bibliographical note
Publisher Copyright: © 2002-2011 IEEE.
Keywords
- Solid-state drives
- convolutional neural networks
- deep neural networks
- in-storage processing