Abstract
Running Internet of Things applications on general-purpose processors results in large energy and performance overheads due to the high cost of data movement. Processing in-memory is a promising solution that reduces this cost by processing data locally inside the memory. In this paper, we design a Multi-Purpose In-Memory Processing (MPIM) system, which can serve both as main memory and as a processing unit. MPIM consists of multiple crossbar memories capable of efficient in-memory computation. Instead of transferring large datasets to the processors, MPIM provides two important in-memory processing capabilities: (i) nearest-neighbor data search and (ii) bitwise operations, including OR, AND, and XOR, using small analog sense amplifiers. The experimental results show that MPIM can achieve up to 5.5x energy savings and 19x speedup for search operations compared to an AMD GPU-based implementation. For bitwise vector processing, we present 11000x energy improvements with 62x speedup over SIMD-based computation, while outperforming other state-of-the-art in-memory processing techniques.
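To make the two MPIM capabilities concrete, the sketch below is a minimal software reference model of what the hardware computes in place: a nearest-neighbor search over stored rows and row-wise OR/AND/XOR. The Hamming-distance metric and the function names are illustrative assumptions, not the paper's implementation; in MPIM itself the comparison is performed inside the crossbar with analog sense amplifiers rather than with explicit arithmetic.

```python
import numpy as np

def nearest_neighbor(memory_rows: np.ndarray, query: np.ndarray) -> int:
    """Return the index of the stored row closest to `query`.

    Hamming distance over bit-vectors is assumed here as a stand-in metric;
    MPIM realizes the search in-memory instead of moving the data out.
    """
    distances = np.count_nonzero(memory_rows != query, axis=1)
    return int(np.argmin(distances))

def bitwise_ops(row_a: np.ndarray, row_b: np.ndarray):
    """Row-wise OR, AND, and XOR of two stored rows."""
    return row_a | row_b, row_a & row_b, row_a ^ row_b

# Example: 8 stored 16-bit rows and a query pattern.
rng = np.random.default_rng(0)
rows = rng.integers(0, 2, size=(8, 16), dtype=np.uint8)
query = rng.integers(0, 2, size=16, dtype=np.uint8)
print("nearest row:", nearest_neighbor(rows, query))
print("OR/AND/XOR of rows 0 and 1:", bitwise_ops(rows[0], rows[1]))
```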
Original language | English
---|---
Title of host publication | 2017 22nd Asia and South Pacific Design Automation Conference, ASP-DAC 2017
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 757-763
Number of pages | 7
ISBN (Electronic) | 9781509015580
DOIs |
State | Published - 16 Feb 2017
Event | 22nd Asia and South Pacific Design Automation Conference, ASP-DAC 2017 - Chiba, Japan. Duration: 16 Jan 2017 → 19 Jan 2017
Publication series
Name | Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC
---|---
Conference
Conference | 22nd Asia and South Pacific Design Automation Conference, ASP-DAC 2017 |
---|---|
Country/Territory | Japan |
City | Chiba |
Period | 16/01/17 → 19/01/17 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.