Abstract
In this work, we design DigitalPIM, a digital processing-in-memory platform capable of accelerating fundamental big data algorithms in real time with orders-of-magnitude higher energy efficiency. Unlike existing near-data processing approaches such as HMC 2.0, which place additional low-power processing cores next to memory blocks, the proposed platform implements the entire algorithm directly in the memory blocks without extra processing units. In our platform, each memory block supports the essential operations, including bitwise logic, addition/multiplication, and search, internally in memory without reading any values out of the block. This significantly reduces the data movement and processing costs of the new architecture, while providing high scalability and parallelism for extensive computation. We exploit these essential operations to accelerate popular big data applications entirely in memory, including machine learning algorithms, query processing, and graph processing.
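The abstract's three in-memory primitives can be illustrated with a small software model. This is a hypothetical sketch, not the paper's actual interface: the `PIMBlock` class, its method names, and the row-based layout are illustrative assumptions showing how a block could compute bitwise logic, addition/multiplication, and associative search over its own rows without exporting data.

```python
# Toy model of a processing-in-memory block. Class and method names are
# illustrative assumptions, not the paper's actual architecture or API.
class PIMBlock:
    """A memory block of fixed-width rows supporting in-place operations."""

    def __init__(self, rows, width=32):
        self.width = width
        self.mask = (1 << width) - 1          # wrap arithmetic to row width
        self.rows = [r & self.mask for r in rows]

    # --- bitwise operations between two rows, result stored in a third ---
    def bitwise_and(self, i, j, dst):
        self.rows[dst] = self.rows[i] & self.rows[j]

    def bitwise_or(self, i, j, dst):
        self.rows[dst] = self.rows[i] | self.rows[j]

    # --- arithmetic performed inside the block ---
    def add(self, i, j, dst):
        self.rows[dst] = (self.rows[i] + self.rows[j]) & self.mask

    def multiply(self, i, j, dst):
        self.rows[dst] = (self.rows[i] * self.rows[j]) & self.mask

    # --- associative (content-addressable) search across all rows ---
    def search(self, key):
        """Return indices of all rows whose content matches `key`."""
        return [idx for idx, r in enumerate(self.rows) if r == key]


block = PIMBlock([0b1100, 0b1010, 0, 0])
block.bitwise_and(0, 1, 2)        # row 2 = 0b1100 & 0b1010 = 0b1000
block.add(0, 1, 3)                # row 3 = 12 + 10 = 22
matches = block.search(0b1000)    # search returns [2]
```

Note that `search` is what distinguishes this style of PIM from a plain scratchpad: the match is evaluated against every row in parallel in hardware, which is what enables the query- and graph-processing workloads the abstract mentions.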
| Original language | English |
|---|---|
| Title of host publication | GLSVLSI 2019 - Proceedings of the 2019 Great Lakes Symposium on VLSI |
| Publisher | Association for Computing Machinery |
| Pages | 429-434 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781450362528 |
| State | Published - 13 May 2019 |
| Event | 29th Great Lakes Symposium on VLSI, GLSVLSI 2019 - Tysons Corner, United States; Duration: 9 May 2019 → 11 May 2019 |
Publication series
| Name | Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI |
|---|
Conference
| Conference | 29th Great Lakes Symposium on VLSI, GLSVLSI 2019 |
|---|---|
| Country/Territory | United States |
| City | Tysons Corner |
| Period | 9/05/19 → 11/05/19 |
Bibliographical note
Publisher Copyright: © 2019 ACM.
Keywords
- Big data acceleration
- Energy efficiency
- Non-volatile memories
- Processing in memory