Abstract
Compute-in-memory (CIM) is a promising technique that minimizes data transport, the primary performance bottleneck and energy cost of most data-intensive applications. It has found widespread adoption in accelerating neural networks for machine learning applications. Utilizing a crossbar architecture with emerging non-volatile memories (eNVM) such as dense resistive random access memory (RRAM) or phase-change random access memory (PCRAM), various forms of neural networks can be implemented to greatly reduce power and increase on-chip memory capacity. However, compute-in-memory faces its own limitations at both the circuit and device levels. Although the crossbar architecture can greatly reduce data transport, the rigid nature of these large fixed-weight matrices forfeits the flexibility of traditional CMOS- and SRAM-based designs. In this work, we explore the synchronization barriers that arise from CIM constraints. Furthermore, we propose a new allocation algorithm and data flow, based on input data distributions, to maximize utilization and performance in compute-in-memory based designs. We demonstrate a 7.47\(\times\) performance improvement over a naive allocation method for CIM accelerators on ResNet18.
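To make the allocation idea concrete, below is a minimal Python sketch of one plausible reading of the approach: each layer receives at least enough crossbar arrays to store its weights, and spare arrays in the hardware budget are then handed out greedily as duplicate copies to whichever layer is currently the runtime bottleneck, where each layer's expected work would be estimated from its input data distribution. The layer counts, workload values, and array budget are hypothetical illustrations of the general idea, not the paper's exact algorithm.

# Hypothetical sketch of statistics-driven array allocation for a CIM
# accelerator. Workloads stand in for expected operation counts
# estimated from input data distributions; all numbers are illustrative.

def allocate(workloads, min_arrays, total_arrays):
    """Return a per-layer array count that greedily minimizes the
    slowest layer's runtime estimate, workloads[i] / arrays[i]."""
    arrays = list(min_arrays)
    spare = total_arrays - sum(arrays)
    assert spare >= 0, "budget too small to even store the weights"
    for _ in range(spare):
        # Duplicate the layer that currently bounds total runtime.
        bottleneck = max(range(len(arrays)),
                         key=lambda i: workloads[i] / arrays[i])
        arrays[bottleneck] += 1
    return arrays

if __name__ == "__main__":
    # Early convolutional layers see larger feature maps, so they do
    # more work per stored weight (hypothetical relative values).
    workloads = [8.0, 8.0, 4.0, 2.0, 1.0]  # relative expected ops
    min_arrays = [1, 1, 2, 4, 8]           # arrays needed for weights
    alloc = allocate(workloads, min_arrays, total_arrays=32)
    print("allocation:", alloc)
    print("bottleneck runtime:",
          max(w / a for w, a in zip(workloads, alloc)))

A naive allocation would stop at min_arrays, leaving high-traffic early layers as stragglers; the greedy duplication step is what keeps all arrays busy and all layers finishing at roughly the same time.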
Acknowledgement
This work was funded by the U.S. Department of Defense’s Multidisciplinary University Research Initiatives (MURI) Program under grant number FOA: N00014-16-R-FO05 and the Semiconductor Research Corporation under the Center for Brain Inspired Computing (C-BRIC) and Qualcomm.
Copyright information
© 2021 IFIP International Federation for Information Processing
About this paper
Cite this paper
Crafton, B., Spetalnick, S., Murali, G., Krishna, T., Lim, S.K., Raychowdhury, A. (2021). Statistical Array Allocation and Partitioning for Compute In-Memory Fabrics. In: Calimera, A., Gaillardon, P.-E., Korgaonkar, K., Kvatinsky, S., Reis, R. (eds) VLSI-SoC: Design Trends. VLSI-SoC 2020. IFIP Advances in Information and Communication Technology, vol 621. Springer, Cham. https://doi.org/10.1007/978-3-030-81641-4_15
DOI: https://doi.org/10.1007/978-3-030-81641-4_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-81640-7
Online ISBN: 978-3-030-81641-4