Implementation of STATIC-RANDOM-ACCESS-MEMORY-Based In-Memory Computing-architecture for improving Energy Efficiency

Year : 2025 | Volume : 15 | Issue : 01 | Page : 25-35
    By

  • Kiran Sharma,

  • Jitendra Ahir,

  • Laxmi Singh,

  1. Ph.D. Scholar, Department of Electronics and Communication Engineering, Rabindranath Tagore University, Bhopal, Madhya Pradesh, India
  2. Assistant Professor, Department of Electronics and Communication Engineering, Rabindranath Tagore University, Bhopal, Madhya Pradesh, India
  3. Professor, Department of Electronics and Communication Engineering, Rabindranath Tagore University, Bhopal, Madhya Pradesh, India

Abstract

In-memory computing (IMC) architectures support the growth of big data and high-performance computing by reducing the latency and power consumption of data processing. This paper proposes a static random-access memory (SRAM)-based IMC architecture. By completing an internal write-back, NMOS transistors increase computational efficiency and eliminate the need to read the computational output immediately. A 128×128 SRAM-IMC macro chip is designed in 78-nm technology. It achieves an energy efficiency of 55.3 TOPS/W at a 1.2 V supply voltage and a throughput of 224.1 GOPS/mm². A neural network using the proposed SRAM-IMC architecture achieves 95% accuracy with the mixed-signal design. SRAM-based IMC presents a viable way to address the rising energy requirements of data-centric applications in contemporary computing systems: by integrating processing into the memory arrays, it reduces data movement and thereby greatly enhances computational performance and energy efficiency. The implementation of the SRAM-based IMC architecture is examined in this paper, with particular attention paid to its approach, advantages, and challenges. We describe the design principles, energy-efficient features, and machine learning (ML) and artificial intelligence (AI) applications of the architecture. Through a comparative analysis, we show that SRAM-based IMC outperforms the conventional von Neumann architecture in latency and energy economy, opening the door to high-performance and environmentally sustainable computing systems.
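The figures of merit quoted above can be related to per-operation energy with a short calculation. The sketch below is illustrative only (function names and the example macro area are our own assumptions, not from the paper); it uses the fact that 1 TOPS/W equals 10¹² operations per joule.

```python
# Illustrative sketch: converting the abstract's reported figures of merit
# (55.3 TOPS/W energy efficiency, 224.1 GOPS/mm^2 area throughput) into
# more intuitive quantities. Function names and the area value are
# hypothetical, chosen for this example only.

def energy_per_op_fj(tops_per_watt: float) -> float:
    """Energy per operation in femtojoules.

    1 TOPS/W = 1e12 ops per joule, so energy/op (J) = 1 / (TOPS/W * 1e12),
    which is 1e15 / (TOPS/W * 1e12) in fJ.
    """
    return 1e15 / (tops_per_watt * 1e12)

def total_throughput_gops(gops_per_mm2: float, area_mm2: float) -> float:
    """Total throughput of a macro given its area (area is an assumption)."""
    return gops_per_mm2 * area_mm2

# At 55.3 TOPS/W, each operation costs roughly 18.1 fJ.
print(f"{energy_per_op_fj(55.3):.2f} fJ/op")
# For a hypothetical 1 mm^2 macro at 224.1 GOPS/mm^2:
print(f"{total_throughput_gops(224.1, 1.0):.1f} GOPS")
```

This kind of conversion makes it easy to compare IMC macros reported at different supply voltages and technology nodes on a common per-operation energy basis.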

Keywords: In-memory computing (IMC), static random-access memory (SRAM), row-by-row ADC, NMOS, ML, AI, XNOR gates

[This article belongs to Journal of VLSI Design Tools and Technology]

How to cite this article:
Kiran Sharma, Jitendra Ahir, Laxmi Singh. Implementation of STATIC-RANDOM-ACCESS-MEMORY-Based In-Memory Computing-architecture for improving Energy Efficiency. Journal of VLSI Design Tools and Technology. 2025; 15(01):25-35.
How to cite this URL:
Kiran Sharma, Jitendra Ahir, Laxmi Singh. Implementation of STATIC-RANDOM-ACCESS-MEMORY-Based In-Memory Computing-architecture for improving Energy Efficiency. Journal of VLSI Design Tools and Technology. 2025; 15(01):25-35. Available from: https://journals.stmjournals.com/jovdtt/article=2025/view=195632



Regular Issue Subscription Original Research
Volume 15
Issue 01
Received 21/01/2025
Accepted 25/01/2025
Published 28/01/2025
Publication Time 7 Days

