Methods for Setting Device Specifications for Analog In-Memory Computing Inference
Zhenyu Wu, Xin Su, Malte J. Rasch, et al.
Advanced Intelligent Systems, 2026
Analog in-memory computing (AIMC) based on non-volatile memories (NVMs) offers a transformative pathway toward ultra-energy-efficient deep learning, enabling high-speed, low-power inference with scalable performance. Recent progress has demonstrated floating-point-level accuracy through hardware-aware training, highlighting AIMC’s potential to rival conventional digital architectures. As NVM technologies for AIMC continue to advance, a systematic framework for defining device specification targets is critical for balancing inference accuracy and fabrication cost. Key device parameters, such as memory window, read and program noise, and conductance drift, directly influence inference fidelity across different time scales while simultaneously affecting manufacturability. Their strong interdependence complicates optimization, as tuning one parameter can relax or tighten constraints on others. To address this challenge, we propose a comprehensive methodology for mapping the multidimensional device specification space required to achieve floating-point-equivalent accuracy, using phase-change memory (PCM) as a representative platform. Cross-model evaluations on convolutional neural networks, recurrent neural networks, and transformer architectures reveal that device characteristics must be co-optimized within this coupled landscape. The resulting analysis identifies families of specification sets that deliver comparable accuracy, enabling flexible, cost-effective hardware design and paving the way for scalable, commercially viable AIMC deployment.
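To make the time-scale dependence of conductance drift concrete, the sketch below uses the power-law drift model commonly reported for PCM, G(t) = G0 · (t/t0)^(−ν), with optional Gaussian read noise. The drift exponent ν and noise level are illustrative placeholders, not specification values from the paper; this is a minimal sketch of the effect, not the authors' simulation framework.

```python
import numpy as np

def pcm_conductance(g0, t, t0=1.0, nu=0.05, read_noise_std=0.0, rng=None):
    """Power-law conductance drift commonly used to model PCM:
    G(t) = g0 * (t / t0) ** (-nu), plus optional additive Gaussian
    read noise. nu and read_noise_std here are illustrative values,
    not device specifications from the paper."""
    rng = rng if rng is not None else np.random.default_rng(0)
    g = g0 * (t / t0) ** (-nu)
    if read_noise_std > 0:
        g = g + rng.normal(0.0, read_noise_std, size=np.shape(g))
    return g

# A weight programmed to 25 uS, read back after 1 s, 1 hour, and 1 day:
times = np.array([1.0, 3600.0, 86400.0])
print(pcm_conductance(25.0, times, nu=0.05))
```

Because the decay is logarithmic in time, even a small ν steadily shifts all programmed weights between inference runs, which is why drift must be traded off jointly with memory window and programming noise rather than specified in isolation.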