
Everspin Wins $10.5M MRAM IP & Foundry Contract for Purdue’s ME Commons Project
Everspin Technologies, Inc., a global leader in the development and manufacturing of Magnetoresistive Random Access Memory (MRAM) persistent memory solutions, has been awarded a contract to collaborate with a consortium led by Purdue University on a cutting-edge program known as CHEETA (CMOS+MRAM Hardware for Energy Efficient AI), which will incorporate Everspin's MRAM technology. The four-year contract is valued at up to $10.5 million, with approximately $4 million allocated to Everspin Technologies in the current phase.
Everspin’s Commitment to Advancing MRAM Technology
Everspin Technologies has long been at the forefront of MRAM innovation, continuously evolving its memory solutions beyond their initial commercialization. Sanjeev Aggarwal, President and CEO of Everspin Technologies, emphasized the company’s pioneering efforts in expanding MRAM’s capabilities:
“MRAM has evolved far beyond its initial commercialization as a memory technology. Nearly two decades after Everspin first brought its MRAM to market, it has evolved its technology into a versatile, energy-efficient solution for computing and memory bandwidth challenges. Everspin continues to advance MRAM’s capabilities through our manufacturing facility in Chandler, Arizona, which has a long history of supporting both commercial MRAM and strategic radiation-hardened solutions for the Department of Defense (DoD). This expertise uniquely positions us to provide MRAM IP, manufacturing services, and design support for next-generation computing architectures.”
This initiative represents a pivotal step in furthering MRAM’s role in advanced computing applications, particularly in artificial intelligence (AI) acceleration and next-generation computing architectures. Incorporating MRAM technology into AI workloads is expected to significantly improve computational efficiency, reduce energy consumption, and strengthen system reliability.

The CHEETA Program and Its Objectives
The CHEETA program is designed to explore and develop MTJ-based In-Memory Compute (IMC) macros, which could revolutionize the architecture of next-generation neural accelerators. This initiative aims to leverage MRAM’s non-volatile memory properties to optimize AI computing processes. The primary goals of the CHEETA program include:
- Cross-Layer Exploration of MTJ-Based IMC Macros:
- Investigating the integration of magnetic tunnel junction (MTJ) technology into IMC architectures.
- Addressing challenges in data movement and energy efficiency within AI workloads.
- Enhancing Energy Efficiency:
- Everspin’s MTJs are designed to significantly reduce power consumption in memory transactions.
- Compared to traditional memory architectures, MTJ-based IMC macros are expected to cut power consumption by orders of magnitude.
- Reducing Latency and Improving Performance:
- By embedding computation directly into the memory, MRAM technology reduces data transfer delays.
- This leads to higher performance in AI applications and other compute-intensive tasks.
- Demonstrating Experimental Validation:
- A key deliverable of the project is the experimental demonstration of robust and energy-efficient IMC functionality.
- This proof of concept will validate MRAM’s potential to redefine conventional computing architectures.
The adoption of In-Memory Compute paradigms using MRAM technology could reshape the future of computing, particularly in AI and machine learning applications. With the ability to perform computation within the memory itself, MRAM-enabled IMC solutions promise to overcome current limitations in traditional von Neumann architectures, where data movement between memory and processing units creates performance bottlenecks.
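The data-movement bottleneck described above can be illustrated with a toy sketch. This is a conceptual model only, not Everspin's actual architecture: in a von Neumann design every weight crosses the memory bus before each multiply, while a crossbar-style in-memory-compute array conceptually produces the whole dot product where the weights already reside, so only inputs and results move.

```python
# Toy illustration (hypothetical, not Everspin's design) of why
# in-memory compute (IMC) reduces data movement in a neural-network layer.

def von_neumann_matvec(weights, x):
    """Fetch every weight across the memory bus, then multiply in the CPU."""
    fetches = 0
    out = []
    for row in weights:
        acc = 0
        for w, xi in zip(row, x):
            fetches += 1          # each weight crosses the memory bus
            acc += w * xi
        out.append(acc)
    return out, fetches

def imc_matvec(weights, x):
    """Crossbar-style IMC: weights stay in the memory array; only the
    input vector (in) and the result vector (out) are transferred."""
    transfers = len(x) + len(weights)
    out = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return out, transfers

weights = [[1, 2, 3], [4, 5, 6]]   # 2x3 weight matrix "stored" in memory
x = [1, 1, 1]

y1, bus_fetches = von_neumann_matvec(weights, x)
y2, imc_transfers = imc_matvec(weights, x)
assert y1 == y2 == [6, 15]
print(bus_fetches, imc_transfers)  # 6 weight fetches vs. 5 transfers
```

For a realistic layer with millions of weights reused across many inputs, the gap between weight fetches and input/output transfers grows accordingly, which is the intuition behind the energy and latency claims above.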
Everspin’s Role and Technological Expertise
Everspin Technologies’ proprietary AgILYST MRAM technology plays a central role in this initiative. Designed to support AI acceleration and next-generation memory architectures, AgILYST MRAM offers:
- High-speed performance with ultra-low latency.
- Endurance and reliability for mission-critical applications.
- Energy efficiency, making it an ideal candidate for AI workloads.
The Chandler, Arizona-based manufacturing facility is instrumental in the development and refinement of MRAM solutions. This facility has a long-standing history of producing high-quality MRAM products, catering to both commercial applications and specialized defense-related projects. The expertise gained from these domains equips Everspin with unique capabilities to meet the demands of the CHEETA program.
The Impact on AI and Next-Generation Computing
The increasing demand for AI and machine learning applications has pushed the limits of existing computing architectures. Traditional systems often struggle with power inefficiencies and data transfer limitations, which hinder performance and scalability. By integrating MRAM into AI workloads, the CHEETA project aims to:
- Enhance computational efficiency by reducing memory bottlenecks.
- Minimize energy consumption, making AI systems more sustainable.
- Improve overall system reliability and endurance.
As AI applications continue to expand into sectors such as healthcare, automotive, and industrial automation, the need for optimized memory and compute solutions grows. The successful implementation of MRAM-based IMC architectures could pave the way for highly efficient AI models that consume less power and operate with increased speed and accuracy.
Broader Industry Implications
The CHEETA initiative also has broader implications for the semiconductor industry. With the semiconductor sector facing challenges related to power efficiency and performance scaling, MRAM-based solutions could provide a new path forward. The adoption of MRAM in AI workloads and high-performance computing could lead to:
- Greater industry-wide adoption of MRAM technology.
- New standards for in-memory computing architectures.
- Collaborations between academia and industry to advance semiconductor innovation.
Furthermore, the strategic involvement of Purdue University and other consortium members highlights the importance of academia-industry partnerships in driving technological breakthroughs. By combining theoretical research with practical applications, the CHEETA program aims to bring real-world solutions to computing challenges faced by AI-driven enterprises.
Everspin’s involvement in the CHEETA project is a testament to the company’s leadership in the MRAM industry and its commitment to pioneering advancements in memory technology. As computing demands continue to evolve, MRAM’s role is expected to expand beyond traditional storage applications into advanced processing and AI acceleration.
Over the next four years, Everspin will continue to work closely with Purdue University and other consortium members to refine MRAM-based IMC architectures. The results of this collaboration could shape the future of AI processing, influencing how next-generation computing systems are designed and deployed.
As AI workloads grow increasingly complex, the need for energy-efficient, high-performance memory solutions becomes more critical. Everspin’s MRAM technology, combined with innovative approaches like in-memory computing, positions the company as a key player in the evolution of AI hardware solutions.
The $10.5 million contract awarded to Everspin Technologies marks a significant milestone in the development of MRAM technology for AI applications. Through the CHEETA initiative, Everspin is set to revolutionize memory and computing paradigms, making AI systems more efficient and sustainable. By addressing power and latency challenges, MRAM-enabled In-Memory Compute solutions have the potential to redefine the landscape of modern computing. With ongoing research and collaboration, Everspin remains at the forefront of memory innovation, driving the future of AI and next-generation computing architectures.