A research team from Peking University has reported an analog AI chip with unusual performance characteristics. According to a publication in Nature Communications, also covered by the South China Morning Post, the chip performs selected calculations twelve times faster than advanced digital processors while consuming only 0.5 percent of their energy. The tests included training recommendation systems on datasets whose size the authors compare to those of Netflix and Yahoo. (scmp: 23.01.26)
Results from Training and Image Compression
The chip demonstrated its advantages most clearly in recommendation systems, where large matrix operations dominate. The team also tested an image compression task, in which the system reconstructed images with nearly the same visual quality as fully digital, high-precision calculations. According to the researchers, the memory requirement was halved at the same time, so the result speaks to resource efficiency as much as to raw performance. The project thus addresses two bottlenecks at once: energy and storage.

A member of the team emphasized the significance of the development on social media. “This study pushes the boundaries of analog computing a step further,” explained Sun Zhong. The new chip handled more complex tasks while maintaining the speed and energy efficiency advantages of analog technology. This statement underscores the claim that this is not just a demonstration, but rather a tool for scalable AI workloads.
Why Analog Technology Works Differently Than Digital Chips
Digital processors operate with binary states, while analog approaches utilize continuous electrical quantities. This allows certain calculations, especially matrix multiplications, to be directly implemented in the physical structure of the component, potentially eliminating many computational steps. The chip is based on RRAM, or Resistive Random-Access Memory, where information is stored as conductivity values in memory cells. This enables the hardware to execute large operations in a single step, instead of breaking them down into numerous individual digital operations.
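The principle can be illustrated with a minimal numpy sketch of an idealized crossbar. The array shapes and values here are hypothetical; the point is that the column currents of a crossbar are, by Ohm's and Kirchhoff's laws, exactly a matrix-vector product, obtained in one physical step rather than many digital multiply-adds.

```python
import numpy as np

# Idealized RRAM crossbar: each cell stores a weight as a conductance G[i, j].
# Driving the row lines with voltages V yields column currents I = G^T @ V:
# the entire matrix-vector product emerges from the physics in a single step.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances (weights), 4 rows x 3 columns
V = rng.uniform(0.0, 1.0, size=4)       # input voltages on the row lines

I = G.T @ V  # analog multiply-accumulate: one column current per output

# A digital processor computes the same result element by element instead:
I_digital = np.array([sum(G[i, j] * V[i] for i in range(4)) for j in range(3)])

assert np.allclose(I, I_digital)
```

The sketch ignores everything that makes real analog hardware hard (noise, nonlinearity, limited conductance range), but it shows why matrix multiplication maps so naturally onto the component's physical structure.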
Back in October 2025, the team described a method for precise and scalable analog matrix computation in Nature Electronics. At the time, the focus was on a classic problem of analog computers: accuracy as the scale increases. Now, the question is whether the new architecture can overcome this hurdle in practical AI scenarios. Only then will the performance figures translate to more than just individual benchmarks.
Criticisms and Next Hurdles for Practical Application
In specialist forums like Hacker News, commentators are skeptical because analog systems have historically been difficult to industrialize. They cite signal noise, variations between components, and the overhead of analog-to-digital conversion, which can negate some of the efficiency gains. One user writes: “It’s not clear to me how they tested this. The code is only available upon request.” Another adds: “Variability from component to component is a huge problem in analog computing.”
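The variability objection can be made concrete with a small simulation. This is illustrative only and not based on measurements from the paper: each programmed conductance is perturbed by Gaussian noise, with hypothetical variability levels, to show how the error in the analog output grows with device spread.

```python
import numpy as np

# Illustrative sketch: how device-to-device variability degrades an analog
# matrix-vector product. Each conductance deviates from its target value by
# multiplicative Gaussian noise; sigma is a hypothetical variability level.

rng = np.random.default_rng(1)
G_target = rng.uniform(0.0, 1.0, size=(64, 64))  # intended weights
V = rng.uniform(0.0, 1.0, size=64)               # input voltages
I_ideal = G_target.T @ V                         # noise-free result

errs = []
for sigma in (0.01, 0.05, 0.10):  # 1%, 5%, 10% device variability
    G_actual = G_target * (1 + rng.normal(0.0, sigma, G_target.shape))
    rel_err = np.linalg.norm(G_actual.T @ V - I_ideal) / np.linalg.norm(I_ideal)
    errs.append(rel_err)
    print(f"variability {sigma:.0%}: relative output error {rel_err:.2%}")
```

The relative error scales roughly with sigma, which is why calibration, error compensation, and redundancy schemes figure so prominently in analog computing research.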
Beyond the forum debate, clear engineering questions remain open even though the results have generated considerable attention. These include long-term stability, large-scale manufacturing, and reliable programming models, since developers today rely almost everywhere on digital toolchains. Integration into existing software ecosystems will also determine the speed of adoption, especially for AI workloads. Only reproducible comparisons by independent groups can build trust here.
Where a Leap in Efficiency Would Be Especially Crucial
If the measurements are confirmed, the technology is particularly well-suited for scenarios with strict energy and latency constraints. Edge devices benefit because they are designed to run AI locally and therefore require energy-efficient hardware. Signal processing also appears promising for upcoming mobile communication standards like 6G, where fast matrix operations and low power consumption directly translate into increased range and capacity. The technology will likely enter the market via specialized accelerators before replacing general-purpose processors.
