Revolutionary IBM chip accelerates AI processing

IBM NorthPole processor chip

Researchers at IBM in San Jose, California, have created a cutting-edge computer chip inspired by the human brain. Known as the NorthPole processor, it eliminates the need for frequent access to external memory, enabling it to perform tasks such as image recognition more swiftly than existing architectures while consuming far less power, with the potential to significantly enhance artificial intelligence (AI) workloads.

Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay, has described the chip’s energy efficiency as truly remarkable. The research, published in Science, demonstrates the integration of computing and memory on a large scale, challenging conventional thinking in computer architecture.

NorthPole is designed to run neural networks: multi-layered arrays of simple computational units programmed to identify patterns in data. Unlike conventional chips, which must fetch data from external memory (RAM) for each layer, NorthPole features 256 computing units, each with its own local memory, effectively mitigating the von Neumann bottleneck. These cores are interconnected in a network inspired by the white-matter connections of the human cerebral cortex, enabling NorthPole to outperform existing AI machines on standard image recognition benchmarks. It does so with one-fifth of the energy consumption of state-of-the-art AI chips, even without using the latest manufacturing processes; if the NorthPole design were implemented with the most up-to-date processes, its efficiency could be 25 times better than current designs.
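As a rough conceptual illustration (not IBM's actual design), the sketch below models the near-memory idea in Python: each simulated "core" keeps its layer weights resident in its own memory, so a forward pass through the network never has to fetch weights from a shared external RAM. The class names, layer sizes, and the use of only 8 cores are illustrative assumptions.

```python
# Conceptual sketch of near-memory computing (illustrative, not IBM's implementation):
# each "core" holds its layer weights locally, so inference proceeds without
# round-trips to external memory.
import numpy as np

class Core:
    """One compute unit with its own resident weight memory."""
    def __init__(self, in_dim, out_dim, rng):
        # Weights live with the core and are never re-fetched from outside.
        self.weights = rng.standard_normal((out_dim, in_dim)) * 0.1

    def forward(self, x):
        # Reads only local memory; applies one ReLU layer.
        return np.maximum(self.weights @ x, 0.0)

rng = np.random.default_rng(0)
cores = [Core(64, 64, rng) for _ in range(8)]  # 8 cores here; NorthPole has 256

x = rng.standard_normal(64)
for core in cores:  # activations flow core-to-core over an on-chip network
    x = core.forward(x)
print(x.shape)
```

In a conventional design, each layer's weights would be streamed in from external RAM on every pass; keeping them on-core is what removes that traffic.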

While NorthPole’s 224 megabytes of RAM may not suffice for large language models, the authors suggest that its architecture could prove valuable in speed-critical applications such as self-driving cars.
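A back-of-envelope calculation makes the memory gap concrete. The parameter count and byte width below are assumptions chosen for illustration, not figures from the paper.

```python
# Illustrative arithmetic: even a modest large language model dwarfs
# NorthPole's 224 MB of on-chip memory.
northpole_mb = 224
params = 7e9            # assumed ~7-billion-parameter model
bytes_per_param = 1     # aggressive 8-bit quantization
model_mb = params * bytes_per_param / 1e6
print(f"model needs ~{model_mb / 1e3:.1f} GB, "
      f"about {model_mb / northpole_mb:.0f}x the on-chip memory")
```

Even under this optimistic quantization, the weights alone are roughly 30 times larger than the chip's on-board memory.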

NorthPole brings memory units physically closer to each core's computing elements, while other researchers are exploring more radical innovations involving new materials and manufacturing processes. These aim to let the memory units themselves perform calculations, potentially increasing speed and efficiency even further. One such approach uses memristors, circuit elements that can switch between acting as resistors and as conductors, and likewise promises to reduce the latency and energy costs of data transfers. In addition, several teams, including one at a separate IBM lab in Zurich, Switzerland, are developing ways to store information by altering a circuit element's crystal structure, though whether these newer approaches can be scaled up economically remains to be seen.
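To show why memristor arrays are attractive for in-memory computing, here is an idealized sketch under simplified physics: weights are stored as conductances in a crossbar, an input vector is applied as voltages, and Ohm's and Kirchhoff's laws mean the column currents realize a matrix-vector multiplication where the data is stored. The array size and conductance values are hypothetical.

```python
# Idealized memristor-crossbar sketch (simplified physics, hypothetical values):
# conductances G store the weights, voltages V encode the input, and the
# output currents I = G @ V emerge from Ohm's and Kirchhoff's laws.
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-3, size=(4, 8))   # conductances (siemens), one per crosspoint
V = rng.uniform(0.0, 0.2, size=8)          # input voltages (volts) on the input lines

I = G @ V                                  # currents sum along each output line in hardware
print("output currents (amps):", I)
```

The multiply-accumulate happens in the analog domain, so no weight ever has to move across a memory bus, which is where the latency and energy savings would come from.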