AI & ML


Nanomaterials to build next-gen AI hardware?

22 November 2023

From improving scientific analysis and imaging capabilities to predictive maintenance and the monitoring of industrial operations, artificial intelligence is becoming ever more present in modern society.

The field of nanotechnology – spanning both nanomaterials themselves and nanofabrication/nanopatterning methods – is already helping to miniaturise computing components, make electronic components more efficient, and increase the overall computing power available, since more components can fit into a given area than with bulkier materials.

As the computing and technological demands for AI technologies become greater in all aspects of society, the hardware will need to be able to keep up. Nanotechnology is already having an impact in improving the computing sector on all fronts, and this extends to the hardware used for powering and facilitating AI algorithms.

Memristive devices

Nanomaterials have been attracting considerable interest for memristive devices, and much recent research applies nanotechnology-enabled memristive devices across a range of applications. A memristive device is a two-terminal device with a metal/insulator/metal stack architecture. The stack acts as a resistance switch that retains an internal resistance/conductance state determined by the history of the voltage and current applied across the device.

Memristive devices have unique physical properties that give rise to fast, low-energy switching, conductance monitoring, and a high degree of scalability. Several types of memristive device are in use today, including drift memristors, diffusive memristors, and phase-change memory (PCM) devices. They are classified into these categories based on the switching materials used in the device and its switching dynamics.

Memristive devices tend to have two operation modes: read and write. In read mode, the conductance is sensed without disturbing the state; in write mode, the conductance is programmed by applying a voltage greater than the device's threshold. These read/write voltages are encoded as pulse trains. PCM devices operate slightly differently: they use phase transitions of the material to change the resistivity across the device, with the material switching between a crystalline, low-resistivity state and an amorphous, high-resistivity state.
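As a rough illustration of the read/write behaviour described above, the sketch below models a single toy memristive cell: pulses above a write threshold change the stored conductance, while sub-threshold reads sense the current without disturbing the state. The class name, the doubling/halving update rule, and all parameter values are illustrative assumptions, not any specific device's physics.

```python
class ToyMemristor:
    """Toy two-terminal memristive cell with a write threshold."""

    def __init__(self, g_min=1e-6, g_max=1e-3, v_threshold=1.0):
        self.g_min, self.g_max = g_min, g_max
        self.v_th = v_threshold
        self.g = g_min                      # start in the low-conductance state

    def apply_pulse(self, voltage):
        """Apply one pulse of a train; the state only changes above threshold."""
        if voltage > self.v_th:             # positive write: raise conductance
            self.g = min(self.g * 2, self.g_max)
        elif voltage < -self.v_th:          # negative write: lower conductance
            self.g = max(self.g / 2, self.g_min)
        # |voltage| <= v_th: read regime, state left untouched

    def read_current(self, v_read=0.2):
        """Read mode: sense current at a sub-threshold voltage (Ohm's law)."""
        return self.g * v_read


m = ToyMemristor()
i_before = m.read_current()                 # read does not disturb the state
m.apply_pulse(1.5)                          # write pulse above the 1.0 V threshold
i_after = m.read_current()                  # higher current: state was programmed
```

The key point the sketch captures is the asymmetry between the two modes: only pulses whose magnitude exceeds the threshold reprogram the device, so repeated reads return the same current.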

There are several areas where nanomaterial-based memristors and PCM devices are being used to improve AI hardware. Memristive devices and phase-change crossbar arrays have been used for mixed-signal inference engines and in spin-device-based convolution accelerators for convolutional neural network (CNN) applications. Memristor devices are also being trialled in a number of deep learning memory devices, including mixed-signal accelerators for deep learning algorithms, and in data storage and analogue processing operations.
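The reason crossbar arrays suit inference workloads is that they perform a matrix-vector multiplication in the analogue domain: input voltages drive the rows, each programmed cell conductance acts as a weight (Ohm's law, I = G·V), and Kirchhoff's current law sums the per-cell currents on each column. A minimal digital sketch of that computation, with made-up conductance and voltage values, might look like:

```python
def crossbar_mvm(conductances, voltages):
    """Emulate a crossbar multiply-accumulate.

    conductances: rows x cols matrix of cell conductances (siemens).
    voltages: per-row input voltages (volts).
    Returns the output current on each column (amperes).
    """
    n_rows, n_cols = len(conductances), len(conductances[0])
    currents = [0.0] * n_cols
    for i in range(n_rows):
        for j in range(n_cols):
            # Ohm's law per cell, Kirchhoff summation per column
            currents[j] += conductances[i][j] * voltages[i]
    return currents


# Illustrative 2x3 array of programmed conductances and input voltages
G = [[1e-3, 2e-3, 0.5e-3],
     [4e-3, 1e-3, 3e-3]]
V = [0.1, 0.2]
I_out = crossbar_mvm(G, V)   # one multiply-accumulate result per column
```

In real hardware all of those multiply-accumulates happen simultaneously in one read step, which is where the speed and energy advantage over shuttling weights through a digital datapath comes from.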

Memristors are also being investigated for storage and processing approaches in processing-in-memory (PIM) architectures to widen the scope of neural network applications. This approach also has the potential to expose a software/hardware interface that would let developers write neural network code to run on the accelerator.

Computing-in-memory (CIM) chips based on nanoscale memristor devices have also been proposed recently; these could help realise much larger neural network models by reducing latency and improving the accuracy of the networks.

Magnetoelectric devices

The second big area of nanomaterial-based hardware that could have an impact on the accuracy and efficiency of AI algorithms is magnetoelectric devices. These devices are composed of stacked layers of magnetic materials and insulators, and work through the alignment of the electron-spin polarisation in the metal layers. When the polarisations are aligned, the device exhibits its lowest resistance; the resistance increases with the degree of misalignment. The devices are programmed either by an applied voltage or by spin-orbit currents, depending on the number of terminals.

Of all magnetoelectric devices, magnetic tunnel junctions (MTJs) offer the most promise for AI hardware. While the MTJ family contains several device variants, most consist of two magnetic layers separated by an insulating layer: one magnetic layer is permanently magnetised along a fixed axis, while the other layer's magnetisation can be altered to produce different resistance values across the device. MTJ devices can be either volatile or non-volatile, and their non-volatility makes them a good candidate for use in nanotech-enhanced AI applications.
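The alignment-to-resistance relationship described above can be sketched with a toy model: resistance interpolates between a parallel (aligned, low-resistance) state and an antiparallel (fully misaligned, high-resistance) state as a function of the angle between the two layers' magnetisations. The cosine interpolation and the conductance values below are illustrative assumptions, not measured device parameters.

```python
import math


def mtj_conductance(theta, g_p=1e-3, g_ap=0.5e-3):
    """Toy conductance vs. relative magnetisation angle theta (radians).

    g_p: conductance in the parallel (aligned) state.
    g_ap: conductance in the antiparallel (misaligned) state.
    """
    # Smoothly interpolate between the two states with the angle
    return 0.5 * (g_p + g_ap) + 0.5 * (g_p - g_ap) * math.cos(theta)


def mtj_resistance(theta):
    return 1.0 / mtj_conductance(theta)


r_parallel = mtj_resistance(0.0)          # aligned: lowest resistance
r_antiparallel = mtj_resistance(math.pi)  # fully misaligned: highest resistance

# Tunnel magnetoresistance (TMR) ratio: the relative resistance swing
tmr = (r_antiparallel - r_parallel) / r_parallel
```

The two endpoint resistances are what a memory application reads back as its two logic states, and because the free layer's magnetisation persists without power, the stored state is non-volatile.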

There are a few areas where MTJs are being investigated for AI applications. One area is probabilistic models, such as Restricted Boltzmann Machines (RBMs) and Deep Belief Networks (DBNs). The hardware for these models is organised as layer-by-layer structures performing the multiply-accumulate operations of the underlying neural networks. Nanoscale implementations of these architectures have used a crossbar design, with both MTJs and memristor systems employed in the hardware. The architectural design shows similarities with CMOS-only architectures, differing only in the use of MTJ crossbars for the multiply-accumulate computations.

Another area where MTJs are being investigated and used is Probabilistic Graphical Model (PGM) architectures. Here, the computational cells contain both computation circuits and memory storage in the form of Conditional Probability Tables (CPTs), and it is in these CPTs that the non-volatility of MTJs enables a fully connected network with low power consumption and reduced memory-access latency. In some of these architectures, the MTJ devices take on the role of both memory storage and computation circuits.
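To make the CPT idea concrete, the sketch below shows the kind of table a PGM computational cell might hold in non-volatile memory: each row keys a parent variable's state to a probability distribution over a child variable, and inference reduces to table lookups. The variable names and probabilities are entirely made-up illustrative values.

```python
# A Conditional Probability Table (CPT) for P(ground | weather),
# as it might be stored in a PGM cell's non-volatile memory.
CPT = {
    # parent_state -> {child_state: probability}
    "rain":    {"wet": 0.9, "dry": 0.1},
    "no_rain": {"wet": 0.2, "dry": 0.8},
}


def conditional_prob(child_state, parent_state):
    """Look up P(child_state | parent_state) from the stored table."""
    return CPT[parent_state][child_state]


p = conditional_prob("wet", "rain")   # a single memory read answers the query
```

Because each query is just a read of an already-programmed cell, keeping the CPT in non-volatile devices avoids reloading the table from external memory, which is the latency and power saving the article refers to.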

Conclusion

AI continues to get ever more powerful in different aspects of society. Nanomaterials are already having a big impact on improving and miniaturising computing hardware, and it could be that AI-specific hardware is one of the next areas of computing that gains significant improvements thanks to nanomaterials.

For more information visit www.nanotechia.org




