

The dream of Edge AI

22 November 2023 | Editor's Choice | AI & ML

At this point, we should have had flying cars. And robot butlers. And, with some bad luck, sentient robots that revolt against us before we can cause the apocalypse. While we don’t have those, it is clear that artificial intelligence (AI) technology has made its way into our world. Every time you ask Alexa to do something, machine learning is working out what you said and determining what you want done. Every time Netflix or Amazon recommends that next movie or next purchase, sophisticated machine learning algorithms are behind it, producing recommendations far more enticing than the sales promotions of the past.

And while we might not all have self-driving cars, we’re all keenly aware of the developments in that space and the potential that autonomous navigation can offer.

AI technology carries a great promise: the idea that machines can make decisions based on the world around them, processing information as a human might, or even better. But in the examples above, that promise is only being fulfilled by big machines – machines without power, size, or cost constraints; machines that can run hot, draw line power, take up space, and carry a big price tag. Alexa and Netflix rely on large, power-hungry servers in the cloud to work out your intent. And while self-driving cars do rely on batteries, those batteries have enormous capacity, because they must also turn the wheels and steer – energy expenses that dwarf even the most demanding AI decisions.

While the promise of AI is great, little machines are being left behind. Devices powered by smaller batteries, or constrained by cost and size, are unable to participate in the idea that machines can see and hear. Today, these little machines can only make use of simple AI technology: perhaps listening for a single keyword, or analysing low-dimensional signals such as photoplethysmography (PPG) for heart-rate monitoring.

What if little machines could see and hear?

But is there value in small machines being able to see and hear? It is hard to imagine a doorbell camera taking advantage of technologies like autonomous driving or natural language processing, but there is an opportunity for less complex, less processing-intensive AI computations such as vocabulary recognition, voice recognition, and image analysis.

• Doorbell cameras and consumer security cameras often get triggered by uninteresting events: plants moving in the wind, drastic light changes caused by clouds, or dogs and cats running in front of them. These false triggers teach the homeowner to ignore the alerts. Worse, a homeowner travelling on the other side of the world is probably asleep while their camera raises alarms for lighting changes at sunrise, under passing clouds, and at sunset. A smarter camera would trigger only on specific events, such as a person entering the frame.

• Door locks or other access points can use facial identification or even speech recognition to grant access to authorised personnel, forgoing the need for keys or badges in some cases.

• Lots of cameras want to trigger on certain events: for instance, trail cameras might want to trigger on the presence of a deer in the frame, security cameras might want to trigger on a person in the frame or a noise like a door opening or footsteps, and a personal camera might want to trigger with a spoken command.

• Large-vocabulary commands can be useful in many applications: while there are plenty of ‘Hey Alexa’-style solutions, once you consider a vocabulary of 20 or more words, uses emerge in industrial equipment, home automation, cooking appliances, and plenty of other devices where spoken commands simplify the human interaction.

These examples only scratch the surface: the idea of allowing small machines to see, hear, and solve problems that would previously have required human intervention is a powerful one, and we continue to find creative new use cases every day.

What are the challenges to enabling little machines to see and hear?

So, if AI could be so valuable to little machines, why don’t we have it yet? The answer is computational horsepower. AI inferences are the result of the computation of a neural network model. Think of a neural network model as a rough approximation of how your brain would process a picture or a sound, breaking it into very small pieces and then recognising the pattern when those small pieces are put together.
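To make that idea concrete, the sketch below shows the elementary operation at the heart of such a model: sliding a small learned filter over an image and summing the products at each position, one small piece at a time. This is an illustrative, from-scratch Python example; the image size and kernel values are arbitrary and not drawn from any particular product.

```python
# A minimal sketch of the core neural-network operation: sliding a small
# filter (kernel) over an image and summing the element-wise products.
# Sizes and kernel values here are illustrative only.
import numpy as np

def conv2d_single(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D convolution of one image channel with one kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            # One "small piece" of the image, weighted by the kernel
            patch = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(patch * kernel)
    return out

image = np.random.rand(28, 28)          # e.g. a tiny greyscale frame
edge_kernel = np.array([[1, 0, -1],
                        [2, 0, -2],
                        [1, 0, -1]])    # a classic vertical-edge detector
features = conv2d_single(image, edge_kernel)
print(features.shape)                   # (26, 26) feature map
```

A real network stacks hundreds of such filters across many layers, and recognises a pattern from how their outputs combine – which is also where the computational cost comes from.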

The workhorse model of modern vision problems is the convolutional neural network (CNN). These models are excellent at image analysis and very useful in audio analysis as well. The challenge is that they take millions or billions of mathematical computations (a rough operation count is sketched after the list below). Traditionally, these applications have faced a difficult implementation choice:

• Use an inexpensive and low-powered microcontroller solution. While the average power consumption may be low, the CNN can take seconds to compute, meaning the AI inference is not real time, and it consumes considerable battery power.

• Buy an expensive and high-powered processor that can complete those mathematical operations in the required latency. These processors are typically large and require lots of external components including heat sinks or similar cooling components. However, they execute AI inferences very quickly.

• Don’t implement. The low-power microcontroller solution will be too slow to be useful, and the high-powered processor approach will break cost, size, and power budgets.
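To put a number on the ‘millions of computations’ mentioned above, here is a back-of-envelope multiply-accumulate (MAC) count for a small, hypothetical image classifier. All layer shapes are assumptions chosen for illustration, not taken from any real model or datasheet.

```python
# Back-of-envelope MAC (multiply-accumulate) count for a small,
# hypothetical CNN. All layer shapes below are illustrative assumptions.

def conv_macs(out_h, out_w, out_ch, in_ch, k):
    """Each output value of a k x k convolution needs k*k*in_ch MACs."""
    return out_h * out_w * out_ch * in_ch * k * k

layers = [
    conv_macs(64, 64, 16, 3, 3),   # 3x3 conv, 3 -> 16 channels, 64x64 output
    conv_macs(32, 32, 32, 16, 3),  # 3x3 conv, 16 -> 32 channels, 32x32 output
    conv_macs(16, 16, 64, 32, 3),  # 3x3 conv, 32 -> 64 channels, 16x16 output
    16 * 16 * 64 * 10,             # fully connected layer to 10 classes
]

total = sum(layers)
print(f"{total:,} MACs per inference")  # about 11 million for this tiny model
```

Even this tiny model needs roughly 11 million MACs per frame. A microcontroller sustaining, say, 10 million MACs per second would take about a second per inference – exactly the ‘seconds to compute’ problem in the first bullet – while larger, more capable models push the count into the billions.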

What is needed is an embedded AI solution built from the ground up to minimise the energy consumption of a CNN computation. AI inferences need to execute with orders of magnitude less energy than conventional microcontroller or processor solutions, and without the assistance of external components such as memories, which add energy, size, and cost. If an AI inferencing solution could practically eliminate the energy penalty of machine vision, then even the smallest devices could see and recognise things happening in the world around them.

Lucky for us, we are at the beginning of this – a revolution of the little machines. Products are now available that nearly eliminate the energy cost of AI inferences and enable battery-powered machine vision. One such device is the MAX78000, an artificial intelligence microcontroller with a built-in neural network accelerator, designed to execute AI inferences while spending only microjoules of energy.
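To see what ‘microjoules per inference’ means for battery life, consider the rough budget below. Both per-inference energy figures are assumptions for illustration only – a conventional microcontroller spending seconds of active compute versus a microjoule-class accelerator – not datasheet values.

```python
# Rough battery-life comparison. Both energy-per-inference figures are
# assumed for illustration; neither is a datasheet value.

CR2032_JOULES = 3.0 * 0.225 * 3600   # ~3 V x 225 mAh coin cell ~= 2430 J

scenarios = {
    "conventional MCU (~10 mJ/inference, assumed)": 10e-3,
    "CNN accelerator (~100 uJ/inference, assumed)": 100e-6,
}

for name, joules_per_inference in scenarios.items():
    inferences = CR2032_JOULES / joules_per_inference
    days_at_1hz = inferences / (24 * 3600)   # one inference per second
    print(f"{name}: {inferences:,.0f} inferences, ~{days_at_1hz:,.0f} days at 1 Hz")
```

Under these assumptions, a coin cell running one inference per second lasts roughly three days in the first case and the better part of a year in the second – the difference between a gimmick and a product.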

