AI for Microcontrollers (2024)

Image Classification at the Edge

On a recent project for a microelectronics manufacturer in the gaming industry, we developed a computer vision algorithm capable of classifying poker cards.

The main constraints were the limited computing power (the algorithm had to run on a microcontroller) and the required inference time of under 150 ms.

We compared two approaches. The first was a pre-trained model that we fine-tuned and then optimized for the target microcontroller with TensorFlow Lite. The second was a model trained from scratch, using classical neural network architectures.
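As a rough sketch of the first approach, the snippet below fine-tunes a pre-trained backbone and converts it with full-integer quantization so it fits a microcontroller target. The backbone (MobileNetV2), input size, and class count are illustrative assumptions, not the exact configuration used in the project.

```python
import numpy as np
import tensorflow as tf

# Hypothetical setup: small pre-trained backbone plus a new head for the card classes.
NUM_CLASSES = 52
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone, train only the head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Full-integer quantization keeps the model small enough for the
# microcontroller and lets it run on integer-only kernels.
def representative_dataset():
    for _ in range(100):
        # Replace with real preprocessed training images.
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("card_classifier_int8.tflite", "wb") as f:
    f.write(tflite_model)
```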

While training happens within the Python ecosystem, we also developed a command-line interface (CLI) in C++ to run inference at the required speed.
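Before porting inference to C++, the converted flatbuffer can be sanity-checked from Python with the TensorFlow Lite interpreter. The snippet below is a minimal check, assuming the file name from the previous sketch; the latency it reports is only a host-side proxy, since the 150 ms budget applies on the target microcontroller.

```python
import time
import numpy as np
import tensorflow as tf

# Load the converted model (illustrative file name) and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="card_classifier_int8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Dummy int8 input with the expected shape; replace with a real preprocessed image.
dummy = np.random.randint(-128, 127, size=input_details["shape"], dtype=np.int8)

start = time.perf_counter()
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
probs = interpreter.get_tensor(output_details["index"])
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"predicted class: {probs.argmax()}, host latency: {elapsed_ms:.1f} ms")
```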

TensorFlow Lite for Microcontrollers Training

Another recent project took us to Spain, where a manufacturer of industrial equipment requested a training course. They were interested in anomaly detection and predictive maintenance for their equipment.

While the Raspberry Pi is a rather powerful controller, this client's hardware was even more limited in computing power. No hope of using Python!

Our training focused on neural networks, as trained models are easily transferred from TensorFlow to TensorFlow Lite, either with a tool suite provided by the microcontroller manufacturer or with EdgeImpulse.
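A minimal sketch of the kind of model covered in the training is shown below: a small autoencoder for anomaly detection, trained on "healthy" sensor data so that a high reconstruction error flags an anomaly. The feature count, layer sizes, and threshold rule are illustrative assumptions; the resulting TensorFlow Lite file is what the vendor suite or EdgeImpulse would then turn into a C array for the microcontroller.

```python
import numpy as np
import tensorflow as tf

# Tiny autoencoder on sensor readings (feature count and layers are illustrative).
N_FEATURES = 8
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FEATURES,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(2, activation="relu"),   # bottleneck
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(N_FEATURES),
])
model.compile(optimizer="adam", loss="mse")

# Stand-in for real sensor data recorded during healthy machine operation.
healthy = np.random.rand(1000, N_FEATURES).astype(np.float32)
model.fit(healthy, healthy, epochs=20, batch_size=32, verbose=0)

# Pick an anomaly threshold from the reconstruction error on healthy data.
errors = np.mean((model.predict(healthy, verbose=0) - healthy) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()
print(f"anomaly threshold (MSE): {threshold:.4f}")

# Convert to a TensorFlow Lite flatbuffer for deployment on the microcontroller.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("anomaly_detector.tflite", "wb") as f:
    f.write(tflite_model)
```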