Seize IoT Opportunities With Embedded Machine Learning
To enable our clients to run ML applications on all sorts of embedded devices, we use an approach called embedded machine learning. It makes it possible to run machine learning models close to end users and take advantage of Internet of Things (IoT) opportunities.
Why do we need embedded machine learning?
AI networks are becoming more sophisticated, faster and more powerful than ever. But this comes at the expense of their portability. State-of-the-art models require vast amounts of power and large machines to operate. As a result, modern cloud computing infrastructures, which provide access to large amounts of computing resources, are the first choice for running machine learning and deep learning applications.
On the other hand, the Internet of Things (IoT) is becoming increasingly widespread. Connected devices are everywhere, from fridges to watches. These applications require low latency and must run close to end users, so executing AI in the cloud is not always the best option.
As an example, we built deep-learning algorithms for a road sign detection system for one of our clients. Each car is equipped with a camera based on a Raspberry Pi 4 that sends images to a cloud server where the models are hosted. However, streaming video to the cloud is costly, incurs high latency, and is susceptible to privacy leaks.
How does embedded machine learning help?
Embedded machine learning and TinyML sit at the intersection of these two challenges: they fit AI networks onto tiny pieces of hardware called microcontrollers, striking a balance between performance and power consumption.
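One common technique TinyML toolchains use to make a network fit in a microcontroller's limited memory is post-training quantization. The sketch below is purely illustrative (it is not Eura Nova's actual method, and the weight values are invented): it maps 32-bit float weights to 8-bit integers, cutting storage roughly 4x at the cost of a small, bounded precision loss.

```python
# Illustrative sketch of post-training quantization, a common TinyML
# technique for shrinking models to fit microcontroller memory.
# The weights below are made-up example values, not a real model.

def quantize(weights, bits=8):
    """Map float weights to signed integers with a single linear scale."""
    qmax = 2 ** (bits - 1) - 1              # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the stored integers."""
    return [q * scale for q in q_weights]

weights = [0.91, -0.44, 0.03, -1.27, 0.58]   # hypothetical float32 weights
q, scale = quantize(weights)                 # each fits in one byte
approx = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q)
print(max_err)
```

The per-weight error is bounded by half the quantization step (`scale / 2`), which is why many networks keep nearly all of their accuracy after int8 quantization while shrinking to a quarter of their original size.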
At Eura Nova, we develop new methods on hardware from STMicroelectronics, a leading integrated device manufacturer, to implement deep-learning models on small-scale microcontrollers.
To talk about how your company can benefit from a system that processes data directly on your hardware products, contact us.
Photo by Stanislaw Zarychta on Unsplash