TinyML: Making Smart Devices Tinier than Ever

By Margot Bagnoli Published on Nov. 4, 2020

TinyML is a type of machine learning that shrinks deep learning networks to fit on tiny hardware. It brings together Artificial Intelligence and intelligent devices. It is 45x18mm of Artificial Intelligence in your pocket. Suddenly, the do-it-yourself weekend project on your Arduino board has a miniature machine learning model embedded in it. Ultra-low-power embedded devices are invading our world, and with new embedded machine learning frameworks, they will further enable the proliferation of AI-powered IoT devices.

Let us now translate this jargon: What is TinyML? And, more importantly - what can (and can’t) it be used for?

What is TinyML? An Introduction

Machine learning has been a buzzword for a while, with many useful applications for making sense of chaotic data. But it is less frequently associated with hardware. Usually, ML on devices means the cloud: data is shipped off to remote servers, which adds latency, consumes power, and puts machines at the mercy of connection speeds.


Yet, applying machine learning to devices is not new. For a few years now, most of our phones have had some sort of neural network in them. On-device music identification and many camera modes (such as night sight and portrait mode) are just a few examples that rely on embedded deep learning. Algorithms can also identify which apps we are likely to use again and shut down the rest, extending battery life. However, embedded AI faces many challenges, chief among them power and space. And that's where TinyML comes in.

On-device sensor data requires significant computation capability, and processing it runs into limited storage, limited central processing unit (CPU) power, and reduced database performance. TinyML addresses this by embedding artificial intelligence on small pieces of hardware: deep learning networks are shrunk until they can run on the devices themselves, without the hurdle, and the added latency, of sending data to the cloud for analysis.
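
One common way networks are shrunk for tiny hardware is post-training quantization: mapping 32-bit floating-point weights to 8-bit integers, cutting model size roughly fourfold. The sketch below is a minimal, stdlib-only illustration of the affine quantization idea, not the actual TensorFlow Lite implementation:

```python
# Affine quantization: each real weight r is approximated as
#   r ≈ scale * (q - zero_point), with q an integer in [0, 255].
def quantize(weights, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # avoid 0 when all weights equal
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [scale * (v - zero_point) for v in q]

weights = [-0.51, 0.0, 0.23, 0.98]
q, scale, zero_point = quantize(weights)
recovered = dequantize(q, scale, zero_point)
# Each recovered value is within one quantization step (scale) of the original.
```

The model stores only the 8-bit integers plus one scale and zero point per tensor, which is what makes kilobyte-scale deployments feasible.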

TinyML: Understanding The Basics

Pete Warden, TinyML guru and TensorFlow Lite Engineering Lead at Google, has published a book along with Daniel Situnayake. This book, “TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers”, has become a reference in the field.

Some jargon translation for the non-experts:

  • Arduino is an open-source hardware manufacturer that allows anyone to buy a microcontroller board and build their own digital device.
  • A microcontroller is a small computer on a single semiconductor chip, basically a set of electronic circuits on a small flat piece of silicon. It can replace a traditional pre-built single-board computer such as the Raspberry Pi while requiring much less power and space.
  • Lastly, TensorFlow Lite is an embedded machine learning framework created by Google that has a subcategory specifically designed for microcontrollers. In 2019, alongside TensorFlow Lite, other frameworks started focusing on making deep learning models smaller, faster, and adapted to embedded hardware, including uTensor and Arm’s CMSIS-NN. At the same time, numerous YouTube tutorials popped up on how to use TinyML and similar frameworks on AI-powered microcontrollers to train, validate, and then deploy small neural networks on hardware through inference engines.

As of today, the Arduino Nano 33 BLE Sense is the only 32-bit Arduino board that supports TensorFlow Lite, making machine learning embedded on hardware accessible to anyone. Arduino also collaborates with the startup Edge Impulse: by processing data directly at the sensor interface through an inference engine and only transmitting data when it truly matters, power consumption drops and deployments become faster, more efficient, and more scalable.

Machine learning is oftentimes about optimization, but TinyML is not just that: some cloud application programming interfaces (APIs) simply preclude interactivity and are too constraining from a power-usage perspective. On top of that, relying on them makes computing at the edge slower, more expensive, and less predictable.

The difference from the phone-based machine learning mentioned earlier is power: TinyML enables battery-powered or energy-harvesting devices to run without ever needing a manual recharge or battery change. Think of it as an always-on digital signal processor. A device drawing less than one milliwatt can run on a battery for years, or live off harvested energy alone. It also means such devices usually cannot stay connected by radio: even low-power, short-range radio draws tens to hundreds of milliwatts, and the energy budget allows only short bursts. These limits also demand code that runs within extremely small memory constraints, on the order of tens of kilobytes, which is what separates TinyML from what happens on a Raspberry Pi or a phone. The idea behind TinyML is to make it as accessible as possible, allowing mass proliferation and scaling to trillions of inexpensive, independent sensors and 32-bit microcontrollers that could sell for $0.50, according to Pete Warden.
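
To get a feel for those tens-of-kilobytes constraints, a back-of-envelope sketch helps. The layer sizes below are hypothetical, and a real budget must also cover the runtime's tensor arena for intermediate activations, but the arithmetic shows why int8 models with a few thousand parameters fit comfortably on a microcontroller:

```python
# Hypothetical tiny dense network: (inputs, outputs) per layer.
layers = {
    "dense_1": (49, 32),
    "dense_2": (32, 16),
    "dense_3": (16, 4),
}

def model_bytes(layers, bytes_per_weight=1):
    """Weight + bias storage; int8 quantization means 1 byte per weight."""
    total = 0
    for ins, outs in layers.values():
        total += (ins * outs + outs) * bytes_per_weight
    return total

size = model_bytes(layers)
print(f"{size} bytes ≈ {size / 1024:.1f} kB")   # ~2.1 kB at int8
```

The same network stored as 32-bit floats (`bytes_per_weight=4`) would be four times larger, which is exactly the headroom quantization buys on a chip with tens of kilobytes of flash and RAM.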

Pete joined Google in 2014, where he learned that the “OK Google” team was able to run voice interfaces and wake words in only 13 kilobytes. They could run this while the lock screen was on, keeping power draw very low for energy-saving purposes. His dream is to expand the wake-word application into a speech recognition application that is cheap, fits within 18 megabytes, and can run for up to a year on a coin battery. Imagine all the switches, buttons, and components that have already been replaced with wake words, and what more could be replaced. Another theme is blending voice interfaces and visual signals through TinyML, allowing devices to understand when you are looking at them and to eliminate background noise such as other people speaking at the same time or equipment in industrial settings.

TinyML: Applications & Use Cases

Predictive maintenance is likely to be one of the most common use cases, and one with the highest impact. In the automotive industry, power is less of a constraint than cost and reliability, and industrial environments more broadly are likely to benefit from TinyML. For example, the startup Shoreline IoT builds a peel-and-stick, ultra-low-power sensor that attaches to a motor and can last up to five years on the same pair of batteries while drawing 1 milliwatt or less. That is a real advantage in industrial settings, where plugging devices into power is usually harder than at home. The challenge is that the replacement cycle for industrial machines is fairly long, which slows innovation.

On-device machine learning has seen applications in other spaces as well, such as building automation (think of low-power vision sensors for office lighting systems that currently shut off when you don’t move), toys (the Arduino gesture-recognition accelerometer used in the Magic Wand), and even drug development and testing, where it could significantly reduce the time required (although this domain is more constrained by regulation). Compared to industrial settings, these spaces have fast iteration cycles that enable more feedback and room for experimentation.

Audio analytics, pattern recognition, and voice human-machine interfaces are the fields where most of TinyML is applied today. Many industries can benefit from audio analytics, such as child and elderly care, safety, and equipment monitoring. For example, through a sound model, TinyML can detect anomalies by analyzing the audio in farms with a tiny sensor, triggering an alert in real time. In relation to COVID-19, Edge Impulse and Arduino recently published a project using the Nano BLE Sense board to detect specific coughing sounds in real time, applying a highly optimized TinyML model to build a cough detection system that runs in under 20 kB of random access memory (RAM).
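
The streaming pattern behind such always-on audio systems can be sketched without any ML at all. The toy detector below flags frames whose RMS energy spikes above a running baseline; a real system like the cough detector above runs a trained neural network on spectral features, but the frame-by-frame loop (tiny fixed buffers, constant memory) is the same shape. All numbers here are illustrative:

```python
import math

FRAME = 4          # samples per frame (real systems use e.g. 256+)
THRESHOLD = 3.0    # alert when a frame is 3x louder than the baseline

def rms(frame):
    """Root-mean-square energy of one frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def detect(samples, alpha=0.9):
    """Return sample indices where loud events start; O(1) memory."""
    baseline, alerts = None, []
    for i in range(0, len(samples) - FRAME + 1, FRAME):
        energy = rms(samples[i:i + FRAME])
        if baseline is None:
            baseline = energy                 # first frame seeds the baseline
        elif energy > THRESHOLD * baseline:
            alerts.append(i)                  # loud event begins at sample i
        baseline = alpha * baseline + (1 - alpha) * energy
    return alerts

quiet = [0.01, -0.02, 0.02, -0.01] * 3
loud = [0.5, -0.6, 0.55, -0.5]
print(detect(quiet + loud + quiet))   # the loud burst is flagged
```

On a microcontroller, the same loop would read from an I2S microphone buffer and feed a quantized model instead of an energy threshold, but the constant-memory, one-frame-at-a-time structure is what lets it fit in tens of kilobytes of RAM.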

Aside from audio, TinyML can be used for vision, motion, and gesture recognition as well.

According to Pete Warden, TinyML will be pervasive, impacting almost every industry: retail, healthcare, transportation, wellness, agriculture, fitness, and manufacturing. Our phones can become edge devices that capture data: in Edge Impulse Studio, the data acquisition tab lets you choose a sensor, for example the accelerometer, to sample the phone’s movements. Powerful models based on artificial neural networks (ANNs) can then run against tiny sensors and low-powered microcontrollers.

According to Pitchbook’s Emerging Spaces review, $26 million has been invested in TinyML since January 2020, including venture capital from accelerators and both early- and late-stage investors. This is relatively small compared to more established branches of AI and ML, such as data labeling. In deal count, TinyML competes with other hot topics such as cognitive computing, next-generation security, and AIOps.

[Charts: TinyML deal activity and investment. Data source: Pitchbook]

TinyML: Looking Ahead

This fall, Harvard launched the course CS249R: Tiny Machine Learning, mentioning that “the explosive growth in machine learning and the ease of use of platforms like TensorFlow (TF) make it an indispensable topic of study for the modern computer science student”.

Today there are over 250 billion embedded devices active in the world, with expected yearly growth of 20 percent. These devices gather large amounts of data, and processing it in the cloud has proved quite a challenge. Of those 250 billion devices, about 3 billion currently in production are able to support TensorFlow Lite. TinyML could bridge the gap between edge hardware and device intelligence. Making TinyML more accessible to developers will be crucial to the mass proliferation of embedded machine learning, turning wasted data into actionable insights and creating new applications across many industries.

With new types of human-machine interfaces (HMIs) emerging and the number of intelligent devices increasing, TinyML has the potential to make embedded AI and computing at the edge ubiquitous, cheaper, more scalable, and more predictable, changing the paradigm in ML.