TinyML Applications with ONNX models

How to Deploy ONNX Models on TinyML Devices Like Microcontrollers

Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models. To let AI developers use models across a variety of frameworks, tools, runtimes, and compilers, ONNX defines a common set of operators (the fundamental building blocks of machine learning and deep learning models) together with a common file format.

On-device, battery-powered tinyML applications are becoming more popular thanks to their low power consumption, low latency, and improved privacy. Deploying tinyML models on edge devices, IoT hardware, or microcontrollers (also known as MCUs) can be intensive in effort, skill, and budget. The complexities range from developing the right model and choosing the right framework to converting the model and identifying the right MCU for the use case.

Step-by-Step Instructions

The following video takes you step by step through bringing a pre-trained ONNX model to a microcontroller of your choice. AI Tech Systems (AITS), a leading provider of firmware, software, and services for low-power IoT, endpoint, and tinyML devices, demonstrates how ONNX can be used on microcontrollers. Get an in-depth look at how to bring on-device tinyML applications to a multitude of verticals, including Industrial IoT, smart spaces, and many more.

Where to get ONNX models?

The ONNX Model Zoo hosts a collection of pre-trained AI models in the ONNX format. With cAInvas, a full-stack AI app development platform that combines an app-store-like model collection, a playground, IDEs, and a compiler, you can quickly compile ONNX models into a static library or a binary targeted at the hardware device of your choice.