uTensor was one of the first open-source frameworks to bring machine learning onto microcontrollers. uTensor converts machine learning models into readable, self-contained C++ source files, simplifying integration with any embedded project. It is designed especially for low-power, resource-constrained embedded devices, and it has deep roots in TensorFlow and Mbed OS.
A micro-inference engine should be developed alongside a training framework. On resource-constrained, microcontroller-based devices, every bit of computational resource matters. The technologies used in both uTensor and TensorFlow Lite Micro, such as FlatBuffers, micro-interpreters, quantization, SIMD, graph rewriting, and code generation, have made neural-network deployment possible on MCUs.
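To make one of these techniques concrete, the sketch below shows affine (asymmetric) int8 quantization, the scheme popularized by TensorFlow Lite: each float is mapped to an 8-bit integer via a scale and a zero point, cutting tensor memory by 4x. The struct and function names are our own illustration, not uTensor's or TensorFlow Lite Micro's actual API.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Affine quantization: real_value = scale * (q - zero_point).
// Names here are illustrative, not part of any framework's API.
struct QuantParams {
  float scale;
  int32_t zero_point;
};

// Derive int8 quantization parameters from an observed float range.
QuantParams choose_params(float min_val, float max_val) {
  min_val = std::min(min_val, 0.0f);  // range must include zero so that
  max_val = std::max(max_val, 0.0f);  // 0.0 quantizes exactly
  const float scale = (max_val - min_val) / 255.0f;  // 256 int8 buckets
  const int32_t zero_point =
      static_cast<int32_t>(std::round(-128.0f - min_val / scale));
  return {scale, zero_point};
}

int8_t quantize(float x, const QuantParams& p) {
  const int32_t q =
      p.zero_point + static_cast<int32_t>(std::round(x / p.scale));
  return static_cast<int8_t>(std::clamp(q, -128, 127));  // saturate to int8
}

float dequantize(int8_t q, const QuantParams& p) {
  return p.scale * (static_cast<int32_t>(q) - p.zero_point);
}
```

On an MCU the int8 representation is what matters: arithmetic stays in integers (often accelerated by SIMD instructions such as Arm's DSP extensions), and the round trip through `dequantize` loses at most one scale step of precision.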
Machine learning algorithms are evolving at lightning speed. New neural network accelerators are being introduced constantly. uTensor is developed to adapt to this changing landscape. With minimal code, developers can train and deploy TensorFlow models on a wide range of hardware supported by Mbed.
uTensor has features designed to be forward-compatible with advances in embedded systems:
- Editable C++ model implementations generated from trained model files
- Extensible generation of TensorFlow Lite files, RTL for FPGAs, MLIR, and other output formats
- The ability to place tensors in various memory devices
- Extensibility for optimized kernels, discrete accelerators, and remote services
- Python offline-optimization tools enabling target-specific, data-driven optimization
TensorFlow is Google’s widely used framework for machine learning. uTensor’s strong connection to the Mbed community allows us to bring ML to hundreds of Mbed hardware targets and the 350,000+ Mbed developers.
More information at www.utensor.ai
More eeNews articles on machine learning: https://www.eenewsembedded.com/search/node/machine%20learning