ncnn is a high-performance neural network inference framework optimized for mobile platforms (a minimal usage sketch appears after this list).
ONNX Runtime: a cross-platform, high-performance ML inference and training accelerator (see the C++ sketch after this list).
Open-source real-time translation app for Android that runs locally.
Speech-to-text, text-to-speech, speaker diarization, speech enhancement, and VAD using next-gen Kaldi with onnxruntime, without an Internet connection. Supports embedded systems, Android, iOS, HarmonyOS, Raspberry Pi, RISC-V, and x86_64 servers, includes a WebSocket server/client, and supports 11 programming languages.
Tengine is a lightweight, high-performance, modular inference engine for embedded devices.
🛠 A lightweight C++ AI toolkit: 100+ 🎉 models (Stable Diffusion, Face-Fusion, YOLO series, detection, segmentation, matting) with MNN, ONNX Runtime, and TensorRT backends.
An OBS plugin for removing the background from portrait images and video, making it easy to replace the background when recording or streaming.
⚡️ An easy-to-use, fast deep-learning model deployment toolkit for ☁️ cloud, 📱 mobile, and 📹 edge. Covers 20+ mainstream image, video, text, and audio scenarios and 150+ SOTA models, with end-to-end optimization and multi-platform, multi-framework support.
Lightweight inference library for ONNX files, written in C++. It can run Stable Diffusion XL 1.0 on a Raspberry Pi Zero 2 (or in 298 MB of RAM), as well as Mistral 7B on desktops and servers. ARM, x86, WASM, and RISC-V are supported. Accelerated by XNNPACK.
Machine learning on FPGAs using HLS.
nGraph has moved to OpenVINO.
Samples and Tools for Windows ML.
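As a taste of how these frameworks are driven, here is a minimal ncnn inference sketch in C++. The model file names, the blob names "data" and "prob", and the image sizes are placeholders for illustration, not taken from any project above.

```cpp
#include <vector>
#include "net.h"  // ncnn

int main()
{
    // Load a model converted to ncnn's .param/.bin format
    // (file names here are placeholders).
    ncnn::Net net;
    if (net.load_param("squeezenet.param") != 0) return -1;
    if (net.load_model("squeezenet.bin") != 0) return -1;

    // A dummy 640x480 BGR image; in practice this comes from a camera or decoder.
    std::vector<unsigned char> bgr(640 * 480 * 3, 0);
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(bgr.data(), ncnn::Mat::PIXEL_BGR,
                                                 640, 480, 227, 227);

    // Run inference: bind the input blob, then pull the output blob.
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);
    ncnn::Mat out;
    ex.extract("prob", out);  // out now holds e.g. class scores
    return 0;
}
```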
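And a comparable minimal sketch with the ONNX Runtime C++ API, assuming a single-input, single-output model; the file name, tensor names, and shape below are placeholders:

```cpp
#include <vector>
#include <onnxruntime_cxx_api.h>

int main()
{
    // Environment and session (on Windows the model path is wide, ORTCHAR_T).
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
    Ort::SessionOptions opts;
    Ort::Session session(env, "model.onnx", opts);

    // Build a float input tensor over caller-owned memory.
    std::vector<int64_t> shape{1, 3, 224, 224};
    std::vector<float> data(1 * 3 * 224 * 224, 0.0f);
    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem, data.data(), data.size(), shape.data(), shape.size());

    // Run: the names must match the model ("input"/"output" are assumptions).
    const char* in_names[]  = {"input"};
    const char* out_names[] = {"output"};
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               in_names, &input, 1, out_names, 1);

    float* result = outputs[0].GetTensorMutableData<float>();
    (void)result;  // consume the output values here
    return 0;
}
```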