Lists (32)
Architecture/Project_Structure
AWS
Business & Compliance
C/Cpp/CUDA
C repos
Carbon
Courses
DBs / Vector DBs
Elixir
Fonts
Functional Programming
GenAI
Golang
JavaScript/TypeScript ML
ML Datasets
ML Federated Learning
ML Papers
MLOps/AIOps
Mojo ML
Nim-Lang
NoSQL
Notebooks repos
nvim
Obsidian.md
Ocaml
OVH Cloud
Python ML
Qwik
Rust repos
Salesforce
Vlang
Zig
Stars
Port of OpenAI's Whisper model in C/C++
Carbon Language's main repository: documents, design, implementation, and related tools. (NOTE: Carbon Language is experimental; see README)
ncnn is a high-performance neural network inference framework optimized for the mobile platform
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
NoSQL data store using the Seastar framework, compatible with Apache Cassandra and Amazon DynamoDB
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficie…
WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices,…
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
Lightweight, standalone C++ inference engine for Google's Gemma models.
A flexible, high-performance serving system for machine learning models
Transformer-related optimizations, including BERT and GPT
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
Stable Diffusion and Flux in pure C/C++
Fast inference engine for Transformer models
An Open Source Machine Learning Framework for Everyone
Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure
Vendor-independent TinyML deep learning library, compiler, and inference framework for microcomputers and microcontrollers
Source code for 'Design Patterns in Modern C++' by Dmitri Nesteruk