Video demo (version 0.0.1):
- Without Miniconda: This version runs without Miniconda.
- Rkllama Docker: A fully isolated version running in a Docker container.
- Support All Models: This branch ensures all models are tested before being merged into the main branch.
- Docker Package
A server to run and interact with LLM models optimized for Rockchip RK3588(S) and RK3576 platforms. Unlike similar tools such as Ollama or Llama.cpp, RKLLama runs models on the NPU.
- Version: rkllm-runtime library V1.1.4.
- ./models: contains your rkllm models.
- ./lib: C++ rkllm library used for inference and fix_freqence_platform.
- ./app.py: REST API server.
- ./client.py: client to interact with the server.
- Python 3.8 to 3.12
- Hardware: Orange Pi 5 Pro (Rockchip RK3588S, 6 TOPS NPU), 16 GB RAM.
- Hardware: Orange Pi 5 Plus (Rockchip RK3588S, 6 TOPS NPU), 16 GB RAM.
- OS: Ubuntu 24.04 arm64.
- OS: Armbian Linux 6.1.99-vendor-rk35xx (Debian stable bookworm), v25.2.2.
- Running models on the NPU.
- Partial Ollama API compatibility: primary support for the /api/chat and /api/generate endpoints (see the request sketch after this list).
- Pull models directly from Hugging Face.
- REST API with documentation.
- Listing available models.
- Dynamic loading and unloading of models.
- Inference requests with streaming and non-streaming modes.
- Message history.
- Simplified model naming - Use models with familiar names like "qwen2.5:3b".
- CPU Model Auto-detection - Automatic detection of RK3588 or RK3576 platform.
- Optional debug mode: detailed debugging with the --debug flag.
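For a quick check of the Ollama-compatible endpoints, the sketch below sends a non-streaming chat request over plain HTTP. It assumes the server listens on localhost:8080, that a model is installed under the simplified name "qwen2.5:3b", and that the response fields follow the Ollama /api/chat layout:

```python
# Minimal /api/chat sketch. Assumptions: server on localhost:8080, a model
# installed as "qwen2.5:3b", and Ollama-style request/response fields.
import requests

resp = requests.post(
    "http://localhost:8080/api/chat",
    json={
        "model": "qwen2.5:3b",
        "messages": [{"role": "user", "content": "Hello! Who are you?"}],
        "stream": False,  # ask for a single JSON reply instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
# "message"/"content" follows the Ollama response format (assumption).
print(resp.json()["message"]["content"])
```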
- French version: click here.
- Client: Installation guide
- REST API: English documentation
- REST API: French documentation
- Ollama API: Compatibility guide
- Model Naming: Naming convention
- Clone the repository:
git clone https://github.com/notpunchnox/rkllama
cd rkllama
- Install RKLLama:
chmod +x setup.sh
sudo ./setup.sh
Pull the RKLLama Docker image:
docker pull ghcr.io/notpunchnox/rkllama:main
Run the server:
docker run -it --privileged -p 8080:8080 ghcr.io/notpunchnox/rkllama:main
Set up by: ichlaffterlalu
Virtualization with conda is started automatically, as is the NPU frequency setting.
- Start the server
rkllama serve
To enable debug mode:
rkllama serve --debug
- Command to start the client
rkllama
or
rkllama help
- See the available models
rkllama list
- Run a model
rkllama run <model_name>
Then start chatting (verbose mode: displays formatted history and statistics).
You can download and install a model from the Hugging Face platform with the following command:
rkllama pull username/repo_id/model_file.rkllm
Alternatively, you can run the command interactively:
rkllama pull
Repo ID ( example: punchnox/Tinnyllama-1.1B-rk3588-rkllm-1.1.4): <your response>
File ( example: TinyLlama-1.1B-Chat-v1.0-rk3588-w8a8-opt-0-hybrid-ratio-0.5.rkllm): <your response>
This will automatically download the specified model file and prepare it for use with RKLLAMA.
Example with Qwen2.5 3b from c01zaut: https://huggingface.co/c01zaut/Qwen2.5-3B-Instruct-RK3588-1.1.4
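If you prefer to fetch the file yourself (for example to inspect it before installing), the same download can be done with the huggingface_hub library. This is only a sketch using the example repository and file names shown above; the downloaded file still has to be placed under ~/RKLLAMA/models as described in the manual installation section below.

```python
# Sketch of a manual download with huggingface_hub (pip install huggingface_hub).
# Repo and file names are the example values from this section; replace them
# with your own model.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="punchnox/Tinnyllama-1.1B-rk3588-rkllm-1.1.4",
    filename="TinyLlama-1.1B-Chat-v1.0-rk3588-w8a8-opt-0-hybrid-ratio-0.5.rkllm",
)
print(local_path)  # downloaded .rkllm file, ready to copy into ~/RKLLAMA/models/<model_name>/
```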
- Download the Model
  - Download .rkllm models directly from Hugging Face.
  - Alternatively, convert your GGUF models into .rkllm format (conversion tool coming soon on my GitHub).
- Place the Model
  - Navigate to the ~/RKLLAMA/models directory on your system.
  - Make a directory with the model name.
  - Place the .rkllm files in this directory.
  - Create a Modelfile and add this:
FROM="file.rkllm" HUGGINGFACE_PATH="huggingface_repository" SYSTEM="Your system prompt" TEMPERATURE=1.0
Example directory structure:
~/RKLLAMA/models/
└── TinyLlama-1.1B-Chat-v1.0/
    ├── Modelfile
    └── TinyLlama-1.1B-Chat-v1.0.rkllm
You must provide a link to a Hugging Face repository so the tokenizer and chat template can be retrieved. An internet connection is required for the tokenizer initialization (only once), and you can use a repository different from that of the model as long as the tokenizer is compatible and the chat template meets your needs.
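To make the layout concrete, here is a small sketch that creates the example directory and Modelfile from this section. The HUGGINGFACE_PATH value is only an assumption; point it to any repository whose tokenizer and chat template fit your model.

```python
# Sketch: create the example model directory and Modelfile described above.
# The HUGGINGFACE_PATH below is an assumption; the .rkllm file itself must
# still be copied into the same directory separately.
from pathlib import Path

model_dir = Path.home() / "RKLLAMA" / "models" / "TinyLlama-1.1B-Chat-v1.0"
model_dir.mkdir(parents=True, exist_ok=True)

(model_dir / "Modelfile").write_text(
    'FROM="TinyLlama-1.1B-Chat-v1.0.rkllm"\n'
    'HUGGINGFACE_PATH="TinyLlama/TinyLlama-1.1B-Chat-v1.0"\n'
    'SYSTEM="You are a helpful assistant."\n'
    'TEMPERATURE=1.0\n'
)
```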
- Go to the ~/RKLLAMA/ folder:
cd ~/RKLLAMA/
cp ./uninstall.sh ../
cd ../ && chmod +x ./uninstall.sh && ./uninstall.sh
- If you don't have the uninstall.sh file:
wget https://raw.githubusercontent.com/NotPunchnox/rkllama/refs/heads/main/uninstall.sh
chmod +x ./uninstall.sh
./uninstall.sh
Ollama API Compatibility: RKLLAMA now implements key Ollama API endpoints, with a primary focus on /api/chat and /api/generate, allowing integration with many Ollama clients. Additional endpoints are in various stages of implementation.
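Because of this compatibility, existing Ollama clients can simply be pointed at RKLLAMA. A sketch with the official ollama Python package (assumptions: the package is installed, the server listens on port 8080, and a model named "qwen2.5:3b" is available):

```python
# Sketch: reuse the ollama Python client against the RKLLAMA server.
# Assumptions: `pip install ollama`, server on localhost:8080, model "qwen2.5:3b".
from ollama import Client

client = Client(host="http://localhost:8080")
reply = client.chat(
    model="qwen2.5:3b",
    messages=[{"role": "user", "content": "Give me one fun fact about the RK3588."}],
)
print(reply["message"]["content"])
```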
Enhanced Model Naming: Simplified model naming convention allows using models with familiar names like "qwen2.5:3b" or "llama3-instruct:8b" while handling the full file paths internally.
Improved Performance and Reliability: Enhanced streaming responses with better handling of completion signals and optimized token processing.
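As a rough illustration of the streaming mode, the sketch below reads the newline-delimited JSON chunks from /api/generate, assuming the stream follows the Ollama format with a "response" text fragment per chunk and a final "done" flag:

```python
# Streaming sketch for /api/generate. Assumptions: Ollama-style NDJSON chunks
# with a "response" fragment and a "done" flag; model name is a placeholder.
import json
import requests

with requests.post(
    "http://localhost:8080/api/generate",
    json={"model": "qwen2.5:3b", "prompt": "Explain the RK3588 NPU in one sentence.", "stream": True},
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            break
print()
```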
CPU Auto-detection: Automatic detection of RK3588 or RK3576 platform with fallback to interactive selection.
Debug Mode: Optional debugging tools with detailed logs that can be enabled with the --debug flag.
Simplified Model Management:
- Delete models with one command using the simplified name
- Pull models directly from Hugging Face with automatic Modelfile creation
- Custom model configurations through Modelfiles
- Smart collision handling for models with similar names
If you have already downloaded models and do not wish to reinstall everything, please follow this guide: Rebuild Architecture
- OpenAI API compatible.
- Ollama API improvements
- Add multimodal models
- Add embedding models
- Add RKNN for onnx models (TTS, image classification/segmentation...)
- GGUF/HF to RKLLM conversion software
- System monitor
- ichlaffterlalu: Contributed with a pull request for Docker-Rkllama and fixed multiple errors.
- TomJacobsUK: Contributed with pull requests for Ollama API compatibility and model naming improvements, and fixed CPU detection errors.