The RK3588 comes with an NPU providing 6 TOPS of computing power. Rockchip's NPU uses a self-developed architecture and currently only works with non-open-source drivers and libraries.
Rockchip's NPU SDK is divided into two parts. The PC side uses rknn-toolkit2, which handles model conversion, inference, and performance evaluation on the PC. Specifically, it converts mainstream models such as Caffe, TensorFlow, TensorFlow Lite, ONNX, DarkNet, and PyTorch into RKNN models, and these RKNN models can then be used on the PC for simulated inference and for evaluating execution time and memory overhead. The other part runs on the board: the RKNN runtime environment, which includes a set of C API libraries, driver modules for communicating with the NPU, executable programs, and so on. This article introduces how to use Rockchip's NPU SDK.
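As a rough illustration of the PC-side workflow, the sketch below follows the usual rknn-toolkit2 pattern of convert, build, export, and simulated inference (the model file, preprocessing values, quantization dataset, and input size are placeholders rather than the exact contents of the yolov5 example's test.py):

from rknn.api import RKNN
import cv2

rknn = RKNN(verbose=True)

# Preprocessing parameters; the values here are illustrative
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])

# Convert an ONNX model into an RKNN model
rknn.load_onnx(model='yolov5s.onnx')
rknn.build(do_quantization=True, dataset='./dataset.txt')  # dataset.txt lists images used for quantization
rknn.export_rknn('./yolov5s.rknn')

# Without a target, init_runtime() uses the PC simulator
rknn.init_runtime()
img = cv2.imread('bus.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640))  # match the model's input size
outputs = rknn.inference(inputs=[img])

rknn.release()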
There are two ways to get the RKNN library source code.
If you want the dynamic libraries to stay consistent with those on the board, you need to pull the entire SDK provided by the board vendor (Hot Wheels Technology).
The rknn-toolkit2 source code is under external/rknn-toolkit2 in the SDK root directory, and the rknpu2 source code is under external/rknpu2. It is recommended to copy these two source trees to a separate working directory before use.
If you want to try the latest version of RKNN, you can pull it directly from Rockchip's GitHub repositories: rknn-toolkit2 is at https://github.com/rockchip-linux/rknn-toolkit2 (also given below), and rknpu2 is hosted under the same rockchip-linux organization.
The following commands are executed on an x86 Ubuntu host.
Since the rknn-toolkit2 runtime environment has many dependencies, it is recommended to set up the PC side of RKNN directly with Docker. The Docker image provided by Rockchip already contains all the necessary environments.
- Uninstall old Docker versions
sudo apt-get remove docker docker-engine docker.io containerd runc
- Install dependencies:
sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
- Trust Docker's GPG public key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
To verify that the public key was added successfully, you can use the following command
sudo apt-key fingerprint 0EBFCD88
- Add the software source and install Docker
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
To verify that the installation succeeded, you can use the following command
docker -v
The following commands are also executed on the x86 Ubuntu host.
First download rknn-toolkit2. The address is
https://github.com/rockchip-linux/rknn-toolkit2
Note that the Docker image itself is not in this repository. The page contains a Baidu Netdisk link provided by Rockchip; follow that link to get the Docker image.
Open this Netdisk link, find the Docker image archive (the rknn-toolkit2-1.5.0-cp36-docker.tar.gz file used below), and download it.
If you want to update rknn-toolkit2 later, newer versions can also be found on this Netdisk; it is recommended to choose the version consistent with the one in the SDK.
After downloading, change to the directory containing the Docker image archive and execute
sudo docker load --input rknn-toolkit2-1.5.0-cp36-docker.tar.gz
Then execute
sudo docker images
You can see that this image has been loaded
Then use the command
sudo docker run -t -i --privileged -v /dev/bus/usb:/dev/bus/usb -v $(pwd)/rknn-toolkit2/examples/onnx/yolov5:/rknn_yolov5_demo rknn-toolkit2:1.5.0-cp36 /bin/bash
Here -v maps a host directory into the Docker environment. In this case one of the rknn-toolkit2 examples is mapped, but other directories can be mapped as well. The /dev/bus/usb mapping is needed later for adb debugging: keep it if your board provides an adb service, otherwise it can be dropped.
In the command above, $(pwd)/rknn-toolkit2/ needs to be replaced with the actual rknn-toolkit2 project directory, which contains the examples/onnx/yolov5 folder.
After entering the Docker container, use the ls command to list the files under the root directory; there is indeed a folder called rknn_yolov5_demo.
Enter rknn_yolov5_demo inside the container and execute
python3 ./test.py
You can get the following results
The converted model is stored on the host in rknn-toolkit2/examples/onnx/yolov5, where yolov5s.rknn is the model in RKNN format.
The inference results are saved in result.jpg in this directory, as shown below (the original image is on the left, and the inference result is on the right)
First, go back to rknn-toolkit2: the RKNN model file needs to be regenerated so that it is suitable for execution on the board. With the method described above, the generated RKNN model is simulated and run on the PC. To generate a model for the target platform, test.py needs to be modified.
If adb is not used, just add the parameter target_platform='rk3588' to rknn.config. If adb is available and you want to connect to the board for debugging, parameters also need to be added to rknn.init_runtime, as sketched below.
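A minimal sketch of the two changes, assuming the same test.py structure as above (the device_id value is a placeholder and is only needed when more than one adb device is attached):

# Build the model for the board instead of the simulator
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')

# Without adb: keep the default simulator runtime
rknn.init_runtime()

# With adb debugging: connect to the board and run inference there
# rknn.init_runtime(target='rk3588', device_id='xxxxxxxx')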
Then execute python3 ./test.py in Docker again to regenerate the yolov5s.rknn file, which can be copied to the RK3588 device and run there.
Then open the rknpu2 folder on the host machine, and select the yolov5 demo again.
cd rknpu2/examples/rknn_yolov5_demo
Modify the application build script build-linux_RK3588.sh. The cross-compilation toolchain is located in prebuilts/gcc/linux-x86/aarch64 under the SDK directory; adjust the PATH and TOOL_CHAIN variables in the script according to the actual SDK installation path.
Execute build-linux_RK3588.sh. After that, the following files are generated
Copy the executable together with the model folder (containing the RKNN model, bus.jpg, and coco_80_labels_list.txt) to the same path on the board; ssh, adb, or other transfer methods can be used.
Then execute in this path
./rknn_yolov5_demo ./model/yolov5s.rknn ./model/bus.jpg
out.jpg will be generated in this directory; open this file to check the result.
The detection result on the board is basically the same as the simulated inference on the PC, and the confidence scores differ very little.
At this point, the entire flow of an RKNN demo, from PC-side inference to execution on the board, is complete.