
update rknn-toolkit2/rknn-toolkit-lite2 to 1.3.0

Signed-off-by: raul.rao <raul.rao@rock-chips.com>
pull/53/head
raul.rao 3 years ago
commit 9ad79343fa
  1. 20
      README.md
  2. BIN
      Rockchip_Quick_Start_RKNN_SDK_CN-1.2.0.pdf
  3. BIN
      Rockchip_Quick_Start_RKNN_SDK_V1.3.0_CN.pdf
  4. 392
      doc/RKNNToolKit2_OP_Support-1.3.0.md
  5. 0
      doc/RRKNNToolKit2_API_Difference_With_Toolkit1-1.3.0.md
  6. BIN
      doc/Rockchip_Quick_Start_RKNN_Toolkit2_CN-1.3.0.pdf
  7. BIN
      doc/Rockchip_Quick_Start_RKNN_Toolkit2_EN-1.3.0.pdf
  8. BIN
      doc/Rockchip_User_Guide_RKNN_Toolkit2_CN-1.2.0.pdf
  9. BIN
      doc/Rockchip_User_Guide_RKNN_Toolkit2_CN-1.3.0.pdf
  10. BIN
      doc/Rockchip_User_Guide_RKNN_Toolkit2_EN-1.2.0.pdf
  11. BIN
      doc/Rockchip_User_Guide_RKNN_Toolkit2_EN-1.3.0.pdf
  12. 26
      doc/changelog-1.3.0.txt
  13. 1
      doc/requirements_cp36-1.3.0.txt
  14. 19
      doc/requirements_cp38-1.3.0.txt
  15. 13
      examples/functions/board_test/test.py
  16. 2
      examples/onnx/yolov5/test.py
  17. 27
      examples/readme.txt
  18. 2
      examples/tensorflow/inception_v3_qat/test.py
  19. BIN
      packages/rknn_toolkit2-1.3.0_11912b58-cp36-cp36m-linux_x86_64.whl
  20. BIN
      packages/rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
  21. BIN
      rknn-toolkit-lite2-v1.2.0/doc/Rockchip_User_Guide_RKNN_Toolkit_Lite2_V1.2.0_CN.pdf
  22. 6
      rknn-toolkit-lite2-v1.2.0/doc/change_log.txt
  23. BIN
      rknn-toolkit-lite2-v1.2.0/packages/rknn_toolkit_lite2-1.2.0-cp37-cp37m-linux_aarch64.whl
  24. BIN
      rknn-toolkit-lite2-v1.2.0/packages/rknn_toolkit_lite2-1.2.0-cp39-cp39-linux_aarch64.whl
  25. BIN
      rknn_toolkit_lite2/docs/Rockchip_User_Guide_RKNN_Toolkit_Lite2_V1.3.0_CN.pdf
  26. BIN
      rknn_toolkit_lite2/docs/Rockchip_User_Guide_RKNN_Toolkit_Lite2_V1.3.0_EN.pdf
  27. 12
      rknn_toolkit_lite2/docs/change_log.txt
  28. BIN
      rknn_toolkit_lite2/examples/inference_with_lite/resnet18_for_rk356x.rknn
  29. BIN
      rknn_toolkit_lite2/examples/inference_with_lite/resnet18_for_rk3588.rknn
  30. 0
      rknn_toolkit_lite2/examples/inference_with_lite/space_shuttle_224.jpg
  31. 0
      rknn_toolkit_lite2/examples/inference_with_lite/test.py
  32. BIN
      rknn_toolkit_lite2/packages/rknn_toolkit_lite2-1.3.0-cp37-cp37m-linux_aarch64.whl
  33. BIN
      rknn_toolkit_lite2/packages/rknn_toolkit_lite2-1.3.0-cp39-cp39-linux_aarch64.whl
  34. 2
      rknn_toolkit_lite2/packages/rknn_toolkit_lite2_1.3.0_packages.md5sum

20
README.md

@@ -1,27 +1,31 @@
# Introduction
RKNN-Toolkit2 is a software development kit for users to perform model conversion, inference and performance evaluation on PC and Rockchip NPU platforms (RK3566, RK3568, RK3588, RK3588S).
RKNN-Toolkit2 is a software development kit for users to perform model conversion, inference and performance evaluation on PC and Rockchip NPU platforms (RK3566, RK3568, RK3588, RK3588S, RV1103, RV1106).
RKNN-Toolkit-Lite2 provides Python programming interfaces for Rockchip NPU platform (RK3566, RK3568, RK3588, RK3588S) to help users deploy RKNN models and accelerate the implementation of AI applications.
# Download
- You can also download all packages, docker image, examples, docs and platform-tools from baidu cloud: [rknn-toolkit2-v1.2.0](https://eyun.baidu.com/s/3jJ4k51s), fetch code: rknn
- You can also download all packages, docker image, examples, docs and platform-tools from baidu cloud: [RK_NPU_SDK](https://eyun.baidu.com/s/3eTDMk6Y), fetch code: rknn
# Notes
- Currently rknn-toolkit2 is not compatible with [rknn-toolkit](https://github.com/rockchip-linux/rknn-toolkit)
- Currently only supported on Ubuntu 18.04 with python 3.6
- Currently only supported on Ubuntu 18.04 with python 3.6 / Ubuntu 20.04 with python 3.8
- If you have a problem with RKNN-Toolkit2, it is recommended to create an issue or look for answers in [Issues](https://github.com/rockchip-linux/rknn-toolkit2/issues).
- Latest version:1.2.0(Release version)
- Latest version:1.3.0(Release version)
# Checksums
## MD5
```
148f076c8707b066307f169975259bbb rknn_toolkit2-1.2.0_f7bb160f-cp36-cp36m-linux_x86_64.whl
69ffac3f944cf682acf563245e532405 packages/rknn_toolkit2-1.3.0_11912b58-cp36-cp36m-linux_x86_64.whl
fdf38d4ec4a8a0ac3c9faeba38baa908 rknn-toolkit2-1.2.0-cp36-docker.tar.gz
f37e2296e631587b929a75028dd235bf packages/rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl
e4430d25066487172a4a861b04098495 rknn_toolkit_lite2-1.2.0-cp37-cp37m-linux_aarch64.whl
f269a9cf3fa209b2615efdd0711f3f18 docker/rknn-toolkit2-1.3.0-cp36-docker.tar.gz
dd9f5509d3342b6cbc52d5814d0909d3 rknn_toolkit_lite2-1.2.0-cp39-cp39-linux_aarch64.whl
db40be96541a121b4c77109d829089b6 docker/rknn-toolkit2-1.3.0-cp38-docker.tar.gz
4dff57052f94a9c080774ab0e6af1d56 rknn_toolkit_lite2/packages/rknn_toolkit_lite2-1.3.0-cp37-cp37m-linux_aarch64.whl
1085cd7ee4bb18ef17b03d66e5428d69 rknn_toolkit_lite2/packages/rknn_toolkit_lite2-1.3.0-cp39-cp39-linux_aarch64.whl
```
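A downloaded package can be checked against the values above before installing. The snippet below is an illustrative sketch (not part of the release); it assumes the cp36 wheel listed above sits under `packages/`:
```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Hash the file in chunks so large wheels / docker images are not read into memory at once.
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

expected = '69ffac3f944cf682acf563245e532405'
path = 'packages/rknn_toolkit2-1.3.0_11912b58-cp36-cp36m-linux_x86_64.whl'
actual = md5sum(path)
print('OK' if actual == expected else 'MD5 mismatch: got %s' % actual)
```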
# Feedback and Community Support
Two ways are available:

BIN
Rockchip_Quick_Start_RKNN_SDK_CN-1.2.0.pdf

Binary file not shown.

BIN
Rockchip_Quick_Start_RKNN_SDK_V1.3.0_CN.pdf

Binary file not shown.

392
doc/RKNNToolKit2_OP_Support-1.2.0.md → doc/RKNNToolKit2_OP_Support-1.3.0.md

@@ -2,11 +2,11 @@
## Explanation of terms:
Remarks:
**Remarks**:
Operators' specifications must meet the remarks' requirements.
Broadcast rule:
**Broadcast rule**:
- per-layer:
@@ -26,71 +26,97 @@ Broadcast rule:
shape(A) = (2, 3, 4, 5), shape(B) = (5,) ==> shape(result) = (2, 3, 4, 5)
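As a purely illustrative aside, the per-layer broadcast example above matches ordinary numpy broadcasting:
```python
import numpy as np

# shape(A) = (2, 3, 4, 5), shape(B) = (5,)  ==>  shape(result) = (2, 3, 4, 5)
A = np.zeros((2, 3, 4, 5))
B = np.zeros((5,))
print((A + B).shape)  # (2, 3, 4, 5)
```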
**Input Size Restrictions Description**
Assuming that input size is [N,H,W,C] (layout is NHWC)
- Case 1: the first layer is **Convolution**, whose kernel size is [kernel_height, kernel_width]
**W * kernel_height < 7168**
**kernel_height * kernel_width < 128**
- Case 2: first layer is not Convolution, and C == 1 or C == 3 or C == 4
**W < 7168**
- others:
**No Restrictions**
## ONNX OPs supported by RKNN Toolkit2
According to [ONNX official instructions](https://github.com/microsoft/onnxruntime/blob/master/docs/Versioning.md 'ONNX Version Description'), the corresponding ONNX opset version is 12.
The list of ONNX OPs supported by RKNN Toolkit2 is as follows:
| **Operators** | **Remarks** |
|-----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Add | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />support broadcast rule: per-layer/channel/element |
| AveragePool | channel: [1, 8192]<br />kernel height/width: [1, 7]<br />stride height/width: [1, 8]<br />pad left/right/top/bottom: [0, 7] |
| ArgMin ||
| ArgMax ||
| BatchNormalization | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| cast | only support bool/int8/float |
| Clip | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Concat | axis: 1,2,3 |
| Conv | channel: unlimited <br />kernel height/width: [1, 31]<br />stride height/width: [1, 7]<br />kernels: [1, 8184]<br />pad left/right/top/bottom: [0, 15]<br />dilation: [1, 31]<br />group: unlimited |
| ConvTranspose | channel: unlimited<br />kernel height/width: [1, 31]<br />stride height/width: 2, 4, 8<br />kernels: [1, 8192]<br />pad left/right/top/bottom: [0, 15]<br />dilation: [1, 31]<br />group: unlimited |
| DepthToSpace ||
| Div | support broadcast rule: per-element/other |
| Elu ||
| Exp ||
| Flatten ||
| Gather ||
| Gemm | channel: unlimited<br /> One input should be Const |
| GlobalAveragePool | channel: [1, 8192]<br />kernel height/width: [1, 343]<br /> |
| GlobalMaxPool | channel: [1, 8192]<br />kernel height/width: [1, 343]<br /> |
| Greater | support broadcast rule: per-element/other |
| HardSigmoid ||
| InstanceNormalization ||
| HardSwish ||
| LeakyRelu | channel: unlimited<br />height: [1, 8192]<br />width: [1, 8176] |
| Less | support broadcast rule: per-element/other |
| LpNormalization ||
| LRN ||
| LSTM | batchsize: 1 |
| GRU | batchsize: 1 |
| MatMul | |
| Max | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br /> dims=4 |
| MaxPool | channel: [1, 8192]<br />kernel height/width: [1, 7]<br />stride height/width: [1, 8]<br />pad left/right/top/bottom: [0, 7]<br />auto_pad only support NOTSET,ceil_mode only support 0,unsupport dilations |
| MaxRoiPool ||
| MaxUnpool | unsupport pad |
| Mul | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />support broadcast rule: per-layer/channel/element |
| Pad | pad value should >= 0; pad dims must be 2 when mode is reflect or edge |
| Pow ||
| PRelu | channel: unlimited<br />height: [1, 8192]<br />width: [1, 8176]<br />slope support broadcast rule:: per-layer/channel |
| ReduceMax ||
| ReduceMean | output dims <= 4 |
| ReduceSum | output dims <= 4 |
| ReduceMin ||
| Relu | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Reshape ||
| Resize | bilinear(not support tf_crop_and_resize); nearest2d |
| ReverseSequence ||
| RoiAlign | pool type: average |
| Sigmoid ||
| Slice ||
| Softmax ||
| SpaceToDepth ||
| Split ||
| Squeeze ||
| Tanh | channel: unlimited<br />height: [1, 8192]<br />width: [1, 8176] |
| Tile ||
| Transpose ||
| Upsample (resize) | coordinate_transformation_mode unsupport tf_crop_and_resize |
| **Operators** | **Remarks** |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Add | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />support broadcast rule: per-layer/channel/element |
| AveragePool | channel: [1, 8192]<br />kernel height/width: [1, 7]<br />stride height/width: [1, 8]<br />pad left/right/top/bottom: [0, 7]<br />auto_pad: NOTSET<br />count_include_pad: 1 <br />ceil_mode: 0 |
| ArgMin | |
| ArgMax | |
| BatchNormalization | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| cast | only support bool/int8/float |
| Clip | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Concat | |
| Conv | channel: [0, 150000] <br />kernel height/width: [1, 31]<br />stride height/width: [1, 7]<br />pad left/right/top/bottom: [0, 15] |
| ConvTranspose | channel: [0, 150000]<br />kernel height/width: [1, 31]<br />stride height/width: 2, 4, 8<br />kernels: [1, 8192]<br />pad left/right/top/bottom: [0, 15] |
| DepthToSpace | |
| Div | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8192]<br />support broadcast rule: per-element/other |
| Elu | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br /> |
| Exp | |
| Flatten | |
| Gather | |
| Gemm | channel: [0, 150000]<br />One input should be Const |
| GlobalAveragePool | channel: [1, 8192]<br />kernel height/width: [1, 343]<br /> |
| GlobalMaxPool | channel: [1, 8192]<br />kernel height/width: [1, 343]<br /> |
| Greater | support broadcast rule: per-element/other |
| HardSigmoid | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| HardSwish | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| InstanceNormalization | |
| LeakyRelu | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Less | support broadcast rule: per-element/other |
| LpNormalization | |
| LRN | |
| LSTM | batchsize: 1<br />input_forget: 0 |
| GRU | batchsize: 1 |
| MatMul | |
| Max | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| MaxPool               | channel: [1, 8192]<br />kernel height/width: [1, 7]<br />stride height/width: [1, 8]<br />pad left/right/top/bottom: [0, 7]<br />auto_pad: NOTSET<br />ceil_mode: 0<br />dilations: not supported<br />storage_order: 0 |
| MaxRoiPool | |
| MaxUnpool             | pad not supported |
| Min | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Mul | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />support broadcast rule: per-layer/channel/element |
| Pad | width: [1, 8176]<br />mode: constant<br />pads n_begin/n_end/c_begin/c_end: 1 |
| Pow | |
| PRelu | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />slope support broadcast rule: per-layer/channel |
| ReduceMax | |
| ReduceMean | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8192] |
| ReduceSum | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8192] |
| ReduceMin | |
| Relu | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Reshape | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Resize | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />mode: nearest<br />scales: [1, 8] |
| ReverseSequence | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />batch_axis: 1 <br />time_axis: 0 |
| RoiAlign | pool type: average |
| Sigmoid | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Slice | steps: 1 |
| Softmax | channel: [1, 8192]<br />axis: 1 |
| SpaceToDepth          | |
| Split | axis: 1(channel) |
| Sub | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />support broadcast rule: per-layer/channel/element |
| Squeeze | |
| Tanh | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Tile | |
| Transpose | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Upsample (resize)     | coordinate_transformation_mode: tf_crop_and_resize not supported |
## Caffe OPs supported by RKNN Toolkit2
@@ -99,46 +125,46 @@ Caffe protocols RKNN Toolkit2 uses only based on the officially modified protoco
The protocol based on the official revision of Berkeley comes from [berkeley caffe](https://github.com/BVLC/caffe/tree/master/src/caffe/proto 'Berkeley Caffe'), commit hash 21d0608. On this basis, RKNN Toolkit2 has added some OPs.
Based on this protocol, the list of Caffe OPs supported by RKNN Toolkit2 is as follows:
| **Operators** | **Remarks** |
|------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|
| BatchNorm | same as onnx BatchNormalization |
| bn (BatchNorm + Scale) | same as onnx BatchNormalization according to https://github.com/TimoSaemann/caffe-segnet-cudnn5 |
| BNLL ||
| Concat | same as onnx Concat |
| Convolution | same as onnx Conv |
| ConvolutionDepthwise | channel:unlimited<br />kernel height/width: [1, 8]<br />stride height/width: [1, 7]<br />kernels: 1<br />pad left/right/top/bottom: [0, 15] |
| Crop ||
| Deconvolution | same as ConvTranspose |
| Dropout ||
| Eltwise | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />support broadcast rule: per-layer/channel/element |
| Flatten ||
| HardSigmoid ||
| InnerProduct | same as onnx Gemm |
| LRN | same as onnx LRN |
| Lstm | same as onnx LSTM according to https://github.com/xmfbit/warpctc-caffe |
| Normalize ||
| Permute | same as onnx Transpose |
| Power ||
| Pooling | same as onnx pooling |
| PRelu | same as onnx PRelu |
| Proposal | batch: 1 |
| Reduction | output dims <= 4 |
| Relu | same as onnx Relu |
| Relu6 | same as onnx Clip |
| Reorg ||
| Reshape | same as onnx Reshape |
| Resize | bilinear; nearest |
| Reverse ||
| ROIPooling | same as MaxRoiPool according to https://github.com/twmht/caffe-pva-faster-rcnn |
| Scale | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| Sigmoid | same as onnx Sigmoid |
| Slice | same as onnx Split |
| Softmax | same as onnx Softmax |
| Split | same as onnx Slice |
| TanH | same as onnx TanH |
| Tile | same as onnx Tile |
| Transpose | same as onnx Transpose |
| Upsample | according to https://github.com/SeanQ88/caffe_upsample and https://github.com/TimoSaemann/caffe-segnet-cudnn5 |
| **Operators** | **Remarks** |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| BatchNorm | same as onnx BatchNormalization |
| bn (BatchNorm + Scale) | same as onnx BatchNormalization according to https://github.com/TimoSaemann/caffe-segnet-cudnn5 |
| BNLL | |
| Concat | same as onnx Concat |
| Convolution | same as onnx Conv |
| ConvolutionDepthwise | kernel height/width: [1, 8]<br />kernels: 1<br />others same as onnx Conv |
| Crop | |
| Deconvolution | same as ConvTranspose |
| Dropout | |
| Eltwise | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176]<br />support broadcast rule: per-layer/channel/element |
| Flatten | |
| HardSigmoid | |
| InnerProduct | same as onnx Gemm |
| LRN | same as onnx LRN |
| Lstm | same as onnx LSTM according to https://github.com/xmfbit/warpctc-caffe |
| Normalize | |
| Permute | same as onnx Transpose |
| Power | |
| Pooling | same as onnx pooling |
| PRelu | same as onnx PRelu |
| Proposal | batch: 1 |
| Reduction | output dims <= 4 |
| Relu | same as onnx Relu |
| Relu6 | same as onnx Clip |
| Reorg | |
| Reshape | same as onnx Reshape |
| Resize | bilinear; nearest |
| Reverse | |
| ROIPooling | same as MaxRoiPool according to https://github.com/twmht/caffe-pva-faster-rcnn |
| Scale | same as onnx Mul |
| Sigmoid | same as onnx Sigmoid |
| Slice | same as onnx Split |
| Softmax | same as onnx Softmax |
| Split | same as onnx Slice |
| TanH | same as onnx TanH |
| Tile | same as onnx Tile |
| Transpose | same as onnx Transpose |
| Upsample | according to https://github.com/SeanQ88/caffe_upsample and https://github.com/TimoSaemann/caffe-segnet-cudnn5 |
## Pytorch OPs supported by RKNN Toolkit2
@@ -146,95 +172,95 @@ Based on this protocol, the list of Caffe OPs supported by RKNN Toolkit2 is as f
The Pytorch version supported by RKNN Toolkit2 is >1.6.0; models generated by other versions may not be supported.
The list of Pytorch OPs supported by RKNN Toolkit2 is as follows:
| **Operators** | **Remarks** |
|---------------------------|------------------------------------|
| aten::_convolution | same as onnx Conv |
| aten::add | same as onnx Add |
| aten::avg_pool2d | same as onnx AveragePool |
| aten::batch_norm | same as onnx BatchNormalization |
| aten::cat | same as onnx Concat |
| aten::chunk ||
| aten::dropout ||
| aten::elu | same as onnx Elu |
| aten::flatten ||
| aten::hardswish | same as onnx HardSwish |
| aten::instance_norm | same as onnx InstanceNormalization |
| aten::layer_norm ||
| aten::leaky_relu | same as onnx LeakyRelu |
| aten::linear | same as onnx Gemm |
| aten::matmul | same as onnx MatMul |
| aten::max_pool2d | same as onnx MaxPool |
| aten::mean | output dims <= 4 |
| aten::mul | same as onnx Mul |
| aten::pad | same as onnx Pad |
| aten::permute | same as onnx Transpose |
| aten::prelu | same as onnx PRelu |
| aten::relu | same as onnx Relu |
| aten::reshape | |
| aten::sigmoid | same as onnx Sigmoid |
| aten::slice | same as onnx Slice |
| aten::split | same as onnx Split |
| aten::softmax | same as onnx Softmax |
| aten::squeeze | same as onnx Squeeze |
| aten::sum | output dims <= 4 |
| aten::tanh | same as onnx TanH |
| aten::upsample_bilinear2d | same as onnx Resize |
| aten::upsample_nearest2d | same as onnx Resize |
| aten::view | same as onnx Reshape |
| **Operators** | **Remarks** |
| ------------------------- | --------------------------------------------------------------- |
| aten::_convolution | same as onnx Conv |
| aten::add | same as onnx Add |
| aten::avg_pool2d | same as onnx AveragePool |
| aten::batch_norm | same as onnx BatchNormalization |
| aten::cat | same as onnx Concat |
| aten::chunk | |
| aten::dropout | |
| aten::elu | same as onnx Elu |
| aten::flatten | |
| aten::hardswish | same as onnx HardSwish |
| aten::instance_norm | same as onnx InstanceNormalization |
| aten::layer_norm | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8192] |
| aten::leaky_relu | same as onnx LeakyRelu |
| aten::linear | same as onnx Gemm |
| aten::matmul | same as onnx MatMul |
| aten::max_pool2d | same as onnx MaxPool |
| aten::mean | output dims <= 4 |
| aten::mul | same as onnx Mul |
| aten::pad | same as onnx Pad |
| aten::permute | same as onnx Transpose |
| aten::prelu | same as onnx PRelu |
| aten::relu | same as onnx Relu |
| aten::reshape | |
| aten::sigmoid | same as onnx Sigmoid |
| aten::slice | same as onnx Slice |
| aten::split | same as onnx Split |
| aten::softmax | same as onnx Softmax |
| aten::squeeze | same as onnx Squeeze |
| aten::sum | output dims <= 4 |
| aten::tanh | same as onnx TanH |
| aten::upsample_bilinear2d | same as onnx Resize |
| aten::upsample_nearest2d | same as onnx Resize |
| aten::view | same as onnx Reshape |
## TensorFlow OPs supported by RKNN Toolkit2
The pb files (containing the OPs below) generated by TensorFlow versions 1.12 - 1.15 for 1.x and 2.3 - 2.5 for 2.x are supported by RKNN Toolkit2. For more information on TensorFlow version compatibility, please refer to the [tensorflow official instructions on OP version](https://www.tensorflow.org/guide/versions 'Tensorflow official instructions on OP version').
The list of TensorFlow OPs supported by RKNN Toolkit2 is as follows:
| **Operators** | **Remarks** |
|-----------------------|---------------------------------------------------------------------------------------------------------------------------------------------|
| Add | same as onnx Add |
| AvgPool | same as onnx AveragePool |
| Concat | same as onnx Concat |
| Conv2D | same as onnx Conv |
| DepthToSpace ||
| DepthwiseConv2d | channel:unlimited<br />kernel height/width: [1, 8]<br />stride height/width: [1, 7]<br />kernels: 1<br />pad left/right/top/bottom: [0, 15] |
| Div | same as onnx Div |
| Dropout ||
| Flatten ||
| LeakyRelu | same as onnx LeakyRelu |
| Less | same as onnx Less |
| LRN ||
| MatMul | |
| MaxPool | same as onnx MaxPool |
| Mean | output dims <= 4 |
| Pad | same as onnx Pad |
| Relu | same as onnx Relu |
| Reshape ||
| ResizeBilinear ||
| ResizeNearestNeighbor ||
| Sigmoid ||
| Slice ||
| Softmax ||
| Softplus ||
| SpaceToDepth ||
| Split ||
| Squeeze ||
| StridedSlice ||
| Tanh | same as onnx TanH |
| Transpose ||
| **Operators** | **Remarks** |
| --------------------- | ------------------------------------------------------------------------- |
| Add | same as onnx Add |
| AvgPool | same as onnx AveragePool |
| Concat | same as onnx Concat |
| Conv2D | same as onnx Conv |
| DepthToSpace | |
| DepthwiseConv2d | kernel height/width: [1, 8]<br />kernels: 1<br />others same as onnx Conv |
| Div | same as onnx Div |
| Dropout | |
| Flatten | |
| LeakyRelu | same as onnx LeakyRelu |
| Less | same as onnx Less |
| LRN | |
| MatMul | |
| MaxPool | same as onnx MaxPool |
| Mean | output dims <= 4 |
| Pad | same as onnx Pad |
| Relu | same as onnx Relu |
| Reshape | |
| ResizeBilinear | |
| ResizeNearestNeighbor | |
| Sigmoid | |
| Slice | |
| Softmax | |
| Softplus | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| SpaceToDepth | |
| Split | |
| Squeeze | |
| StridedSlice | |
| Tanh | same as onnx TanH |
| Transpose | |
## Darknet OPs supported by RKNN Toolkit2
The list of Darknet OPs supported by RKNN Toolkit2 is as follows:
| **Operators** | **Remarks** |
|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| add | same as onnx Add |
| batchnormalize | same as onnx BatchNormalization |
| concat | same as onnx Concat |
| convolutional | same as onnx Conv |
| depthwise_convolutional | channel:unlimited<br />kernel height/width: [1, 8]<br />stride height/width: [1, 7]<br />kernels: 1<br />pad left/right/top/bottom: [0, 15] |
| fullconnect | |
| leakyrelu | same as onnx LeakyRelu |
| mish ||
| pooling | **AveragePool**:<br />channel: [1, 8192]<br />kernel height/width: [1, 7]<br />stride height/width: [1, 8]<br />pad left/right/top/bottom: [0, 7]<br /><br /> **GlobalAveragePool**:<br /> channel: [1, 8192]<br />kernel height/width: [1, 128]<br />stride height/width: [1, 8]<br />pad left/right/top/bottom: [0, 7] <br /> <br /> **MaxPool/GlobalMaxPool**:<br /> channel: [1, 8192]<br />kernel height/width: [1, 7]<br />stride height/width: [1, 8]<br />pad left/right/top/bottom: [0, 7]<br /><br />MaxPool: <br />auto_pad only support NOTSET,ceil_mode only support 0,unsupport dilations |
| route ||
| shortcut ||
| softmax ||
| upsampling ||
| **Operators** | **Remarks** |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| add | same as onnx Add |
| batchnormalize | same as onnx BatchNormalization |
| concat | same as onnx Concat |
| convolutional | same as onnx Conv |
| depthwise_convolutional | kernel height/width: [1, 8]<br />kernels: 1<br />others same as onnx Conv |
| fullconnect | |
| leakyrelu | same as onnx LeakyRelu |
| mish | channel: [1, 8192]<br />height: [1, 8192]<br />width: [1, 8176] |
| pooling | **AveragePool**: same as onnx AveragePool <br /> **GlobalAveragePool**: same as onnx GlobalAveragePool <br /> **MaxPool/GlobalMaxPool**: same as onnx MaxPool/GlobalMaxPool |
| route | |
| shortcut | |
| softmax | |
| upsampling | |

0
doc/RRKNNToolKit2_API_Difference_With_Toolkit1-1.2.0.md → doc/RRKNNToolKit2_API_Difference_With_Toolkit1-1.3.0.md

BIN
doc/Rockchip_Quick_Start_RKNN_Toolkit2_CN-1.2.0.pdf → doc/Rockchip_Quick_Start_RKNN_Toolkit2_CN-1.3.0.pdf

Binary file not shown.

BIN
doc/Rockchip_Quick_Start_RKNN_Toolkit2_EN-1.2.0.pdf → doc/Rockchip_Quick_Start_RKNN_Toolkit2_EN-1.3.0.pdf

Binary file not shown.

BIN
doc/Rockchip_User_Guide_RKNN_Toolkit2_CN-1.2.0.pdf

Binary file not shown.

BIN
doc/Rockchip_User_Guide_RKNN_Toolkit2_CN-1.3.0.pdf

Binary file not shown.

BIN
doc/Rockchip_User_Guide_RKNN_Toolkit2_EN-1.2.0.pdf

Binary file not shown.

BIN
doc/Rockchip_User_Guide_RKNN_Toolkit2_EN-1.3.0.pdf

Binary file not shown.

26
doc/changelog-1.2.0.txt → doc/changelog-1.3.0.txt

@@ -1,8 +1,32 @@
2021-4-22
Version: v1.3.0:
Updates:
1. New features: python3.8 / ubuntu20.04 platform support
2. Fixed some known bugs:
1) Fixed some graph optimization and quantization bugs
2021-4-7
Version: v1.2.5:
Updates:
1. New feature: support for the rv1103/rv1109 platforms.
2. Fixed some known bugs:
1) Fixed some QAT model conversion issues
2) Fixed some graph optimization bugs
2021-1-27
Version: v1.2.1-beta:
Updates:
1. New feature: for multi-batch NHWC-format inputs, when the number of valid elements in the H dimension differs from the number of elements actually stored in memory, the actual number of elements in the H direction can be set via h_stride.
2. Fixed some known bugs:
1) Fixed duplicate internal variable names in the LSTM operator.
2021-1-12
Version: v1.2.0
Updates:
1. New features: support for the rk3588 platform; rknn model encryption support; tensorflow/tflite/pytorch quantization-aware-training (QAT) model support; added support for some new ops: InstanceNormalization, Swish, Conv1D, etc. (see the op support list); added parameter-count and compute analysis
2. Examples updates: added a pytorch-to-onnx export demo: resnet18_export_onnx; added a pytorch QAT model loading demo: resnet18_qat; added the model encryption feature
2. Examples updates: added a pytorch-to-onnx export demo: resnet18_export_onnx; added a pytorch QAT model loading demo: resnet18_qat; added the model encryption feature; added an rknn conversion demo for the 3588 platform
3. API changes: removed some unnecessary parameter settings from the config, load_caffe, load_tensorflow and other interfaces; updated the eval_perf interface; see the User_Guide document for detailed changes
4. Fixed some known bugs:
1) Fixed issues where some models could not be converted to rknn

1
doc/requirements_cp36-1.2.0.txt → doc/requirements_cp36-1.3.0.txt

@@ -15,5 +15,6 @@ requests==2.21.0
opencv-python==4.4.0.46
PuLP==2.4
scikit_image==0.17.2
# if installing bfloat16 fails, please install numpy manually first: "pip install numpy==1.16.6"
bfloat16==1.1
flatbuffers==1.12

19
doc/requirements_cp38-1.3.0.txt

@@ -0,0 +1,19 @@
numpy==1.17.3
onnx==1.7.0
onnxoptimizer==0.1.0
onnxruntime==1.6.0
tensorflow==2.2.0
tensorboard==2.2.2
protobuf==3.12.0
torch==1.6.0
torchvision==0.7.0
psutil==5.6.2
ruamel.yaml==0.15.81
scipy==1.4.1
tqdm==4.27.0
requests==2.21.0
opencv-python==4.4.0.46
PuLP==2.4
scikit_image==0.17.2
# if installing bfloat16 fails, please install numpy manually first: "pip install numpy==1.17.3"
bfloat16==1.1

13
examples/functions/board_test/test.py

@@ -29,7 +29,7 @@ if __name__ == '__main__':
# Pre-process config
print('--> Config model')
rknn.config(mean_values=[103.94, 116.78, 123.68], std_values=[58.82, 58.82, 58.82],
quant_img_RGB2BGR=True, target_platform='rk3566')
quant_img_RGB2BGR=True, target_platform='rk3588')
print('done')
# Load model
@@ -72,14 +72,21 @@ if __name__ == '__main__':
# Set inputs
img = cv2.imread('./dog_224x224.jpg')
print('--> List devices')
rknn.list_devices()
# Init runtime environment
print('--> Init runtime environment')
ret = rknn.init_runtime(target='rk3566', perf_debug=True, eval_mem=True)
ret = rknn.init_runtime(target='rk3588', perf_debug=True, eval_mem=True)
if ret != 0:
print('Init runtime environment failed!')
exit(ret)
print('done')
print('--> Get sdk version')
sdk_version = rknn.get_sdk_version()
print(sdk_version)
# eval perf
print('--> Eval perf')
rknn.eval_perf(inputs=[img])
@@ -97,7 +104,7 @@ if __name__ == '__main__':
# Accuracy analysis
print('--> Accuracy analysis')
ret = rknn.accuracy_analysis(inputs=['./dog_224x224.jpg'], output_dir='./snapshot', target='rk3566')
ret = rknn.accuracy_analysis(inputs=['./dog_224x224.jpg'], output_dir='./snapshot', target='rk3588')
if ret != 0:
print('Accuracy analysis failed!')
exit(ret)
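Taken together, the hunks above correspond to the following board-test flow. This is a condensed, illustrative sketch only: the input model and quantization dataset paths are placeholders (an ONNX model is assumed), while the RKNN calls are the ones used by this example.
```python
# Condensed sketch of the board-test flow above; model/dataset paths are placeholders.
import cv2
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Convert and quantize a model for the target platform.
rknn.config(mean_values=[103.94, 116.78, 123.68], std_values=[58.82, 58.82, 58.82],
            quant_img_RGB2BGR=True, target_platform='rk3588')
rknn.load_onnx(model='./model.onnx')                        # assumed input model
rknn.build(do_quantization=True, dataset='./dataset.txt')   # assumed quantization dataset

# Run on a connected RK3588 board with profiling enabled.
rknn.list_devices()
if rknn.init_runtime(target='rk3588', perf_debug=True, eval_mem=True) != 0:
    raise SystemExit('Init runtime environment failed!')
print(rknn.get_sdk_version())

img = cv2.imread('./dog_224x224.jpg')
rknn.eval_perf(inputs=[img])   # per-layer performance report

# Layer-by-layer accuracy comparison between simulator and board.
rknn.accuracy_analysis(inputs=['./dog_224x224.jpg'], output_dir='./snapshot', target='rk3588')
rknn.release()
```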

2
examples/onnx/yolov5/test.py

@@ -231,7 +231,7 @@ if __name__ == '__main__':
# pre-process config
print('--> Config model')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], output_tensor_type='int8')
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
print('done')
# Load ONNX model
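The mean_values/std_values passed to rknn.config define the per-channel normalization (x - mean) / std applied to the input, so [0, 0, 0] and [255, 255, 255] map 8-bit pixels into [0, 1]. A small illustrative sketch of that arithmetic (not part of the example):
```python
import numpy as np

mean = np.array([0.0, 0.0, 0.0])
std = np.array([255.0, 255.0, 255.0])

pixel = np.array([0.0, 128.0, 255.0])   # one example pixel, channel values in [0, 255]
print((pixel - mean) / std)             # [0.         0.50196078 1.        ]
```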

27
examples/readme.txt

@@ -0,0 +1,27 @@
The directory structure of examples is as follows:
.
├── caffe
│ ├── mobilenet_v2 # mobilenet_v2 float model
│ └── vgg-ssd # vgg-ssd float model
├── onnx
│ ├── resnet50v2 # resnet50v2 float model
│ └── yolov5 # yolov5 float model
├── pytorch
│ ├── resnet18 # resnet18 float model
│ ├── resnet18_qat # resnet18 QAT model
│ └── resnet18_export_onnx # how to export onnx model from pytorch
├── tensorflow
│ ├── ssd_mobilenet_v1 # ssd_mobilenet_v1 float model
│ └── inception_v3_qat # inception_v3 QAT model
├── tflite
│ ├── mobilenet_v1 # mobilenet_v1 float model
│ └── mobilenet_v1_qat # mobilenet_v1 QAT model
├── darknet
│ └── yolov3_416x416 # yolov3 float model
└── functions
├── accuracy_analysis # how to use accuracy-analysis function
├── batch_size # how to expand the batch size to use the multi-batch function
├── multi_input_test # multi-input float model
├── hybrid_quant # how to use hybrid-quantization function
├── mmse # how to use mmse function
└── board_test # how to connect the board for debugging

2
examples/tensorflow/inception_v3_qat/test.py

@@ -104,7 +104,7 @@ if __name__ == '__main__':
# Pre-process config
print('--> Config model')
rknn.config()
rknn.config(mean_values=[104, 117, 123], std_values=[128, 128, 128])
print('done')
# Load model

BIN
packages/rknn_toolkit2-1.2.0_f7bb160f-cp36-cp36m-linux_x86_64.whl → packages/rknn_toolkit2-1.3.0_11912b58-cp36-cp36m-linux_x86_64.whl

Binary file not shown.

BIN
packages/rknn_toolkit2-1.3.0_11912b58-cp38-cp38-linux_x86_64.whl

Binary file not shown.

BIN
rknn-toolkit-lite2-v1.2.0/doc/Rockchip_User_Guide_RKNN_Toolkit_Lite2_V1.2.0_CN.pdf

Binary file not shown.

6
rknn-toolkit-lite2-v1.2.0/doc/change_log.txt

@@ -1,6 +0,0 @@
2022-01-14
Version: v1.2.0
1. New features:
1.1 RKNN model inference;
1.2 SDK version query;
1.3 Query of the platforms a model can run on.

BIN
rknn-toolkit-lite2-v1.2.0/packages/rknn_toolkit_lite2-1.2.0-cp37-cp37m-linux_aarch64.whl

Binary file not shown.

BIN
rknn-toolkit-lite2-v1.2.0/packages/rknn_toolkit_lite2-1.2.0-cp39-cp39-linux_aarch64.whl

Binary file not shown.

BIN
rknn_toolkit_lite2/docs/Rockchip_User_Guide_RKNN_Toolkit_Lite2_V1.3.0_CN.pdf

Binary file not shown.

BIN
rknn-toolkit-lite2-v1.2.0/doc/Rockchip_User_Guide_RKNN_Toolkit_Lite2_V1.2.0_EN.pdf → rknn_toolkit_lite2/docs/Rockchip_User_Guide_RKNN_Toolkit_Lite2_V1.3.0_EN.pdf

Binary file not shown.

12
rknn_toolkit_lite2/docs/change_log.txt

@@ -0,0 +1,12 @@
2022-04-27
Version: v1.3.0
1. Improvements:
1.1 Improved the error message shown when init_runtime fails;
1.2 Adapted to the 1.3.0 NPU driver.
2022-01-14
Version: v1.2.0
1. New features:
1.1 RKNN model inference;
1.2 SDK version query;
1.3 Query of the platforms a model can run on.
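On the device side, these features are exposed through the RKNNLite API. Below is a minimal deployment sketch; the model and image names are taken from the inference_with_lite example in this commit, and the BGR-to-RGB conversion is an assumption about that model's expected input:
```python
# Minimal on-device inference sketch with RKNN-Toolkit-Lite2.
import cv2
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite()

# Load a model converted with RKNN-Toolkit2 on the PC side.
if rknn_lite.load_rknn('./resnet18_for_rk3588.rknn') != 0:
    raise SystemExit('Load RKNN model failed')

# Initialize the runtime on the board's NPU.
if rknn_lite.init_runtime() != 0:
    raise SystemExit('Init runtime environment failed')

img = cv2.imread('./space_shuttle_224.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # assumed input layout for this model

outputs = rknn_lite.inference(inputs=[img])
print(outputs[0].shape)

rknn_lite.release()
```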

BIN
rknn-toolkit-lite2-v1.2.0/examples/inference_with_lite/resnet18_for_rk356x.rknn → rknn_toolkit_lite2/examples/inference_with_lite/resnet18_for_rk356x.rknn

Binary file not shown.

BIN
rknn-toolkit-lite2-v1.2.0/examples/inference_with_lite/resnet18_for_rk3588.rknn → rknn_toolkit_lite2/examples/inference_with_lite/resnet18_for_rk3588.rknn

Binary file not shown.

0
rknn-toolkit-lite2-v1.2.0/examples/inference_with_lite/space_shuttle_224.jpg → rknn_toolkit_lite2/examples/inference_with_lite/space_shuttle_224.jpg

0
rknn-toolkit-lite2-v1.2.0/examples/inference_with_lite/test.py → rknn_toolkit_lite2/examples/inference_with_lite/test.py

BIN
rknn_toolkit_lite2/packages/rknn_toolkit_lite2-1.3.0-cp37-cp37m-linux_aarch64.whl

Binary file not shown.

BIN
rknn_toolkit_lite2/packages/rknn_toolkit_lite2-1.3.0-cp39-cp39-linux_aarch64.whl

Binary file not shown.

2
rknn_toolkit_lite2/packages/rknn_toolkit_lite2_1.3.0_packages.md5sum

@@ -0,0 +1,2 @@
4dff57052f94a9c080774ab0e6af1d56 rknn_toolkit_lite2-1.3.0-cp37-cp37m-linux_aarch64.whl
1085cd7ee4bb18ef17b03d66e5428d69 rknn_toolkit_lite2-1.3.0-cp39-cp39-linux_aarch64.whl