# Run MMYOLO Models with DeepStream
This project demonstrates how to run inference on MMYOLO models with custom parsers in the [DeepStream SDK](https://developer.nvidia.com/deepstream-sdk).
## Pre-requisites
### 1. Install Nvidia Driver and CUDA
First, please follow the official documentation to install the NVIDIA graphics driver and the CUDA version matched to your GPU and target NVIDIA AIoT device.
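A quick way to verify both installations is to check the driver and compiler versions (the exact output depends on your setup):
```bash
# Report the installed driver and the highest CUDA version it supports.
nvidia-smi
# Report the installed CUDA toolkit compiler version.
nvcc --version
```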
### 2. Install DeepStream SDK
Second, please follow the official instructions to download and install the DeepStream SDK. The current stable version of DeepStream is v6.2.
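After installation, you can confirm the SDK is available by printing its version (assuming the reference app was installed system-wide):
```bash
# Print the installed DeepStream version.
deepstream-app --version
```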
### 3. Generate TensorRT Engine
As DeepStream builds on top of several NVIDIA libraries, you need to first convert your trained MMYOLO models to TensorRT engine files. We strongly recommend trying the supported TensorRT deployment solution in [EasyDeploy](../../easydeploy/).
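As an illustration, once EasyDeploy has produced an ONNX file, you can serialize it into an engine with TensorRT's bundled `trtexec` tool; the file names below are placeholders:
```bash
# Build a serialized TensorRT engine from an exported ONNX model.
# end2end.onnx / end2end.engine are placeholder names.
trtexec --onnx=end2end.onnx \
        --saveEngine=end2end.engine \
        --fp16  # optional: requires FP16 support on the GPU
```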
## Build and Run
Please make sure that your converted TensorRT engine is located in the `deepstream` folder, as the config expects. Create your own model config file and change the `config-file` parameter in [deepstream_app_config.txt](deepstream_app_config.txt) to point to the model you want to run.
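For reference, the parameter lives in the `[primary-gie]` group of the app config; the path below is illustrative:
```ini
[primary-gie]
enable=1
# Point this at the model config you want to run, e.g. the bundled RTMDet one.
config-file=configs/config_infer_rtmdet.txt
```
Next, build the custom parser library: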
```bash
mkdir build && cd build
cmake ..
make -j$(nproc) && make install
```
Then you can run inference with this command:
```bash
deepstream-app -c deepstream_app_config.txt
```
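To run on your own video, edit the `[source0]` group in the same app config; the URI below is a placeholder:
```ini
[source0]
enable=1
# type=3 selects a URI source (local file or network stream).
type=3
uri=file:///path/to/your/video.mp4
num-sources=1
```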
## Code Structure
```bash
├── deepstream
│   ├── configs                   # config files for MMYOLO models
│   │   └── config_infer_rtmdet.txt
│   ├── custom_mmyolo_bbox_parser # custom parser for MMYOLO models to DeepStream formats
│   │   └── nvdsparsebbox_mmyolo.cpp
│   ├── CMakeLists.txt
│   ├── coco_labels.txt           # labels for COCO detection
│   ├── deepstream_app_config.txt # DeepStream reference app config for MMYOLO models
│   ├── README_zh-CN.md
│   └── README.md
```
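For orientation, the entry point in `custom_mmyolo_bbox_parser/nvdsparsebbox_mmyolo.cpp` follows DeepStream's custom bounding-box parser interface. Below is a minimal, hypothetical sketch of such a parser, not the project's actual implementation; it assumes an end-to-end engine (NMS included) whose output tensors are named `num_dets`, `boxes`, `scores`, and `labels`:
```cpp
// Hypothetical sketch of a DeepStream custom bbox parser, NOT the
// project's actual nvdsparsebbox_mmyolo.cpp.
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomMMYOLO(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferObjectDetectionInfo>& objectList)
{
    // Locate the four output tensors by name (assumed tensor names).
    const NvDsInferLayerInfo *numDets = nullptr, *boxes = nullptr,
                             *scores = nullptr, *labels = nullptr;
    for (auto const& layer : outputLayersInfo) {
        if (!std::strcmp(layer.layerName, "num_dets"))    numDets = &layer;
        else if (!std::strcmp(layer.layerName, "boxes"))  boxes   = &layer;
        else if (!std::strcmp(layer.layerName, "scores")) scores  = &layer;
        else if (!std::strcmp(layer.layerName, "labels")) labels  = &layer;
    }
    if (!numDets || !boxes || !scores || !labels)
        return false;

    const int    n     = static_cast<const int*>(numDets->buffer)[0];
    const float* box   = static_cast<const float*>(boxes->buffer);
    const float* score = static_cast<const float*>(scores->buffer);
    const int*   label = static_cast<const int*>(labels->buffer);

    for (int i = 0; i < n; ++i) {
        // Skip detections below the per-class threshold from the config.
        const unsigned int c = static_cast<unsigned int>(label[i]);
        if (c < detectionParams.numClassesConfigured &&
            score[i] < detectionParams.perClassPreclusterThreshold[c])
            continue;

        // Boxes are assumed to be (x1, y1, x2, y2) in network input pixels.
        NvDsInferObjectDetectionInfo obj{};
        obj.left   = box[i * 4 + 0];
        obj.top    = box[i * 4 + 1];
        obj.width  = box[i * 4 + 2] - box[i * 4 + 0];
        obj.height = box[i * 4 + 3] - box[i * 4 + 1];
        obj.detectionConfidence = score[i];
        obj.classId = c;
        objectList.push_back(obj);
    }
    return true;
}

// Check the function signature against DeepStream's expected prototype.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomMMYOLO);
```
The compiled library is then referenced from the model config through the `custom-lib-path` and `parse-bbox-func-name` properties of `gst-nvinfer`.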