Jetson Nano (B01): configuring the PyTorch and torchvision environment + tensorrtx model conversion + DeepStream deployment of YOLOv5 (tested and working)


I first got my hands on a Jetson Nano for a competition: I needed to train my own YOLOv5 object detection model in PyCharm and deploy it on the Nano. It never worked before the contest ended, and the board sat gathering dust until a later innovation project pushed me back into YOLOv5 deployment. The information online is fragmentary and often unclear, with plenty of pitfalls and bugs, and I stumbled through the environment setup for several days before finally getting detection to run. Having stepped in so many pits, I decided to write this post to record my love-hate relationship with the Jetson Nano and YOLOv5.
This article is not entirely original: it pulls together the work of many excellent CSDN bloggers and several good resources on GitHub. The links are at the end; thanks again to all of them for their help.
What follows is the main body of the article. I hope it helps open the door to YOLOv5 for you.

When Joseph Redmon, the father of YOLO, announced early this year that he was withdrawing from computer vision research, many people assumed the YOLO series of detection models had reached its end.
Yet on April 23, a successor arrived quietly: Alexey Bochkovskiy published a paper titled "YOLOv4: Optimal Speed and Accuracy of Object Detection".
YOLOv4 is a major update to the series, improving average precision (AP) on the COCO dataset by 10% and frame rate (FPS) by 12%. It received Joseph Redmon's official endorsement and is considered one of the strongest real-time object detection models.
Just as computer vision practitioners were busy digging into YOLOv4, out of nowhere, others had different plans.
On June 25, Ultralytics released the first official version of YOLOv5. Its performance is on par with YOLOv4, it is among the most advanced object detection technology available today, and it is currently the fastest at inference.

1. Hardware required

Jetson Nano B01 (4 GB)
USB camera
Display

2. Software environment

On the Jetson Nano:
JetPack 4.5.1
DeepStream 5.1
On the host PC: Windows (for flashing the SD card)

Now, on to the main content:

Part 1: Flashing the JetPack 4.5.1 image

1. NVIDIA's official download page:
developer.nvidia.com/embedded/dlc/jetson-nano-dev-kit-sd-card-image
If the download is slow for you, here is my Baidu netdisk link:
link :https://pan.baidu.com/s/1RNw8x6PCM-WdwNFhfgLRuA
Extraction code :sml9
2. Preparing to flash
While waiting out the long download, format the memory card first. The tool I use here is SDFormatter.

After formatting, be sure to eject the USB drive and re-insert it before flashing; don't flash straight away (a pit I stepped in myself!).
The flashing tool is Win32 Disk Imager.

Baidu netdisk links for the two tools above:
Win32 Disk Imager:
link: https://pan.baidu.com/s/1uG8AnHu4XgOqTLLVulEhpg
extraction code: gknk
SDFormatter:
link: https://pan.baidu.com/s/1irK8jni9cE6E0meXJv_VYg
extraction code: rw44
After flashing, connect the display, work through the basic setup, and boot the board.

Part 2: Installing DeepStream 5.1

I stepped in a lot of pits here: the DeepStream download from the official site is very slow, and reaching it may require a VPN. Here too I attach a Baidu netdisk link:

Downloading is a very slow process…
While it downloads, let's set up the environment DeepStream needs.
1. Install the dependencies

$ sudo apt install \
    libssl1.0.0 \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    libgstrtspserver-1.0-0 \
    libjansson4=2.11-1

2. Extract and install the DeepStream SDK with the following commands:

sudo tar -xvf deepstream_sdk_v5.1.0_jetson.tbz2 -C /
cd /opt/nvidia/deepstream/deepstream-5.1
sudo ./install.sh
sudo ldconfig

Be sure to extract with the commands above; don't unpack the archive into your home directory instead of /opt (many later steps will fail). I stepped in this pit myself.

Part 3: Installing the torch environment

1. Install PyTorch 1.8.0 and torchvision 0.9.0 (do not install versions at random; the two versions must correspond)
Install PyTorch:

wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.8.0-cp36-cp36m-linux_aarch64.whl

Install torchvision 0.9.0:

sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
git clone --branch v0.9.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.9.0
python3 setup.py install --user
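Because a torch/torchvision mismatch is the most common failure at this step, a quick post-install sanity check helps. The sketch below encodes a few known-compatible pairs (1.8/0.9 is the pair installed above; the other rows are for illustration) and can be run on the Nano:

```python
# Sanity-check that the installed torch / torchvision versions form a
# known-compatible pair before going further.
COMPATIBLE = {
    "1.7": "0.8",
    "1.8": "0.9",   # the pair installed in this guide
    "1.9": "0.10",
}

def minor(version):
    """Reduce '1.8.0' (or '1.8.0+nv') to its 'major.minor' part."""
    return ".".join(version.split("+")[0].split(".")[:2])

def versions_match(torch_ver, tv_ver):
    """Return True if the torch / torchvision pair is known to be compatible."""
    return COMPATIBLE.get(minor(torch_ver)) == minor(tv_ver)

if __name__ == "__main__":
    try:
        import torch, torchvision
        ok = versions_match(torch.__version__, torchvision.__version__)
        print("torch", torch.__version__, "/ torchvision",
              torchvision.__version__, "->", "OK" if ok else "MISMATCH")
    except ImportError:
        print("torch/torchvision not installed yet")
```

If this reports a mismatch, reinstall one side rather than continuing; the tensorrtx steps below assume the 1.8.0/0.9.0 pair.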


Part 4: Converting the PyTorch model to a .wts file

1. Clone the yolov5-4.0 and tensorrtx repositories (I use yolov5-4.0 here; converting the model with other versions may produce errors)

git clone -b yolov5-v4.0 https://github.com/wang-xinyu/tensorrtx.git
git clone -b v4.0 https://github.com/ultralytics/yolov5.git

2. Download the latest yolov5s.pt into the yolov5/weights directory

wget https://github.com/ultralytics/yolov5/releases/download/v4.0/yolov5s.pt -P yolov5/weights

3. Copy gen_wts.py from the tensorrtx repo into the yolov5 folder

cp tensorrtx/yolov5/gen_wts.py yolov5/gen_wts.py

4. Generate the yolov5s.wts file

cd yolov5
python3 gen_wts.py

When this finishes, a yolov5s.wts file will have been generated in the yolov5 folder.
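For the curious: as far as I know, the .wts file that gen_wts.py emits is just a plain-text dump of the network weights, with a blob count on the first line and then one line per tensor holding its name, element count, and each float32 value hex-encoded big-endian. A minimal sketch of a writer in that format (the demo weight dict is made up; the real script walks the model's state_dict):

```python
import struct

def write_wts(path, weights):
    """Write a dict of {name: list-of-floats} in the plain-text .wts
    layout used by tensorrtx: a blob count, then one line per tensor
    with its name, element count, and each float32 as big-endian hex."""
    with open(path, "w") as f:
        f.write(f"{len(weights)}\n")
        for name, values in weights.items():
            f.write(f"{name} {len(values)}")
            for v in values:
                # big-endian float32 bit pattern, hex-encoded
                f.write(" " + struct.pack(">f", float(v)).hex())
            f.write("\n")

# Tiny demo with made-up weights
write_wts("demo.wts", {"conv1.weight": [1.0, 0.5], "conv1.bias": [0.0]})
print(open("demo.wts").read())
```

Knowing the layout makes it easier to debug a truncated or corrupted conversion: the file should open in any text editor.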

Part 5: Converting the .wts file to a TensorRT engine

1. Create a build folder under tensorrtx/yolov5 and build the project

cd tensorrtx/yolov5
mkdir build
cd build
cmake ..
make

2. Copy the generated yolov5s.wts into the build folder under tensorrtx/yolov5

cp yolov5/yolov5s.wts tensorrtx/yolov5/build/yolov5s.wts

3. Convert to a TensorRT engine (the engine is built in the tensorrtx/yolov5/build folder; the trailing s in the command below selects the yolov5s model variant)

sudo ./yolov5 -s yolov5s.wts yolov5s.engine s

When this finishes, a yolov5s.engine file appears in the build folder.

4. Create a custom yolo folder and copy the generated engine file into it (this yolo folder is one you create yourself)

mkdir /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
cp yolov5s.engine /opt/nvidia/deepstream/deepstream-5.1/sources/yolo/yolov5s.engine

If the creation fails, use sudo -i to get root permissions and create it.

Part 6: Compiling the nvdsinfer_custom_impl_Yolo library

1. Run the command

sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/

This step grants full permissions on the folder.

2. Copy the external folder from Deepstream-yolo-master into the yolo folder created above (you can use the WinSCP software)
Baidu netdisk link for Deepstream-yolo-master:
link: https://pan.baidu.com/s/1XfuIT33GCE3QElYg1cQu3A
extraction code: ju2u
WinSCP download link:
winscp.net/eng/index.php

3. Compile

cd /opt/nvidia/deepstream/deepstream-5.1/sources/yolo
CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo

Once the compilation succeeds, congratulations: all the deployment work is done.

Part 7: Testing the model

The model test is run from inside the yolo folder. Enter:

deepstream-app -c deepstream_app_config.txt

After a few minutes the detection window appears (the first load takes a long time).
Part 8: Detection with a USB camera
Modify the deepstream_app_config.txt file, replacing its [source0] section with the following:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
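Since deepstream_app_config.txt is INI-style, the [source0] swap above can also be scripted. A hedged sketch using Python's configparser (note that rewriting the file this way drops comments; the demo file name is hypothetical):

```python
import configparser

# [source0] settings for a V4L2 USB camera, mirroring the block above
USB_SOURCE = {
    "enable": "1",
    "type": "1",               # 1 = CameraV4L2
    "camera-width": "640",
    "camera-height": "480",
    "camera-fps-n": "30",
    "camera-fps-d": "1",
    "camera-v4l2-dev-node": "0",
}

def set_usb_source(config_path):
    """Rewrite the [source0] section of an INI-style DeepStream app
    config so it reads from a USB camera instead of a file/URI."""
    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    cfg["source0"] = USB_SOURCE    # replaces the whole section
    with open(config_path, "w") as f:
        cfg.write(f)

# Demo on a throwaway config (the real file is deepstream_app_config.txt)
demo = "demo_config.txt"
with open(demo, "w") as f:
    f.write("[source0]\nenable=1\ntype=3\nuri=file://sample.mp4\n")
set_usb_source(demo)
```

This is handy when switching back and forth between a test video and the camera during debugging.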

Then run

deepstream-app -c deepstream_app_config.txt

The running frame rate is only around 5 fps. After digging through related material, I found that this PyTorch conversion path does not optimize the underlying conversion code for the Jetson Nano; with an ONNX-based conversion I reached about 25 fps, which is basically enough for real-time detection. I will write up that optimization in a later post.
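To put numbers behind frame-rate comparisons like this one, a small rolling FPS counter is handy. A generic sketch (the sleep stands in for one inference call):

```python
import time
from collections import deque

class FPSCounter:
    """Rolling average frames-per-second over the last `window` frames."""
    def __init__(self, window=30):
        self.stamps = deque(maxlen=window)

    def tick(self):
        """Record one processed frame; return the current average FPS."""
        self.stamps.append(time.perf_counter())
        if len(self.stamps) < 2:
            return 0.0
        elapsed = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / elapsed

# Usage: call fps.tick() once per frame inside the capture loop
fps = FPSCounter()
rate = 0.0
for _ in range(5):
    time.sleep(0.01)          # stand-in for inference on one frame
    rate = fps.tick()
print(f"~{rate:.0f} fps")     # with the 10 ms stand-in, a bit under 100
```

Averaging over a window smooths out the per-frame jitter you get from the first slow engine load.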

It's one o'clock in the morning. I wrote this post to record my experience and process deploying YOLOv5, hoping that students who deploy YOLOv5 for object detection after me have a smoother ride. It also opens my blog; going forward I will write up the things I learn.

Here are the links to the excellent bloggers whose work helped me; thanks again.
blog.csdn.net/qq_40305597
blog.csdn.net/IamYZD
