기술 성공, 실패 기록소
detectron2 & VideoPose3D
1. Installation
https://github.com/facebookresearch/VideoPose3D
https://github.com/facebookresearch/detectron2
First, install detectron2.
2. Inference in the wild - arbitrary videos
https://github.com/facebookresearch/VideoPose3D/blob/main/INFERENCE.md
Step 1: setup
- Download ffmpeg
- Download the pretrained model for generating 3D predictions
Step 3: inferring 2D keypoints with Detectron
Using Detectron2 (new)
cd inference
python infer_video_d2.py \
--cfg COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml \
--output-dir output_directory \
--image-ext mp4 \
input_directory
The results will be exported to output_directory as custom NumPy archives (.npz files).
ex)
python infer_video_d2.py \
--cfg COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml \
--output-dir output \
--image-ext mp4 \
videos
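The exported .npz archives can be inspected with NumPy before moving on to Step 4. A minimal sketch below uses a synthetic archive; the key name 'keypoints' and the (frames, joints, 3) layout are assumptions, so check `data.files` on a real output file:

```python
import numpy as np

# Synthetic stand-in for an infer_video_d2.py output archive.
# 10 frames, 17 COCO keypoints, each with (x, y, score).
keypoints = np.random.rand(10, 17, 3).astype(np.float32)
np.savez_compressed("demo_keypoints.npz", keypoints=keypoints)

# Load it back the same way you would load the real output.
data = np.load("demo_keypoints.npz", allow_pickle=True)
print(data.files)               # names of the stored arrays
print(data["keypoints"].shape)  # (10, 17, 3)
```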
Step 4: creating a custom dataset
Run our dataset preprocessing script from the data directory:
python prepare_data_2d_custom.py -i /path/to/detections/output_directory -o myvideos
This creates a custom dataset named myvideos and saves it to data_2d_custom_myvideos.npz.
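The resulting dataset file can also be opened with NumPy to verify its contents. The sketch below builds a synthetic file; the keys 'positions_2d' and 'metadata' and the nested subject/action layout are assumptions about the preprocessing script's output, so verify against `data.files` on the real file:

```python
import numpy as np

# Synthetic stand-in for data_2d_custom_myvideos.npz.
metadata = {"layout_name": "coco", "num_joints": 17}
positions_2d = {
    "input_video.mp4": {"custom": [np.zeros((50, 17, 2), dtype=np.float32)]}
}
np.savez_compressed("data_2d_custom_demo.npz",
                    positions_2d=positions_2d, metadata=metadata)

data = np.load("data_2d_custom_demo.npz", allow_pickle=True)
meta = data["metadata"].item()  # dicts are stored as 0-d object arrays
print(meta["num_joints"])       # 17
```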
Step 5: rendering a custom video and exporting coordinates
python run.py -d custom -k myvideos -arc 3,3,3,3,3 -c checkpoint \
--evaluate pretrained_h36m_detectron_coco.bin --render \
--viz-subject input_video.mp4 --viz-action custom \
--viz-camera 0 --viz-video /path/to/input_video.mp4 \
--viz-output output.mp4 --viz-size 6
You can also export the 3D joint positions (in camera space) to a NumPy archive.
To this end, replace --viz-output with --viz-export and specify the file name.
https://colab.research.google.com/github/Justinemmerich/VideoPose3D/blob/master/Notebook/Facebook_VideoPose3D_Inference_in_the_wild_Detectron.ipynb#scrollTo=QHrkZReqb2er
python run.py -d custom -k myvideos -arc 3,3,3,3,3 -c \
checkpoint --evaluate pretrained_h36m_detectron_coco.bin \
--render --viz-subject video.mp4 --viz-action custom --viz-camera 0 \
--viz-video video.mp4 --viz-output output.mp4 --viz-export outputfile --viz-size 6
Running the command as above produces the rendered video plus a file with the 3D coordinates.
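The exported 3D joint positions can then be post-processed with NumPy. The sketch below uses a synthetic array in place of the exported file; the (frames, 17, 3) camera-space layout and root joint at index 0 are assumptions based on the Human3.6M 17-joint skeleton used by the pretrained model:

```python
import numpy as np

# Synthetic stand-in for the --viz-export output; with a real run use:
#   joints = np.load("outputfile.npy")
joints = np.zeros((100, 17, 3), dtype=np.float32)  # (frames, joints, xyz)

root = joints[:, 0, :]                # joint 0: pelvis/root (assumed index)
centered = joints - root[:, None, :]  # root-relative coordinates per frame
print(centered.shape)                 # (100, 17, 3)
```

Root-centering like this is a common first step before comparing poses across frames, since it removes global translation.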