
detectron2 & VideoPose3D

sunlab 2022. 2. 25. 22:56

1. Installation

https://github.com/facebookresearch/VideoPose3D

 


https://github.com/facebookresearch/detectron2

 


First, install detectron2.
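
If detectron2 is not installed yet, one common route is installing it straight from GitHub (a sketch based on the detectron2 install guide; the exact command depends on your CUDA/PyTorch setup, so check the official instructions):

python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'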

 

2. Inference in the wild - arbitrary videos

https://github.com/facebookresearch/VideoPose3D/blob/main/INFERENCE.md

 


Step 1: setup
Download ffmpeg.
Download the pretrained model for generating 3D predictions (pretrained_h36m_detectron_coco.bin) and put it in the checkpoint directory.
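
A minimal setup sketch (assuming the checkpoint is still hosted at the dl.fbaipublicfiles.com URL listed in the VideoPose3D docs; check INFERENCE.md if the link has moved):

mkdir checkpoint
cd checkpoint
wget https://dl.fbaipublicfiles.com/video-pose-3d/pretrained_h36m_detectron_coco.bin
cd ..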

Step 3: inferring 2D keypoints with Detectron
Using Detectron2 (new)

cd inference
python infer_video_d2.py \
    --cfg COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml \
    --output-dir output_directory \
    --image-ext mp4 \
    input_directory

The results will be exported to output_directory as custom NumPy archives (.npz files).
    
 Example:
 python infer_video_d2.py \
    --cfg COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml \
    --output-dir output \
    --image-ext mp4 \
    videos
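
A quick way to sanity-check one of the resulting archives is to open it and list what was stored; the file name below is hypothetical (one .npz is written per input video), and the exact key names come from the VideoPose3D inference script, so printing whatever is actually present is the safer check:

import numpy as np

# Hypothetical file name: substitute one of the .npz files written to the output directory.
dets = np.load('output/video.mp4.npz', allow_pickle=True)
for key in dets.files:
    # Each entry is a NumPy array (detections are stored per frame).
    print(key, dets[key].shape)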

Step 4: creating a custom dataset

Run our dataset preprocessing script from the data directory:

python prepare_data_2d_custom.py -i /path/to/detections/output_directory -o myvideos

This creates a custom dataset named myvideos
and saves it as data_2d_custom_myvideos.npz in the data directory.
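
Opening the resulting archive is a simple way to confirm the dataset was built; the key names 'positions_2d' and 'metadata' are an assumption based on the preprocessing script, not something this post guarantees:

import numpy as np

data = np.load('data_2d_custom_myvideos.npz', allow_pickle=True)
print(data.files)        # expected to contain 'positions_2d' and 'metadata' (assumption)
print(data['metadata'])  # dataset metadata, stored as a pickled object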


Step 5: rendering a custom video and exporting coordinates
python run.py -d custom -k myvideos -arc 3,3,3,3,3 -c checkpoint \
	--evaluate pretrained_h36m_detectron_coco.bin --render \
    --viz-subject input_video.mp4 --viz-action custom \
    --viz-camera 0 --viz-video /path/to/input_video.mp4 \
    --viz-output output.mp4 --viz-size 6
    
You can also export the 3D joint positions (in camera space) to a NumPy archive.
To this end, replace --viz-output with --viz-export and specify the file name.

Reference Colab notebook: https://colab.research.google.com/github/Justinemmerich/VideoPose3D/blob/master/Notebook/Facebook_VideoPose3D_Inference_in_the_wild_Detectron.ipynb#scrollTo=QHrkZReqb2er

python run.py -d custom -k myvideos -arc 3,3,3,3,3 -c \
	checkpoint --evaluate pretrained_h36m_detectron_coco.bin \
    --render --viz-subject video.mp4 --viz-action custom --viz-camera 0 \
    --viz-video video.mp4 --viz-output output.mp4 --viz-export outputfile --viz-size 6
    
Running it this way produces both the rendered video and the NumPy file with the 3D joint coordinates.
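
The exported joints can then be loaded back for downstream use. A minimal sketch, assuming the export is a single array of shape (num_frames, 17, 3) in camera space (the 17-joint Human3.6M layout used by the pretrained model); the file name and extension depend on what --viz-export actually wrote, so adjust accordingly:

import numpy as np

# 'outputfile' is the name passed to --viz-export above; the .npy extension is an assumption.
joints = np.load('outputfile.npy')
print(joints.shape)   # assumed: (num_frames, 17, 3)
print(joints[0])      # 3D positions of the 17 joints in the first frame (camera space)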
