This is an AI training pipeline for:
- Recording videos
- Capturing images
- Labeling images
- Preprocessing
- Training AI models
Start camera drivers:
# ros2 launch realsense_iras rs_camera.launch.py
# ros2 run rc_genicam_driver rc_genicam_driver
cd roboception_driver
source start_docker.sh
Start RQT image view:
source /opt/ros/foxy/setup.bash
rqt
Start PTU tracking:
cd ros2_aruco
source start_docker.sh
cd ptu_driver
source start_docker.sh
Start recording videos in docker (Foxy):
cd petra_training_pipeline
source start_docker.sh
cd src/petra_training_pipeline/data/bags
source ../../bash/take_video.sh
Save images from the bags in docker (Foxy):
source start_docker.sh
cd src/petra_training_pipeline/data/bags
source ../../bash/save_images_from_bags.sh
Or from a video_file.mp4:
ros2 launch camera_simulator camera_simulator.launch.py
Start the OpenPose docker (Dashing):
cd openpose_ros2_docker
source start_docker.sh
Start a separate petra_training_pipeline_2 docker (Foxy).
IMPORTANT: it must be the _2 docker, a different container, because the bash script kills the running ROS 2 tasks:
source start_docker_2.sh
ros2 launch petra_training_pipeline save_openpose.launch.py
Start in the normal petra_training_pipeline docker:
source start_docker.sh
cd src/petra_training_pipeline/data/bags
source ../../bash/save_openpose_in_csv_petra.sh
Uncomment the following line in openpose_saver_node.py:
# self.df_features.to_csv(self.folder_path + "/features.csv")
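For context, a minimal sketch of what the uncommented line does, assuming self.df_features is a pandas DataFrame of per-frame keypoint features (the column names here are illustrative, not the node's actual schema):

```python
import pandas as pd

# Illustrative stand-in for self.df_features in openpose_saver_node.py:
# one row per frame, one column per keypoint value (names are assumptions).
df_features = pd.DataFrame(
    [[0.51, 0.32, 0.98], [0.52, 0.33, 0.97]],
    columns=["nose_x", "nose_y", "nose_conf"],
)

folder_path = "."  # stands in for self.folder_path
df_features.to_csv(folder_path + "/features.csv")
```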
Start in Dashing:
ros2 run petra_patient_monitoring patient_filter.py
ros2 launch openpose_ros openpose_ros.launch.py
Start in the Foxy docker:
colcon build --symlink-install
source install/setup.bash
cd ~/ros_ws/dataset_probot
source ../src/petra_training_pipeline/bash/save_openpose_in_csv_probot.sh
- Run get_list_files.py to receive a CSV file with the names and video info of all generated pictures.
- Run labeling.py to label manually; the CSV file from step 1 is the input for labeling.py.
- Run generate_training_data.py to generate a CSV file with the OpenPose data and merge it with the labels. This CSV file is the input for training the AI models.
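The merge step above can be sketched with pandas. This is a hypothetical illustration, not the contents of generate_training_data.py; the file and column names are assumptions:

```python
import pandas as pd

# Per-image OpenPose keypoints (as produced by the saving steps above).
openpose = pd.DataFrame({
    "image": ["img_000.png", "img_001.png"],
    "nose_x": [0.51, 0.52],
    "nose_y": [0.32, 0.33],
})

# Manual labels from labeling.py.
labels = pd.DataFrame({
    "image": ["img_000.png", "img_001.png"],
    "label": ["walking", "standing"],
})

# One row per labeled image, keypoints and label side by side.
training_data = openpose.merge(labels, on="image", how="inner")
training_data.to_csv("training_data.csv", index=False)
```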
cd src/petra_training_pipeline/scripts/model_training/
python3 model_train_automl.py
To train and save the model, 5 predefined options are available:
- model_train_kp.py for training with keypoints
- model_train_ensemble.py for training an ensemble model
- model_train_features.py for training with features
- model_train_automl.py for training with keypoints and AutoML
- model_train_knn.py for training with KNN

Trained models are saved in \models.
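Since the best model below is an SVC, the keypoint training can be sketched roughly as follows with scikit-learn. This is an assumption about the scripts' approach, with toy data standing in for the real training CSV:

```python
import numpy as np
from joblib import dump
from sklearn.svm import SVC

# Toy keypoint features: 25 keypoints * (x, y) = 50 values per frame.
# Real features come from the merged OpenPose + label CSV.
rng = np.random.default_rng(0)
X = rng.random((40, 50))
y = rng.integers(0, 2, size=40)  # 0 = other, 1 = walking

# Fit and persist the classifier for later prediction.
clf = SVC().fit(X, y)
dump(clf, "svc_walking_sketch.joblib")
```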
Run model_predict_live.py to receive the output prediction for a single frame. To receive results for more than one frame, especially for analysis purposes, run model_results.py:
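Single-frame prediction can be sketched as below. This is a hedged illustration, not model_predict_live.py itself; the 50-value feature layout and the inline stand-in model are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

# Throwaway stand-in model; in the pipeline a trained model would be
# loaded from \models instead of being fit inline here.
rng = np.random.default_rng(1)
clf = SVC().fit(rng.random((40, 50)), rng.integers(0, 2, size=40))

# "Live" prediction on one frame's keypoint feature vector.
frame_features = rng.random((1, 50))
prediction = clf.predict(frame_features)
```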
cd src/petra_training_pipeline/scripts/model_predicting/
python3 model_results.py
cd src/petra_training_pipeline/scripts/data_analysis_plots/
python3 plot_paper.py
In data/results/metrics.csv all scores, thresholds, delays and names for all models are stored.
The best model to choose is the one with the highest recall_test.
Currently: SVC_walking_25kp_all
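The model selection described above can be sketched with pandas; the table here is a toy stand-in for data/results/metrics.csv, and the column names are assumed from the description:

```python
import pandas as pd

# Toy metrics table; the real one is data/results/metrics.csv.
metrics = pd.DataFrame({
    "name": ["SVC_walking_25kp_all", "KNN_walking_25kp_all"],
    "recall_test": [0.93, 0.88],
})

# Pick the model with the highest recall on the test set.
best = metrics.loc[metrics["recall_test"].idxmax(), "name"]
print(best)  # -> SVC_walking_25kp_all
```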
- Docker file with Foxy, because passing parameters in the CLI is not supported in Dashing
- Recording OpenPose data and saving it in a CSV file
- Finish openpose_saver_node writing to a CSV file with keypoints + features
- Adapt the patient filter to only filter when more than one pose is visible
- Run the AprilTag and feature detector and OpenPose with a bag file, and save with openpose_saver_node
- Filter the Gloria data with the closest / biggest OpenPose human (the relevant person is always in the foreground)