IRAS-HKA/petra_training_pipeline

Training pipeline

This is an AI training pipeline with the following steps:

  1. Record videos
  2. Image capturing
  3. Generate Openpose data
  4. Labeling
  5. Preprocessing
  6. Train model
  7. Generate results
  8. Evaluate best model and plots

1. Record videos

Start camera drivers:

# ros2 launch realsense_iras rs_camera.launch.py
# ros2 run rc_genicam_driver rc_genicam_driver
cd roboception_driver
source start_docker.sh

Start RQT Image view:

source /opt/ros/foxy/setup.bash
rqt

Start PTU Tracking:

cd ros2_aruco
source start_docker.sh

cd ptu_driver
source start_docker.sh

Start recording videos in docker (Foxy):

cd petra_training_pipeline
source start_docker.sh
cd src/petra_training_pipeline/data/bags
source ../../bash/take_video.sh

2. Image capturing

Start in docker (Foxy):

source start_docker.sh
cd src/petra_training_pipeline/data/bags
source ../../bash/save_images_from_bags.sh

Or capture images from a video file (video_file.mp4):

ros2 launch camera_simulator camera_simulator.launch.py
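
The sampling logic behind saving images from a recording can be sketched as follows. This is a minimal illustration; frame_indices is a hypothetical helper, not part of save_images_from_bags.sh:

```python
def frame_indices(total_frames, fps, every_n_seconds):
    """Return the frame indices to save when sampling one image per interval."""
    step = max(1, round(fps * every_n_seconds))
    return list(range(0, total_frames, step))

# A 30 fps clip with 150 frames, saving one image per second:
print(frame_indices(150, 30.0, 1.0))  # [0, 30, 60, 90, 120]
```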

3. Generate Openpose data

PeTRA

Start in Dashing docker:

cd openpose_ros2_docker
source start_docker.sh

Start a separate petra_training_pipeline_2 docker (Foxy). IMPORTANT: this must be a different (_2) docker, because the bash script kills running ROS 2 tasks:

source start_docker_2.sh
ros2 launch petra_training_pipeline save_openpose.launch.py

Start in the normal petra_training_pipeline docker:

source start_docker.sh
cd src/petra_training_pipeline/data/bags
source ../../bash/save_openpose_in_csv_petra.sh

Probot

Uncomment the following line in openpose_saver_node.py:

# self.df_features.to_csv(self.folder_path + "/features.csv")
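
Conceptually, the node accumulates one row of keypoints per frame and flushes them to features.csv. A minimal stdlib sketch of that idea (the column names are assumptions; as the line above shows, the actual node uses a pandas DataFrame):

```python
import csv
import io

# One row per frame: frame id plus flattened (x, y, confidence) keypoints.
rows = [
    {"frame": 0, "kp0_x": 0.51, "kp0_y": 0.22, "kp0_c": 0.97},
    {"frame": 1, "kp0_x": 0.52, "kp0_y": 0.23, "kp0_c": 0.95},
]

buf = io.StringIO()  # stand-in for open(folder_path + "/features.csv", "w")
writer = csv.DictWriter(buf, fieldnames=["frame", "kp0_x", "kp0_y", "kp0_c"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])  # frame,kp0_x,kp0_y,kp0_c
```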

Start in Dashing:

ros2 run petra_patient_monitoring patient_filter.py
ros2 launch openpose_ros openpose_ros.launch.py

Start in Foxy (docker):

colcon build --symlink-install
source install/setup.bash
cd ~/ros_ws/dataset_probot
source ../src/petra_training_pipeline/bash/save_openpose_in_csv_probot.sh

4. Labeling

  1. Run get_list_files.py to generate a CSV file with the names and video info of all generated pictures.
  2. Run labeling.py to label the images manually; the CSV file from step 1 is its input.
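
A hypothetical sketch of what get_list_files.py does: walk the image folder and write one CSV row per picture with its name and source video. The file-name convention (vid01_frame000.png) and column names are assumed for illustration:

```python
import csv
import io
import os
import tempfile

# Create a toy image folder, then write one CSV row per picture with its
# name and source video (encoded in the file name for this illustration).
with tempfile.TemporaryDirectory() as root:
    for name in ["vid01_frame000.png", "vid01_frame030.png", "vid02_frame000.png"]:
        open(os.path.join(root, name), "w").close()

    buf = io.StringIO()  # stand-in for the CSV file handed to labeling.py
    writer = csv.writer(buf)
    writer.writerow(["file_name", "video"])
    for fname in sorted(os.listdir(root)):
        writer.writerow([fname, fname.split("_")[0]])

print(buf.getvalue().splitlines()[1])  # vid01_frame000.png,vid01
```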

5. Preprocessing

Run generate_training_data.py to generate a CSV file with the Openpose data and merge it with the labels. This CSV file is the input for training the AI models.
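
The merge step can be sketched like this (toy data; the real script reads the Openpose CSV and the label CSV from disk):

```python
# Toy data: Openpose keypoints per frame, and manual labels per frame.
openpose = {0: [0.51, 0.22], 1: [0.52, 0.23], 2: [0.50, 0.25]}
labels = {0: "walking", 1: "walking", 2: "standing"}

# Join on the frame key; each training row is keypoints + label.
training_rows = [kps + [labels[f]] for f, kps in sorted(openpose.items()) if f in labels]
print(training_rows[0])  # [0.51, 0.22, 'walking']
```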

6. Train model

cd src/petra_training_pipeline/scripts/model_training/
python3 model_train_automl.py

To train and save the model, 5 predefined options are available:

  • model_train_kp.py for training with keypoints
  • model_train_ensemble.py for training an ensemble model
  • model_train_features.py for training with features
  • model_train_automl.py for training with keypoints and AutoML
  • model_train_knn.py for training with KNN

Trained models are saved in \models.
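
As an illustration of the KNN option, here is a self-contained sketch of nearest-neighbour classification on flattened keypoint vectors (toy data and a hand-rolled classifier, not the actual model_train_knn.py):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Tiny KNN: train is a list of (keypoint_vector, label) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy keypoint vectors (e.g. flattened x/y coordinates), two classes.
train = [([0.1, 0.1], "standing"), ([0.2, 0.1], "standing"),
         ([0.8, 0.9], "walking"), ([0.9, 0.8], "walking")]
print(knn_predict(train, [0.85, 0.85]))  # walking
```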

Run model_predict_live.py to get the output prediction for a single frame. To get results for more than one frame, especially for analysis purposes, run model_results.py.

7. Generate results

cd src/petra_training_pipeline/scripts/model_predicting/
python3 model_results.py

8. Evaluate best model and plots

cd src/petra_training_pipeline/scripts/data_analysis_plots/
python3 plot_paper.py

In data/results/metrics.csv all scores, thresholds, delays and names for all models are stored.
The best model to choose is the one with the highest recall_test.
Currently: SVC_walking_25kp_all
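
The selection rule can be sketched as follows. The metric values and the other model names are made-up toy data; only the column names (name, recall_test) are assumed from the description above:

```python
import csv
import io

# Toy stand-in for data/results/metrics.csv.
metrics_csv = """name,recall_test,threshold
SVC_walking_25kp_all,0.94,0.5
KNN_walking_25kp,0.88,0.5
Ensemble_walking,0.91,0.4
"""

rows = list(csv.DictReader(io.StringIO(metrics_csv)))
best = max(rows, key=lambda r: float(r["recall_test"]))
print(best["name"])  # SVC_walking_25kp_all
```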

Todo

  • Docker file with Foxy, because passing parameters on the CLI is not supported in Dashing
  • Recording Openpose data and saving it in a CSV file
  • Finish openpose_saver_node writing to a CSV file with keypoints + features
  • Adapt the patient filter to only filter when more than 1 pose is visible
  • Run apriltag, the feature detector and Openpose with a bag file and save with openpose_saver_node
  • Filter gloria data with the closest / biggest Openpose human (the relevant person is always in the foreground)
