You can try a demo directly at: https://modelscope.cn/models/damo/cv_hrnet_crowd-counting_dcanet/summary
Download the ShanghaiTech A, ShanghaiTech B and UCF-QNRF datasets.
Then generate the density maps via `generate_density_map_perfect_names_SHAB_QNRF_NWPU_JHU.py`.
After that, create a folder named `JSTL_large_dataset` and copy all the processed data into it.
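A minimal sketch of this preparation, assuming the generation script configures its dataset paths internally and that the processed files end up in a local `processed/` directory; both of those locations are assumptions, so adjust them to your setup:

```bash
# Generate density maps (dataset paths are assumed to be configured inside the script).
python generate_density_map_perfect_names_SHAB_QNRF_NWPU_JHU.py

# Create the expected layout (see the tree below).
mkdir -p JSTL_large_dataset/den/{train,test}
mkdir -p JSTL_large_dataset/ori/{train_data,test_data}/{ground_truth,images}

# Copy the processed data into the corresponding sub-folders (placeholder source paths).
cp processed/den/train/*.npy                   JSTL_large_dataset/den/train/
cp processed/den/test/*.npy                    JSTL_large_dataset/den/test/
cp processed/ori/train_data/images/*.jpg       JSTL_large_dataset/ori/train_data/images/
cp processed/ori/train_data/ground_truth/*.mat JSTL_large_dataset/ori/train_data/ground_truth/
```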
The tree of the folder should be as follows, where `DATASET` is `SHA`, `SHB` or `QNRF_large`:

```
-JSTL_large_dataset
  -den
    -test
      DATASET_img_xxx.npy files, which store the density maps
    -train
      DATASET_img_xxx.npy files, which store the density maps
  -ori
    -test_data
      -ground_truth
        DATASET_img_xxx.mat files, which store the original dot annotations
      -images
        DATASET_img_xxx.jpg files, the original image files
    -train_data
      -ground_truth
        DATASET_img_xxx.mat files, which store the original dot annotations
      -images
        DATASET_img_xxx.jpg files, the original image files
```

Download the pretrained HRNet model HRNet-W40-C from https://github.com/HRNet/HRNet-Image-Classification and put it directly in the root path of the repository.
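A minimal sketch of this step, assuming the downloaded weights file is named `hrnetv2_w40_imagenet_pretrained.pth` (the name used later in this README, with an assumed `.pth` extension) and was saved to `~/Downloads`:

```bash
# Assumed filename and download location; adjust if config.py expects a different path.
mv ~/Downloads/hrnetv2_w40_imagenet_pretrained.pth ./
ls hrnetv2_w40_imagenet_pretrained.pth   # the weights should now sit in the repository root
```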
After doing that, download the pretrained model via

```bash
bash download_models.sh
```

Then put the IDK model into the folder `./output` and change the model name in the `test.sh` or `test_fast.sh` scripts.
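A sketch of this step, assuming the IDK checkpoint fetched by `download_models.sh` is a single `.pth` file; its actual name and download location are not specified here, so treat the paths below as placeholders:

```bash
bash download_models.sh
mkdir -p output
# Placeholder filename and location -- use the checkpoint that download_models.sh actually fetches.
cp downloaded_models/DCANet_IDK.pth output/
# Then edit test.sh (or test_fast.sh) so that its model-name setting points at the file in ./output.
```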
Then run

```bash
sh test.sh
```

or, if you have two GPUs,

```bash
sh test_fast.sh
```

The whole training pipeline is a bit complex, so we split its code into several main folders. Follow the steps below; a consolidated command sketch is given after the list.
- Train the baseline of DCANet in the folder `Phase1_JSTL`; `JSTL` indicates training on all images from the observed domains jointly.
- Then use `Phase2_Get_weight` or `Phase2_Get_weight_Fast` to extract $\Delta MAE$ for each channel of each image. The code in `Phase2_Get_weight` is the naive implementation that follows the definition exactly, but it is quite slow; the code in `Phase2_Get_weight_Fast` is much faster.
- After obtaining the indicators, go to `Phase_Cal_domain` and select the script that suits your setup. This produces the domain kernels, recorded in a `*.npz` file.
- Copy the `*.npz` file to `Phase3C_guided_training` for DDK training. You should also copy the baseline model into that folder and load its pretrained weights for further DDK training; the `*.sh` files there give the detailed instructions.
- After the model is trained, copy it into `Phase3Cb_fast`, together with the `*.npz` file; you can perform IDK training either directly from DCANet$_{base}$ or from DCANet (${\cal L}_D$).
- You can also train/test on WorldExpo or UCF_CC_50 in the folder `Phase0_Train_WE_And_Test_WE_UCFCC`.
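A consolidated sketch of the sequence above, assuming each phase folder ships `*.sh` entry scripts and writes its checkpoints under `output/`; every script and checkpoint name below is a placeholder, so substitute the actual files provided in each folder:

```bash
# Phase 1: train the JSTL baseline (placeholder script name).
cd Phase1_JSTL && sh train.sh && cd ..

# Phase 2: extract the per-channel Delta-MAE indicators (fast variant).
cd Phase2_Get_weight_Fast && sh run.sh && cd ..

# Compute the domain kernels from the indicators -> *.npz
cd Phase_Cal_domain && sh cal_domain.sh && cd ..

# Phase 3C: DDK training, guided by the domain kernels and the baseline weights.
cp Phase_Cal_domain/*.npz Phase3C_guided_training/
cp Phase1_JSTL/output/baseline.pth Phase3C_guided_training/            # placeholder checkpoint name
cd Phase3C_guided_training && sh train.sh && cd ..

# Phase 3Cb: IDK training on top of the DDK (or baseline) model.
cp Phase_Cal_domain/*.npz Phase3Cb_fast/
cp Phase3C_guided_training/output/ddk_model.pth Phase3Cb_fast/         # placeholder checkpoint name
cd Phase3Cb_fast && sh train.sh && cd ..
```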
Refer to the `test_unknown` folder.
It is suggested that you create soft links to the image folder (e.g., `JSTL_large_dataset`) and the HRNet model (i.e., `hrnetv2_w40_imagenet_pretrained`) in each main folder. If a path error occurs, always check `config.py` in the corresponding main folder.
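For example, a minimal sketch of those soft links (run from the repository root; `Phase1_JSTL` is used as an example main folder, and the `.pth` extension of the HRNet weights file is an assumption):

```bash
# Link the dataset and the HRNet weights into one of the main folders.
ln -s "$(pwd)/JSTL_large_dataset" Phase1_JSTL/JSTL_large_dataset
ln -s "$(pwd)/hrnetv2_w40_imagenet_pretrained.pth" Phase1_JSTL/hrnetv2_w40_imagenet_pretrained.pth
# Repeat for the other Phase*/ folders, and check config.py if paths still do not resolve.
```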
If you find that our paper gives you any insights, please cite:
```
@ARTICLE{yan2021towards,
  author={Yan, Zhaoyi and Li, Pengyu and Wang, Biao and Ren, Dongwei and Zuo, Wangmeng},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  title={Towards Learning Multi-domain Crowd Counting},
  year={2021},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TCSVT.2021.3137593}}
```
