This is the official implementation of our TMLR paper, *Fingerprints of Super Resolution Networks*.
Requirements are the same as for BasicSR:
- Python >= 3.7
- PyTorch >= 1.7
- NVIDIA GPU + CUDA
- Linux (we have not tested on Windows)
Clone this GitHub repo and install the Python package requirements:

```bash
git clone [email protected]:JeremyIV/SISR-fingerprints.git
cd SISR-fingerprints
pip install -r requirements.txt
```

Our dataset of 205,000 super-resolved images can be downloaded from Google Drive (164 GB).
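Since the archive is 164 GB (and more once extracted), it may help to check free disk space before downloading. A minimal sketch, assuming you need roughly twice the archive size; the path and threshold are illustrative, not part of the repo:

```python
import shutil

# Rough illustrative estimate: the 164 GB archive plus its extracted contents.
REQUIRED_GB = 164 * 2

def enough_space(path=".", required_gb=REQUIRED_GB):
    """Return True if the filesystem containing `path` has enough free space."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= required_gb
```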
We provide a script to perform this download, although Google Drive's restrictions on command-line downloads may cause issues. To try the script:

```bash
cd classification/datasets/data/SISR
bash download_SISR_data.sh
```

If that stops working, you can download the dataset through a browser. Download all of the zip files into the classification/datasets/data/SISR folder, then, as in the script, run:

```bash
for f in *.zip; do unzip "$f" && rm "$f"; done
mv SISR/* ./
rmdir SISR
```

All of the model attribution and parsing results are recorded in an SQLite database. To initialize that database, run:

```bash
python database/create_database.py
```

The classification/train_classifier.py script can both train and evaluate model attribution and parsing classifiers. The simplest way to train and evaluate these classifiers is to run:
```bash
python classification/train_classifier.py -opt classification/options/path/to/classifier_config.yaml
```

This can take a couple of days on a Titan X GPU. To first perform a quick test run and make sure everything is working, run:

```bash
python classification/train_classifier.py -opt classification/options/attribution/all_models/ConvNext_CNN_SISR_all_models_quick_test.yaml
```

This should take 2-3 minutes. If it completes without error, you're ready to train the real classifiers.
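Successful runs record their results in the SQLite database initialized above. If you want to sanity-check what has been written, a hedged sketch for inspecting which tables exist (the database path is illustrative, and we don't assume any particular schema):

```python
import sqlite3

def list_tables(db_path):
    """List the table names in an SQLite database file."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]
```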
Descriptions of each classifier configuration file can be found in the README.md files in classification/options/.
Note that while training can be performed in parallel across many GPUs or machines, only one process may record results to the SQLite database at a time. Since training all of these classifiers takes almost 90 GPU-days on Titan X GPUs, we highly recommend training in parallel and then evaluating serially afterwards. To train a model without evaluating it, run:

```bash
python classification/train_classifier.py -opt classification/options/path/to/classifier_config.yaml --mode train
```

This will write the trained model to classification/classifiers/experiments/(CNN|PRNU)/classifier_name/. If you distributed training across multiple machines, you'll need to collect all of these trained models onto the same machine for evaluation.
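One way to parallelize is to assign configs to GPUs round-robin and launch one `--mode train` process per GPU. A minimal sketch that only builds the command lines (the config names and GPU count are illustrative; you could pass each string to `subprocess.Popen(cmd, shell=True)` to actually launch it):

```python
import itertools

def training_commands(config_paths, num_gpus=4):
    """Pair each config with a GPU round-robin and build the launch commands."""
    commands = []
    for cfg, gpu in zip(config_paths, itertools.cycle(range(num_gpus))):
        commands.append(
            f"CUDA_VISIBLE_DEVICES={gpu} "
            f"python classification/train_classifier.py -opt {cfg} --mode train"
        )
    return commands
```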
Then to evaluate that model, run:
```bash
python classification/train_classifier.py -opt classification/options/path/to/classifier_config.yaml --mode test
```

Almost all figures and numerical values presented in the paper can be automatically generated by scripts in the analysis/ directory.
To generate the numerical values that appear in our paper, run:
```bash
python analysis/values/generate_values.py
```

These values will appear in paper/computed_values.tex.
Each figure in our paper (except for Figures 1 and 2) can be generated by a script in the analysis/figures/ directory. For example, to generate Figure 3, run:
```bash
python analysis/figures/custom_model_tsne.py
```

These figures will appear in paper/figures/.
If you find our work useful, please cite:

```bibtex
@article{vonderfecht2022fingerprints,
  title={Fingerprints of Super Resolution Networks},
  author={Jeremy Vonderfecht and Feng Liu},
  journal={Transactions on Machine Learning Research},
  year={2022},
  url={https://openreview.net/forum?id=Jj0qSbtwdb},
}
```