
Commit 894c913

Merge pull request #133 from Roman-Supernova-PIT/u/rknop/docs
Write a lot of documentation (with some small code fixes)
2 parents daadeba + 4117b9d commit 894c913

File tree

13 files changed: +633 -331 lines


changes/133.docs.rst

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+Add a lot of docs, make a few small code fixes so the examples actually work.

docs/conf.py

Lines changed: 2 additions & 0 deletions
@@ -119,6 +119,8 @@
     'logo_text_align': "left",
     'description': "Software developed by the Roman SNPIT",
     'sidebar_width':'250px',
+    'page_width':'75%',
+    'body_max_width':'120ex',
     'show_relbars':True,
 }

docs/development.rst

Lines changed: 143 additions & 1 deletion
@@ -4,15 +4,157 @@
 Development
 ===========
 
+.. contents::
+
+
 If you're one of the phrosty developers, or are otherwise interested in contributing, here is some useful information.
 
+Note that fully running phrosty requires an NVIDIA GPU with enough memory. Empirically, a 12GB GPU is not enough; we have successfully run phrosty on NVIDIA GPUs with 40GB of memory.
+
+.. _running-tests:
 
 Running Tests
 -------------
 
+**Warning**: all docker images are currently built only for the ``amd64`` architecture (also sometimes known as ``x86_64``). If you're on ``arm64``, things may not work, and if they do work, they may be horribly slow because you're emulating a different architecture. NERSC's Perlmutter cluster, and any Linux system running on hardware with an Intel or AMD CPU, are on the ``amd64`` architecture. Current Macs, and any other systems based on an ARM chip, are on the ``arm64`` architecture.
+
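If you aren't sure which architecture your machine is on, a quick check from a shell on the host (plain ``uname``, nothing phrosty-specific) is::

    uname -m    # prints x86_64 on amd64 hardware; arm64 or aarch64 on ARM hardware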
 Running all of the tests requires an NVIDIA GPU with enough memory. We are able to run them on 40GB NVIDIA GPUs; a GPU with only 12GB is not enough. (TODO: figure out the actual cutoff.)
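To see how much memory the GPUs on your machine have, you can query ``nvidia-smi`` on the host (assuming the NVIDIA drivers are installed)::

    nvidia-smi --query-gpu=name,memory.total --format=csv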

-To run the tests, make sure to run the SNPIT container as described in :ref:`running-snpit-container`. Inside the container, cd into the `/phrosty/phrosty/tests` directory and run::
+To run the tests, first make sure you've set up your environment and pulled down the necessary docker images as described in :ref:`phrosty installation prerequisites<phrosty-installation-prerequisites>`.
+
+If you haven't already, get a copy of phrosty::
+
+    git clone https://github.com/Roman-Supernova-PIT/phrosty.git
+
+Second, in the same directory, get a copy of the photometry test data::
+
+    git clone https://github.com/Roman-Supernova-PIT/photometry_test_data.git
+
+Make a couple of necessary directories::
+
+    mkdir dia_out_dir
+    mkdir phrosty_temp
+
+Run the container with::
+
+    docker run --gpus=all -it \
+        --mount type=bind,source=$PWD,target=/home \
+        --mount type=bind,source=$PWD/photometry_test_data,target=/photometry_test_data \
+        --mount type=bind,source=$PWD/dia_out_dir,target=/dia_out_dir \
+        --mount type=bind,source=$PWD/phrosty_temp,target=/phrosty_temp \
+        --env LD_LIBRARY_PATH=/usr/lib64:/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64:/usr/local/cuda/lib64/stubs \
+        --env PYTHONPATH=/roman_imsim \
+        --env OPENBLAS_NUM_THREADS=1 \
+        --env MKL_NUM_THREADS=1 \
+        --env NUMEXPR_NUM_THREADS=1 \
+        --env OMP_NUM_THREADS=1 \
+        --env VECLIB_MAXIMUM_THREADS=1 \
+        --env TERM=xterm \
+        rknop/roman-snpit-env:cuda-dev \
+        /bin/bash
+
+**On NERSC Perlmutter**, run the container with::
+
+    podman-hpc run --gpu -it \
+        --mount type=bind,source=$PWD,target=/home \
+        --mount type=bind,source=$PWD/photometry_test_data,target=/photometry_test_data \
+        --mount type=bind,source=$PWD/dia_out_dir,target=/dia_out_dir \
+        --mount type=bind,source=$PWD/phrosty_temp,target=/phrosty_temp \
+        --env LD_LIBRARY_PATH=/usr/lib64:/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64:/usr/local/cuda/lib64/stubs \
+        --env PYTHONPATH=/roman_imsim \
+        --env OPENBLAS_NUM_THREADS=1 \
+        --env MKL_NUM_THREADS=1 \
+        --env NUMEXPR_NUM_THREADS=1 \
+        --env OMP_NUM_THREADS=1 \
+        --env VECLIB_MAXIMUM_THREADS=1 \
+        --env TERM=xterm \
+        --annotation run.oci.keep_original_groups=1 \
+        registry.nersc.gov/m4385/rknop/roman-snpit-env:cuda-dev \
+        /bin/bash
+
+Once inside the container, cd into ``/home/phrosty/phrosty/tests`` and run::
 
     SNPIT_CONFIG=phrosty_test_config.yaml pytest -v
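While iterating on a change, pytest's standard ``-k`` selection also works here to run only a subset of the tests; for example (``some_test_name`` is just a placeholder for whatever test you care about)::

    SNPIT_CONFIG=phrosty_test_config.yaml pytest -v -k some_test_name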

+
+Manually running a test lightcurve
+----------------------------------
+
+Currently, we do not have tests written for ``phrosty/pipeline.py``; this will be rectified soon, and hopefully this documentation will be updated to reflect that.
+
+If you want to build a lightcurve using the test data, follow the instructions in the previous section for getting the ``phrosty`` and ``photometry_test_data`` archives, and for pulling the ``cuda-dev`` version of the Roman SNPIT docker image.
+
+You need a few extra directories when running your container, to store temporary and output data. You can put these wherever you want, but for this example we are going to assume you make three directories underneath the same place you checked out the two git archives: ``phrosty_temp``, ``dia_out_dir``, and ``lc_out_dir``.
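For instance, from the directory holding your two checkouts, one way to create all three at once (a minimal sketch; adjust the paths if you keep them elsewhere) is::

    mkdir -p phrosty_temp dia_out_dir lc_out_dir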
+
+With these directories in place, run a container with::
+
+    docker run --gpus=all -it \
+        --mount type=bind,source=$PWD,target=/home \
+        --mount type=bind,source=$PWD/photometry_test_data,target=/photometry_test_data \
+        --mount type=bind,source=$PWD/phrosty_temp,target=/phrosty_temp \
+        --mount type=bind,source=$PWD/dia_out_dir,target=/dia_out_dir \
+        --mount type=bind,source=$PWD/lc_out_dir,target=/lc_out_dir \
+        --env LD_LIBRARY_PATH=/usr/lib64:/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64:/usr/local/cuda/lib64/stubs \
+        --env PYTHONPATH=/roman_imsim \
+        --env OPENBLAS_NUM_THREADS=1 \
+        --env MKL_NUM_THREADS=1 \
+        --env NUMEXPR_NUM_THREADS=1 \
+        --env OMP_NUM_THREADS=1 \
+        --env VECLIB_MAXIMUM_THREADS=1 \
+        --env TERM=xterm \
+        rknop/roman-snpit-env:cuda-dev \
+        /bin/bash
+
+**On NERSC Perlmutter**, the command would be::
+
+    podman-hpc run --gpu -it \
+        --mount type=bind,source=$PWD,target=/home \
+        --mount type=bind,source=$PWD/photometry_test_data,target=/photometry_test_data \
+        --mount type=bind,source=$PWD/phrosty_temp,target=/phrosty_temp \
+        --mount type=bind,source=$PWD/dia_out_dir,target=/dia_out_dir \
+        --mount type=bind,source=$PWD/lc_out_dir,target=/lc_out_dir \
+        --env LD_LIBRARY_PATH=/usr/lib64:/usr/lib/x86_64-linux-gnu:/usr/local/cuda/lib64:/usr/local/cuda/lib64/stubs \
+        --env PYTHONPATH=/roman_imsim \
+        --env OPENBLAS_NUM_THREADS=1 \
+        --env MKL_NUM_THREADS=1 \
+        --env NUMEXPR_NUM_THREADS=1 \
+        --env OMP_NUM_THREADS=1 \
+        --env VECLIB_MAXIMUM_THREADS=1 \
+        --env TERM=xterm \
+        --annotation run.oci.keep_original_groups=1 \
+        registry.nersc.gov/m4385/rknop/roman-snpit-env:cuda-dev \
+        /bin/bash
+
+If you placed any of the new directories anywhere other than underneath your current working directory, modify the ``source=...`` parts of the command above to reflect that.
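For example, if ``phrosty_temp`` actually lives on scratch space, only the host side of that one bind mount changes (``/scratch/yourname`` is just an illustrative path)::

    --mount type=bind,source=/scratch/yourname/phrosty_temp,target=/phrosty_temp \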
+
+Inside the container, cd into ``/home/phrosty`` and try running::
+
+    nvidia-smi
+
+If you don't get errors, it should list the NVIDIA GPUs you have available. If it doesn't list GPUs, then the rest of this won't work.
+
+Next, try running::
+
+    cd /home/phrosty
+    pip install -e .
+    SNPIT_CONFIG=phrosty/tests/phrosty_test_config.yaml python phrosty/pipeline.py --help | less
+
+You should see all the options you can pass to phrosty. There are a lot, because there are (verbose) options for everything that's in the config file. Press ``q`` to get out of ``less``.
+
+Try running::
+
+    SNPIT_CONFIG=phrosty/tests/phrosty_test_config.yaml python phrosty/pipeline.py \
+        --oid 20172782 \
+        --ra 7.551093401915147 \
+        --dec -44.80718106491529 \
+        -b Y106 \
+        -t phrosty/tests/20172782_instances_templates_1.csv \
+        -s phrosty/tests/20172782_instances_science_2.csv \
+        -p 3 -w 3 \
+        -v
+
+If all is well, after it's done running, the output will end with something like::
+
+    [2025-08-13 17:35:24 - INFO] - Results saved to /lc_out_dir/data/20172782/20172782_Y106_all.csv
+
+On your host system (as well as inside the container), you should see new files in ``lc_out_dir``, ``dia_out_dir``, and ``phrosty_temp``. (Inside the container, these are at ``/lc_out_dir``, ``/dia_out_dir``, and ``/phrosty_temp``.)
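To take a quick look at the resulting lightcurve from the host (assuming the run above completed and wrote the file named in the log message)::

    head lc_out_dir/data/20172782/20172782_Y106_all.csv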

docs/index.rst

Lines changed: 12 additions & 2 deletions
@@ -9,9 +9,19 @@ This is the documentation for phrosty.
 
 This package contains the Python software suite developed for use with the Roman Telescope project, as part of the Roman Supernova Project Implementation Team (PIT) project.
 
-<<ENTER STATEMENT OF NEED HERE>>
+Statement of Need
+=================
 
-<<STATE OF THE FIELD>>
+TODO LAUREN
+
+State of the Field
+==================
+
+TODO LAUREN
+
+
+Contact and Support
+===================
 
 Individuals who wish to contribute to phrosty, report issues, or seek support are encouraged to submit an issue via `GitHub <https://github.com/Roman-Supernova-PIT/phrosty/issues>`_ and use the pre-loaded templates for feature requests and issue reports. Please include as much detail as possible, including a description of the problem, any associated error messages, inputs, and details about the environment the user is running in. Please adhere to the phrosty `code of conduct <https://github.com/Roman-Supernova-PIT/phrosty/blob/main/CODE_OF_CONDUCT.md>`_.
