
Commit a471ded

Fix some really bad css choices

1 parent: de16639

3 files changed (+7, -4 lines)


docs/conf.py  (2 additions, 0 deletions)

@@ -119,6 +119,8 @@
     'logo_text_align': "left",
     'description': "Software developed by the Roman SNPIT",
     'sidebar_width':'250px',
+    'page_width':'75%',
+    'body_max_width':'120ex',
     'show_relbars':True,
 }

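For orientation, these keys live in the standard Sphinx ``html_theme_options`` dictionary in ``docs/conf.py``. A minimal sketch of how the touched fragment reads after this commit (only the keys visible in the hunk are shown; everything else in the real file is omitted)::

    # Sketch of the relevant fragment of docs/conf.py after this commit.
    # Only the keys visible in the hunk above are included.
    html_theme_options = {
        'logo_text_align': "left",
        'description': "Software developed by the Roman SNPIT",
        'sidebar_width': '250px',
        'page_width': '75%',        # new: let the page use 75% of the browser width
        'body_max_width': '120ex',  # new: cap the text column in ex units, not pixels
        'show_relbars': True,
    }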
docs/installation.rst  (2 additions, 2 deletions)

@@ -48,8 +48,8 @@ If you get a permission error trying to do this, try::

 Give it your usual NERSC username and password (without any OTP). Once that's done, try the ``podman-hpc pull`` command again. If you don't seem to have access to the registry, then you can just pull ``docker.io/rknop/roman-snpit-env:cuda-dev`` instead.

-After you've pulled, run ``podman-hpc images``. You should see output something like (*note* you will probably have to scroll to the right in the quoted code below, because readthedocs uses an absurdly (and, egregiously, specified in pixels) small width for the CSS column containing this text)::
-
+After you've pulled, run ``podman-hpc images``. You should see output something like::
+
    REPOSITORY                                       TAG       IMAGE ID      CREATED         SIZE    R/O
    registry.nersc.gov/m4385/rknop/roman-snpit-env   cuda-dev  6b39a47ffc5b  25 minutes ago  8.6 GB  false
    registry.nersc.gov/m4385/rknop/roman-snpit-env   cuda-dev  6b39a47ffc5b  25 minutes ago  8.6 GB  true
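If you want to script the pull-and-fall-back sequence described in this hunk, here is a hypothetical helper (not part of the repository). It assumes ``podman-hpc`` is on your PATH; the image names are taken from the text and the quoted ``podman-hpc images`` output::

    import subprocess

    # Image names as they appear in the text and the quoted output above.
    PRIMARY = "registry.nersc.gov/m4385/rknop/roman-snpit-env:cuda-dev"
    FALLBACK = "docker.io/rknop/roman-snpit-env:cuda-dev"

    def pull_snpit_image() -> str:
        """Try the NERSC registry first; fall back to Docker Hub if that fails."""
        for image in (PRIMARY, FALLBACK):
            if subprocess.run(["podman-hpc", "pull", image]).returncode == 0:
                return image
        raise RuntimeError("could not pull the roman-snpit-env image from either registry")

    if __name__ == "__main__":
        print("pulled", pull_snpit_image())
        # Sanity check: the image should show up twice (R/O false and true).
        subprocess.run(["podman-hpc", "images"])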

docs/usage.rst  (3 additions, 2 deletions)

@@ -52,7 +52,8 @@ Inside the container, run::

 That will install the checked out version of phrosty in your currently running environment. Note that if you exit the container and run a new container, you will have to ``pip install -e`` phrosty again, as the changes you make to a container only persist as long as that same container is still running.

-Next, try running:
+Next, try running::
+
    SNPIT_CONFIG=phrosty/tests/phrosty_test_config.yaml python phrosty/pipeline.py --help | less

 You should see all the options you can pass to phrosty. There are a lot, because there are (verbose) options for everything that's in the config file. Press ``q`` to get out of ``less``.
@@ -256,7 +257,7 @@ At the top are the directives that control how the job is submitted. Many of th

 You can probably leave the rest of the flags as is. The ``--cpus-per-task`` and ``--gpus-per-task`` flags are set so that it will only ask for a quarter of a node. (The queue manager is very particular about numbers passed to GPU nodes on the shared queue. It needs you to ask for exactly 32 CPU cores for each GPU, and it needs you to ask for _exactly_ the right amount of memory. The extra comment marks on the ``####SBATCH --mem`` line tell slurm to ignore it, as it seems to get the default right, and it's not worth fiddling with it to figure out what you should ask for. A simple calculation would suggest that 64GB per GPU is what you should ask for, but when you do that, slurm thinks you're asking for 36 CPUs worth of memory, not 32 CPUs worth of memory. The actual number is something like 56.12GB, but again, since the default seems to do the right thing, it's not worth fiddling with this.)

-If you look at the bottom of the script, you will see that the number of parallel worker jobs that phrosty uses is set to 15 (``-p 15`` as a flag to ``python phrosty/phrosty/pipeline.py``). The total number of processes that the python program runs at once is this, plus the number of FITS writer threads (given by ``-w``), plus one for the master process that launches all of the others. You will notice that this total is less than the 32 CPUs that we nominally have. To be safe, assume that each of the ``-p`` processes will use ~6GB of memory. By limiting ourselves to 9 processes, we should safely fit within the amount of CPU memory allocated to the job (allowing for some overhead for the driver process and the FITS writer processes). (TODO: we really want to get this memory usage down.) Based on performance, you might want to play with the number of FITS writing threads (the number after ``-w``); assume that each FITS writer process will use ~1GB of memory. (TODO: investigate how much they really use.)
+If you look at the bottom of the script, you will see that the number of parallel worker jobs that phrosty uses is set to 9 (``-p 9`` as a flag to ``python phrosty/phrosty/pipeline.py``). The total number of processes that the python program runs at once is this, plus the number of FITS writer threads (given by ``-w``), plus one for the master process that launches all of the others. You will notice that this total is less than the 32 CPUs that we nominally have. To be safe, assume that each of the ``-p`` processes will use ~6GB of memory. By limiting ourselves to 9 processes, we should safely fit within the amount of CPU memory allocated to the job (allowing for some overhead for the driver process and the FITS writer processes). (TODO: we really want to get this memory usage down.) Based on performance, you might want to play with the number of FITS writing threads (the number after ``-w``); assume that each FITS writer process will use ~1GB of memory. (TODO: investigate how much they really use.)

 **Make sure expected directories exist**: If you look at the batch script, you'll see a number of ``--mount`` flags that bind-mount directories inside the container. From the location where you submit your job, all of the ``source=`` part of those ``--mount`` directives must be available. For the demo, you will need to create the following directories underneath where you plan to submit the script::

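For readers skimming the diff, here is the back-of-the-envelope arithmetic behind the ``-p 9`` choice in the hunk above, restated as a quick calculation. The figures are the rough estimates quoted in the text, and the ``-w`` value is a hypothetical placeholder::

    workers = 9              # ``-p 9`` parallel worker processes
    gb_per_worker = 6.0      # "assume ~6GB of memory" per worker (estimate from the text)
    quarter_node_gb = 56.12  # memory slurm actually grants per GPU, per the text

    worker_mem_gb = workers * gb_per_worker        # 54 GB
    headroom_gb = quarter_node_gb - worker_mem_gb  # ~2 GB left for the driver and the
                                                   # ``-w`` FITS writers (~1 GB each)

    fits_writers = 2                               # hypothetical ``-w`` value; tune for performance
    total_processes = workers + fits_writers + 1   # 12 processes, well under the 32 CPU cores
    print(f"workers: ~{worker_mem_gb:.0f} GB, headroom: ~{headroom_gb:.1f} GB, "
          f"processes: {total_processes}/32 cores")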