add optional param_preproc and grad_preproc arguments #13
base: master
Conversation
Reviewer's Guide

Enhance NeptuneLogger by introducing optional preprocessing hooks for parameters and gradients, integrating them into the existing logging flows, and documenting the additions in the changelog.

Sequence diagram for parameter and gradient preprocessing during logging

```mermaid
sequenceDiagram
    participant Model
    participant NeptuneLogger
    participant NamespaceHandler
    Model->>NeptuneLogger: Forward pass triggers parameter/gradient logging
    alt log_gradients
        NeptuneLogger->>NeptuneLogger: Apply grad_preproc or .norm() to gradient
        NeptuneLogger->>NamespaceHandler: Append processed gradient
    end
    alt log_parameters
        NeptuneLogger->>NeptuneLogger: Apply param_preproc or .norm() to parameter
        NeptuneLogger->>NamespaceHandler: Append processed parameter
    end
```
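To make the flow concrete, here is a minimal sketch of the hook logic the diagram describes. The function and variable names are illustrative, not taken from the PR, and the plain list stands in for Neptune's namespace handler:

```python
from typing import Callable, List, Optional

import torch

def make_grad_hook(
    grad_preproc: Optional[Callable[[torch.Tensor], torch.Tensor]],
    sink: List[torch.Tensor],
) -> Callable[[torch.Tensor], None]:
    def hook(grad: torch.Tensor) -> None:
        # Apply the user-supplied callable if given, otherwise fall back
        # to the default .norm() aggregation shown in the diagram.
        value = grad_preproc(grad) if grad_preproc is not None else grad.norm()
        sink.append(value)
    return hook

# Attach to a parameter so the hook fires on backward.
param = torch.nn.Parameter(torch.randn(4))
logged: List[torch.Tensor] = []
param.register_hook(make_grad_hook(None, logged))
(param * 2.0).sum().backward()
print(logged)  # one entry: the gradient norm, tensor(4.)
```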
Class diagram for NeptuneLogger with param_preproc and grad_preproc

```mermaid
classDiagram
    class NeptuneLogger {
        +__init__(..., param_preproc: Optional[Callable], grad_preproc: Optional[Callable])
        -_grad_preproc: Optional[Callable]
        -_param_preproc: Optional[Callable]
        -_add_hooks_for_grads()
        -_add_hooks_for_params()
    }
```
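A minimal sketch of the constructor additions implied by the class diagram. Only the new arguments are shown; the class name, abbreviated signature, and defaults are assumptions, not the PR's actual code:

```python
from typing import Callable, Optional

import torch

class NeptuneLoggerSketch:
    """Illustrative stand-in for NeptuneLogger, showing only the new arguments."""

    def __init__(
        self,
        *,
        param_preproc: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
        grad_preproc: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
    ) -> None:
        # Assign unconditionally so the attributes always exist,
        # which also addresses the review feedback below.
        self._param_preproc = param_preproc
        self._grad_preproc = grad_preproc
```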
Hey @rschiewer - I've reviewed your changes - here's some feedback:
- Initialize `self._param_preproc` and `self._grad_preproc` unconditionally in `__init__` to ensure they always exist, avoiding potential attribute errors.
- Extract the preprocessing-vs-default-norm logic in both hooks into a shared helper method to reduce duplication (see the sketch below).
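One way the suggested helper could look. It is written as a free function here so the example is self-contained; in the class it would be a method, and the name is hypothetical:

```python
from typing import Callable, Optional

import torch

def preprocess(
    tensor: torch.Tensor,
    preproc: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
) -> torch.Tensor:
    """Shared preprocessing-vs-default-norm logic for both hooks."""
    return preproc(tensor) if preproc is not None else tensor.norm()

# Both hooks could then call preprocess(tensor, self._param_preproc)
# and preprocess(tensor, self._grad_preproc) respectively.
```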
## Individual Comments
### Comment 1
<location> `src/neptune_pytorch/impl/__init__.py:102` </location>
<code_context>
log_gradients: bool = False,
log_parameters: bool = False,
log_freq: int = 100,
+ param_preproc: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
+ grad_preproc: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
):
verify_type("run", run, (Run, Handler))
</code_context>
<issue_to_address>
Consider clarifying the expected output type of the preproc callables.
The current type hints allow any tensor output, but downstream code expects outputs suitable for list appending (e.g., scalars or 1D tensors). Please clarify this in the documentation or enforce the expected output shape.
Suggested implementation:
```python
param_preproc: Optional[Callable[[torch.Tensor], Union[torch.Tensor, float, int]]] = None,
grad_preproc: Optional[Callable[[torch.Tensor], Union[torch.Tensor, float, int]]] = None,
```
```python
):
"""
Args:
...
param_preproc: Optional callable to preprocess parameter tensors before logging.
The callable should return a scalar, 1D tensor, or a value suitable for appending to a list.
grad_preproc: Optional callable to preprocess gradient tensors before logging.
The callable should return a scalar, 1D tensor, or a value suitable for appending to a list.
"""
```
</issue_to_address>
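Note that the suggested hints additionally require `Union` to be imported from `typing`. For illustration, a callable matching the suggested contract might return a plain Python float; this example is hypothetical and not part of the PR:

```python
import torch

def finite_max_abs(t: torch.Tensor) -> float:
    # Map NaN/Inf entries to zero, then reduce to a plain float that
    # appends cleanly to a logged series.
    finite = torch.nan_to_num(t, nan=0.0, posinf=0.0, neginf=0.0)
    return finite.abs().max().item()

print(finite_max_abs(torch.tensor([1.0, float("nan"), float("-inf")])))  # 1.0
```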
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
This modification of the original code adds two additional arguments to the NeptuneLogger initialization. The `param_preproc` argument is a callable that is applied to each parameter before logging, replacing the default `param.norm()`. The `grad_preproc` argument functions analogously when logging gradients.

The original motivation for these changes was to provide an easy way to handle inf/NaN values before logging. However, the arguments can also be used to choose aggregation functions other than the tensor norm.
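For example, a preproc that zeroes non-finite values before taking the norm could be passed like this. This is a usage sketch: constructor arguments other than the two added in this PR are assumptions about the existing NeptuneLogger API:

```python
import torch
from neptune import init_run
from neptune_pytorch import NeptuneLogger

def safe_norm(t: torch.Tensor) -> torch.Tensor:
    # Replace NaN/Inf entries before aggregating, the PR's original motivation.
    return torch.nan_to_num(t, nan=0.0, posinf=0.0, neginf=0.0).norm()

run = init_run()
model = torch.nn.Linear(4, 2)
npt_logger = NeptuneLogger(
    run,
    model=model,
    log_gradients=True,
    log_parameters=True,
    log_freq=100,
    param_preproc=safe_norm,
    grad_preproc=safe_norm,
)
```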
Summary by Sourcery

Add optional preprocessing callbacks for parameters and gradients in NeptuneLogger to enable custom aggregation and handle NaN/Inf values.

New Features:
- Add `param_preproc` and `grad_preproc` arguments to NeptuneLogger for custom preprocessing of parameters and gradients before logging

Documentation:
- Document the new arguments in the changelog