
DynVision: A Modeling Toolbox for Biologically Plausible Recurrent Visual Networks

Python 3.11+ · PyTorch · License: MIT

DynVision is a modular toolbox for constructing and evaluating recurrent convolutional neural networks (RCNNs) with biologically inspired dynamics. It provides a flexible framework for exploring how recurrent connections and temporal dynamics shape visual processing in artificial neural networks and how these networks can be aligned with properties of biological visual systems.

*(Figure: DynVision overview)*

Key Features

  • Biologically Plausible Dynamics: Implement neural dynamics governed by continuous differential equations with realistic time constants and delays
  • Diverse Recurrent Architectures: Explore various recurrent connection types (self, full, depthwise, pointwise, local topographic)
  • Minimal Coding Requirements: Models, training hyperparameters, testing scenarios, data selection, parameter sweeps, and visualizations can all be customized by editing human-readable config files. For more elaborate extensions, template files and guides are provided.
  • Modular Components: Easily combine and reconfigure biologically-inspired features:
    • Recurrent processing within and across areas
    • Skip and feedback connections
    • Retinal preprocessing
    • Supralinear activation
    • Adaptive input gain
  • Modular Operation Order: Easily rearrange the execution order of layer operations (convolution, recurrence, delays, nonlinearity, pooling, activity recording, etc.)
  • Efficient Workflow Management: Leverages Snakemake for reproducible experiments and parameter sweeps
  • PyTorch Lightning Integration: Standardized training with minimal boilerplate
  • Optimized Performance: Fast data loading with FFCV, GPU acceleration, mixed precision
  • Comprehensive Model Zoo: Access pre-implemented architectures like AlexNet, CorNetRT, ResNet, CordsNet, and DyRCNNx4
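The continuous dynamics mentioned above follow a common leaky-integrator form, tau * dx/dt = -x + drive. The sketch below is a minimal forward-Euler illustration of that general principle in plain PyTorch; it is not DynVision's actual implementation, and the parameter values are only examples.

```python
import torch

def euler_step(x, drive, dt=2.0, tau=5.0):
    """One forward-Euler update of leaky-integrator dynamics:
    tau * dx/dt = -x + drive  =>  x_new = x + (dt / tau) * (-x + drive)."""
    return x + (dt / tau) * (-x + drive)

# With a constant drive, the state relaxes toward the drive value
# on a timescale set by tau.
x = torch.zeros(4)
drive = torch.ones(4)
for _ in range(50):  # 50 steps of dt=2 ms, i.e. 100 ms >> tau
    x = euler_step(x, drive)
print(x)  # close to 1.0
```

Smaller `dt / tau` ratios give slower, smoother convergence; this is the knob that the `dt` and `tau` arguments in the Quick Start below expose.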

Installation

```shell
# Clone repository
git clone https://github.com/Lindsay-Lab/dynvision.git
cd dynvision

# Create conda environment
conda create -n dynvision python=3.11
conda activate dynvision

# Install dependencies
pip install -e .
```

For more detailed installation instructions, see the Installation Guide.

Quick Start

```python
import torch
from dynvision.models import DyRCNNx4

# Create a 4-layer RCNN with recurrent connections
model = DyRCNNx4(
    n_classes=10,
    input_dims=(20, 3, 224, 224),  # (timesteps, channels, height, width)
    recurrence_type="full",        # full recurrent connectivity
    dt=2,                          # integration time step (ms)
    tau=5,                         # neural time constant (ms)
    tff=8,                         # feedforward delay (ms)
    trc=4,                         # recurrence delay (ms)
)

# Forward pass with a batch of inputs
batch = torch.randn(1, 20, 3, 224, 224)  # (batch, timesteps, channels, height, width)
outputs = model(batch)
```
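One way to read the timing parameters: with an integration step of `dt = 2` ms, a feedforward delay of 8 ms spans 4 integration steps and a recurrence delay of 4 ms spans 2. This back-of-envelope sketch assumes delays are realized as whole integration steps, which may differ from DynVision's internal handling.

```python
dt = 2    # integration time step (ms)
tau = 5   # neural time constant (ms)
tff = 8   # feedforward delay (ms)
trc = 4   # recurrence delay (ms)

ff_delay_steps = tff // dt   # feedforward delay, in integration steps
rc_delay_steps = trc // dt   # recurrence delay, in integration steps
leak_per_step = dt / tau     # fraction of the gap to target closed per step

print(ff_delay_steps, rc_delay_steps, leak_per_step)  # 4 2 0.4
```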

For a step-by-step tutorial, see the Getting Started guide.

Example Experiments

DynVision includes pre-configured experiments to explore temporal response properties of different recurrent architectures:

```shell
# Train and run the contrast response experiment on multiple models
# (the list value is quoted so the shell passes it as a single argument)
snakemake --config experiment=contrast model_name="['AlexNet','ResNet18','CorNetRT']" data_name=cifar100

# Evaluate stimulus duration effects with different recurrence types
snakemake -j4 --config experiment=duration model_name=DyRCNNx4 model_args="{rctype: [full, self, pointdepthwise]}"
```

*(Figure: temporal dynamics example)*

Documentation

License

This project is licensed under the MIT License - see the LICENSE file for details.
