Simplify deep learning development with a powerful DSL, cross-framework support, and built-in debugging
# Compile DSL to TensorFlow/PyTorch code
neural compile examples/mnist.neural --backend tensorflow --output mnist_tensorflow.py
# Run the generated script
python mnist_tensorflow.py
# Generate docs (Markdown, optional PDF with --pdf)
neural docs examples/mnist.neural --output model.md
# Visualize architecture & shapes
neural visualize examples/mnist.neural --format png
# Clean generated artifacts (dry-run by default; add --yes to apply)
neural clean --yes --all
⚠️ BETA STATUS: Neural DSL v0.2.9 is under active development; expect bugs, and feedback is welcome! Not yet recommended for production use.
- Overview
- Pain Points Solved
- Key Features
- Installation
- Quick Start
- Debugging with NeuralDbg
- Cloud Integration
- Why Neural?
- Documentation
- Examples
- Contributing
- Community
- Support
Neural is a domain-specific language (DSL) designed for defining, training, debugging, and deploying neural networks. With declarative syntax, cross-framework support, and built-in execution tracing (NeuralDbg), it simplifies deep learning development whether via code, CLI, or a no-code interface.
Neural addresses deep learning challenges across Criticality (how essential) and Impact Scope (how transformative):
| Criticality / Impact | Low Impact | Medium Impact | High Impact |
|---|---|---|---|
| High | | | Shape Mismatches: Pre-runtime validation stops runtime errors.<br>Debugging Complexity: Real-time tracing & anomaly detection. |
| Medium | | Steep Learning Curve: No-code GUI eases onboarding. | Framework Switching: One-flag backend swaps.<br>HPO Inconsistency: Unified tuning across frameworks. |
| Low | Boilerplate: Clean DSL syntax saves time. | Model Insight: FLOPs & diagrams.<br>Config Fragmentation: Centralized setup. | |
- Core Value: Fix critical blockers like shape errors and debugging woes with game-changing tools.
- Strategic Edge: Streamline framework switches and HPO for big wins.
- User-Friendly: Lower barriers and enhance workflows with practical features.
Help us improve Neural DSL! Share your feedback: Typeform link.
- YAML-like Syntax: Define models intuitively without framework boilerplate.
- Shape Propagation: Catch dimension mismatches before runtime (a minimal illustration follows this feature list).
  - ✅ Interactive shape flow diagrams included.
- Multi-Framework HPO: Optimize hyperparameters for both PyTorch and TensorFlow with a single DSL config (#434).

- Enhanced HPO Support: Added HPO tracking for Conv2D kernel_size and improved ExponentialDecay parameter handling (v0.2.7).
- Automated Issue Management: Improved GitHub workflows for automatically creating and closing issues based on test results (v0.2.8).
- Aquarium IDE: Specialized IDE for neural network development with visual design and real-time shape propagation (v0.2.9).
- Enhanced Dashboard UI: Improved NeuralDbg dashboard with a more aesthetic dark theme design (#452).
- Blog Support: Infrastructure for blog content with markdown support and Dev.to integration (#445).
- NeuralPaper.ai: Interactive model visualization platform with annotation capabilities (in development).
- Multi-Backend Export: Generate code for TensorFlow, PyTorch, or ONNX.
- Training Orchestration: Configure optimizers, schedulers, and metrics in one place.
- Visual Debugging: Render interactive 3D architecture diagrams.
- Extensible: Add custom layers/losses via Python plugins.
- NeuralDbg: Built-in Neural Network Debugger and Visualizer.
- No-Code Interface: Quick Prototyping for researchers and an educational, accessible tool for beginners.
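To make the shape-propagation feature above concrete, here is a minimal, framework-free sketch of the kind of arithmetic such a validator performs; it is illustrative only, not Neural's actual implementation:

# Illustrative shape propagation: compute each layer's output shape before any
# framework code runs, so mismatches surface as clear errors instead of stack traces.
def conv2d(shape, filters, kernel):
    h, w, _ = shape
    kh, kw = kernel
    h, w = h - kh + 1, w - kw + 1          # "valid" padding
    if h <= 0 or w <= 0:
        raise ValueError(f"Conv2D kernel {kernel} does not fit input {shape}")
    return (h, w, filters)

def max_pool2d(shape, pool):
    h, w, c = shape
    return (h // pool[0], w // pool[1], c)

def flatten(shape):
    n = 1
    for d in shape:
        n *= d
    return (n,)

shape = (28, 28, 1)                         # input: (28, 28, 1)
shape = conv2d(shape, 32, (3, 3))           # -> (26, 26, 32)
shape = max_pool2d(shape, (2, 2))           # -> (13, 13, 32)
shape = flatten(shape)                      # -> (5408,)

Neural performs this propagation automatically from the DSL definition and renders the result as interactive shape-flow diagrams.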
NeuralDbg provides real-time execution tracing, profiling, and debugging, allowing you to visualize and analyze deep learning models in action. Now with an enhanced dark theme UI for better visualization (#452).
✅ Real-Time Execution Monitoring – Track activations, gradients, memory usage, and FLOPs.

✅ Shape Propagation Debugging – Visualize tensor transformations at each layer.
✅ Gradient Flow Analysis – Detect vanishing & exploding gradients.
✅ Dead Neuron Detection – Identify inactive neurons in deep networks.
✅ Anomaly Detection – Spot NaNs, extreme activations, and weight explosions.
✅ Step Debugging Mode – Pause execution and inspect tensors manually.
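The checks below are a rough, NumPy-only illustration of the signals NeuralDbg inspects (gradient norms, inactive ReLU units, non-finite values); they are not NeuralDbg's implementation, and the example tensors are hypothetical:

import numpy as np

def gradient_health(grad, vanish_thresh=1e-7, explode_thresh=1e3):
    # Classify a layer's gradient by its norm: vanishing, exploding/NaN, or ok.
    norm = float(np.linalg.norm(grad))
    if not np.isfinite(norm) or norm > explode_thresh:
        return "exploding/NaN"
    if norm < vanish_thresh:
        return "vanishing"
    return "ok"

def dead_neuron_ratio(activations):
    # Fraction of units that never activate across the batch (common with ReLU).
    return float((activations.max(axis=0) == 0).mean())

grads = np.random.randn(128, 10) * 1e-9          # tiny gradients for one layer
acts = np.maximum(np.random.randn(64, 128), 0)   # ReLU activations, batch of 64
print(gradient_health(grads))                    # -> "vanishing"
print(dead_neuron_ratio(acts))                   # fraction of inactive units

NeuralDbg surfaces these signals live in its dashboard rather than requiring manual instrumentation.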
Prerequisites: Python 3.8+, pip
# Install the latest stable version
pip install neural-dsl
# Or specify a version
pip install neural-dsl==0.2.9 # Latest version with Aquarium IDE integration

# Clone the repository
git clone https://github.com/Lemniscate-world/Neural.git
cd Neural
# Create a virtual environment (recommended)
python -m venv venv
source venv/bin/activate # Linux/macOS
venv\Scripts\activate # Windows
# Install dependencies
pip install -r requirements.txt

Create a file named mnist.neural with your model definition:
network MNISTClassifier {
  input: (28, 28, 1)  # Channels-last format
  layers:
    Conv2D(filters=32, kernel_size=(3,3), activation="relu")
    MaxPooling2D(pool_size=(2,2))
    Flatten()
    Dense(units=128, activation="relu")
    Dropout(rate=0.5)
    Output(units=10, activation="softmax")
  loss: "sparse_categorical_crossentropy"
  optimizer: Adam(learning_rate=0.001)
  metrics: ["accuracy"]
  train {
    epochs: 15
    batch_size: 64
    validation_split: 0.2
  }
}

# Generate and run TensorFlow code
neural run mnist.neural --backend tensorflow --output mnist_tf.py
# Or generate and run PyTorch code
neural run mnist.neural --backend pytorch --output mnist_torch.py

neural visualize mnist.neural --format png

This will create visualization files for inspecting the network structure and shape propagation:
- architecture.png: Visual representation of your model
- shape_propagation.html: Interactive tensor shape flow diagram
- tensor_flow.html: Detailed tensor transformations
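For orientation, the TensorFlow script generated from the MNISTClassifier definition above is roughly equivalent to the following hand-written Keras model; this is an approximation for illustration, not the exact generated output:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(...) mirrors the train block: epochs=15, batch_size=64, validation_split=0.2

The PyTorch backend emits an analogous torch.nn model; see the layer mapping table in the Why Neural? section below.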
neural debug mnist.neural

Open your browser to http://localhost:8050 to monitor execution traces, gradients, and anomalies interactively.
neural --no_code

Open your browser to http://localhost:8051 to build and compile models via a graphical interface.
neural debug mnist.neural

Features:
✅ Layer-wise execution trace
✅ Memory & FLOP profiling
✅ Live performance monitoring
neural debug --gradients mnist.neural

Detect vanishing/exploding gradients with interactive charts.
neural debug --dead-neurons mnist.neural

🛠 Find layers with inactive neurons (common in ReLU networks).
neural debug --anomalies mnist.neural

Flag NaNs, weight explosions, and extreme activations.
neural debug --step mnist.neural

🔍 Pause execution at any layer and inspect tensors manually.
Neural now supports running in cloud environments like Kaggle, Google Colab, and AWS SageMaker, with both direct execution in the cloud and remote control from your local terminal.
In your Kaggle notebook or Google Colab:
# Install Neural DSL
!pip install neural-dsl==0.2.9
# Import the cloud module
from neural.cloud.cloud_execution import CloudExecutor
# Initialize the cloud executor
executor = CloudExecutor()
print(f"Detected environment: {executor.environment}")
print(f"GPU available: {executor.is_gpu_available}")
# Define a model
dsl_code = """
network MnistCNN {
  input: (28, 28, 1)
  layers:
    Conv2D(32, (3, 3), "relu")
    MaxPooling2D((2, 2))
    Flatten()
    Dense(128, "relu")
    Dense(10, "softmax")
  loss: "categorical_crossentropy"
  optimizer: Adam(learning_rate=0.001)
}
"""
# Compile and run the model
model_path = executor.compile_model(dsl_code, backend='tensorflow')
results = executor.run_model(model_path, dataset='MNIST')
# Start the NeuralDbg dashboard with ngrok tunnel
dashboard_info = executor.start_debug_dashboard(dsl_code, setup_tunnel=True)
print(f"Dashboard URL: {dashboard_info['tunnel_url']}")

# In your SageMaker notebook
from neural.cloud.cloud_execution import CloudExecutor
# Initialize the cloud executor
executor = CloudExecutor() # Automatically detects SageMaker environment
# Define and run your model as above

Control cloud environments directly from your local terminal:
# Connect to a cloud platform
neural cloud connect kaggle
# Start an interactive shell connected to Kaggle
neural cloud connect kaggle --interactive
# Execute a Neural DSL file on Kaggle
neural cloud execute kaggle my_model.neural
# Run Neural in cloud mode with remote access
neural cloud run --setup-tunnel

Ready-to-use notebooks are available for Kaggle, Google Colab, and AWS SageMaker.
| Feature | Neural | Raw TensorFlow/PyTorch |
|---|---|---|
| Shape Validation | ✅ Auto | ❌ Manual |
| Framework Switching | 1-line flag | Days of rewriting |
| Architecture Diagrams | Built-in | Third-party tools |
| Training Config | Unified | Fragmented configs |
| Neural DSL | TensorFlow Output | PyTorch Output |
|---|---|---|
| `Conv2D(filters=32)` | `tf.keras.layers.Conv2D(32)` | `nn.Conv2d(in_channels, 32)` |
| `Dense(units=128)` | `tf.keras.layers.Dense(128)` | `nn.Linear(in_features, 128)` |
| Task | Neural | Baseline (TF/PyTorch) |
|---|---|---|
| MNIST Training | 1.2x ⚡ | 1.0x |
| Debugging Setup | 5min 🕒 | 2hr+ |
Explore advanced features:
- Custom Layers Guide (Coming soon)
- ONNX Export Tutorial (Coming soon)
- Training Configuration (Coming soon)
- NeuralDbg Debugging Features (Coming soon)
- HPO Configuration Guide (Coming soon)
Explore common use cases in examples/ with step-by-step guides in docs/examples/:
Note: You may need to zoom in to see details in these architecture diagrams.
NeuralPaper.ai is an interactive platform for visualizing, annotating, and sharing neural network models. It provides a web-based interface for exploring model architectures, understanding tensor flows, and collaborating on model development.
- Interactive Model Visualization: Explore model architectures with interactive diagrams
- Code Annotation: Add explanations and insights to specific parts of your model code
- Collaborative Sharing: Share annotated models with colleagues and the community
- Integration with Neural DSL: Seamless workflow from model definition to visualization
# Start the NeuralPaper.ai backend
cd neuralpaper
./start.sh

Then open your browser to http://localhost:3000 to access the NeuralPaper.ai interface.
The Neural repository is organized into the following main directories:
- docs/: Documentation files
- examples/: Example Neural DSL files
- neural/: Main source code
  - neural/cli/: Command-line interface
  - neural/parser/: Neural DSL parser
  - neural/shape_propagation/: Shape propagation and validation
  - neural/code_generation/: Code generation for different backends
  - neural/visualization/: Visualization tools
  - neural/dashboard/: NeuralDbg dashboard
  - neural/hpo/: Hyperparameter optimization
  - neural/cloud/: Cloud integration (Kaggle, Colab, SageMaker)
- neuralpaper/: NeuralPaper.ai implementation
- Aquarium/: Specialized IDE for neural network development
- profiler/: Performance profiling tools
- tests/: Test suite
For a detailed explanation of the repository structure, see REPOSITORY_STRUCTURE.md.
Each directory contains its own README with detailed documentation:
- neural/cli: Command-line interface
- neural/parser: Neural DSL parser
- neural/code_generation: Code generation
- neural/shape_propagation: Shape propagation
- neural/visualization: Visualization tools
- neural/dashboard: NeuralDbg dashboard
- neural/hpo: Hyperparameter optimization
- neural/cloud: Cloud integration
- neuralpaper: NeuralPaper.ai implementation
- Aquarium: Specialized IDE for neural network development
- profiler: Performance profiling tools
- docs: Documentation
- examples: Example models
- tests: Test suite
To get deterministic results across Python, NumPy, PyTorch, and TensorFlow, use the built-in seeding utility:
from neural.utils.seed import set_seed
set_seed(42)

This sets Python's random, NumPy, PyTorch (including CUDA if present), and TensorFlow seeds.
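A quick reproducibility check using only the standard library and NumPy (PyTorch and TensorFlow, when installed, are reseeded the same way); this assumes set_seed resets the global generators as described above:

import random
import numpy as np
from neural.utils.seed import set_seed

set_seed(42)
first = (random.random(), np.random.rand(3))

set_seed(42)
second = (random.random(), np.random.rand(3))

# Re-seeding reproduces the same draws across libraries.
assert first[0] == second[0]
assert np.allclose(first[1], second[1])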
These packages enable optional features and backends. Install only what you need:
- torch: PyTorch backend for code generation and execution
- tensorflow: TensorFlow backend for code generation and execution
- onnx: Export or interop with ONNX
- jax: Experimental backend and numerical utilities
- optuna: Hyperparameter optimization
- dash, flask: Dashboard and API/visualization components
- scikit-learn: Metrics, datasets, and HPO utilities
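If you are unsure which of these extras are already present in your environment, a quick check like the one below (plain Python, not part of the Neural CLI) can tell you before picking a backend:

import importlib.util

# Note: scikit-learn imports as "sklearn".
optional = ["torch", "tensorflow", "onnx", "jax", "optuna", "dash", "flask", "sklearn"]
for name in optional:
    status = "installed" if importlib.util.find_spec(name) is not None else "missing"
    print(f"{name}: {status}")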
Examples:
# PyTorch backend
pip install torch
# TensorFlow backend
pip install tensorflow
# ONNX export support
pip install onnx
# HPO and metrics utilities
pip install optuna scikit-learn
# Dashboard/visualization
pip install dash flask

We welcome contributions! See our:
To set up a development environment:
git clone https://github.com/Lemniscate-world/Neural.git
cd Neural
pip install -r requirements-dev.txt # Includes linter, formatter, etc.
pre-commit install # Auto-format code on commit

If you find Neural useful, please consider supporting the project:
- ⭐ Star the repository: Help us reach more developers by starring the project on GitHub
- 🔄 Share with others: Spread the word on social media, blogs, or developer communities
- 🐛 Report issues: Help us improve by reporting bugs or suggesting features
- 🤝 Contribute: Submit pull requests to help us enhance Neural (see Contributing)
This repository has been cleaned and optimized for better performance. Large files have been removed from the Git history to ensure a smoother experience when cloning or working with the codebase.
Join our growing community of developers and researchers:
- Discord Server: Chat with developers, get help, and share your projects
- Twitter @NLang4438: Follow for updates, announcements, and community highlights
- GitHub Discussions: Participate in discussions about features, use cases, and best practices
Thank you for your interest in improving Neural. This section outlines a minimal, fast local workflow to lint, type‑check, test, and audit changes before opening a PR.
- Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate # Windows
source .venv/bin/activate # Linux/macOS
- Install the project (editable) and dev tools used by CI
pip install -e .
pip install ruff mypy pip-audit pytest
- Lint (Ruff)
python -m ruff check .
- Type check (mypy)
Fast, scoped type check for currently‑hardened modules:
python -m mypy neural/code_generation neural/utils
Full project type check (may show many findings; tighten gradually):
python -m mypy .
- Tests (targeted and full)
Run fast, targeted tests:
python -m pytest -q tests/test_seed.py tests/code_generator/test_policy_and_parity.py tests/code_generator/test_policy_helpers.py -rA
Run full test suite (may require optional deps such as torch/tensorflow/onnx):
python -m pytest -q -rA
- Supply‑chain audit
python -m pip_audit -l --progress-spinner off
- Keep PRs small and focused; include context in the description.
- Run lint, type check (scoped or full), tests, and pip‑audit locally before pushing.
- Do not commit secrets/keys. Use environment variables; keep .env or credentials out of Git.
- Follow the shape/policy rules in codegen; add or update tests for any policy changes.
Install only what you need for the tests you are running (examples):
# PyTorch backend
your-shell> pip install torch
# TensorFlow backend
your-shell> pip install tensorflow
# ONNX export
your-shell> pip install onnx
If you have questions or want guidance on tightening typing or adding new policy checks, open a discussion or draft PR.



