This repository contains a Helm chart for deploying a Squid HTTP proxy server in Kubernetes. The chart is designed to be self-contained and deploys into a dedicated proxy namespace.
Increase the inotify resource limits to avoid Kind failures caused by "too many open files" errors.
To increase the limits temporarily, run the following commands:
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
When using Dev Containers, the automation reads the current values and, if it succeeds, checks them against the limits above; container initialization fails if either value is too low.
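To make the limits persist across reboots, you can also write them to a sysctl drop-in file (the file name below is just an example):
# Persist the inotify limits and reload sysctl configuration
echo 'fs.inotify.max_user_watches=524288' | sudo tee /etc/sysctl.d/99-inotify.conf
echo 'fs.inotify.max_user_instances=512' | sudo tee -a /etc/sysctl.d/99-inotify.conf
sudo sysctl --system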
- gcc
- Go 1.21 or later
- Docker or Podman
- kind (Kubernetes in Docker)
- kubectl
- Helm v3.x
- Mage (for automation - install with go install github.com/magefile/mage@latest)
- mirrord (for local development with cluster network access - optional but recommended)
Install the following debug symbols to enable proper debugging of Go applications:
# Enable debug repositories and install debug symbols
sudo dnf install -y dnf-plugins-core
sudo dnf --enablerepo=fedora-debuginfo,updates-debuginfo install -y \
glibc-debuginfo \
gcc-debuginfo \
libgcc-debuginfo
This repository includes a dev container configuration that provides a consistent development environment with all prerequisites pre-installed. To use it, you need the following on your local machine:
- Podman: The dev container is based on Podman. Ensure it is installed on your system.
- Docker alias for Podman: The VS Code Dev Containers extension relies on the "docker" command, so the podman command needs to be aliased to "docker". You can either:
  - Fedora Workstation: Install the podman-docker package: sudo dnf install podman-docker
  - Fedora Silverblue/Kinoite: Install via rpm-ostree: sudo rpm-ostree install podman-docker (requires reboot)
  - Manual alias (any system): sudo ln -s /usr/bin/podman /usr/local/bin/docker
- VS Code with Dev Containers extension: For the best experience, use Visual Studio Code with the Dev Containers extension.
- Running Podman Socket: The dev container connects to your local Podman socket. Make sure the user service is running before you start the container:
  systemctl --user start podman.socket
  To enable the socket permanently so it starts on boot, you can run:
  systemctl --user enable --now podman.socket
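Before launching the container, you can confirm the socket is actually available (standard rootless Podman socket path assumed):
systemctl --user is-active podman.socket
ls "$XDG_RUNTIME_DIR/podman/podman.sock"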
Once these prerequisites are met, you can open this folder in VS Code and use the "Reopen in Container" command to launch the environment with all tools pre-configured.
This repository includes complete automation for setting up your local development and testing environment. Use this approach for the easiest setup:
# Set up the complete local dev/test environment automatically
mage all
This single command will:
- Create the 'caching' kind cluster (or connect to existing)
- Build the consolidated squid container image (with integrated squid-exporter and per-site exporter)
- Load the image into the cluster
- Deploy the Helm chart with all dependencies
- Verify the deployment status
You can also run individual components:
# Cluster management
mage kind:up # Create/connect to cluster
mage kind:status # Check cluster status
mage kind:down # Remove cluster
mage kind:upClean # Force recreate cluster
# Image management
mage build:squid # Build consolidated squid image (includes integrated squid-exporter)
mage build:loadSquid # Load squid image into cluster
# Deployment management
mage squidHelm:up # Deploy/upgrade helm chart
mage squidHelm:status # Check deployment status
mage squidHelm:down # Remove deployment
mage squidHelm:upClean # Force redeploy
# Testing
mage test:cluster # Run tests with mirrord cluster networking
# Complete cleanup
mage clean # Remove everything (cluster, images, etc.)
# See all available automation commands
mage -l
If you prefer manual control or want to understand the individual steps:
# Create a new kind cluster (or use: mage kind:up)
kind create cluster --name caching
# Verify the cluster is running
kubectl cluster-info --context kind-caching
# Build the container image (or use: mage build:squid)
podman build -t localhost/konflux-ci/squid:latest -f Containerfile .
# Load the image into kind (or use: mage build:loadSquid)
kind load image-archive --name caching <(podman save localhost/konflux-ci/squid:latest)
# Build the container image (or use: mage build:squid)
podman build -t localhost/konflux-ci/squid-test:latest -f test.Containerfile .
# Load the image into kind (or use: mage build:loadSquid)
kind load image-archive --name caching <(podman save localhost/konflux-ci/squid-test:latest)
# Install the Helm chart (or use: mage squidHelm:up)
helm install squid ./squid --set environment=dev
# Verify deployment (or use: mage squidHelm:status)
kubectl get pods -n proxy
kubectl get svc -n proxy
By default:
- The cert-manager and trust-manager dependencies will be deployed into the cert-manager namespace. To disable these deployments, pass --set installCertManagerComponents=false
- The resources (issuer, certificate, bundle) are created by default. To disable resource creation, pass --set selfsigned-bundle.enabled=false
# Install squid + cert-manager + trust-manager + resources
helm install squid ./squid
# Install squid without cert-manager, without trust-manager and without creating resources
helm install squid ./squid --set installCertManagerComponents=false --set selfsigned-bundle.enabled=false
# Install squid + cert-manager + trust-manager without creating resources
helm install squid ./squid --set selfsigned-issuer.enabled=false
# Install squid without cert-manager, without trust-manager + create resources
helm install squid ./squid --set installCertManagerComponents=false
The Squid proxy is accessible at:
- Same namespace: http://squid:3128
- Cross-namespace: http://squid.proxy.svc.cluster.local:3128
# Create a test pod for testing the proxy (HTTP)
kubectl run test-client --image=curlimages/curl:latest --rm -it -- \
sh -c 'curl --proxy http://squid.proxy.svc.cluster.local:3128 http://httpbin.org/ip'
# Create a test pod for testing SSL-Bump (HTTPS)
kubectl run test-client-ssl --image=curlimages/curl:latest --rm -it -- \
sh -c 'curl -k --proxy http://squid.proxy.svc.cluster.local:3128 https://httpbin.org/ip'
# Forward the proxy port to your local machine
export POD_NAME=$(kubectl get pods --namespace proxy -l "app.kubernetes.io/name=squid,app.kubernetes.io/instance=squid" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace proxy port-forward $POD_NAME 3128:3128
# In another terminal, test the proxy (HTTP)
curl --proxy http://127.0.0.1:3128 http://httpbin.org/ip
This repository includes comprehensive end-to-end tests to validate the Squid proxy deployment and HTTP caching functionality. The test suite uses Ginkgo for behavior-driven testing and mirrord for local development with cluster network access.
# Complete test setup and execution (recommended)
mage all
In this mode, tests are invoked by Helm via helm test.
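If you want to re-run the chart's built-in tests on demand (assuming the release is named squid and was installed into your current Helm namespace, as in the commands above):
# Re-run the Helm tests for the release
helm test squid
# Inspect the test pod output afterwards
kubectl logs -n proxy -l app.kubernetes.io/component=test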
For local development and debugging, use mirrord to run tests with cluster network access:
# Setup test environment
mage squidHelm:up
# Run tests locally with cluster network access
mage test:cluster
This uses mirrord to "steal" network connections from a target pod and runs the tests locally (outside of the Kind cluster) with Ginkgo, which allows local debugging without rebuilding test containers.
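For reference, a manual run might look like the sketch below; the flags and test path are assumptions, and mage test:cluster remains the supported entry point (with .mirrord/mirrord.json as the actual configuration):
# Run the Ginkgo suite locally while borrowing the target pod's network (illustrative only)
mirrord exec --config-file .mirrord/mirrord.json -- ginkgo -v ./tests/...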
The repository includes complete VS Code configuration for Ginkgo testing:
Use VS Code's debug panel to run tests with breakpoints:
- Debug Ginkgo E2E Tests: Run all E2E tests with debugging
- Debug Ginkgo Tests (Current File): Debug tests in the currently open file
- Run Ginkgo Tests with Coverage: Generate test coverage reports
Available VS Code tasks (Ctrl+Shift+P → "Tasks: Run Task"):
- Setup Test Environment: Runs mage all to prepare everything
- Run Ginkgo E2E Tests: Execute the full test suite
- Clean Test Environment: Clean up all resources
- Run Focused Ginkgo Tests: Run specific test patterns
When adding new tests:
- Use test helpers: Leverage tests/testhelpers/ for common operations
- Follow Ginkgo patterns: Use Describe, Context, It for clear test structure
- Add cache-busting: Use unique URLs to prevent test interference (see the sketch after this list)
- Verify cleanup: Ensure tests clean up resources properly
- Update VS Code config: Add debug configurations for new test files
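A minimal cache-busting sketch using curl through the proxy; the URL and query parameter here are purely illustrative:
# Append a unique query string so each request bypasses previously cached responses
curl --proxy http://squid.proxy.svc.cluster.local:3128 \
  "http://httpbin.org/cache/60?bust=$(date +%s)-$RANDOM"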
This chart includes comprehensive Prometheus monitoring capabilities through:
- Integrated upstream squid-exporter for standard Squid metrics
- A per-site exporter that parses Squid access logs (via STDOUT piping) to emit per-host request metrics
Both exporters are bundled into the single Squid container image. The monitoring system provides detailed metrics about Squid's operational status, including:
- Liveness: Squid service information and connection status
- Bandwidth Usage: Client HTTP and Server HTTP traffic metrics
- Hit/Miss Rates: Cache performance metrics (general and detailed)
- Storage Utilization: Cache information and memory usage
- Service Times: Response times for different operations
- Connection Information: Active connections and request details
Monitoring is enabled by default. To customize or disable it:
# In values.yaml or via --set flags
squidExporter:
  enabled: true                # Set to false to disable
  port: 9301
  metricsPath: "/metrics"
  extractServiceTimes: "true"  # Enables detailed service time metrics
  resources:
    requests:
      cpu: 10m
      memory: 16Mi
    limits:
      cpu: 100m
      memory: 64Mi
Per-site exporter (enabled by default):
# In values.yaml or via --set flags
perSiteExporter:
  enabled: true
  port: 9302
  metricsPath: "/metrics"
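For example, either exporter can be toggled at install time with --set flags (value names as documented above; a sketch, not a required configuration):
# Keep the standard exporter, disable the per-site exporter
helm upgrade --install squid ./squid \
  --set squidExporter.enabled=true \
  --set perSiteExporter.enabled=false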
If you're using Prometheus Operator, the chart automatically creates a ServiceMonitor:
prometheus:
  serviceMonitor:
    enabled: true
    interval: 30s
    scrapeTimeout: 10s
    namespace: ""  # Leave empty to use the same namespace as the app
When enabled, the ServiceMonitor exposes endpoints for both exporters: 9301 (standard) and 9302 (per-site).
ServiceMonitor CRD Included: The chart includes the ServiceMonitor CRD in the crds/ directory, so it will be automatically installed by Helm when the chart is deployed. This follows the same pattern as the cert-manager and trust-manager CRDs.
If you don't want ServiceMonitor functionality, you can disable it:
helm install squid ./squid --set prometheus.serviceMonitor.enabled=false
For non-Prometheus Operator setups, disable ServiceMonitor and use manual Prometheus configuration:
For manual Prometheus setup (if you have an existing Prometheus instance), add this scrape configuration to your external Prometheus configuration:
scrape_configs:
  - job_name: 'squid-proxy'
    static_configs:
      - targets: ['squid.proxy.svc.cluster.local:9301']
    scrape_interval: 30s
    metrics_path: '/metrics'
Complete example: See docs/prometheus-config-example.yaml in this repository for a full Prometheus configuration file.
The monitoring system provides numerous metrics including:
- squid_client_http_requests_total: Total client HTTP requests
- squid_client_http_hits_total: Total client HTTP hits
- squid_client_http_errors_total: Total client HTTP errors
- squid_server_http_requests_total: Total server HTTP requests
- squid_cache_memory_bytes: Cache memory usage
- squid_cache_disk_bytes: Cache disk usage
- squid_service_times_seconds: Service times for different operations
- squid_up: Squid availability status
- squid_site_requests_total{host="<hostname>"}: Total requests per origin host
- squid_site_hits_total{host="<hostname>"}: Cache hits per host
- squid_site_misses_total{host="<hostname>"}: Cache misses per host
- squid_site_bytes_total{host="<hostname>"}: Bytes transferred per host
- squid_site_hit_ratio{host="<hostname>"}: Hit ratio gauge per host
- squid_site_response_time_seconds{host="<hostname>",le="..."}: Response time histogram per host
# Forward the standard squid-exporter metrics port
kubectl port-forward -n proxy svc/squid 9301:9301
# View standard metrics in your browser or with curl
curl http://localhost:9301/metrics
Per-site exporter metrics:
# Forward the per-site exporter metrics port
kubectl port-forward -n proxy svc/squid 9302:9302
# View per-site metrics
curl http://localhost:9302/metrics
The metrics are exposed on the service:
# Standard squid-exporter metrics (from within the cluster)
curl http://squid.proxy.svc.cluster.local:9301/metrics
Per-site exporter metrics (from within the cluster):
curl http://squid.proxy.svc.cluster.local:9302/metrics
- Check if the squid container is running:
  kubectl get pods -n proxy
  kubectl logs -n proxy deployment/squid -c squid-exporter
- Verify cache manager access:
  # Test from within the pod
  kubectl exec -n proxy deployment/squid -c squid-exporter -- \
    curl -s http://localhost:3128/squid-internal-mgr/info
- Check ServiceMonitor (if using Prometheus Operator):
  kubectl get servicemonitor -n proxy
  kubectl describe servicemonitor -n proxy squid
If you see "access denied" errors, ensure that the squid configuration allows localhost manager access. The default configuration should work, but if you've modified squid.conf, make sure these lines are present:
http_access allow localhost manager
http_access deny manager
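To confirm the running configuration still contains these rules, you can grep it inside the pod (container name and config path as used elsewhere in this README):
kubectl exec -n proxy deployment/squid -c squid -- grep -n "manager" /etc/squid/squid.conf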
Symptom: When trying to create a kind cluster, you see:
ERROR: failed to create cluster: node(s) already exist for a cluster with the name "kind"
Solution: Either use the existing cluster or delete it first:
# Option 1: Use existing cluster (recommended for dev container users)
kind export kubeconfig --name caching
kubectl cluster-info --context kind-caching
# Option 2: Delete and recreate
kind delete cluster --name caching
kind create cluster --name caching
Symptom: kubectl commands fail with connection errors when using the dev container
Solution: Configure kubectl access to the existing cluster:
# Check current context
kubectl config current-context
# List available contexts
kubectl config get-contexts
# Export kubeconfig for existing cluster
kind export kubeconfig --name caching
# Switch to the kind context if needed
kubectl config use-context kind-caching
# Test connectivity
kubectl get pods --all-namespaces
Symptom: Pod shows ImagePullBackOff or ErrImagePull
Solution: Ensure the image is loaded into kind:
# Check if image is loaded
docker exec -it caching-control-plane crictl images | grep squid
# If missing, reload the image
kind load image-archive --name caching <(podman save localhost/konflux-ci/squid:latest)
Symptom: Pod logs show Permission denied when accessing /etc/squid/squid.conf
Solution: This is usually resolved by the correct security context in our chart. Verify:
kubectl describe pod -n proxy $(kubectl get pods -n proxy -o name | head -1)
Look for:
runAsUser: 1001
runAsGroup: 0
fsGroup: 0
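You can also read the effective pod-level security context directly (label selector as used elsewhere in this README):
kubectl get pod -n proxy -l app.kubernetes.io/name=squid \
  -o jsonpath='{.items[0].spec.securityContext}'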
Symptom: Helm install fails with namespace ownership errors
Solution: Clean up and reinstall:
helm uninstall squid 2>/dev/null || true
kubectl delete namespace proxy 2>/dev/null || true
# Wait a few seconds for cleanup
sleep 5
helm install squid ./squid
Symptom: Pods cannot connect to the proxy
Solution: Check if the pod network CIDR is covered by Squid's ACLs:
# Check cluster CIDR
kubectl cluster-info dump | grep -i cidr
# Verify it's covered by the localnet ACLs in squid.conf
# Default ACLs cover: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
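To see the ACLs that are actually deployed, grep the configuration inside the running pod (container name and config path as used elsewhere in this README):
kubectl exec -n proxy deployment/squid -c squid -- \
  grep -E "^acl localnet|^http_access" /etc/squid/squid.conf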
Symptom: Tests fail with "POD_IP environment variable not set"
Solution: Ensure downward API is configured (automatically handled by Helm chart):
kubectl describe pod -n proxy <test-pod-name>
Symptom: mage test:cluster fails with mirrord connection errors
Solution: Verify mirrord infrastructure is deployed and working:
# Verify mirrord target pod is ready
kubectl get pods -n proxy -l app.kubernetes.io/component=mirrord-target
# Check mirrord target pod logs
kubectl logs -n proxy mirrord-test-target
# Verify mirrord configuration
cat .mirrord/mirrord.json
# Ensure mirrord is installed
which mirrord
Symptom: Tests fail unexpectedly or show connection issues
Solution: Debug test execution and cluster state:
# Run tests with verbose output
mage test:cluster # Check output for detailed error messages
# Verify cluster state before running tests
mage squidHelm:status
# Check if all pods are running
kubectl get pods -n proxy
# Verify proxy connectivity manually
kubectl run debug --image=curlimages/curl:latest --rm -it -- \
curl -v --proxy http://squid.proxy.svc.cluster.local:3128 http://httpbin.org/ip
# View test logs from helm tests
kubectl logs -n proxy -l app.kubernetes.io/component=test
Symptom: You're using the dev container and have an existing kind cluster with a different name
Solution: Either use the existing cluster or create the expected one:
# Option 1: Check what clusters exist
kind get clusters
# Option 2: Export kubeconfig for existing cluster (if using default 'kind' cluster)
kind export kubeconfig --name kind
kubectl config use-context kind-kind
# Option 3: Create the expected 'caching' cluster for consistency with automation
kind create cluster --name caching
# Check pod status
kubectl get pods -n proxy
# View pod logs
kubectl logs -n proxy deployment/squid
# Test connectivity from within cluster
kubectl run debug --image=curlimages/curl:latest --rm -it -- curl -v --proxy http://squid.proxy.svc.cluster.local:3128 http://httpbin.org/ip
# Check service endpoints
kubectl get endpoints -n proxy
# Verify test infrastructure (when running tests)
kubectl get pods -n proxy -l app.kubernetes.io/component=mirrord-target
kubectl get pods -n proxy -l app.kubernetes.io/component=test
# View test logs from helm tests
kubectl logs -n proxy -l app.kubernetes.io/component=test
The deployment includes TCP-based liveness and readiness probes on port 3128. You may see health check entries in the access logs as:
error:transaction-end-before-headers
This is normal - Kubernetes is performing TCP health checks without sending complete HTTP requests.
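To confirm such entries are only probe traffic, filter the pod output for that marker (assuming access log lines are written to the squid container's stdout, as used by the per-site exporter):
kubectl logs -n proxy deployment/squid -c squid | grep "error:transaction-end-before-headers"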
This section provides step-by-step testing instructions for the Squid proxy monitoring integration.
# Test basic cluster access
kubectl cluster-info
# Show current context
kubectl config current-context
# Verify nodes are ready
kubectl get nodes
Expected Result: Cluster information displays correctly with no errors.
# Check if ServiceMonitor CRD exists
kubectl get crd servicemonitors.monitoring.coreos.com
# If the CRD doesn't exist, install Prometheus Operator
if ! kubectl get crd servicemonitors.monitoring.coreos.com &> /dev/null; then
echo "Installing Prometheus Operator..."
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/main/bundle.yaml
# Wait for CRDs to be established
kubectl wait --for condition=established --timeout=60s crd/servicemonitors.monitoring.coreos.com
fi
Expected Result: ServiceMonitor CRD is available for monitoring integration.
# Remove any existing deployment
helm uninstall squid 2>/dev/null || true
# Remove namespace
kubectl delete namespace proxy 2>/dev/null || true
# Wait for cleanup
sleep 10
Expected Result: Clean environment with no conflicts.
# Deploy with monitoring enabled
helm install squid ./squid \
--set squidExporter.enabled=true \
--set prometheus.serviceMonitor.enabled=true \
--set cert-manager.enabled=false \
--wait --timeout=300s
Expected Result: Deployment succeeds with STATUS: deployed.
# Wait for pods to be ready
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=squid -n proxy --timeout=120s
# Check pod status
kubectl get pods -n proxy -o wide
Expected Result: Pod shows 2/2 Running (squid + squid-exporter containers).
# Check service
kubectl get svc -n proxy
# Check service details
kubectl describe svc squid -n proxy
Expected Result: Service exposes ports 3128 (proxy) and 9301 (metrics).
# Check ServiceMonitor
kubectl get servicemonitor -n proxy
# Check ServiceMonitor details
kubectl describe servicemonitor squid -n proxy
Expected Result: ServiceMonitor created with correct selector and endpoints.
# Get pod name
POD_NAME=$(kubectl get pods -n proxy -l app.kubernetes.io/name=squid -o jsonpath="{.items[0].metadata.name}")
echo "Testing pod: $POD_NAME"
# Check container names
kubectl get pod $POD_NAME -n proxy -o jsonpath='{.spec.containers[*].name}'
Expected Result: Shows both the squid and squid-exporter containers.
# Check squid container logs
kubectl logs -n proxy $POD_NAME -c squid --tail=10
# Check squid-exporter container logs
kubectl logs -n proxy $POD_NAME -c squid-exporter --tail=10
Expected Result:
- Squid logs show successful startup with no permission errors
- Squid-exporter logs show successful connection to cache manager
# Port forward to metrics endpoint
kubectl port-forward -n proxy $POD_NAME 9301:9301 &
PF_PID=$!
sleep 3
# Test metrics endpoint
curl -s http://localhost:9301/metrics | head -20
# Cleanup
kill $PF_PID 2>/dev/null || true
Expected Result: Prometheus metrics are displayed, starting with # HELP and # TYPE comments.
# Port forward via service
kubectl port-forward -n proxy svc/squid 9301:9301 &
PF_PID=$!
sleep 3
# Test metrics via service
curl -s http://localhost:9301/metrics | grep -c "^squid_"
# Cleanup
kill $PF_PID 2>/dev/null || true
Expected Result: Number of squid metrics found (typically 20+ metrics).
# Port forward for detailed metrics check
kubectl port-forward -n proxy svc/squid 9301:9301 &
PF_PID=$!
sleep 3
# Check for key metrics
curl -s http://localhost:9301/metrics | grep -E "squid_up|squid_client_http|squid_cache_"
# Cleanup
kill $PF_PID 2>/dev/null || true
Expected Result: Shows key metrics like:
- squid_up 1 (service health)
- squid_client_http_requests_total (request counters)
- squid_cache_memory_bytes (cache stats)
# Port forward to proxy port
kubectl port-forward -n proxy $POD_NAME 3128:3128 &
PF_PID=$!
sleep 3
# Test cache manager info
curl -s http://localhost:3128/squid-internal-mgr/info | head -10
# Cleanup
kill $PF_PID 2>/dev/null || true
Expected Result: Cache manager information displayed (version, uptime, etc.).
# Port forward to proxy port
kubectl port-forward -n proxy $POD_NAME 3128:3128 &
PF_PID=$!
sleep 3
# Test cache manager counters
curl -s http://localhost:3128/squid-internal-mgr/counters | head -10
# Cleanup
kill $PF_PID 2>/dev/null || true
Expected Result: Statistics counters displayed (requests, hits, misses, etc.).
# Port forward to proxy port
kubectl port-forward -n proxy $POD_NAME 3128:3128 &
PF_PID=$!
sleep 3
# Test proxy with external request
curl -s --proxy http://localhost:3128 http://httpbin.org/ip
# Cleanup
kill $PF_PID 2>/dev/null || true
Expected Result: JSON response showing IP address, indicating request went through proxy.
# Create test pod and test proxy
kubectl run test-client --image=curlimages/curl:latest --rm -it -- \
curl --proxy http://squid.proxy.svc.cluster.local:3128 --connect-timeout 10 http://httpbin.org/ip
Expected Result: JSON response showing external IP, confirming proxy works from within cluster.
# Generate some proxy traffic
kubectl port-forward -n proxy $POD_NAME 3128:3128 &
PF_PROXY_PID=$!
sleep 3
# Make a few requests
curl -s --proxy http://localhost:3128 http://httpbin.org/ip > /dev/null
curl -s --proxy http://localhost:3128 http://httpbin.org/headers > /dev/null
# Kill proxy port forward
kill $PF_PROXY_PID 2>/dev/null || true
sleep 2
# Check if metrics reflect the traffic
kubectl port-forward -n proxy $POD_NAME 9301:9301 &
PF_METRICS_PID=$!
sleep 3
# Check request counters
curl -s http://localhost:9301/metrics | grep "squid_client_http_requests_total"
# Cleanup
kill $PF_METRICS_PID 2>/dev/null || true
Expected Result: Request counters show non-zero values, indicating metrics are being updated.
After completing all tests, verify:
- Deployment: Chart installs successfully
- Containers: Both squid and squid-exporter containers running
- Service: Ports 3128 and 9301 accessible
- ServiceMonitor: Created (if Prometheus Operator available)
- Metrics: squid-exporter provides Prometheus metrics
- Cache Manager: Accessible via localhost manager interface
- Proxy: Functions correctly for external requests
- Integration: Metrics update after proxy usage
- Cleanup: All resources removed cleanly
For rapid testing during development:
# Quick deployment test
helm install squid ./squid --set cert-manager.enabled=false --wait
# Quick functionality test
kubectl port-forward -n proxy svc/squid 3128:3128 &
curl --proxy http://localhost:3128 http://httpbin.org/ip
pkill -f "kubectl port-forward.*3128"
# Quick metrics test
kubectl port-forward -n proxy svc/squid 9301:9301 &
curl -s http://localhost:9301/metrics | grep squid_up
pkill -f "kubectl port-forward.*9301"
# Quick cleanup
helm uninstall squid && kubectl delete namespace proxy
# Remove everything (cluster, deployments, images) in one command
mage clean
If you prefer manual control:
# Uninstall the Helm release
helm uninstall squid
# Remove the namespaces (optional, will be recreated on next install)
kubectl delete namespace proxy
# If you used the "squid" helm chart to install cert-manager
kubectl delete namespace cert-manager
# Delete the entire cluster (or use: mage kind:down)
kind delete cluster --name caching
# Remove the local container images
podman rmi localhost/konflux-ci/squid:latest
podman rmi localhost/konflux-ci/squid-test:latest
squid/
├── Chart.yaml # Chart metadata
├── values.yaml # Default configuration values
├── squid.conf # Squid configuration file
└── templates/
├── _helpers.tpl # Template helpers
├── configmap.yaml # ConfigMap for squid.conf
├── deployment.yaml # Squid deployment
├── namespace.yaml # Proxy namespace
├── service.yaml # Squid service
├── serviceaccount.yaml # Service account
├── servicemonitor.yaml # Prometheus ServiceMonitor
└── NOTES.txt # Post-install instructions
- The proxy runs as non-root user (UID 1001)
- Access is restricted to RFC 1918 private networks
- Unsafe ports and protocols are blocked
- No disk caching is enabled by default (memory-only)
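To confirm memory-only caching on a running pod, check that no cache_dir directive is active (container name and config path as used elsewhere in this README):
kubectl exec -n proxy deployment/squid -c squid -- grep "^cache_dir" /etc/squid/squid.conf \
  || echo "no active cache_dir directive (memory-only caching)"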
When modifying the chart:
- Test changes locally with kind
- Update this README if adding new features
- Verify the proxy works both within and across namespaces
- Check that cleanup procedures work correctly
The Mage-based automation system provides several key benefits:
- Dependency Management: Automatically handles prerequisites (cluster → image → deployment)
- Error Handling: Robust error handling with clear feedback
- Idempotent Operations: Can run commands multiple times safely
- Smart Logic: Detects existing resources and handles install vs upgrade scenarios
- Consistent Patterns: All resource types follow the same up/down/status/upClean pattern
- Single Command Setup: mage all sets up the complete environment
- Efficient Cleanup: mage clean removes all resources in the correct order
For the best experience, use the automation commands instead of manual setup!
This project is licensed under the terms specified in the LICENSE file.