Description
Is your feature request related to a problem? Please describe.
Yes. Running untrusted, LLM-generated code directly on a host machine is not safe. While using standard containers (like Docker) provides some isolation, they share the host's kernel, which can be a vulnerability. For production-grade security, the best practice is to use a managed platform like Google Kubernetes Engine (GKE) with an additional layer of sandboxing like gVisor. This provides strong, kernel-level isolation for each workload. The problem is that ADK currently lacks a built-in executor that seamlessly integrates with this secure GKE and gVisor stack. This leaves a gap for developers who need a scalable and highly secure way to run agent-generated code without building a complex custom solution from scratch.
Describe the solution you'd like
We would like to add comprehensive documentation for the new GkeCodeExecutor. This documentation should guide users on how to set up and use this feature for executing Python code securely in a GKE Sandbox environment.
google/adk-python#1629 contains the implementation.
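For illustration, the documentation could include a short usage example along these lines. This is a minimal sketch, not the confirmed API: the import path google.adk.code_executors.GkeCodeExecutor and the namespace/image constructor arguments are assumptions based on how the existing executors are exposed, and the actual interface is defined in google/adk-python#1629.

```python
# Hypothetical usage sketch -- the import path and constructor arguments are
# assumptions, not the confirmed API from google/adk-python#1629.
from google.adk.agents import LlmAgent
from google.adk.code_executors import GkeCodeExecutor  # assumed location

# Configure the executor to launch generated code in gVisor-sandboxed pods
# inside the cluster (parameter names are illustrative).
executor = GkeCodeExecutor(
    namespace="agent-sandbox",   # assumed: Kubernetes namespace for execution pods
    image="python:3.11-slim",    # assumed: container image used to run the code
)

agent = LlmAgent(
    name="coding_agent",
    model="gemini-2.0-flash",
    instruction="Write and execute Python code to answer the user's question.",
    code_executor=executor,
)
```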
Describe alternatives you've considered
Current code executors are insufficient for GKE environments:
- BuiltInCodeExecutor: Securely sandboxed with gVisor but is restricted to Gemini models.
- ContainerCodeExecutor: Incompatible with GKE because it requires direct Docker daemon access, which is unavailable to pods.
To address this gap, the new GkeCodeExecutor uses the Kubernetes API to dynamically run code in gVisor-sandboxed pods. This provides the secure, isolated, and controlled execution environment that GKE users require.
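The key mechanism here is standard Kubernetes: a pod whose spec requests the gvisor RuntimeClass that GKE Sandbox provides. The sketch below illustrates that mechanism with the official kubernetes Python client; it is not the GkeCodeExecutor implementation itself, and the namespace, image, and names are placeholders.

```python
# Illustration of the underlying mechanism only -- not the GkeCodeExecutor
# implementation. Namespace, image, and names are placeholders.
from kubernetes import client, config

config.load_incluster_config()  # assumes this runs inside the GKE cluster
core = client.CoreV1Api()

code = "print('hello from a gVisor-sandboxed pod')"

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(generate_name="adk-code-exec-"),
    spec=client.V1PodSpec(
        runtime_class_name="gvisor",  # GKE Sandbox: isolate the workload with gVisor
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="runner",
                image="python:3.11-slim",
                command=["python", "-c", code],
            )
        ],
    ),
)

created = core.create_namespaced_pod(namespace="agent-sandbox", body=pod)
print(f"Launched sandboxed pod {created.metadata.name}")
# An executor would then wait for completion and collect stdout, e.g. via
# core.read_namespaced_pod_log(created.metadata.name, "agent-sandbox").
```

Note that the gvisor RuntimeClass is only available on node pools with GKE Sandbox enabled, which the documentation should call out as a prerequisite.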
Additional context
google/adk-python#1629