**Describe the bug**

When using the `skip_summarization` flag inside an `after_tool_callback` of a nested `LlmAgent` (via `AgentTool`), the raw JSON tool response is not returned to the parent agent as expected. Instead, the response appears empty.
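Concretely, comparing the two payloads that appear in the logs below shows the mismatch: the sub-agent's tool returns a populated dict, but the parent receives an empty `result`:

```python
# What the sub-agent's tool actually returns
# (see "After_tool_callback tool_response" in the logs below):
sub_agent_tool_response = {"status": "success", "result": "5"}

# What the root agent receives from AgentTool instead
# (see the functionResponse event in the logs below):
root_agent_received = {"result": ""}
```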
**To Reproduce**

Steps to reproduce the behavior:
- Define a static tool that returns a simple dictionary.
- Set up a nested `LlmAgent` using `AgentTool`.
- In the sub-agent's `after_tool_callback`, set `tool_context.actions.skip_summarization = True`.
- Run a call through the root agent.
- Observe the response: the expected JSON is not propagated up.
"""Agent module demonstrating proper AgentTool usage in Google ADK."""
import logging
import warnings
import os
import litellm
from typing import Optional, Dict
from google.adk.agents import LlmAgent
from google.adk.tools.agent_tool import AgentTool
from google.adk.models.lite_llm import LiteLlm
warnings.filterwarnings("ignore", category=UserWarning, module=".*pydantic.*")
# Configure logging
logging.basicConfig(level=getattr(logging, os.getenv("LOG_LEVEL", "INFO")))
logger = logging.getLogger(__name__)
def static_response(number: Optional[str] = None) -> dict:
"""
Fixed response with input parameter.
Returns:
Dictionary containing the result
"""
return {
"status": "success",
"result": f"{number}",
}
def after_tool_callback(tool, args, tool_context, tool_response) -> Optional[Dict]:
"""Callback to preserve raw JSON responses."""
try:
print(f"After_tool_callback called: tool_context.action.skip_summarization = True")
print(f"After_tool_callback tool_response: {tool_response}")
tool_context.actions.skip_summarization = True
return None
except Exception as e:
# Log error but don't break the tool execution flow
print(f"Error in after_tool_callback: {e}")
return None
def log_after_tool_callback(tool, args, tool_context, tool_response) -> Optional[Dict]:
"""Callback to log tool response."""
try:
print(f"Log_After_tool_callback tool_response: {tool_response}")
return None
except Exception as e:
# Log error but don't break the tool execution flow
print(f"Error in log_after_tool_callback: {e}")
return None
# Create a specialized calculation agent (this is the sub-agent)
static_response_agent = LlmAgent(
model=LiteLlm(model=os.getenv("MODEL", "gpt-4o-mini")),
after_tool_callback=after_tool_callback,
name='agent_agent',
instruction="""You are a specialized agent. Always use the static_response tool when something is requested.""",
tools=[static_response],
)
# Create the main agent that orchestrates multiple specialized agents
root_agent = LlmAgent(
model=LiteLlm(model=os.getenv("MODEL", "gpt-4o-mini")),
after_tool_callback=log_after_tool_callback,
name='root_agent',
instruction="""You are a test agent. When users ask to call a tool, you will call the tool.""",
tools=[
AgentTool(agent=static_response_agent),
],
)
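For reference, a variant of the callback that returns the raw response instead of `None` may be worth trying as a workaround. This is an untested sketch and assumes ADK treats a non-`None` dict returned from `after_tool_callback` as a verbatim replacement for the tool result:

```python
from types import SimpleNamespace
from typing import Any, Dict, Optional


def after_tool_callback_returning(tool, args, tool_context, tool_response) -> Optional[Dict[str, Any]]:
    """Variant that skips summarization AND returns the raw response."""
    tool_context.actions.skip_summarization = True
    # Assumption (unverified): a non-None return value is used verbatim
    # as the tool response instead of the model's summary.
    return tool_response


# Minimal smoke test with a stub context (no ADK needed):
ctx = SimpleNamespace(actions=SimpleNamespace(skip_summarization=False))
out = after_tool_callback_returning(None, {}, ctx, {"status": "success", "result": "5"})
```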
**Expected behavior**

The raw JSON tool response is returned to the root agent.
**Logs**

```
INFO:     Started server process [25094]
INFO:     Waiting for application startup.
+-----------------------------------------------------------------------------+
| ADK Web Server started                                                      |
|                                                                             |
| For local testing, access at http://localhost:8000.                         |
+-----------------------------------------------------------------------------+
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
2025-06-02 21:53:37,443 - INFO - fast_api.py:395 - New session created
INFO:     127.0.0.1:53440 - "POST /apps/agent_test/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:53428 - "GET /list-apps?relative_path=./ HTTP/1.1" 200 OK
INFO:     127.0.0.1:53440 - "GET /apps/agent_test/eval_sets HTTP/1.1" 200 OK
INFO:     127.0.0.1:53428 - "GET /apps/agent_test/eval_results HTTP/1.1" 200 OK
INFO:     127.0.0.1:53428 - "GET /apps/agent_test/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:53428 - "GET /apps/agent_test/users/user/sessions HTTP/1.1" 200 OK
INFO:     127.0.0.1:53428 - "POST /run_sse HTTP/1.1" 200 OK
2025-06-02 21:53:41,527 - INFO - envs.py:47 - Loaded .env file for agent_test at /home/wzhhxz/AI/aveland/.env
2025-06-02 21:53:41,528 - INFO - envs.py:47 - Loaded .env file for agent_test at /home/wzhhxz/AI/aveland/.env
2025-06-02 21:53:42,665 - INFO - _client.py:1025 - HTTP Request: GET https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json "HTTP/1.1 200 OK"
2025-06-02 21:53:43,206 - WARNING - agent_loader.py:71 - Error importing agent_test: cannot import name 'root_agent' from partially initialized module 'agent_test' (most likely due to a circular import) (/home/wzhhxz/AI/aveland/agent_test/__init__.py)
2025-06-02 21:53:43,214 - INFO - utils.py:2991 -
LiteLLM completion() model= gpt-4o-mini; provider = openai
2025-06-02 21:53:44,246 - INFO - _client.py:1740 - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-06-02 21:53:44,257 - INFO - cost_calculator.py:655 - selected model name for cost calculation: openai/gpt-4o-mini-2024-07-18
2025-06-02 21:53:44,258 - INFO - fast_api.py:705 - Generated event in agent run streaming: {"content":{"parts":[{"functionCall":{"id":"call_puef4Tumr464fHgemQjiiJzq","args":{"request":"static_response with number 5"},"name":"agent_agent"}}],"role":"model"},"partial":false,"usageMetadata":{"candidatesTokenCount":19,"promptTokenCount":77,"totalTokenCount":96},"invocationId":"e-6c9fd782-f9ad-4328-8d61-959224323a02","author":"root_agent","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{}},"longRunningToolIds":[],"id":"grAiK7JH","timestamp":1748894023.208539}
2025-06-02 21:53:44,260 - INFO - cost_calculator.py:655 - selected model name for cost calculation: openai/gpt-4o-mini-2024-07-18
2025-06-02 21:53:44,262 - INFO - utils.py:2991 -
LiteLLM completion() model= gpt-4o-mini; provider = openai
2025-06-02 21:53:45,446 - INFO - _client.py:1740 - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-06-02 21:53:45,451 - INFO - cost_calculator.py:655 - selected model name for cost calculation: openai/gpt-4o-mini-2024-07-18
2025-06-02 21:53:45,454 - INFO - fast_api.py:705 - Generated event in agent run streaming: {"content":{"parts":[{"functionResponse":{"id":"call_puef4Tumr464fHgemQjiiJzq","name":"agent_agent","response":{"result":""}}}],"role":"user"},"invocationId":"e-6c9fd782-f9ad-4328-8d61-959224323a02","author":"root_agent","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"G5dwnw2L","timestamp":1748894025.454391}
2025-06-02 21:53:45,458 - INFO - utils.py:2991 -
LiteLLM completion() model= gpt-4o-mini; provider = openai
2025-06-02 21:53:45,460 - INFO - cost_calculator.py:655 - selected model name for cost calculation: openai/gpt-4o-mini-2024-07-18
2025-06-02 21:53:46,856 - INFO - _client.py:1740 - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2025-06-02 21:53:46,859 - INFO - cost_calculator.py:655 - selected model name for cost calculation: openai/gpt-4o-mini-2024-07-18
2025-06-02 21:53:46,860 - INFO - fast_api.py:705 - Generated event in agent run streaming: {"content":{"parts":[{"text":"It seems there was no specific static response returned for the number 5. If you have a particular request or need further assistance, please let me know!"}],"role":"model"},"partial":false,"usageMetadata":{"candidatesTokenCount":32,"promptTokenCount":109,"totalTokenCount":141},"invocationId":"e-6c9fd782-f9ad-4328-8d61-959224323a02","author":"root_agent","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{}},"id":"zbGGtsMU","timestamp":1748894025.456298}
2025-06-02 21:53:46,863 - INFO - cost_calculator.py:655 - selected model name for cost calculation: openai/gpt-4o-mini-2024-07-18
After_tool_callback called: tool_context.action.skip_summarization = True
After_tool_callback tool_response: {'status': 'success', 'result': '5'}
Log_After_tool_callback tool_response:
INFO:     127.0.0.1:53428 - "GET /apps/agent_test/users/user/sessions/7607ee50-1aea-4da9-a6cf-47e8df3197d1 HTTP/1.1" 200 OK
INFO:     127.0.0.1:53456 - "GET /debug/trace/session/7607ee50-1aea-4da9-a6cf-47e8df3197d1 HTTP/1.1" 200 OK
```
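Reading the events above, one plausible (unconfirmed) explanation is that `AgentTool` forwards only the sub-agent's final *text* to the parent, wrapped as `{"result": <text>}`; with summarization skipped, the sub-agent produces no final text, so the raw tool JSON is dropped. The hypothesis can be sketched as:

```python
def wrap_sub_agent_output(final_text: str) -> dict:
    # Hypothesis (not verified against the ADK source): AgentTool packages
    # the sub-agent's final text, not the raw tool dict, into the
    # functionResponse sent to the parent agent.
    return {"result": final_text}


# With skip_summarization=True the sub-agent emits no final text,
# which would match the observed functionResponse {"result": ""}:
observed = wrap_sub_agent_output("")
```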