Support removing tools #238
base: main
Conversation
👍 LGTM, would be useful!
@tpaulshippy I was playing with this and had a problem where the LLM tried to call a tool that was defined in the chat's first message, which I subsequently removed before sending a later message. (Seems like this was discussed a bit in #229 (comment).) I was using …

The current behavior of `execute_tool` (line 138 in a35aa11):

```ruby
def execute_tool(tool_call)
  tool = tools[tool_call.name.to_sym]
  args = tool_call.arguments
  tool.call(args)
end
```

The simplest fix is probably to change it to …
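One possible shape for that guard, as a sketch (this is not the commenter's elided suggestion, and returning an error string rather than raising is purely an illustrative choice):

```ruby
def execute_tool(tool_call)
  tool = tools[tool_call.name.to_sym]
  args = tool_call.arguments

  # `tools` is a Hash keyed by tool name, so a tool that was removed (or never
  # registered) comes back as nil; report that to the model instead of letting
  # the call blow up with NoMethodError on nil.
  return "Error: tool '#{tool_call.name}' is not available" unless tool

  tool.call(args)
end
```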
I thought the comment from Carmine was about previous tool calls in the message history. You seem to be talking about the LLM trying to call a tool that was previously available, right? Do you have the full payload of the request where you saw this? Super curious why the LLM would try to call a tool it's not given (even if it was given previously). Was there anything in the payload that would tell the LLM about the tool?
@tpaulshippy Ah I see -- I think what was breaking it in my case was removing a tool before another …

I was trying to implement a tool-use limit, where a tool could only be used N times and would then remove itself from `chat.tools`. As a minimal example, here's a tool that removes itself from `chat.tools` after its first call:

```ruby
class GetNextWordTool < RubyLLM::Tool
  description "Returns the next word"

  def initialize(words, chat)
    @words = words
    @chat = chat
  end

  def execute
    result = @words.shift || ""
    @chat.tools.delete(:get_next_word) # Removes itself after the first call
    result
  end
end

chat = RubyLLM.chat(provider: :ollama, model: "qwen3:8b").with_temperature(0.6)
chat.with_tools(GetNextWordTool.new(["unpredictable", "beginnings"], chat))
chat.ask("/nothink Use the get_next_word tool to get the first word. Then, call the get_next_word tool a second time to get the second word. Respond with a JSON array containing these two words. Do not guess. Use the tool twice.")
```

which results in: …

So I'm happy to concede that this issue probably shouldn't block this PR! 😄 Maybe just add a note to the docs that it's unsafe to remove tools from within a tool call? 🚀
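For that docs note, one pattern that avoids the problem entirely is to do the removal between asks rather than from inside `execute` — a rough sketch, where the use counter is my own illustrative addition rather than anything in this PR:

```ruby
# Illustrative sketch: enforce the N-use limit from outside the tool and
# remove it only after chat.ask (and any tool calls it triggers) has returned.
class GetNextWordTool < RubyLLM::Tool
  description "Returns the next word"

  attr_reader :uses

  def initialize(words)
    @words = words
    @uses = 0
  end

  def execute
    @uses += 1
    @words.shift || ""
  end
end

tool = GetNextWordTool.new(["unpredictable", "beginnings"])
chat = RubyLLM.chat(provider: :ollama, model: "qwen3:8b").with_tools(tool)

chat.ask("Use the get_next_word tool twice and respond with a JSON array of both words.")

# Safe point to drop the tool: no tool call is in flight here.
chat.tools.delete(:get_next_word) if tool.uses >= 2
```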
Hi @tpaulshippy, I guess this can be useful; however, it would be great to have one test where we hit the real LLMs. I'd suggest copying this test, but removing the Weather tool between the two executions:
ruby_llm/spec/ruby_llm/chat_tools_spec.rb
Lines 48 to 63 in 4ff492d
```ruby
CHAT_MODELS.each do |model_info| # rubocop:disable Style/CombinableLoops
  model = model_info[:model]
  provider = model_info[:provider]

  it "#{provider}/#{model} can use tools in multi-turn conversations" do # rubocop:disable RSpec/ExampleLength,RSpec/MultipleExpectations
    chat = RubyLLM.chat(model: model, provider: provider)
                  .with_tool(Weather)

    response = chat.ask("What's the weather in Berlin? (52.5200, 13.4050)")
    expect(response.content).to include('15')
    expect(response.content).to include('10')

    response = chat.ask("What's the weather in Paris? (48.8575, 2.3514)")
    expect(response.content).to include('15')
    expect(response.content).to include('10')
  end
end
```
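For reference, the suggested variant might look roughly like the sketch below. The removal call shown (`chat.tools.delete(:weather)`) assumes the tools hash is keyed by the derived tool name; substitute whatever removal API this PR actually exposes, and the post-removal assertion is only one option:

```ruby
# Sketch only — sits inside the same CHAT_MODELS.each loop as above.
it "#{provider}/#{model} stops using a tool after it is removed" do
  chat = RubyLLM.chat(model: model, provider: provider)
                .with_tool(Weather)

  response = chat.ask("What's the weather in Berlin? (52.5200, 13.4050)")
  expect(response.content).to include('15')
  expect(response.content).to include('10')

  # Remove the tool between the two turns.
  chat.tools.delete(:weather)

  # Without the tool, the canned weather values should no longer show up;
  # exactly what to assert here is up for debate.
  response = chat.ask("What's the weather in Paris? (48.8575, 2.3514)")
  expect(response.content).not_to include('15')
end
```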
@crmne Done! It's interesting to see what each model does.
What this does
I realized recently just how many tokens tools can take.
Here's an example of saying "Hello" to Bedrock with 4 basic local tools + the tools from the Playwright MCP:

This call took 3,024 input tokens.
Without the Playwright MCP, the same call takes 842 tokens.
In a chat with an agentic interface, I want the option to add/remove tools at will to save on tokens.
It also simplifies tool selection for the LLM if there are fewer tools to choose from.
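Concretely, the kind of flow this enables looks something like the sketch below; `BrowserTool` is a stand-in for the MCP-provided tools, and `chat.tools.delete` is just one way the removal could be spelled — the helper this PR actually adds may differ:

```ruby
chat = RubyLLM.chat
chat.with_tools(Weather, BrowserTool) # BrowserTool is hypothetical here

# Browsing phase: the big tool definitions are worth their token cost.
chat.ask("Open the dashboard and read the temperature widget.")

# Later turns don't need the browser tools, so stop sending them.
chat.tools.delete(:browser_tool)

chat.ask("Now just tell me the weather in Berlin. (52.5200, 13.4050)")
```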
Type of change
Scope check
Quality check
overcommit --install run and all hooks pass
No manual edits to auto-generated files (models.json, aliases.json)
API changes
Related issues
Resolves #229