
Support removing tools #238


Open · tpaulshippy wants to merge 9 commits into main

Conversation

@tpaulshippy (Contributor) commented on Jun 11, 2025

What this does

I realized recently just how many tokens tools can take.

Here's an example of saying "Hello" to Bedrock with 4 basic local tools + the tools from the Playwright MCP:
[Screenshot: request payload showing the tool definitions sent with the message]
This call took 3,024 input tokens.
Without the Playwright MCP tools, the same call takes 842 tokens.

In a chat with an agentic interface, I want the option to add/remove tools at will to save on tokens.

It also simplifies tool selection for the LLM if there are fewer tools to choose from.
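
As a rough usage sketch (the Weather tool and the removal method name are illustrative, not necessarily the final API in this PR):

chat = RubyLLM.chat
chat.with_tool(Weather)                    # Weather's schema now ships with every request
chat.ask("What's the weather in Berlin?")  # tool is available for this turn
chat.remove_tool(:weather)                 # illustrative name for the removal method
chat.ask("Now summarize our conversation") # no tool schema sent, fewer input tokens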

Type of change

  • New feature

Scope check

  • I read the Contributing Guide
  • This aligns with RubyLLM's focus on LLM communication
  • This isn't application-specific logic that belongs in user code
  • This benefits most users, not just my specific use case

Quality check

  • I ran overcommit --install and all hooks pass
  • I tested my changes thoroughly
  • I updated documentation if needed
  • I didn't modify auto-generated files manually (models.json, aliases.json)

API changes

  • New public methods/classes

Related issues

Resolves #229

@compumike (Contributor)

👍 LGTM, would be useful!

@compumike (Contributor)

@tpaulshippy I was playing with this and hit a problem: the LLM tried to call a tool that was defined in the chat's first message but which I had removed before sending a later message. (This seems to have been discussed a bit in #229 (comment).) I was using qwen3:32b. So it would be good to look at RubyLLM's error-handling behavior in this case, when a tool call names a tool that is not defined.

The current behavior of RubyLLM::Chat#execute_tool is to raise a NoMethodError (undefined method `call' for nil):

    def execute_tool(tool_call)
      tool = tools[tool_call.name.to_sym]
      args = tool_call.arguments
      tool.call(args)
    end

The simplest fix is probably to change it to tool&.call(args) so that it just returns nil. A more advanced option would be to pass an error message back saying that the tool name was not found.
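
For example, a sketch of the error-message variant (assuming the returned string is forwarded to the model as the tool result):

    def execute_tool(tool_call)
      tool = tools[tool_call.name.to_sym]
      # Return a readable error to the model instead of raising NoMethodError.
      return "Error: tool '#{tool_call.name}' is not available" unless tool

      tool.call(tool_call.arguments)
    end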

@tpaulshippy (Author)

I thought Carmine's comment was about previous tool calls in the message history. You seem to be talking about the LLM trying to call a tool that was previously available, right? Do you have the full payload of the request where you saw this? I'm super curious why the LLM would try to call a tool it wasn't given (even if it was given previously). Was there anything in the payload that would tell the LLM about the tool?

@compumike (Contributor)

@tpaulshippy Ah I see -- I think what was breaking it in my case was removing a tool before another role: :user message gets sent.

I was trying to implement a tool use limit, where a tool could only be used N times, and would then remove itself from chat.tools. But it looks like this is not the right way to implement that! 😂

As a minimal example, here's a tool that removes itself from chat.tools after its first use:

class GetNextWordTool < RubyLLM::Tool
  description "Returns the next word"
  
  def initialize(words, chat)
    @words = words
    @chat = chat
  end
  
  def execute
    result = @words.shift || ""
    @chat.tools.delete(:get_next_word) # Removes itself after first call
    result
  end
end

chat = RubyLLM.chat(provider: :ollama, model: "qwen3:8b").with_temperature(0.6)
chat.with_tools(GetNextWordTool.new(["unpredictable", "beginnings"], chat))
chat.ask("/nothink Use the get_next_word tool to get the first word. Then, call the get_next_word tool a second time to get the second word. Respond with a JSON array containing these two words. Do not guess. Use the tool twice.")

which results in:

RubyLLM: request: {"model":"qwen3:8b","messages":[{"role":"user","content":"/nothink Use the get_next_word tool to get the first word. Then, call the get_next_word tool a second time to get the second word. Respond with a JSON array containing these two words. Do not guess. Use the tool twice."}],"stream":false,"temperature":0.6,"tools":[{"type":"function","function":{"name":"get_next_word","description":"Returns the next word","parameters":{"type":"object","properties":{},"required":[]}}}],"tool_choice":"auto"}
RubyLLM: response: Status 200
RubyLLM: response: {"id"=>"chatcmpl-46",
 "object"=>"chat.completion",
 "created"=>1752446954,
 "model"=>"qwen3:8b",
 "system_fingerprint"=>"fp_ollama",
 "choices"=>
  [{"index"=>0,
    "message"=>
     {"role"=>"assistant",
      "content"=>"<think>\n" + "\n" + "</think>\n" + "\n",
      "tool_calls"=>
       [{"id"=>"call_xgpixrge",
         "index"=>0,
         "type"=>"function",
         "function"=>{"name"=>"get_next_word", "arguments"=>"{}"}},
        {"id"=>"call_8nh08jq5",
         "index"=>1,
         "type"=>"function",
         "function"=>{"name"=>"get_next_word", "arguments"=>"{}"}}]},
    "finish_reason"=>"tool_calls"}],
 "usage"=>{"prompt_tokens"=>177, "completion_tokens"=>38, "total_tokens"=>215}}

RubyLLM: Tool get_next_word called with: {}
RubyLLM: Tool get_next_word returned: "unpredictable"
NoMethodError: undefined method `call' for nil
from <REDACTED>/lib/ruby_llm/chat.rb:148:in `execute_tool'

So I'm happy to concede that this issue probably shouldn't block this PR! 😄 Maybe just add to docs a note that it's unsafe to remove tools from within a tool call? 🚀
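
For what it's worth, here's a safer sketch of the tool-use limit I was after: the tool counts its own calls and returns a plain message once the limit is reached, instead of deleting itself from chat.tools (the names and the limit are illustrative):

class GetNextWordTool < RubyLLM::Tool
  description "Returns the next word"

  MAX_CALLS = 2 # illustrative limit

  def initialize(words)
    @words = words
    @calls = 0
  end

  def execute
    @calls += 1
    # Returning a message keeps pending tool calls in the same response safe,
    # since chat.tools is never mutated mid-call.
    return "Error: get_next_word call limit reached" if @calls > MAX_CALLS

    @words.shift || ""
  end
end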

@crmne (Owner) left a comment

Hi @tpaulshippy, I guess this can be useful; however, it would be great to have one test where we hit the real LLMs. I'd suggest copying this test, but removing the Weather tool between the two executions:

CHAT_MODELS.each do |model_info| # rubocop:disable Style/CombinableLoops
  model = model_info[:model]
  provider = model_info[:provider]
  it "#{provider}/#{model} can use tools in multi-turn conversations" do # rubocop:disable RSpec/ExampleLength,RSpec/MultipleExpectations
    chat = RubyLLM.chat(model: model, provider: provider)
                  .with_tool(Weather)
    response = chat.ask("What's the weather in Berlin? (52.5200, 13.4050)")
    expect(response.content).to include('15')
    expect(response.content).to include('10')
    response = chat.ask("What's the weather in Paris? (48.8575, 2.3514)")
    expect(response.content).to include('15')
    expect(response.content).to include('10')
  end
end
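
Between the two ask calls you'd drop the tool, roughly like this (the removal call below is a placeholder for whatever method this PR actually adds):

    response = chat.ask("What's the weather in Berlin? (52.5200, 13.4050)")
    expect(response.content).to include('15')
    expect(response.content).to include('10')

    chat.remove_tool(:weather) # placeholder for this PR's removal API

    response = chat.ask("What's the weather in Paris? (48.8575, 2.3514)")
    # With the tool removed, the assertions here depend on how each model responds.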

@crmne added the enhancement label on Jul 16, 2025
@tpaulshippy (Author)

@crmne Done! It's interesting to see what each model does.
