docs/concepts/tool_calling/ #28109
Replies: 8 comments 7 replies
-
It seems like bind_tools requires a list of lists. I guess this might be better:
-
## tool 1:
from langchain_community.tools.tavily_search import TavilySearchResults
tavily_search_tool = TavilySearchResults(max_results=2)

## tool 2:
from langchain_core.tools import tool

@tool(name_or_callable="magic_words")
def magic_words(username: str) -> str:
    """Answer with a dynamic password if the user wants to know the magic words.

    Args:
        username: name of user, or "<UNKNOWN>" if you don't know the username.
    """
    mw = "xxxxxxxxxx" + username
    print(mw)
    return mw

## tool 3:
@tool
def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

# -----
tools = [tavily_search_tool, magic_words, multiply]
llm_with_tools = llm.bind_tools(tools)

I can get the correct result with tavily_search. Is anything wrong?
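For context on what bind_tools does with a list like this: it converts each tool into a JSON-schema description (name, description, typed parameters) that is sent to the model alongside the prompt. Below is a dependency-free sketch of that conversion for a plain annotated function; it is a simplification for illustration, not LangChain's actual converter, which handles many more cases:

```python
import inspect

# Minimal Python-type -> JSON-schema-type mapping (illustrative only).
TYPE_MAP = {int: "integer", str: "string", float: "number", bool: "boolean"}

def function_to_tool_schema(fn):
    """Build a minimal JSON-schema-style tool description from a function."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": TYPE_MAP.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return a * b

print(function_to_tool_schema(multiply))
```

The model never sees your Python code, only schemas like this, which is why accurate docstrings and type hints matter so much for tool selection.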
-
Can we bind tools with a Hugging Face model, which is loaded like this? I'm using a HuggingFace model.
-
Can we call a tool from another tool that is defined in the same StateGraph?
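On calling one tool from another: a @tool-decorated function in LangChain becomes a Runnable, so inside one tool's body you can call another tool via its `.invoke(...)` method, or simply factor the shared logic into a plain helper that both tools call. A minimal, dependency-free sketch of the helper-function pattern (function names are hypothetical; in real code each public function would carry the @tool decorator):

```python
# Shared logic lives in a plain function that any tool body can reuse.
def _multiply(a: int, b: int) -> int:
    return a * b

# In LangChain these two would each be decorated with @tool; they are
# plain functions here to keep the sketch dependency-free.
def multiply(a: int, b: int) -> int:
    """Multiply a and b."""
    return _multiply(a, b)

def square(x: int) -> int:
    """Square x by reusing the multiply logic."""
    return _multiply(x, x)
```

Factoring into a plain helper avoids the extra input-validation layer you'd hit by calling one decorated tool from inside another.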
-
I am following your RAG part 2 tutorial. In that tutorial, you create a tool whose docstring reads: """Generate tool call for retrieval or respond.""" Now, I am using Ollama, and unfortunately it seems that: AttributeError: 'OllamaLLM' object has no attribute 'bind_tools'. Do you know of a way to circumvent that problem by any chance? Thanks
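On that AttributeError: `bind_tools` lives on chat models, while `OllamaLLM` is a plain text-completion wrapper, so the usual fix is to switch to the chat class (`ChatOllama` from the `langchain_ollama` package) with a model that supports tool calling. If that is not an option, a manual fallback is to prompt the model to emit a JSON tool call and parse it yourself. Here is a minimal, dependency-free sketch of that parsing step; the JSON shape and the dispatch table are assumptions of this sketch, not LangChain API:

```python
import json

def multiply(a: int, b: int) -> int:
    return a * b

# Hypothetical dispatch table mapping tool names to callables.
TOOLS = {"multiply": multiply}

def dispatch_tool_call(raw_model_output: str):
    """Parse a JSON tool call like {"tool": ..., "args": {...}} and run it."""
    call = json.loads(raw_model_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# Example: the model was prompted to reply with a JSON tool call.
result = dispatch_tool_call('{"tool": "multiply", "args": {"a": 6, "b": 7}}')
print(result)  # 42
```

In practice you would also validate the parsed name and arguments before dispatching, since the model's output is untrusted text.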
-
I wrote a multiply function like in the tutorial and also implemented a tool, but the content is empty. What's the problem?
def multiply(a: int, b: int) -> int:
tools = [multiply]
function_call arguments:
-
Brilliant feature!
-
Can someone help me find the piece of code where the LLM chooses which tool(s) to use in response to a given user query? Or is that a black-box feature that each LLM provider implements in their API and whose logic we have no access to?
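On where the tool choice happens: the client library only serializes each tool's JSON schema into the request; the selection itself happens inside the provider's model, so for hosted models it is indeed a black box. What you can inspect is the wire format. Below is a dependency-free sketch of roughly what goes out and comes back; the field names follow the common OpenAI-style shape and are illustrative, not any specific provider's exact API:

```python
import json

# Roughly what bind_tools serializes into the request: one schema per tool.
request = {
    "messages": [{"role": "user", "content": "What is 3 times 12?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "Multiply a and b.",
            "parameters": {
                "type": "object",
                "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
                "required": ["a", "b"],
            },
        },
    }],
}

# Roughly what comes back when the model decides to call a tool: the choice
# of tool and its arguments are produced by the model itself, server-side.
response_message = {
    "content": "",  # often empty when the model opts for a tool call
    "tool_calls": [{"name": "multiply", "args": {"a": 3, "b": 12}}],
}

call = response_message["tool_calls"][0]
print(call["name"], json.dumps(call["args"]))
```

This also explains the "empty content" question above: when the model opts to call a tool, the text content is typically empty and the payload is in tool_calls, which your code is expected to execute and feed back as a tool message.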
-
docs/concepts/tool_calling/
https://python.langchain.com/docs/concepts/tool_calling/