Replies: 1 comment
This feature is similar to the ones below, but differs in that this one suggests controlling arguments at invoke time.
Feature request
A single LLM `ChatModel` can handle a `think=False` argument to deactivate its reasoning features, but injecting that argument into a chain for an Ollama client model is not supported. Supporting it is needed to use the full capabilities of an Ollama model.
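For context, here is a minimal sketch of the single-model case, assuming the `ollama` Python client's `chat()` accepts a `think` keyword (present in recent client releases; the model name is a placeholder):

```python
# Disabling reasoning on a direct Ollama client call.
# Assumes ollama-python's chat() supports the `think` keyword
# (present in recent releases); "qwen3" is a placeholder model name.
import ollama

response = ollama.chat(
    model="qwen3",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    think=False,  # ask the server not to emit <think> reasoning tokens
)
print(response.message.content)
```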
Motivation
Including `/no_think` in the system/human prompt can deactivate reasoning, but the model still generates `<think>` tokens. Likewise, the `extract_reasoning` argument of `ChatOllama` does not prevent `<think>` tokens from being generated. If they are not needed, `<think>` tokens should not be generated at all.
Proposal (If applicable)
I implemented proposal code, but I'm unfamiliar with the PR process and English, and I failed at the testing step. 😢
Below is example code:
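(A minimal sketch of how the proposal could look in a chain, assuming `ChatOllama` would forward a `think` keyword to the Ollama API; the `think` kwarg is the proposed addition, not an existing parameter, and the model name is a placeholder.)

```python
# Sketch of the proposed invoke-time control in a chain.
# Assumes ChatOllama would forward the `think` kwarg to the Ollama API;
# this kwarg is the proposal itself, not an existing parameter.
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

llm = ChatOllama(model="qwen3")  # placeholder model name

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

# bind() attaches default invocation kwargs to the runnable, so the
# chain could disable reasoning without the '/no_think' prompt hack:
chain = prompt | llm.bind(think=False)

print(chain.invoke({"question": "What is 2 + 2?"}).content)
```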