Add RubyLLM::Chat#with_params to add custom parameters to the underlying API payload #265
Conversation
What an elegant PR! Quick note - I've been playing around with this. Seems like this would also need to allow disabling the default `temperature` parameter. For example, the APIs break once trying to supply the web search options, as the `temperature` is sent by default and the search-preview models reject it. This may not be exactly related to this particular PR, but I got to this obstacle when playing around with this. EDIT: … |
Thanks. Can you try with:

```ruby
chat = RubyLLM
  .chat(model: "gpt-4o-search-preview", provider: :openai)
  .with_temperature(nil)
  .with_options(web_search_options: {search_context_size: "medium"})

chat.ask("<your query here>")
```

There's already a comment in: …

I don't think we necessarily want to make this PR any more complicated, but we could do something like this:

Future proposal: fully general block passing

It's possible that, in addition to this hash-based form, we could allow passing a block:

```ruby
chat = RubyLLM
  .chat(model: "gpt-4o-search-preview", provider: :openai)
  .with_options(web_search_options: {search_context_size: "medium"}) do |payload|
    payload.delete(:temperature)
  end

chat.ask(...)
```

Not necessary in this case, since `with_temperature(nil)` covers it. But I would personally prefer to merge this PR as-is and discuss the block concept later if a need arises. |
Right, my bad! It's possible to set the temperature to `nil`. Just a heads up - it still conflicts with ActiveRecord's `with_options`. |
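For context, ActiveSupport defines `with_options` on every `Object`, so inside a Rails app the chainable name collides. A minimal illustration, assuming the activesupport gem is available:

```ruby
require "active_support/core_ext/object/with_options"

chat = Object.new # stand-in for a RubyLLM::Chat instance

# ActiveSupport's Object#with_options returns an OptionMerger rather than
# the receiver, so a chainable Chat#with_options would be shadowed in Rails apps:
merger = chat.with_options(temperature: 0.5)
puts merger.class # => ActiveSupport::OptionMerger
```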
Makes sense. Happy to rename it. |
Can't wait for this to be merged. Thank you @compumike! |
Title changed: RubyLLM::Chat#with_request_options to add custom parameters to the underlying API payload
I've just pushed a new commit which renames this to `with_request_options`. Please give this a try and let me know if it works for your use case. Also, if @elvinaspredkelis and @umairabid and others could add a comment specifying what your use case is, and providing a code snippet of how `with_request_options` would be used for it, that would be helpful. |
Hey @compumike, tried it here for Gemini (umairabid/sweaty_wallet@fa8c488); worked flawlessly. |
Can confirm this works well for my use case. |
Also works for me; this makes RubyLLM versatile for more use cases. |
I need to be able to send metadata along with my RubyLLM calls so I can add tracing that makes it to Langfuse |
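A sketch of that use case, assuming the provider's API accepts a top-level `metadata` field (the key names here are illustrative, not part of RubyLLM):

```ruby
# Hypothetical: attach tracing metadata for Langfuse, provided the
# provider's endpoint accepts a metadata field in the request payload.
chat = RubyLLM
  .chat(model: "gpt-4o", provider: :openai)
  .with_request_options(metadata: { langfuse_trace_id: "trace-123" })

chat.ask("Summarize this account's spending.")
```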
This is very useful; it seems like a no-brainer to support sending custom params to the LLM. |
+1 on this. A very useful feature that would solve the need for custom params for everything. Thank you. |
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #265      +/-   ##
==========================================
- Coverage   89.71%   89.65%   -0.07%
==========================================
  Files          75       75
  Lines        2811     2823      +12
  Branches      555      557       +2
==========================================
+ Hits         2522     2531       +9
- Misses        289      292       +3
```

☔ View full report in Codecov by Sentry. |
Hey @compumike, really great PR! Since we have to change from `with_options` anyway, how about `with_params`? |
@crmne Great! I've just pushed a commit that renames the method to `with_params`. |
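The renamed call in use, as a minimal sketch (the option shown is illustrative, not prescribed by the PR):

```ruby
chat = RubyLLM
  .chat(model: "gpt-4o", provider: :openai)
  .with_params(stream_options: { include_usage: true }) # illustrative OpenAI option

chat.ask("Hello!")
```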
Maybe rename the PR? |
Title changed: RubyLLM::Chat#with_request_options to add custom parameters to the underlying API payload → RubyLLM::Chat#with_params to add custom parameters to the underlying API payload
@tpaulshippy Good catch 😂 |
Flawless! Amazing work.
What this does

Implements `with_request_options` (renamed from `with_options` due to ActiveRecord conflict -- see conversation) with @crmne's suggestions from comment #130 (review) and tested against all providers.

This allows users to set arbitrary options on the payload before it's sent to the provider's API endpoint. The `render_payload` takes precedence.
Demo:
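A minimal sketch of the feature in action, mirroring the example from the conversation above:

```ruby
chat = RubyLLM
  .chat(model: "gpt-4o-search-preview", provider: :openai)
  .with_temperature(nil)
  .with_request_options(web_search_options: { search_context_size: "medium" })

chat.ask("What's the latest Ruby release?")
```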
This is a power-user feature, and is specific to each provider (and model, to a lesser extent). I added a brief section to the docs.
For tests: different providers support different options, so tests are divided by provider.
(Note that `deep_merge` is required for Gemini in particular because it relies on a top-level `generationConfig` object.)
Type of change

Scope check

Quality check
- `overcommit --install` run and all hooks pass
- No manual edits to auto-generated files (`models.json`, `aliases.json`)

API changes
Related issues