
Add RubyLLM::Chat#with_params to add custom parameters to the underlying API payload #265


Merged (7 commits) on Jul 21, 2025

Conversation

@compumike (Contributor) commented Jun 29, 2025

What this does

Implements with_request_options (renamed from with_options due to an ActiveRecord conflict; see conversation below), following @crmne's suggestions from comment #130 (review), tested against all providers.

This allows users to set arbitrary options on the payload before it's sent to the provider's API endpoint. Values produced by render_payload take precedence over these options.

Demo:

chat = RubyLLM
  .chat(model: "qwen3", provider: :ollama)
  .with_request_options(response_format: {type: "json_object"})
  .with_instructions("Answer with a JSON object with the key `result` and a numerical value.")
response = chat.ask("What is the square root of 64?")
response.content
=> "{\n  \"result\": 8\n}"

This is a power-user feature, specific to each provider (and, to a lesser extent, each model). I added a brief section to the docs.

For tests: different providers support different options, so the tests are divided by provider.

(Note that deep_merge is required for Gemini in particular because it relies on a top-level generationConfig object.)
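To illustrate why a shallow merge wouldn't work there, here's a minimal standalone sketch (not the gem's actual internals; the hash contents are illustrative):

base   = { generationConfig: { temperature: 0.7 } }
custom = { generationConfig: { responseMimeType: "application/json" } }

# A shallow Hash#merge replaces the entire nested hash, losing :temperature:
base.merge(custom)
# => { generationConfig: { responseMimeType: "application/json" } }

# A recursive merge keeps sibling keys at every level:
def deep_merge(a, b)
  a.merge(b) do |_key, old, new|
    old.is_a?(Hash) && new.is_a?(Hash) ? deep_merge(old, new) : new
  end
end

deep_merge(base, custom)
# => { generationConfig: { temperature: 0.7, responseMimeType: "application/json" } }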

Type of change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation
  • Performance improvement

Scope check

  • I read the Contributing Guide
  • This aligns with RubyLLM's focus on LLM communication
  • This isn't application-specific logic that belongs in user code
  • This benefits most users, not just my specific use case

Quality check

  • I ran overcommit --install and all hooks pass
  • I tested my changes thoroughly
  • I updated documentation if needed
  • I didn't modify auto-generated files manually (models.json, aliases.json)

API changes

  • Breaking change
  • New public methods/classes
  • Changed method signatures
  • No API changes

Related issues

@elvinaspredkelis commented Jul 1, 2025

What an elegant PR!

Quick note - I've been playing around with this. Seems like this would also need to allow disabling the temperature option.

For example, the API breaks when supplying the web search options, since gpt-4o-search-preview does not support temperature: Model incompatible request argument supplied: temperature.

This may not be exactly related to this particular PR, but I ran into this obstacle while playing around with it.

EDIT

The #with_options interferes with ActiveRecord. Perhaps it'd also make sense to explore other method names? E.g. #with_request_options.

@compumike (Contributor, Author)

> Quick note - I've been playing around with this. Seems like this would also need to allow disabling the temperature option.
>
> For example, the API breaks when supplying the web search options, since gpt-4o-search-preview does not support temperature: Model incompatible request argument supplied: temperature.

Thanks. Can you try with_temperature(nil) and see if it works for you? If not, can you provide a code snippet that caused the Model incompatible request argument supplied error? This quick test worked for me:

chat = RubyLLM
  .chat(model: "gpt-4o-search-preview", provider: :openai)
  .with_temperature(nil)
  .with_options(web_search_options: {search_context_size: "medium"})
chat.ask("<your query here>")

There's already a comment in the code:

# Only include temperature if it's not nil (some models don't accept it)

noting that temperature is not added to the payload when it's nil. (Possibly this nil handling behavior should be extended to other providers as well?)
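A minimal sketch of that nil-guard pattern (render_payload_sketch and its arguments are hypothetical stand-ins, not the provider's actual code):

def render_payload_sketch(model_id, messages, temperature)
  payload = { model: model_id, messages: messages }
  # Only include temperature if it's not nil (some models don't accept it)
  payload[:temperature] = temperature unless temperature.nil?
  payload
end

render_payload_sketch("gpt-4o-search-preview", [], nil)
# => { model: "gpt-4o-search-preview", messages: [] }  (no :temperature key)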

I don't think we necessarily want to make this PR any more complicated, but we could do something like this:

Future proposal: fully general block passing

It's possible that in addition to this #with_options, we may want the ability to pass in a block that allows modifying the payload after it's deep_merged but before it's actually sent. For example, this might look like:

chat = RubyLLM
  .chat(model: "gpt-4o-search-preview", provider: :openai)
  .with_options(web_search_options: {search_context_size: "medium"}) do |payload|
    payload.delete(:temperature)
  end
chat.ask(...)

This isn't necessary in this case, since with_temperature(nil) works, but it would allow future customization for power users.

But I would personally prefer to merge this PR as-is and discuss the block concept later if a need arises.

@elvinaspredkelis

Right, my bad!

Setting the temperature to nil works with gem "ruby_llm", "1.3.1" but not with gem "ruby_llm", "1.3.0".

Just a heads up: it still conflicts with the #with_options that comes with ActiveRecord, so the Rails integration is not as smooth.

@compumike (Contributor, Author)

> Just a heads up: it still conflicts with the #with_options that comes with ActiveRecord, so the Rails integration is not as smooth.

Makes sense. Happy to rename it. #with_request_options has a nice ring to it! 😄

@umairabid

Can't wait for this to be merged. Thank you @compumike!

@compumike compumike changed the title Add RubyLLM::Chat#with_options to pass options to the underlying API payload Add RubyLLM::Chat#with_request_options to add custom parameters to the underlying API payload Jul 3, 2025
@compumike (Contributor, Author)

I've just pushed a new commit which renames this to #with_request_options to avoid the ActiveRecord conflict that @elvinaspredkelis mentioned!

Please give this a try and let me know if it works for your use case.

Also, if @elvinaspredkelis, @umairabid, and others could add a comment specifying what your use case is, with a code snippet showing how #with_request_options enables it, that would be great! ❤️

@umairabid commented Jul 5, 2025

Hey @compumike, I tried it here for Gemini (umairabid/sweaty_wallet@fa8c488) and it worked flawlessly.

@adenta commented Jul 7, 2025

Can confirm this works well for my use case.

@bricolage

Also works for me. This makes RubyLLM versatile for more use cases.

@minorbug

I need to be able to send metadata along with my RubyLLM calls so I can add tracing that makes it to Langfuse.
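A hedged sketch of what that use case could look like (method name as it stood at this point in the thread; the metadata key shape depends on the tracing gateway and is illustrative):

chat = RubyLLM
  .chat(model: "gpt-4o", provider: :openai)
  .with_request_options(metadata: { trace_id: "trace-123", user_id: "user-42" })
chat.ask("<your query here>")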

@lucasfcunha

This is very useful; it seems like a no-brainer to support sending custom params to the LLM.

@thedumbtechguy

+1 on this. A very useful feature that would solve the need for custom params for everything.

Thank you.

codecov bot commented Jul 16, 2025

Codecov Report

Attention: Patch coverage is 80.00000% with 3 lines in your changes missing coverage. Please review.

Project coverage is 89.65%. Comparing base (0f60067) to head (3cf7943).
Report is 3 commits behind head on main.

Files with missing lines                 Patch %   Lines
lib/ruby_llm/active_record/acts_as.rb    33.33%    2 Missing ⚠️
lib/ruby_llm/provider.rb                 85.71%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #265      +/-   ##
==========================================
- Coverage   89.71%   89.65%   -0.07%     
==========================================
  Files          75       75              
  Lines        2811     2823      +12     
  Branches      555      557       +2     
==========================================
+ Hits         2522     2531       +9     
- Misses        289      292       +3     

☔ View full report in Codecov by Sentry.

@crmne (Owner) commented Jul 16, 2025

Hey @compumike, really great PR!

Since we have to change from with_options, I would instead call this method with_params, and here's why:

  1. It's idiomatic: it's ubiquitous in the Rails and wider web community.
  2. It's concise: a single clear word that's much tighter than with_request_options.
  3. It's unambiguous: in the context of an API client, with_params clearly signals that we're adding parameters to the outgoing request body.

@crmne added the enhancement (New feature or request) label on Jul 16, 2025
@compumike (Contributor, Author)

@crmne Great! I've just pushed a commit that renames the method to #with_params. Thank you for all your work on RubyLLM! ❤️
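For reference, the opening demo under the final name (behavior unchanged, just the rename):

chat = RubyLLM
  .chat(model: "qwen3", provider: :ollama)
  .with_params(response_format: {type: "json_object"})
  .with_instructions("Answer with a JSON object with the key `result` and a numerical value.")
response = chat.ask("What is the square root of 64?")
response.content
# => "{\n  \"result\": 8\n}"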

@tpaulshippy (Contributor)

Maybe rename the PR?

@compumike compumike changed the title Add RubyLLM::Chat#with_request_options to add custom parameters to the underlying API payload Add RubyLLM::Chat#with_params to add custom parameters to the underlying API payload Jul 20, 2025
@compumike (Contributor, Author)

@tpaulshippy Good catch 😂

@crmne (Owner) left a comment

Flawless! Amazing work.

Labels: enhancement (New feature or request)
10 participants