
Prompt Caching #234


Open · wants to merge 22 commits into main

Conversation

@tpaulshippy (Contributor) commented Jun 9, 2025

What this does

Supports prompt caching configuration in both the Anthropic and Bedrock providers for Claude models that support it, and reports prompt caching token counts for OpenAI and Gemini, which cache automatically.

Caching system prompts:

chat = RubyLLM.chat
chat.with_instructions("You are a helpful assistant.")
chat.cache_prompts(system: true)

Caching user prompts:

chat = RubyLLM.chat
chat.with_instructions("You are a helpful assistant.")
chat.cache_prompts(system: true, user: true)
chat.ask("What is the capital of France?")

Caching tool definitions:

chat = RubyLLM.chat
chat.with_instructions("You are a helpful assistant.")
chat.with_tool(MyTool)
chat.cache_prompts(tools: true)
chat.ask("What is the capital of France?")

Type of change

  • New feature

Scope check

  • I read the Contributing Guide
  • This aligns with RubyLLM's focus on LLM communication
  • This isn't application-specific logic that belongs in user code
  • This benefits most users, not just my specific use case

Quality check

  • I ran overcommit --install and all hooks pass
  • I tested my changes thoroughly
  • I updated documentation if needed
  • I didn't modify auto-generated files manually (models.json, aliases.json)

API changes

  • New public methods/classes

Related issues

Resolves #13

@tpaulshippy changed the title from "Prompt caching" to "Prompt caching for Claude" on Jun 9, 2025
@tpaulshippy marked this pull request as ready for review on Jun 9, 2025 at 21:44
@tpaulshippy (Contributor, Author)

@crmne As I don't have an Anthropic key, I'll need you to generate the VCR cassettes for that provider. Hoping everything just works, but let me know if not.

@crmne (Owner) commented Jun 11, 2025

@tpaulshippy this would be great to have! Would you be willing to enable it on all providers?

I'll do a proper review when I can.

@tpaulshippy (Contributor, Author)

My five minutes of research indicates that at least OpenAI and Gemini automatically cache for you based on the size and structure of your request. So the only support I think we'd really need for those two is to populate the cached token counts on the response messages, unless we want to try to support explicit caching on the Gemini API, which looks complex and not as commonly needed.

Do you know of other providers that require payload changes for prompt caching?
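For reference, a minimal sketch of where those counts live in the raw responses (the field names are from the providers' documented usage metadata; the response_body variable and everything around it are illustrative, not code from this PR):

# Sketch: where each provider reports automatically cached tokens.
# Only the field names are the providers'; the rest is illustrative.
openai_cached = response_body.dig('usage', 'prompt_tokens_details', 'cached_tokens')
gemini_cached = response_body.dig('usageMetadata', 'cachedContentTokenCount')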

@tpaulshippy (Contributor, Author) commented on the following code in the diff:

def with_cache_control(hash, cache: false)
  return hash unless cache

  hash.merge(cache_control: { type: 'ephemeral' })
end

Realizing this might cause errors on older models that do not support caching. If it does, we could raise here, or just let the API validation handle it. I'm torn on whether the capabilities-check complexity is worth it, as these models are probably so rarely used.
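For illustration, applying the helper to an Anthropic-style content block would look roughly like this (the block hash is a sketch; only the cache_control: { type: 'ephemeral' } marker is Anthropic's documented format):

# Sketch: marking a system content block as a cache breakpoint.
system_block = { type: 'text', text: 'You are a helpful assistant.' }
with_cache_control(system_block, cache: true)
# => { type: 'text', text: 'You are a helpful assistant.', cache_control: { type: 'ephemeral' } }

If a capabilities guard were added, it would sit in front of this call; otherwise, as noted above, the API's own validation is what would reject the field on models that don't support caching.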

@tpaulshippy (Contributor, Author)

> @crmne As I don't have an Anthropic key, I'll need you to generate the VCR cassettes for that provider. Hoping everything just works, but let me know if not.

Scratch that. I decided to stop being a cheapskate and just pay Anthropic their $5.

@tpaulshippy (Contributor, Author)

Looking to implement this in our project, and now I'm wondering if it should be opt-out rather than opt-in. If you are using unique prompts every time, I guess caching them adds some cost, but my guess is that in most applications prompts will get repeated, especially system prompts.
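To make the question concrete, an opt-out design might look something like this (entirely hypothetical; neither the default_cache_prompts setting nor the defaults shown exist in this PR):

# Hypothetical opt-out shape -- a global default that individual chats can override.
RubyLLM.configure do |config|
  config.default_cache_prompts = { system: true, user: false, tools: true } # hypothetical setting
end

chat = RubyLLM.chat
chat.cache_prompts(system: false) # opt out for this particular chat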

@crmne (Owner) left a comment

Thank you for this feature @tpaulshippy; however, there are several improvements I'd like you to make before we merge this.

On top of the ones noted in the comments, the most important is that I'd like to have prompt caching implemented in all providers.

Plus, I have not fully checked the logic in providers/anthropic, but at first glance the patch seems a bit heavy-handed in the amount of changes needed. Were all the changes necessary, or could it be done in a simpler manner?

@crmne (Owner):

I don't feel like it's necessary to make a new CompletionParams class. Let's remove complexity.


@crmne added the "enhancement" (New feature or request) label on Jul 16, 2025
@tpaulshippy (Contributor, Author) commented Jul 16, 2025

> I'd like to have prompt caching implemented in all providers.

Did you see this? Is the request to populate the cached token counts on the response messages for OpenAI and Gemini?

@crmne (Owner) commented Jul 16, 2025

> Did you see this? Is the request to populate the cached token counts on the response messages for OpenAI and Gemini?

Thank you for pointing that out; I had missed it. I think it would certainly be a nice addition to RubyLLM for all providers to have roughly the same level of caching support.

@tpaulshippy (Contributor, Author)

> Did you see this? Is the request to populate the cached token counts on the response messages for OpenAI and Gemini?
>
> Thank you for pointing that out; I had missed it. I think it would certainly be a nice addition to RubyLLM for all providers to have roughly the same level of caching support.

Ok, we have a bit of a naming issue. Here are the property names we get from each provider:

  • Anthropic: cache_creation_input_tokens, cache_read_input_tokens
  • OpenAI: cached_tokens
  • Gemini: cached_content_token_count

My reading of the docs indicates that the OpenAI and Gemini values correspond pretty closely to cache_read_input_tokens in Anthropic.

What should we call these properties in the Message?

@crmne (Owner) commented Jul 16, 2025

For the naming, let's go with:

  • cached_tokens - maps to the cache read values from all providers (the main property developers will use)
  • cache_creation_tokens - Anthropic-specific cache creation cost (nil for other providers)

This keeps it consistent with our existing input_tokens/output_tokens pattern while handling the provider differences cleanly.

Can you update the Message properties to use these names? Thanks Paul!
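As a sketch, the agreed mapping would look roughly like this (the provider field names are from the comment above; the method itself and the payload shapes are illustrative):

# Illustrative normalization of provider usage fields onto the agreed names.
def normalize_cache_usage(provider, usage)
  case provider
  when :anthropic
    { cached_tokens: usage['cache_read_input_tokens'],
      cache_creation_tokens: usage['cache_creation_input_tokens'] }
  when :openai
    { cached_tokens: usage.dig('prompt_tokens_details', 'cached_tokens'),
      cache_creation_tokens: nil }
  when :gemini
    # cached_content_token_count in the docs; camelCase in the raw JSON
    { cached_tokens: usage['cachedContentTokenCount'],
      cache_creation_tokens: nil }
  end
end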

@tpaulshippy (Contributor, Author)

@crmne Ok, I addressed all the feedback. The only issue I'm running into is that I'm unable to get OpenAI and Gemini to actually report any cached tokens. I may need help from someone who knows those APIs better than I do. The code to report their cached token counts is in place, but I commented out the tests since I could not get them to work.

Gemini docs on this
OpenAI docs on this
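For context, the kind of expectation that had to be commented out looks roughly like this (a sketch, not the actual spec; long_prompt is a placeholder, and getting the second request to register as a cache hit is exactly the open problem):

# Sketch of the commented-out style of check for the auto-caching providers.
chat = RubyLLM.chat(model: 'gpt-4o-mini')
chat.ask(long_prompt)             # first request primes the provider-side cache
response = chat.ask(long_prompt)  # a repeated, sufficiently large prompt should report cached tokens
expect(response.cached_tokens).to be_positive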

@tpaulshippy changed the title from "Prompt caching for Claude" to "Prompt Caching" on Jul 19, 2025
@smerickson

@tpaulshippy thanks so much for working on this! I'd love to be able to switch to RubyLLM for my AI library needs and this was one of the features I was still waiting on. Really appreciate you leading the charge.
