Conversation

vladimirrotariu

Adds global retry defaults for LM calls, along with docs and tutorial updates.

What’s included

  • Settings: default_num_retries, retry_strategy
  • LM: falls back to global default when num_retries=None
  • LiteLLM: retry_strategy forwarded in chat/text (sync/async) paths
  • Tests for fallback/override/strategy passthrough
  • Docs: cheatsheet paragraph; tutorial notebook (global_retries)
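The precedence described above (per-instance num_retries wins, otherwise the global default applies) can be sketched as plain Python. This is a self-contained illustration with stand-in names (`_GLOBAL_SETTINGS`, `FakeLM`), not the actual dspy internals:

```python
# Hypothetical global settings store, standing in for dspy.settings.
_GLOBAL_SETTINGS = {"default_num_retries": 3, "retry_strategy": None}

class FakeLM:
    """Stand-in for dspy.LM illustrating the fallback rule."""

    def __init__(self, model, num_retries=None):
        self.model = model
        # Per-instance value wins; None means "fall back to the global default".
        if num_retries is None:
            num_retries = _GLOBAL_SETTINGS["default_num_retries"]
        self.num_retries = num_retries

    def dump_state(self):
        return {"model": self.model, "num_retries": self.num_retries}

print(FakeLM("openai/gpt-4o-mini").dump_state()["num_retries"])      # 3 (global)
print(FakeLM("openai/gpt-4o-mini", num_retries=1).num_retries)       # 1 (override)
```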

Example

import dspy

dspy.configure(default_num_retries=4, retry_strategy="exponential_backoff_retry")

lm_default = dspy.LM(model="openai/gpt-4o-mini", cache=False, num_retries=None)
print(lm_default.dump_state()["num_retries"])  # 4

with dspy.context(default_num_retries=6):
    lm_scoped = dspy.LM(model="openai/gpt-4o-mini", cache=False, num_retries=None)
    print(lm_scoped.dump_state()["num_retries"])  # 6

lm_override = dspy.LM(model="openai/gpt-4o-mini", cache=False, num_retries=1)
print(lm_override.dump_state()["num_retries"])  # 1

Notes

  • Backwards compatible; a per-instance num_retries takes precedence over the global default

@vladimirrotariu (Author)

@okhat @chenmoneygithub could you please have a look at this?

@okhat (Collaborator)

okhat commented Aug 19, 2025

Thanks @vladimirrotariu !

My take is that retry_strategy is a very low-level concept. I'd remove that from the global settings.

And I'd rename default_num_retries to lm_num_retries.

That said, my hunch is that these settings add complexity to reasoning about the default LM configuration without adding much value: it's easy to configure them per LM, or to create an LM wrapper in your program that sets those defaults.
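The wrapper alternative mentioned here can be as small as binding the defaults once with functools.partial. A self-contained sketch (the `LM` class below is a stub standing in for dspy.LM, and `MyLM` is a hypothetical name):

```python
from functools import partial

class LM:
    """Stub standing in for dspy.LM, just to make the sketch runnable."""

    def __init__(self, model, num_retries=0, cache=True):
        self.model = model
        self.num_retries = num_retries
        self.cache = cache

# One line in your program fixes the retry defaults for every LM you build.
MyLM = partial(LM, num_retries=4, cache=False)

lm = MyLM("openai/gpt-4o-mini")
print(lm.num_retries)  # 4
```

Because the default lives in ordinary program code rather than global settings, its scope and precedence are obvious from the call site.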

Do you think we really need these?
