LiteLLM 429 Errors Bypass Cooldown When Using OpenAI-like Providers (Wrapped as APIConnectionError)

Fix rate limit handling in the LiteLLM proxy, where upstream 429s are incorrectly re-wrapped as connection errors and leak security-sensitive token hashes.

Category
LiteLLM
Error signature
Rate limit error message body leaks full SHA-256 token hash on 429 responses; wrapped as APIConnectionError instead of RateLimitError
Quick fix
Reduce request pressure, check quota or plan limits, and retry with backoff instead of immediate repeated requests.

What this error means

When an upstream OpenAI-compatible provider returns HTTP 429, LiteLLM can re-wrap the error as APIConnectionError instead of RateLimitError. The misclassification has two effects: the proxy's rate-limit cooldown never engages, and the 429 error body can carry the full SHA-256 hash of the provider API key. Based on the imported evidence, treat this as a tool-specific LiteLLM failure pattern rather than a generic API error.
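
A minimal sketch of how the misclassification shows up from calling code. The model name, endpoint, and key below are placeholders; the exception classes are the ones LiteLLM exposes:

    import litellm

    try:
        litellm.completion(
            model="openai/upstream-model",              # hypothetical deployment
            api_base="https://provider.example/v1",     # placeholder endpoint
            api_key="sk-placeholder",
            messages=[{"role": "user", "content": "ping"}],
        )
    except litellm.RateLimitError as exc:
        # Correct mapping: cooldown and retry logic can key off this type.
        print("429 surfaced as", type(exc).__name__)
    except litellm.APIConnectionError as exc:
        # Buggy path from #24366: the 429 arrives as a connection error,
        # so cooldown logic that matches RateLimitError never fires.
        print("429 misclassified as", type(exc).__name__)
        print(str(exc))  # inspect the body for a leaked token hash (#27884)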

Why this happens

Two related bugs are reported on the BerriAI/litellm repo: #24366 (429 errors bypass the rate-limit cooldown because they are treated as APIConnectionError, not RateLimitError, so the router never benches the rate-limited deployment) and #27884 (the 429 error body leaks the full SHA-256 hash of the provider API key). Both affect production LiteLLM proxy deployments.

Common causes

  1. Request volume exceeds the upstream provider's quota, so the provider returns HTTP 429.
  2. The deployment uses an OpenAI-compatible provider path, and LiteLLM's exception mapping re-wraps the 429 as APIConnectionError instead of RateLimitError (#24366).
  3. The 429 error body carries the full SHA-256 hash of the provider API key (#27884).

Quick fixes

  1. Confirm the error signature matches: an upstream 429 surfaced as APIConnectionError, with the error body containing a full SHA-256 token hash.
  2. Check the LiteLLM account, local tool state, and provider configuration involved in the failing workflow.
  3. Reduce request pressure, check quota or plan limits, and retry with backoff instead of immediate repeated requests (see the sketch after this list).
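
Since the buggy mapping surfaces the 429 as APIConnectionError, a retry condition that matches only RateLimitError will miss it. A minimal backoff sketch under that assumption; matching "429" in the message is a heuristic, not documented LiteLLM behavior:

    import time
    import litellm

    def is_rate_limited(exc: Exception) -> bool:
        """Match real RateLimitErrors and 429s mis-wrapped per #24366."""
        return isinstance(exc, litellm.RateLimitError) or (
            isinstance(exc, litellm.APIConnectionError) and "429" in str(exc)
        )

    def completion_with_backoff(max_attempts: int = 5, **kwargs):
        for attempt in range(max_attempts):
            try:
                return litellm.completion(**kwargs)
            except Exception as exc:
                if not is_rate_limited(exc) or attempt == max_attempts - 1:
                    raise
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, 8s between attempts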

Platform/tool-specific checks

  1. Confirm the failing deployment goes through an OpenAI-compatible provider path (an openai/ prefix or a custom api_base); the misclassification is reported for that path.
  2. Check the running LiteLLM version against the status of #24366 and #27884, and upgrade if a fixed release is available.
  3. Review the router's cooldown settings (allowed_fails, cooldown_time) and per-deployment rpm limits, as sketched below.
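
A hedged configuration sketch showing where those router settings live. The alias, endpoint, and key are placeholders; tightening limits reduces upstream 429s but does not itself fix the misclassification:

    from litellm import Router

    router = Router(
        model_list=[
            {
                "model_name": "primary-chat",  # alias callers request
                "litellm_params": {
                    "model": "openai/upstream-model",       # OpenAI-compatible upstream
                    "api_base": "https://provider.example/v1",
                    "api_key": "sk-placeholder",
                    "rpm": 60,  # client-side ceiling kept below the provider quota
                },
            }
        ],
        allowed_fails=3,   # failures tolerated before a deployment is benched
        cooldown_time=30,  # seconds a benched deployment sits out
    )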

Step-by-step troubleshooting

  1. Capture the exact error message and the command, editor action, or request that triggered it.
  2. Check whether the failure is account/auth, quota/rate, model/provider, local runtime, or deployment configuration (see the triage sketch after this list).
  3. Review the source evidence below and compare it with your environment.
  4. Apply one change at a time and rerun the smallest failing action.
  5. Keep the working fix documented for the team or deployment environment.
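
A small triage sketch for steps 1 and 2. It reads the llm_provider and model attributes defensively with getattr, since what an exception object carries can vary by LiteLLM version:

    def triage(exc: Exception) -> dict:
        """Summarize a failure for capture (step 1) and classification (step 2)."""
        return {
            "type": type(exc).__name__,           # e.g. APIConnectionError
            "looks_like_429": "429" in str(exc),  # quota/rate signal despite the wrapper
            "provider": getattr(exc, "llm_provider", None),
            "model": getattr(exc, "model", None),
            "message": str(exc)[:200],            # truncate; scrub hashes before logging
        }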

How to prevent it

  1. Keep client-side rpm/tpm limits below provider quotas so upstream 429s stay rare.
  2. Alert on spikes of APIConnectionError that correlate with upstream 429s; until the mapping bug is fixed, cooldown will not bench the affected deployment automatically.
  3. Scrub SHA-256-shaped token hashes from error bodies before they reach logs or end users, as sketched below.
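
A minimal scrubbing sketch for the leak in #27884, assuming the hash appears as a 64-character hex digest in the error text; the regex and redaction marker are illustrative:

    import re

    SHA256_HEX = re.compile(r"\b[0-9a-fA-F]{64}\b")

    def scrub_token_hashes(text: str) -> str:
        """Replace SHA-256-shaped hex runs with a redaction marker."""
        return SHA256_HEX.sub("[REDACTED-SHA256]", text)

    # Example: clean an error body before it reaches logs or a client.
    body = "429 rate limit hit for key hash " + "a" * 64
    print(scrub_token_hashes(body))  # -> ... key hash [REDACTED-SHA256]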

Sources checked

Evidence note: BerriAI/litellm issues #24366 (429s bypass cooldown as APIConnectionError) and #27884 (429 body leaks the SHA-256 provider key hash), both reported against production LiteLLM proxy deployments.

FAQ

What should I check first?

Start with the exact error text (the 429 body and its APIConnectionError wrapper) and the smallest request that reproduces it.

Can I ignore this error?

No. The bypassed cooldown keeps traffic flowing to a rate-limited deployment, and the leaked token hash is security-sensitive; treat it as a failed LiteLLM workflow until the root cause is understood.

Is this guaranteed to have one fix?

No. The imported evidence supports the troubleshooting path above, but tool behavior can vary by account, plan, version, provider, and local configuration.

How do I know the fix worked?

Rerun the same command, editor action, or request. The fix is working when upstream 429s surface as RateLimitError (so cooldown engages) and the error body no longer contains a token hash.