Fix the CerebrasException missing-credentials error in CI pipelines when using LiteLLM to route to the Cerebras provider.
CerebrasException - Missing credentials. Please pass an api_key, workload_identity, admin_api_key, or set CEREBRAS_API_KEY (LiteLLM, updated May 14, 2026)
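A common CI fix is to export CEREBRAS_API_KEY in the pipeline environment before any LiteLLM call runs. The helper below is a minimal sketch, not LiteLLM code: it fails fast with a readable message instead of surfacing the provider exception mid-pipeline. The function name and error wording are illustrative.

```python
import os

def require_cerebras_key() -> str:
    """Fail fast before LiteLLM raises its missing-credentials error mid-run.

    In CI, inject the key via the runner's secret store so it arrives as a
    masked environment variable rather than living in the repo.
    """
    key = os.environ.get("CEREBRAS_API_KEY")
    if not key:
        raise RuntimeError(
            "CEREBRAS_API_KEY is not set; export it (or pass api_key "
            "explicitly) before LiteLLM routes to Cerebras"
        )
    return key
```

LiteLLM's completion call also accepts an explicit api_key argument, which sidesteps environment-inheritance issues in containerized runners.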
Fix the LiteLLM Proxy /metrics endpoint returning an unauthorized error after upgrading to version 1.84.0 behind a reverse proxy.
LiteLLM Proxy /metrics endpoint returns unauthorized error after upgrade to 1.84.0 (LiteLLM, updated May 14, 2026)
Fix the LiteLLM proxy 400 validation error when Claude Code sends multi-turn conversations.
177 validation errors: Input should be a valid string (LiteLLM, updated May 14, 2026)
Fix LiteLLM budget enforcement blocking the model discovery endpoints.
LiteLLM HTTP 429 Budget Exceeded on GET /v1/models (LiteLLM, updated May 14, 2026)
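One workaround pattern is to exempt read-only discovery routes from budget enforcement in any gateway or middleware layer you control in front of the proxy. The path set and function below are hypothetical, not LiteLLM internals; they only sketch the routing decision.

```python
# Read-only discovery routes that list models and should never be
# blocked by a spend limit (they consume no tokens).
DISCOVERY_PATHS = {"/v1/models", "/model/info"}

def should_enforce_budget(method: str, path: str) -> bool:
    """Return True only for routes where spend enforcement makes sense."""
    if method.upper() == "GET" and path in DISCOVERY_PATHS:
        return False
    return True
```

The design choice here is to key the exemption on both method and path, so a hypothetical POST to the same path would still be subject to enforcement.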
Fix additional_drop_params not taking effect on the /v1/messages endpoint for Bedrock.
LiteLLM additional_drop_params ignored: context_management field forwarded to Bedrock causing 400 (LiteLLM, updated May 14, 2026)
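The intent of additional_drop_params is to strip provider-rejected fields before the payload reaches Bedrock. Below is a pure-Python sketch of that behavior, not LiteLLM's actual implementation, usable as a client-side workaround until the /v1/messages path honors the setting:

```python
def drop_unsupported_params(payload: dict, drop: list[str]) -> dict:
    """Remove top-level fields (e.g. context_management) that the target
    provider rejects with a 400 before sending the request."""
    return {k: v for k, v in payload.items() if k not in drop}
```

Calling this on the request body with ["context_management"] mirrors what the gateway setting is documented to do for other endpoints.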
Fix LiteLLM constructing the wrong Vertex AI base URL for us/eu multi-region endpoints.
LiteLLM Vertex AI: wrong base URL for us/eu multi-region endpoints (LiteLLM, updated May 14, 2026)
Fix the LiteLLM GitHub Copilot authentication loop that occurs when the access token becomes stale.
LiteLLM GitHub Copilot authenticator: stale access token causes unrecoverable auth loop (LiteLLM, updated May 14, 2026)
Fix the LiteLLM rate limit response leaking the API key hash in error messages.
LiteLLM rate limit 429 response leaks full SHA-256 token hash in error.message (LiteLLM, updated May 14, 2026)
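Until an upstream fix lands, operators can scrub the digest at an edge layer they control. This is a hedged sketch, not LiteLLM code: it truncates any 64-character lowercase hex digest in an error body to an 8-character prefix, which is usually enough for log correlation. The regex and prefix length are illustrative choices.

```python
import re

# Matches a full SHA-256 hex digest; the first 8 chars are captured
# so a short correlation prefix can survive redaction.
SHA256_HEX = re.compile(r"\b([0-9a-f]{8})[0-9a-f]{56}\b")

def redact_token_hash(message: str) -> str:
    """Replace a full token hash in an error message with a short prefix."""
    return SHA256_HEX.sub(r"\1...", message)
```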
Fix LiteLLM breaking DeepSeek V4 Pro multi-turn conversations by stripping reasoning_content.
LiteLLM BadRequestError: reasoning_content must be passed back to the API (DeepSeek V4 Pro multi-turn) (LiteLLM, updated May 14, 2026)
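A client-side mitigation is to re-attach reasoning_content when replaying assistant turns, instead of relying on the gateway to preserve it. The field handling below is inferred from the error message; the function is an illustrative sketch, not LiteLLM or DeepSeek code.

```python
def replay_message(msg: dict) -> dict:
    """Rebuild one history message, keeping reasoning_content on
    assistant turns so the provider does not reject the follow-up request."""
    keep = {"role": msg["role"], "content": msg["content"]}
    if msg.get("role") == "assistant" and msg.get("reasoning_content"):
        keep["reasoning_content"] = msg["reasoning_content"]
    return keep
```

Applying this to every message before resending the conversation keeps the assistant turns intact even if an intermediate layer copied only role and content.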
Fix structured output failing with Anthropic models on the Bedrock Converse API.
LiteLLM Structured Output fails for Anthropic models on bedrock/converse: Extra inputs are not permitted (LiteLLM, updated May 14, 2026)
Fix the false BudgetExceededError caused by team key spend being attributed to personal user spend.
LiteLLM BudgetExceededError: Current cost exceeds Max budget due to team key spend polluting personal spend (LiteLLM, updated May 14, 2026)
Fix LiteLLM budget tracking double-counting the previous month's usage in the current month.
Duplicate Usage Aggregation Across Billing Cycles — April usage data double-counted in May budget (LiteLLM, updated May 14, 2026)
Fix LiteLLM not tracking cost and token usage for Vertex AI batch jobs.
Vertex AI batch jobs always record spend=0, prompt_tokens=0, completion_tokens=0 (LiteLLM, updated May 14, 2026)
Fix the Redis user_api_key_cache deserialization error for team-scoped virtual keys.
CacheCodec.deserialize: validation failed for LiteLLM_UserTable (1 validation error: user_id Field required) (LiteLLM, updated May 14, 2026)
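The failure mode is strict validation of a cached LiteLLM_UserTable record whose user_id is absent for team-scoped keys. The sketch below shows the tolerant-deserialization idea with a stdlib dataclass; the real project validates with Pydantic models, so the class and function names here only mirror the error message and are not LiteLLM code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CachedUserRecord:
    # user_id is optional here: team-scoped virtual keys can cache a
    # record that carries only team identity, no individual user.
    user_id: Optional[str] = None
    team_id: Optional[str] = None

def deserialize_user(raw: dict) -> CachedUserRecord:
    """Tolerate a missing user_id instead of failing the whole cache read."""
    return CachedUserRecord(user_id=raw.get("user_id"), team_id=raw.get("team_id"))
```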
Fix the asyncio.CancelledError raised when listing tools for an MCP server using HTTP transport with OAuth2.
LiteLLM asyncio.CancelledError listing tools for MCP server with HTTP + OAuth2 (LiteLLM, updated May 13, 2026)
Fix LiteLLM returning an empty tools list when the upstream MCP server returns 401.
LiteLLM returns 200 {tools:[]} for 401 upstream OAuth errors
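The desired behavior is to surface the upstream 401 instead of masking it as an empty success. A minimal sketch of that propagation follows; the exception class and function signature are hypothetical, not part of LiteLLM or the MCP SDK.

```python
class UpstreamAuthError(Exception):
    """Raised when the MCP server rejects the gateway's credentials."""

def list_tools(upstream_status: int, upstream_tools: list) -> list:
    """Propagate upstream auth failures instead of returning 200 {tools: []}.

    An empty list is only a valid answer when the upstream call succeeded.
    """
    if upstream_status == 401:
        raise UpstreamAuthError("MCP server rejected credentials (HTTP 401)")
    return upstream_tools
```

With this shape, a caller can distinguish "server has no tools" from "server refused to answer", which the 200-with-empty-list response conflates.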