LiteLLM — Updated May 13, 2026
Fix LiteLLM proxy 401 authentication error when virtual API key expires or budget is exhausted
litellm.AuthenticationError: 401 — Proxy virtual key expired or budget exhausted
Fix LiteLLM performance degradation after upgrading to v1.81.x
Significant performance regression after upgrading from 1.80.5 to 1.81.x (UI + API slowness)
Fix LiteLLM proxy memory leak causing high RAM usage over extended operation
Heavy RAM usage over time
Fix LiteLLM bulk-invite-generated API keys that lack the sk- prefix and are rejected
Bulk invite API keys missing sk- prefix, rejected on any call
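Affected keys are easy to catch before they ever hit the proxy. A minimal client-side sketch (the helper name is illustrative, not a LiteLLM API) that rejects tokens minted without the expected sk- prefix:

```python
def validate_virtual_key(key: str) -> str:
    """LiteLLM proxy virtual keys are expected to carry the 'sk-' prefix;
    keys minted without it (as in the bulk-invite bug above) fail on every
    call. Checking up front gives a clearer error than a proxy-side 401."""
    if not key.startswith("sk-"):
        raise ValueError(f"virtual key missing 'sk-' prefix: {key[:6]}...")
    return key
```

A key that fails this check should be regenerated rather than patched by hand, since the proxy stores a hash of the original token.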
Fix LiteLLM llm_requests_hanging alert spam when requests are not actually hanging
llm_requests_hanging alert fires continuously for requests below 600s threshold
Fix LiteLLM proxy where deleted models remain available on some workers after deletion via /model/delete API
Deleted models persist in other workers' local cache when running LiteLLM with --num_workers > 1
Fix LiteLLM guardrail evaluation errors when using self-hosted vLLM models like Gemma
litellm.BadRequestError when using llm_as_a_judge guardrail with self-hosted vllm model
Fix LiteLLM MCP gateway returning empty tools instead of propagating upstream 401 authentication errors
LiteLLM returns 200 {"tools":[]} instead of upstream 401 for token-forwarding MCP servers
Fix Claude Code hanging on multi-turn conversations when using LiteLLM as API gateway with 178 validation errors
litellm.BadRequestError: OpenAIException - 178 validation errors: Input should be a valid string
Fix Fireworks AI tool call failures when JSON Schema properties contain default:null and title fields that drop_params doesn't sanitize
Fireworks AI rejects tool schemas with 'default': null and 'title' in JSON Schema properties
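Until the gateway sanitizes these fields itself, the tool schemas can be cleaned before the request is sent. A minimal sketch (the function name is illustrative; the only grounded assumption is that Fireworks rejects 'title' keys and null 'default' values, per the issue title above):

```python
def sanitize_tool_schema(schema):
    """Recursively drop 'title' keys and null 'default' values from a JSON
    Schema dict, leaving everything else intact. Apply to each tool's
    'parameters' schema before passing tools to the provider."""
    if isinstance(schema, dict):
        return {
            k: sanitize_tool_schema(v)
            for k, v in schema.items()
            if k != "title" and not (k == "default" and v is None)
        }
    if isinstance(schema, list):
        return [sanitize_tool_schema(v) for v in schema]
    return schema
```

Note that non-null defaults are preserved; only the specific combination the provider rejects is stripped.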
Fix missing Retry-After header in LiteLLM RouterRateLimitError so downstream clients can properly handle rate limit cooldown
RouterRateLimitError: No deployments available for selected model, Try again in X seconds (no Retry-After header)
Fix structured output JSON schema errors when using Anthropic models via LiteLLM on AWS Bedrock Converse API
BedrockException - The model returned the following errors: output_config.format: Extra inputs are not permitted
Fix Claude Code getting stuck/hanging during multi-turn conversations when routed through LiteLLM gateway
API Error: 400 litellm bad request — Claude Code multi-turn conversation hangs via LiteLLM gateway
Fix LiteLLM tool registry not populating for /v1/messages anthropic_messages endpoint, causing tool/function calling failures
Tool registry (LiteLLM_ToolTable / LiteLLM_SpendLogToolIndex) not populated for /v1/messages (anthropic_messages) path
Fix LiteLLM proxy InternalServerError when extra headers contain float values
InternalServerError: Hosted_vllmException — Header value must be str or bytes, not float
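The error text indicates the underlying HTTP client requires header values to be str or bytes, so a float (say, a numeric timeout or version) in extra_headers crashes the request. A minimal client-side workaround sketch (helper name is illustrative) that coerces values before they reach the proxy:

```python
def coerce_header_values(headers):
    """Stringify any header value that is not already str or bytes, so a
    float like 1.5 becomes '1.5' instead of triggering
    'Header value must be str or bytes, not float' downstream."""
    return {
        k: v if isinstance(v, (str, bytes)) else str(v)
        for k, v in headers.items()
    }
```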
Fix LiteLLM confusing rate limit error message showing 'No deployments available' instead of clear 429 rate limit
RateLimitError: Error code: 429 - {'error': {'message': 'No deployments available for selected model.', 'type': 'None', 'param': 'None', 'code': '429'}}
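Because the router's cooldown 429 reads like a capacity problem, clients may want to distinguish it from a genuine upstream rate limit. A minimal sketch that separates the two cases by message text (the helper name and the return labels are illustrative; the message string is taken from the error body above):

```python
def classify_429(error_body):
    """Distinguish LiteLLM router cooldowns, surfaced as a generic 429 with
    'No deployments available for selected model.', from a true upstream
    provider rate limit, so each case can get its own retry policy."""
    message = error_body.get("error", {}).get("message", "")
    if "No deployments available" in message:
        return "router_cooldown"    # all deployments cooling down; back off
    return "provider_rate_limit"    # genuine upstream 429
```

Matching on message text is brittle across versions, which is exactly why the underlying issue asks for a clearer error code in the first place.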