LiteLLM troubleshooting index · Updated May 12, 2026
- BudgetExceededError: spend exceeds max_budget for virtual key
  Fix a LiteLLM virtual key that incorrectly rejects requests with BudgetExceededError even though /key/info shows spend below max_budget.
- x-ratelimit-remaining / x-ratelimit-limit headers missing on streaming responses
  Fix missing rate limit headers on LiteLLM streaming API responses so clients can throttle requests properly.
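Until the proxy emits these headers on streamed responses, a client can at least read them defensively instead of crashing on a missing key. A minimal sketch (the helper name and fallback behaviour are illustrative, not part of LiteLLM):

```python
def remaining_requests(headers, default=None):
    """Read x-ratelimit-remaining from a response-header mapping.

    Lookup is case-insensitive; returns `default` when the proxy
    omitted the header, as reported for streaming responses.
    """
    lowered = {k.lower(): v for k, v in headers.items()}
    value = lowered.get("x-ratelimit-remaining")
    return int(value) if value is not None else default
```

A throttling loop can then treat `None` as "unknown budget" and back off conservatively rather than assuming the header is always present.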
- Deployment-level TPM limit enforced per-pod, not cross-pod (effective limit = tpm_limit × N_replicas)
  Fix LiteLLM's deployment-level TPM rate limit being enforced per pod instead of across all replicas in multi-replica Kubernetes deployments.
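Because each of the N replicas enforces `tpm_limit` independently, the cluster-wide effective limit is `tpm_limit × N`. Until enforcement is cross-pod, one workaround is to divide the intended global limit by the replica count when writing the config. A sketch of that arithmetic (helper name is hypothetical):

```python
def per_pod_tpm(global_tpm: int, replicas: int) -> int:
    """Split a desired cluster-wide TPM limit across replicas.

    With per-pod enforcement, each replica independently allows
    tpm_limit tokens/min, so the aggregate is tpm_limit * replicas.
    Floor division keeps the aggregate at or below the intended
    global limit.
    """
    if replicas < 1:
        raise ValueError("replicas must be >= 1")
    return global_tpm // replicas
```

Note the trade-off: load is rarely spread perfectly evenly, so a hot pod may throttle before the global budget is actually spent.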
- TypeError: 'async for' requires an object with __aiter__ method, got NoneType
  Fix a LiteLLM crash with this TypeError when streaming from models that produce reasoning/thinking output.
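The error means a provider wrapper handed `None` (or a non-iterable) to an `async for` loop. A defensive consumer can fail with a clearer message instead; a minimal sketch, with hypothetical function names and no LiteLLM internals:

```python
import asyncio

async def consume(stream):
    """Iterate a streaming response defensively.

    Guards against the wrapper returning None instead of an async
    iterator, which otherwise surfaces as:
    TypeError: 'async for' requires an object with __aiter__ method
    """
    if stream is None or not hasattr(stream, "__aiter__"):
        raise RuntimeError(f"expected an async iterator, got {stream!r}")
    chunks = []
    async for chunk in stream:
        chunks.append(chunk)
    return chunks

async def fake_stream():
    # Stand-in for a real streaming response (async generator).
    for chunk in ("a", "b"):
        yield chunk
```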
- BudgetExceededError: stale spend reported while /key/info shows spend below max_budget
  Fix BudgetExceededError rejecting requests when actual spend is below the budget limit.
- BudgetExceededError
  Fix BudgetExceededError falsely rejecting requests when key spend is below max_budget.
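The BudgetExceededError entries above all hinge on comparing a key's recorded spend against its max_budget, as reported by the proxy's /key/info endpoint. A minimal sketch of that comparison over the returned JSON (the payload shape is an assumption and may vary across LiteLLM versions):

```python
def budget_exceeded(key_info: dict) -> bool:
    """Decide whether a virtual key should be over budget, given a
    /key/info JSON payload.

    Assumes `spend` and `max_budget` live under an `info` object
    (falls back to the top level). A null max_budget means no budget
    is configured, so the key is never over budget.
    """
    info = key_info.get("info", key_info)
    spend = info.get("spend") or 0.0
    max_budget = info.get("max_budget")
    if max_budget is None:
        return False
    return spend >= max_budget
```

If this check says the key is under budget while the proxy still raises BudgetExceededError, the proxy is acting on stale or cached spend, which is what these reports describe.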
- [Security]: litellm PyPI package (v1.82.7 + v1.82.8) compromised
  Check whether an installed litellm package is compromised and remediate a compromised installation.
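A first triage step is comparing the installed version against the versions named in the advisory above. A sketch (the version set comes from the entry's title; anything else here is illustrative):

```python
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}  # versions named in the advisory

def is_compromised(installed: str) -> bool:
    """Check an installed litellm version string, e.g. from
    importlib.metadata.version("litellm") or `pip show litellm`,
    against the advisory's version list. Tolerates a leading 'v'.
    """
    return installed.strip().lstrip("v") in COMPROMISED_VERSIONS
```

A clean version match does not prove the environment is safe; follow the advisory's full remediation steps regardless.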
- websearch_interception silently truncates streaming response on /v1/messages (follow-up call always uses stream=False)
  Fix websearch_interception truncating Claude Code streaming responses with no error logs.
- Team 2 can do RUD on the key team-1:test without any authorization
  Fix a LiteLLM proxy cross-team authorization bypass on the memory CRUD endpoints.
- LiteLLM /v1/messages passes context_management to Bedrock without the compact-2026-01-12 beta header
  Fix the /v1/messages endpoint failing with Bedrock InvokeModel due to the missing compact beta header.
- Codex CLI does not respond correctly when using the gpt-5-codex model with tool calls via the LiteLLM proxy
  Fix the LiteLLM proxy failing with the OpenAI Codex CLI when gpt-5-codex makes tool/function calls.
- LiteLLM Proxy fails to start: Prisma NotConnectedError
  Fix a proxy server startup failure caused by a Prisma NotConnectedError database connection issue.
- BedrockException - {"message":"This model doesn't support tool use."}
  Fix LiteLLM's Bedrock Converse integration injecting tools into requests for models that don't support tool use.
- No module named 'proxy_server'
  Fix the "No module named 'proxy_server'" error after updating LiteLLM to 1.72.x.
- TypeError in check_view_exists() during LiteLLM Proxy startup
  Fix a TypeError raised in check_view_exists() when starting the LiteLLM proxy server.
- /v1/responses/compact fails during router_settings.fallbacks ("Unknown parameter: metadata")
  Fix a fallback failure when the /v1/responses/compact endpoint returns 400 Unknown parameter: metadata.
- Presidio guardrail: PII masked on input but never unmasked in responses; 400 error with tools
  Fix the Presidio PII guardrail not working with the Anthropic native API path.
- OpenAI RateLimitError with large pydantic error payload (>12 MB spend log entry) (updated May 11, 2026)
  Fix LiteLLM spend logs growing too large from verbose OpenAI RateLimitError payloads.
- litellm.AuthenticationError: 401 (updated May 11, 2026)
  Fix LiteLLM returning a 401 Authentication Error when connecting to Azure OpenAI.
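Azure 401s through LiteLLM frequently trace back to a mismatched deployment name, api_base, or api_version in the proxy config rather than a bad key. A hedged example of a `model_list` entry for Azure OpenAI (resource URL, deployment name, and api_version are placeholders to replace with your own values):

```yaml
model_list:
  - model_name: gpt-4o                      # alias that clients request
    litellm_params:
      model: azure/my-gpt4o-deployment      # azure/<deployment name>, not the base model name
      api_base: https://my-resource.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY     # resolved from the environment at startup
      api_version: "2024-02-15-preview"     # must be one your resource actually supports
```

If any of the three values does not match the Azure resource exactly, the upstream rejects the call and LiteLLM surfaces it as `litellm.AuthenticationError: 401`.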
- LiteLLM BadRequestError: tools array exceeds Azure limit of 128
  Fix a BadRequestError when an MCP server's tools array exceeds Azure OpenAI's 128-tool limit.
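One client-side mitigation is validating or trimming the tools array before the request reaches Azure. A minimal sketch (the 128 limit comes from the error above; the helper itself is illustrative):

```python
AZURE_MAX_TOOLS = 128  # per-request tool limit cited in the error above

def clamp_tools(tools: list, limit: int = AZURE_MAX_TOOLS) -> list:
    """Trim an MCP-derived tools array to the per-request tool limit,
    keeping the first `limit` entries.

    Dropping tools is lossy: filtering by relevance upstream (e.g.
    only exposing the MCP tools a given agent needs) is the better fix.
    """
    if len(tools) <= limit:
        return tools
    return tools[:limit]
```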