Topic hub pagination

LiteLLM errors - page 6

Continue browsing this topic cluster with SEO-safe static pagination.

LiteLLM · Updated May 13, 2026

LiteLLM Fireworks AI Tool Schema Rejection default null drop_params Error

Fix LiteLLM Fireworks AI tool call failures when a JSON Schema contains "default": null in nested properties. A hedged workaround sketch follows this entry.

Fireworks AI rejects tool schemas with "default": null — drop_params doesn't sanitize nested schemas
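
drop_params removes unsupported top-level request parameters, but the report above concerns "default": null buried inside nested schema properties, which that pass does not touch. A client-side stopgap is to sanitize tool schemas before sending them. A minimal sketch, assuming an OpenAI-style tools list; the Fireworks model path is illustrative:

```python
import litellm

def strip_null_defaults(schema):
    """Recursively drop "default": None entries from a JSON Schema structure."""
    if isinstance(schema, dict):
        return {
            k: strip_null_defaults(v)
            for k, v in schema.items()
            if not (k == "default" and v is None)
        }
    if isinstance(schema, list):
        return [strip_null_defaults(item) for item in schema]
    return schema

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {
                # Nested "default": null is what Fireworks AI reportedly rejects.
                "unit": {"type": "string", "default": None},
            },
        },
    },
}]

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p1-70b-instruct",  # illustrative
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[strip_null_defaults(t) for t in tools],
)
```

Because the helper walks arbitrary dicts and lists, it also reaches schemas nested under keywords like items or anyOf, which is exactly where the slug says drop_params falls short.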
LiteLLM · Updated May 13, 2026

LiteLLM Blocks Free Models When User Budget Is Exceeded

Fix LiteLLM blocking self-hosted free models when a user exceeds their budget for paid models. A workaround sketch follows this entry.

ExceededBudget: API blocks free (no cost) models when budget is exceeded
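
The report is that the budget check rejects every model once a user's budget is spent, including deployments that cost nothing. One way to make a self-hosted model's zero cost explicit to LiteLLM's cost tracking is litellm.register_model; a sketch under the assumption of an OpenAI-compatible self-hosted endpoint (the model name, key, and URL are hypothetical):

```python
import litellm

# Register a zero-cost pricing entry for a self-hosted model so LiteLLM's
# cost tracking records $0 for it (name and values are hypothetical).
litellm.register_model({
    "openai/my-local-model": {
        "input_cost_per_token": 0.0,
        "output_cost_per_token": 0.0,
        "litellm_provider": "openai",
        "mode": "chat",
    }
})

response = litellm.completion(
    model="openai/my-local-model",
    api_base="http://localhost:8000/v1",  # hypothetical self-hosted endpoint
    api_key="sk-placeholder",
    messages=[{"role": "user", "content": "ping"}],
)

# Expected $0; the linked issue reports the proxy's ExceededBudget check
# still blocks such models once the user's paid-model budget is exhausted.
print(litellm.completion_cost(completion_response=response))
```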
LiteLLM · Updated May 13, 2026

LiteLLM OpenAI Inference Broken After v1.81.x Upgrade

Fix LiteLLM OpenAI completions API failures after upgrading to v1.81.x.

OpenAI inference broken in SDK on v1.81.x — completions API failure with GPT models
LiteLLM · Updated May 13, 2026

LiteLLM Embedding Failover Missing num_retries Parameter

Fix the LiteLLM embedding router not retrying or failing over when an embedding host is unreachable. A workaround sketch follows this entry.

aembedding() missing num_retries kwarg — no failover for embedding model groups
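
Since aembedding() reportedly does not accept a per-call num_retries, one workaround is to set retries at the Router level so embedding calls inherit them and fail over within the model group. A sketch with two hypothetical deployments behind one group name:

```python
import asyncio
from litellm import Router

# Two deployments behind one model-group name; hosts and key are hypothetical.
router = Router(
    model_list=[
        {
            "model_name": "embeddings",
            "litellm_params": {
                "model": "openai/text-embedding-3-small",
                "api_base": "http://host-a:8000/v1",
                "api_key": "sk-placeholder",
            },
        },
        {
            "model_name": "embeddings",
            "litellm_params": {
                "model": "openai/text-embedding-3-small",
                "api_base": "http://host-b:8000/v1",
                "api_key": "sk-placeholder",
            },
        },
    ],
    num_retries=3,  # router-level retries, since the per-call kwarg is missing
)

async def main():
    # If host-a is unreachable, retries should fail over to host-b.
    resp = await router.aembedding(model="embeddings", input=["hello world"])
    print(len(resp.data[0]["embedding"]))

asyncio.run(main())
```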
LiteLLM · Updated May 12, 2026

LiteLLM Proxy Drops Rate Limit Headers on Streaming Responses

Fix missing x-ratelimit headers on LiteLLM proxy streaming responses. A verification sketch follows this entry.

x-ratelimit-* headers dropped on streaming and plain-dict responses (v3 parallel_request_limiter)
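
To check the behavior above against your own deployment, compare response headers between a non-streaming and a streaming call. A sketch assuming a LiteLLM proxy on its default port 4000, a configured model alias gpt-4o, and a placeholder virtual key (all assumptions):

```python
import httpx

BASE = "http://localhost:4000"                 # assumed local LiteLLM proxy
HEADERS = {"Authorization": "Bearer sk-1234"}  # placeholder virtual key
BODY = {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]}

def ratelimit_headers(headers: httpx.Headers) -> dict:
    """Collect only the x-ratelimit-* response headers."""
    return {k: v for k, v in headers.items() if k.lower().startswith("x-ratelimit")}

# Non-streaming call: x-ratelimit-* headers are expected here.
r = httpx.post(f"{BASE}/v1/chat/completions", json=BODY, headers=HEADERS)
print("non-streaming:", ratelimit_headers(r.headers))

# Streaming call: the report above says these headers are dropped.
with httpx.stream(
    "POST", f"{BASE}/v1/chat/completions",
    json={**BODY, "stream": True}, headers=HEADERS,
) as resp:
    print("streaming:", ratelimit_headers(resp.headers))
```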
LiteLLM · Updated May 12, 2026

LiteLLM 400 Error with Codex CLI and Domestic LLM Models

Fix the 400 error when using LiteLLM with Codex CLI and Chinese/domestic LLM models. A mitigation sketch follows this entry.

400 Bad Request — unsupported parameter passed to model API
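
When a provider returns 400 for an OpenAI-only parameter, LiteLLM's drop_params setting strips parameters the mapped provider is known not to support before the request goes out; for OpenAI-compatible endpoints that still reject specific parameters, additional_drop_params names them explicitly. A sketch with a hypothetical OpenAI-compatible endpoint for a domestic model; the alias, URL, and offending parameter are illustrative:

```python
import litellm

response = litellm.completion(
    model="openai/qwen-max",               # hypothetical OpenAI-compatible alias
    api_base="https://example.com/v1",     # hypothetical domestic-provider endpoint
    api_key="sk-placeholder",
    messages=[{"role": "user", "content": "hello"}],
    # Drop parameters LiteLLM knows the mapped provider does not support:
    drop_params=True,
    # For OpenAI-compatible endpoints that reject specific OpenAI params,
    # list them by name (list contents are illustrative):
    additional_drop_params=["parallel_tool_calls"],
)
print(response.choices[0].message.content)
```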