LiteLLM Bedrock serviceTier Parameter Error - Malformed Input Request
Fix LiteLLM Bedrock serviceTier parameter causing a Malformed input request error. Includes evidence for LiteLLM troubleshooting demand.
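The error described here occurs when an OpenAI-style request field that Amazon Bedrock does not accept is forwarded unchanged. A minimal sketch of the usual workaround, stripping the offending field before the request is sent; the parameter names and model ID below are assumptions for illustration, not taken from this page (LiteLLM also exposes `drop_params=True` for the same purpose):

```python
# Hypothetical sketch: strip request fields that Amazon Bedrock rejects
# (e.g. the OpenAI-style "service_tier"/"serviceTier") before forwarding
# a request through LiteLLM. Names here are illustrative assumptions.

UNSUPPORTED_BEDROCK_PARAMS = {"service_tier", "serviceTier"}

def sanitize_bedrock_request(params: dict) -> dict:
    """Return a copy of the request kwargs without fields Bedrock rejects."""
    return {k: v for k, v in params.items() if k not in UNSUPPORTED_BEDROCK_PARAMS}

request = {
    "model": "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    "messages": [{"role": "user", "content": "hello"}],
    "service_tier": "default",  # OpenAI-style field Bedrock does not accept
}
clean = sanitize_bedrock_request(request)
```

Passing the sanitized dict (or setting `drop_params=True` on the LiteLLM call) avoids sending the unsupported field to Bedrock.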
Topic hub pagination: Malformed input request
Continue browsing this topic cluster with SEO-safe static pagination.
Malformed input request
Fix LiteLLM Bedrock serviceTier parameter causing a Malformed input request error.

400 Bad Request - tool_choice named function format
Fix LiteLLM 400 error when using the tool_choice named function format with GPT-5.4/5.5 models.

Provider token counting failed (400): messages.N.content: Field required. Falling back to local tokenizer
Fix LiteLLM /v1/messages/count_tokens returning the wrong token count for Bedrock-backed Anthropic models.

400 litellm.BadRequestError: OpenAIException - chunking_strategy is required for diarization models
Fix LiteLLM 400 error requiring chunking_strategy for the gpt-4o-transcribe-diarize model.

max_budget is ignored after reset
Fix LiteLLM max_budget being ignored after ResetBudgetJob resets key spending to zero.

BudgetExceededError (HTTP 429)
Fix random LiteLLM BudgetExceededError (429) when actual spend is near zero.

model_info cost override (input_cost_per_token/output_cost_per_token) ignored when using litellm_proxy/ prefix
Fix LiteLLM model_info cost_per_token override being ignored when calling an upstream LiteLLM proxy.

BudgetExceededError (HTTP 429) — phantom budget exceeded despite actual spend near $0
Fix LiteLLM proxy randomly returning BudgetExceededError 429 despite zero actual spend after an upgrade.

TypeError: 'async for' requires an object with __aiter__ method, got NoneType when streaming models with reasoning field in delta
Fix LiteLLM TypeError crash when streaming models that return a reasoning field in the delta.

malicious litellm_init.pth in litellm 1.82.8 — credential stealer
Fix or investigate malicious code execution from the litellm package on Python startup.

Virtual key MCP access ignores team access groups — discovery and enforcement failure
Fix LiteLLM virtual key MCP endpoints ignoring team access group restrictions.