🗃️ Quick Start
Quick start CLI, Config, Docker
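For context, a minimal CLI quick start might look like the sketch below; the package extra name and the default port (4000 on recent versions, 8000 on older ones) are assumptions and may differ for your install.

```shell
# install the proxy extras (extra name assumed)
pip install 'litellm[proxy]'

# provider keys are read from environment variables
export OPENAI_API_KEY=sk-...

# start the proxy for a single model; it serves an OpenAI-compatible API
litellm --model gpt-3.5-turbo
```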
🗃️ 🐳 Docker, Deploying LiteLLM Proxy
You can find the Dockerfile to build the LiteLLM proxy here
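A hedged sketch of running the published image with a mounted config; the image name/tag (`ghcr.io/berriai/litellm:main-latest`) and the container path `/app/config.yaml` are assumptions, so check the Dockerfile and registry for your version.

```shell
# run the proxy container with your config mounted (image name and paths assumed)
docker run \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml --port 4000
```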
🗃️ ⚡ Best Practices for Production
Expected Performance in Production
🗃️ Proxy Config.yaml
Set the model list, api_base, api_key, temperature & proxy server settings (master-key) in config.yaml.
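A minimal config.yaml sketch showing those settings; the Azure deployment name and endpoint below are placeholders, and `os.environ/...` is LiteLLM's syntax for reading secrets from environment variables.

```yaml
model_list:
  - model_name: gpt-3.5-turbo                 # alias clients will request
    litellm_params:
      model: azure/my-gpt35-deployment        # placeholder deployment name
      api_base: https://my-endpoint.openai.azure.com/   # placeholder endpoint
      api_key: os.environ/AZURE_API_KEY       # read from an env var
      temperature: 0.2

general_settings:
  master_key: sk-1234                         # proxy admin / master key
```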
📄️ All Endpoints
🗃️ ✨ Enterprise Features - Content Mod
Features here are behind a commercial license in our /enterprise folder. See Code
🗃️ Use with Langchain, OpenAI SDK, LlamaIndex, Curl
Input, Output, Exceptions are mapped to the OpenAI format for all supported models
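Because the proxy is OpenAI-compatible, a plain curl call works the same way for every model in the config; the port and key below are placeholders.

```shell
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "what llm are you"}]
      }'
```

The same request works from the OpenAI SDK, Langchain, or LlamaIndex by pointing the client's base URL at the proxy.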
🗃️ 🔑 Virtual Keys
Track spend and control model access via virtual keys for the proxy
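A sketch of generating a virtual key with the proxy's key-management endpoint, assuming a master key of `sk-1234`; the field names follow LiteLLM's `/key/generate` API.

```shell
curl -X POST http://0.0.0.0:4000/key/generate \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
        "models": ["gpt-3.5-turbo"],
        "duration": "30d",
        "metadata": {"team": "search"}
      }'
```

The response contains a virtual `sk-...` key that can be handed to users; spend is tracked per key.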
🗃️ 💰 Budgets, Rate Limits
Requirements:
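As an illustrative sketch, budgets and rate limits can be attached when a key is generated; the parameter names (`max_budget`, `budget_duration`, `tpm_limit`, `rpm_limit`) follow LiteLLM's key-management API and should be treated as assumptions if your version differs.

```shell
curl -X POST http://0.0.0.0:4000/key/generate \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
        "models": ["gpt-3.5-turbo"],
        "max_budget": 10,
        "budget_duration": "30d",
        "tpm_limit": 1000,
        "rpm_limit": 60
      }'
```

Here `max_budget` is in USD, and the tpm/rpm values limit tokens and requests per minute for the key.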
🗃️ 👥 Team-based Routing + Logging
Routing
🗃️ [BETA] Proxy UI
Create + delete keys through a UI
🗃️ 🚨 Budget Alerting
Alerts when a project will exceed its planned limit
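A hedged config sketch for Slack alerting; the `alerting` key and the `SLACK_WEBHOOK_URL` environment variable follow LiteLLM's alerting docs, but treat the exact names as assumptions for your version.

```yaml
general_settings:
  master_key: sk-1234
  alerting: ["slack"]        # send budget / spend alerts to Slack
```

The Slack webhook itself is supplied via the `SLACK_WEBHOOK_URL` environment variable.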
🗃️ Cost Tracking - Azure
Set the base model for cost tracking on Azure image-gen calls
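For Azure, the deployment name usually doesn't match a known pricing entry, so a `base_model` hint is set under `model_info`; the deployment name below is a placeholder.

```yaml
model_list:
  - model_name: dall-e-3
    litellm_params:
      model: azure/my-dalle-deployment        # placeholder Azure deployment name
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
    model_info:
      base_model: dall-e-3                    # used to look up the correct price
```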
🗃️ [BETA] JWT-based Auth
Use JWTs to authenticate admins / projects to the proxy.
🗃️ 🔥 Load Balancing
2 items
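The core idea is to list several deployments under the same `model_name`, so the proxy's router spreads traffic across them; the endpoints and deployment names below are placeholders, and the `routing_strategy` value is an assumption.

```yaml
model_list:
  - model_name: gpt-3.5-turbo                  # same alias for both deployments
    litellm_params:
      model: azure/gpt-35-eu                   # placeholder
      api_base: https://eu-endpoint.openai.azure.com/
      api_key: os.environ/AZURE_EU_KEY
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-35-us                   # placeholder
      api_base: https://us-endpoint.openai.azure.com/
      api_key: os.environ/AZURE_US_KEY

router_settings:
  routing_strategy: simple-shuffle             # assumed strategy name
```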
🗃️ Model Management
Add new models + get model info without restarting the proxy.
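As a sketch, models can be added at runtime through the management endpoints; the `/model/new` and `/model/info` paths follow LiteLLM's model-management API and should be treated as assumptions if your version differs.

```shell
# add a model without restarting the proxy (endpoint path assumed)
curl -X POST http://0.0.0.0:4000/model/new \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"model_name": "gpt-4", "litellm_params": {"model": "gpt-4"}}'

# list model info for everything the proxy currently serves (path assumed)
curl http://0.0.0.0:4000/model/info -H "Authorization: Bearer sk-1234"
```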
🗃️ Health Checks
Use this to health check all LLMs defined in your config.yaml
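A quick sketch of calling the health endpoint with the master key; the port and key are placeholders.

```shell
# checks every model in config.yaml and reports healthy / unhealthy endpoints
curl http://0.0.0.0:4000/health -H "Authorization: Bearer sk-1234"
```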
🗃️ Debugging
2 levels of debugging are supported.
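A sketch of the two levels; the `--debug` and `--detailed_debug` flag names are assumed to correspond to them and may differ by version.

```shell
# level 1: high-level request / response logs (flag name assumed)
litellm --config config.yaml --debug

# level 2: verbose logs, including raw provider calls (flag name assumed)
litellm --config config.yaml --detailed_debug
```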
🗃️ PII Masking
LiteLLM supports Microsoft Presidio for PII masking.
🗃️ Prompt Injection
LiteLLM supports similarity checking against a pre-generated list of prompt injection attacks, to identify if a request contains an attack.
🗃️ Caching
Cache LLM Responses
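A hedged config sketch for Redis-backed response caching; the `cache` / `cache_params` keys follow LiteLLM's caching docs, and the connection values are placeholders read from environment variables.

```yaml
litellm_settings:
  cache: true                      # enable response caching
  cache_params:
    type: redis                    # in-memory caching is also possible
    host: os.environ/REDIS_HOST
    port: os.environ/REDIS_PORT
    password: os.environ/REDIS_PASSWORD
```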
🗃️ Logging, Alerting
3 items
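As an example, logging callbacks are configured under `litellm_settings`; the callback names below (`langfuse`, `sentry`) are two of the supported integrations and assume the matching API keys are set as environment variables.

```yaml
litellm_settings:
  success_callback: ["langfuse"]   # log successful calls (needs LANGFUSE_* env vars)
  failure_callback: ["sentry"]     # report failed calls (needs SENTRY_DSN)
```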
🗃️ Modify / Reject Incoming Requests
Modify data before making LLM API calls on the proxy
🗃️ Post-Call Rules
Use this to fail a request based on the output of an LLM API call.
🗃️ CLI Arguments
CLI arguments: --host, --port, --num_workers
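Putting those flags together, a typical invocation might look like this; the values are placeholders.

```shell
litellm --config config.yaml \
  --host 0.0.0.0 \
  --port 4000 \
  --num_workers 8
```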