Chainguard Container for litellm

LiteLLM is a unified interface to call 100+ LLMs using the OpenAI format, providing a proxy server for multiple LLM providers.

Chainguard Containers are regularly updated, secure-by-default container images.

Download this Container Image

For those with access, this container image is available on cgr.dev:

docker pull cgr.dev/ORGANIZATION/litellm:latest

Be sure to replace the ORGANIZATION placeholder with the name used for your organization's private repository within the Chainguard Registry.
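If you have not yet authenticated to the Chainguard Registry, chainctl can set up a Docker credential helper for cgr.dev (a sketch; it assumes chainctl is installed and your account has access to the repository):

```shell
# Log in to Chainguard and configure Docker credentials for cgr.dev
chainctl auth login
chainctl auth configure-docker

# Pull the image, substituting your organization's repository name
docker pull cgr.dev/ORGANIZATION/litellm:latest
```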

Compatibility Notes

This image is modeled after the upstream litellm-non_root variant and maintains full compatibility with its configuration, authentication, and deployment patterns. It runs as the nonroot user (UID 65532) and uses the same authentication model (username admin, with the configured master_key as the password).

Tag Differences

LiteLLM's public image uses main-latest for current builds and main-stable for tested releases. The Chainguard litellm:latest image tracks the current non-root stable release.

Getting Started

The litellm image provides a proxy server that translates requests between different LLM providers using a unified OpenAI-compatible API format. You can run it standalone or with external dependencies like PostgreSQL for persistent storage.

Basic Usage

Start a basic LiteLLM proxy server:

docker run -p 4000:4000 cgr.dev/ORGANIZATION/litellm:latest

The server will be available at http://localhost:4000 with the UI accessible at http://localhost:4000/ui.
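To verify the container is up, you can query the proxy's health endpoints (endpoint paths as documented upstream by LiteLLM; this assumes the default port 4000):

```shell
# Liveness probe: responds while the proxy process is running
curl http://localhost:4000/health/liveliness

# Readiness probe: reports whether the proxy is ready to accept traffic
curl http://localhost:4000/health/readiness
```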

Using with Configuration File

Create a configuration file to define your LLM models:

cat > litellm-config.yaml <<EOF
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
      api_key: os.environ/AZURE_API_KEY

  - model_name: gpt-4
    litellm_params:
      model: openai/gpt-4
      api_key: os.environ/OPENAI_API_KEY

general_settings:
  master_key: sk-1234
  database_url: "postgresql://user:password@localhost:5432/litellm"
EOF

Run LiteLLM with your configuration:

docker run -p 4000:4000 \
  -v $(pwd)/litellm-config.yaml:/app/config.yaml \
  -e AZURE_API_KEY=your_azure_key \
  -e OPENAI_API_KEY=your_openai_key \
  cgr.dev/ORGANIZATION/litellm:latest \
  --config /app/config.yaml

Testing the Proxy

Once running, test the proxy with a curl request:

curl -X POST 'http://localhost:4000/v1/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer sk-1234' \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
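You can also confirm which models the proxy exposes, authenticating with the same master key:

```shell
# List the model names available through the proxy
curl http://localhost:4000/v1/models \
  -H 'Authorization: Bearer sk-1234'
```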

Environment Variables

Key environment variables for configuration:

  • DATABASE_URL: PostgreSQL connection string for persistent storage
  • LITELLM_MASTER_KEY: Authentication key for admin access
  • LITELLM_LOG_LEVEL: Logging level (default: INFO)
  • LITELLM_PORT: Port to run the server on (default: 4000)
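These variables can be passed directly to docker run instead of (or alongside) a configuration file. A sketch, with placeholder values you should adjust for your environment:

```shell
# Run the proxy configured entirely through environment variables
docker run -p 8080:8080 \
  -e LITELLM_MASTER_KEY="sk-1234" \
  -e LITELLM_LOG_LEVEL="DEBUG" \
  -e LITELLM_PORT="8080" \
  -e DATABASE_URL="postgresql://litellm:password@db:5432/litellm" \
  cgr.dev/ORGANIZATION/litellm:latest
```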

Configuration

LiteLLM uses YAML configuration files to define models, authentication, and general settings. The configuration supports:

  • Model definitions: Map model names to different LLM providers
  • Authentication: API keys, master keys, and user management
  • Database settings: PostgreSQL for logging and user management
  • UI settings: Enable/disable the web interface

Example production configuration with PostgreSQL:

cat > production-config.yaml <<EOF
model_list:
  - model_name: production-gpt-4
    litellm_params:
      model: azure/gpt-4
      api_base: https://your-instance.openai.azure.com/
      api_version: "2023-05-15"
      api_key: os.environ/AZURE_API_KEY

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
  database_url: os.environ/DATABASE_URL
  ui: true
  ui_host: "0.0.0.0"
  ui_port: 4000
  disable_spend_logs: false

  # Rate limiting
  max_budget: 100.0
  budget_duration: "30d"
EOF

Database Integration

For persistent storage and user management, configure PostgreSQL:

docker run -d --name postgres \
  -e POSTGRES_DB=litellm \
  -e POSTGRES_USER=litellm \
  -e POSTGRES_PASSWORD=password \
  -p 5432:5432 \
  cgr.dev/ORGANIZATION/postgres:latest

docker run -p 4000:4000 \
  -v $(pwd)/production-config.yaml:/app/config.yaml \
  -e DATABASE_URL="postgresql://litellm:password@host.docker.internal:5432/litellm" \
  -e LITELLM_MASTER_KEY="sk-production-key" \
  -e AZURE_API_KEY="your_azure_key" \
  cgr.dev/ORGANIZATION/litellm:latest \
  --config /app/config.yaml
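Note that host.docker.internal works on Docker Desktop but is not available on all Linux configurations. A user-defined bridge network is a portable alternative, letting the containers resolve each other by name (a sketch using the same credentials as above):

```shell
# Create a shared network so containers can reach each other by name
docker network create litellm-net

docker run -d --name postgres --network litellm-net \
  -e POSTGRES_DB=litellm \
  -e POSTGRES_USER=litellm \
  -e POSTGRES_PASSWORD=password \
  cgr.dev/ORGANIZATION/postgres:latest

# The proxy reaches the database at hostname "postgres" on the shared network
docker run -p 4000:4000 --network litellm-net \
  -v "$(pwd)/production-config.yaml:/app/config.yaml" \
  -e DATABASE_URL="postgresql://litellm:password@postgres:5432/litellm" \
  -e LITELLM_MASTER_KEY="sk-production-key" \
  -e AZURE_API_KEY="your_azure_key" \
  cgr.dev/ORGANIZATION/litellm:latest \
  --config /app/config.yaml
```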

Documentation and Resources

For more information about LiteLLM configuration and usage, see the following resources:

  • LiteLLM Official Documentation
  • LiteLLM GitHub Repository
  • Supported LLM Providers
  • Configuration Reference
  • Chainguard Images Overview

What are Chainguard Containers?

Chainguard's free tier of Starter container images are built with Wolfi, our minimal Linux undistro.

All other Chainguard Containers are built with Chainguard OS, Chainguard's minimal Linux operating system designed to produce container images that meet the requirements of a more secure software supply chain.

The sections below describe the main features of Chainguard Containers.

For cases where you need container images with shells and package managers to build or debug, most Chainguard Containers come paired with a development, or -dev, variant.

In all other cases, including Chainguard Containers tagged as :latest or with a specific version number, the container images include only an open-source application and its runtime dependencies. These minimal container images typically do not contain a shell or package manager.

Although the -dev container image variants have similar security features as their more minimal versions, they include additional software that is typically not necessary in production environments. We recommend using multi-stage builds to copy artifacts from the -dev variant into a more minimal production image.
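As an illustration of that pattern, a multi-stage build might do its work in the -dev variant and copy only the resulting artifacts into the minimal image. A hypothetical sketch (the copied path is a placeholder, not a real artifact of this image):

```shell
cat > Dockerfile <<EOF
# Build stage: the -dev variant includes a shell and package manager
FROM cgr.dev/ORGANIZATION/litellm:latest-dev AS builder
# ... install build-time tooling or extra dependencies here ...

# Final stage: the minimal runtime image, with only the needed artifacts
FROM cgr.dev/ORGANIZATION/litellm:latest
# COPY --from=builder /path/to/artifact /path/to/artifact
EOF

docker build -t litellm-custom .
```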

Need additional packages?

To improve security, Chainguard Containers include only essential dependencies. Need more packages? Chainguard customers can use Custom Assembly to add packages, either through the Console, chainctl, or API.

To use Custom Assembly in the Chainguard Console: navigate to the image you'd like to customize in your Organization's list of images, and click on the Customize image button at the top of the page.

Learn More

Refer to our Chainguard Containers documentation on Chainguard Academy. Chainguard also offers VMs and Libraries; contact us for access.

Trademarks

This software listing is packaged by Chainguard. The trademarks set forth in this offering are owned by their respective companies, and use of them does not imply any affiliation, sponsorship, or endorsement by such companies.

Licenses

Chainguard's container images contain software packages that are direct or transitive dependencies. The following licenses were found in the "latest" tag of this image:

  • Apache-2.0
  • Artistic-2.0
  • BSD-1-Clause
  • BSD-2-Clause
  • BSD-3-Clause
  • BSD-4-Clause-UC
  • CC-PDDC

For a complete list of licenses, please refer to this Image's SBOM.

Compliance

Chainguard Containers are SLSA Level 3 compliant, with detailed metadata and documentation about how they were built. We generate build provenance and a Software Bill of Materials (SBOM) for each release, providing complete visibility into the software supply chain.

SLSA compliance at Chainguard

This image helps reduce time and effort in establishing PCI DSS 4.0 compliance with low-to-no CVEs.

PCI DSS at Chainguard
