Chainguard Container for tritonserver

The Triton Inference Server provides an optimized cloud and edge inferencing solution.

Chainguard Containers are regularly-updated, secure-by-default container images.

Download this Container Image

For those with access, this container image is available on cgr.dev:

docker pull cgr.dev/ORGANIZATION/tritonserver:latest

Be sure to replace the ORGANIZATION placeholder with the name used for your organization's private repository within the Chainguard Registry.

Compatibility Notes

This image supports the Python backend for Triton Server only. Please see our tritonserver-trtllm-backend Chainguard Image for Triton TensorRT-LLM backend support and our tritonserver-vllm-backend Chainguard Image for vLLM backend support.

Getting Started

Example

The following example runs a variant of the add_sub example for the Triton Server Python backend.

Create a project directory and navigate into it:

mkdir -p ~/triton-python-add-sub && cd $_

Download the example model server, client script, and configuration files:

curl https://codeload.github.com/chainguard-dev/triton-examples/tar.gz/main | \
 tar -xz --strip=1 triton-examples-main/python-backend

After downloading these files, your folder structure should be as follows:

triton-python-add-sub
└── python-backend
    └── add_sub
        ├── 1
        │   └── model.py
        ├── client.py
        └── config.pbtxt

Change your working directory to the python-backend directory. This directory will be mounted into the container as the model repository:

cd ~/triton-python-add-sub/python-backend

Run the following command to mount the model repository and run the server specified in the model.py file:

docker run -it --user root \
  --shm-size=1G \
  --ulimit memlock=-1 --ulimit stack=67108864 \
  -p 8000:8000 -p 8001:8001 \
  -v $PWD:/opt/tritonserver/model_repository \
  --gpus all \
  cgr.dev/ORGANIZATION/tritonserver:latest-dev \
  --model-repository=/opt/tritonserver/model_repository

If you wish to run the server on CPU, omit the --gpus all line.

You should see output detailing the running Triton Inference Server process. Included in this output should be the status of the add_sub model:

+---------+---------+--------+
| Model   | Version | Status |
+---------+---------+--------+
| add_sub | 1       | READY  |
+---------+---------+--------+

You can now connect to the server using a client. For simplicity, we will run a client script on the host machine, but client inference can be containerized using the Python Chainguard Container Image for inclusion in your orchestration setup.
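If you do want to containerize the client, a multi-stage Dockerfile along the following lines could work. This is a hypothetical sketch, not part of the downloaded example: the Python image tag, the venv layout, and the use of pip inside the -dev variant are all assumptions to adapt to your setup.

```dockerfile
# Hypothetical sketch: build the client environment in the -dev variant,
# then copy it into the minimal Python image for a smaller attack surface.
FROM cgr.dev/ORGANIZATION/python:latest-dev AS builder
WORKDIR /app
# Install the Triton client library into a virtual environment.
RUN python -m venv /app/venv && \
    /app/venv/bin/pip install 'tritonclient[all]'

FROM cgr.dev/ORGANIZATION/python:latest
WORKDIR /app
COPY --from=builder /app/venv /app/venv
COPY client.py /app/
ENV PATH="/app/venv/bin:$PATH"
ENTRYPOINT ["python", "/app/client.py"]
```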

Assuming that you have Python on your system's path as python, create a virtual environment:

python -m venv venv && source venv/bin/activate

Install the Triton client library using pip:

pip install 'tritonclient[all]'

Then run the client script:

python client.py

If the test is successful, you should receive output similar to the following:

INPUT0 ([0.9532459  0.5754496  0.9107486  0.48888752]) + INPUT1 ([0.33482173 0.2486278  0.0536577  0.6127271 ]) = OUTPUT0 ([1.2880676  0.82407737 0.9644063  1.1016146 ])
INPUT0 ([0.9532459  0.5754496  0.9107486  0.48888752]) - INPUT1 ([0.33482173 0.2486278  0.0536577  0.6127271 ]) = OUTPUT1 ([ 0.6184242   0.3268218   0.8570909  -0.12383959])
PASS: add_sub

This shows that the client successfully connected to the model server and executed elementwise addition and subtraction operations on two sample vectors.
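The computation itself is simple to describe. As a standalone illustration (not the actual model.py from the example, which implements this inside Triton's Python backend), the add_sub logic amounts to:

```python
# Standalone sketch of the add_sub computation; the real model performs
# this inside Triton's Python backend rather than as a plain function.
def add_sub(input0, input1):
    """Return the elementwise sum and difference of two equal-length vectors."""
    output0 = [a + b for a, b in zip(input0, input1)]
    output1 = [a - b for a, b in zip(input0, input1)]
    return output0, output1

out0, out1 = add_sub([1.0, 2.0], [0.5, 1.5])
print(out0)  # [1.5, 3.5]
print(out1)  # [0.5, 0.5]
```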

Documentation and Resources

What are Chainguard Containers?

Chainguard Containers are minimal container images that are secure by default.

In many cases, the Chainguard Containers tagged as :latest contain only an open-source application and its runtime dependencies. These minimal container images typically do not contain a shell or package manager. Chainguard Containers are built with Wolfi, our Linux undistro designed to produce container images that meet the requirements of a more secure software supply chain.


For cases where you need container images with shells and package managers to build or debug, most Chainguard Containers come paired with a -dev variant.

Although the -dev container image variants have security features similar to those of their more minimal counterparts, they include additional software that is typically unnecessary in production environments. We recommend using multi-stage builds to leverage the -dev variants, copying application artifacts into a final minimal container image that offers a reduced attack surface and won't allow package installations or logins.

Learn More

To better understand how to work with Chainguard Containers, please visit Chainguard Academy and Chainguard Courses.

In addition to Containers, Chainguard offers VMs and Libraries. Contact Chainguard to access additional products.

Trademarks

This software listing is packaged by Chainguard. The trademarks set forth in this offering are owned by their respective companies, and use of them does not imply any affiliation, sponsorship, or endorsement by such companies.

Licenses

Chainguard container images contain software packages that are direct or transitive dependencies. The following licenses were found in the "latest" tag of this image:

  • Apache-2.0

  • BSD-2-Clause

  • BSD-3-Clause

  • GCC-exception-3.1

  • GPL-2.0-only

  • GPL-2.0-or-later

  • GPL-3.0-or-later

For a complete list of licenses, please refer to this Image's SBOM.


