Contact our team to test out this image for free. Please also indicate any other images you would like to evaluate.
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Chainguard Containers are regularly-updated, secure-by-default container images.
For those with access, this container image is available on cgr.dev:
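For example, to pull the image (the :latest tag is shown here; substitute a specific version tag if you prefer):

    docker pull cgr.dev/ORGANIZATION/tritonserver-fips:latest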
Be sure to replace the ORGANIZATION placeholder with the name used for your organization's private repository within the Chainguard Registry.
This image supports the Python, ONNX Runtime, OpenVINO and TensorRT backends only.
The tritonserver-fips Chainguard Image ships with a validated redistribution of OpenSSL's FIPS provider module. For more on FIPS support in Chainguard Images, consult the guide on FIPS-enabled Chainguard Images on Chainguard Academy.
You can test this image locally with docker:
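For example, the following sketch starts the server against an empty model repository and publishes Triton's default HTTP (8000), gRPC (8001), and metrics (8002) ports. The explicit tritonserver command is included in case the image does not already set it as its entrypoint; if it does, pass only the flags:

    mkdir -p models
    docker run --rm -it \
      --gpus all \
      -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v "$PWD/models":/models \
      cgr.dev/ORGANIZATION/tritonserver-fips:latest \
      tritonserver --model-repository=/models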
If you wish to run the server on CPU only, omit the --gpus all line.
The following examples use a shared repository for all of the backends. To get started, create a project directory and navigate into it:
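For example (the directory name is only illustrative):

    mkdir triton-examples
    cd triton-examples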
Then download the example model server, client script, and configuration files:
After downloading these files, your folder structure should be as follows:
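The exact file names come from the downloaded examples, but based on Triton's standard model repository layout (one directory per model, containing a numbered version subdirectory and a config.pbtxt), you can expect something roughly like the following; the model directory names shown are illustrative:

    .
    ├── python-backend/
    │   └── add_sub/
    │       ├── 1/
    │       │   └── model.py
    │       └── config.pbtxt
    ├── onnxruntime-backend/
    ├── openvino-backend/
    ├── tensorrt-backend/
    └── client and helper scripts for each example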
You can now connect to the server using a client for each of the examples. For simplicity, we will run a client script on the host machine, but client inference can be containerized using the Python Chainguard Container Image for inclusion in your orchestration setup.
Assuming that you have Python on your system's path as python, create a virtual environment:
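For example (the environment name is arbitrary):

    python -m venv venv
    source venv/bin/activate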
Install the Triton client library using pip:
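The client libraries are published on PyPI as tritonclient; the [all] extra installs both the HTTP and gRPC clients, which should cover the examples below:

    pip install 'tritonclient[all]'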
The client should now be runnable from the current directory:
The following example runs a variant of the add_sub example for the Triton Server Python backend.
Change your working directory to the python-backend directory. This directory will be mounted into the container as the model repository:
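Assuming you are in the project directory created earlier:

    cd python-backend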
Run the following command to mount the model repository and run the server specified in the model.py file:
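A sketch of such a command, with the same port and entrypoint caveats as in the local test above; the Python backend can also run on CPU, in which case you can drop the --gpus all line:

    docker run --rm -it \
      --gpus all \
      -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v "$PWD":/models \
      cgr.dev/ORGANIZATION/tritonserver-fips:latest \
      tritonserver --model-repository=/models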
You should see output detailing the running Triton Inference Server process. Included in this output should be the status of the python model:
Then run the client script:
If the test is successful, you should receive output similar to the following:
This shows that the client successfully connected to the model server and executed elementwise addition and subtraction operations on two sample vectors.
Change your working directory to the onnxruntime-backend directory. This directory will be mounted into the container as the model repository:
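From the project directory created earlier:

    cd onnxruntime-backend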
This example requires an ONNX model that is fetched from the internet. Run the script in the current directory to download it to the model storage location for the onnxruntime model:
Run the following command to mount the model repository and run the server:
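The command has the same shape as in the Python backend example; --gpus all is optional here, since ONNX Runtime can execute on CPU:

    docker run --rm -it \
      -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v "$PWD":/models \
      cgr.dev/ORGANIZATION/tritonserver-fips:latest \
      tritonserver --model-repository=/models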
You should see output detailing the running Triton Inference Server process. Included in this output should be the status of the onnxruntime model:
Then run the client script:
If the test is successful, you should receive output similar to the following:
This shows that the client successfully connected to the model server and executed the scalar multiplication of a vector by the scalar 2.
Change your working directory to the openvino-backend directory. This directory will be mounted into the container as the model repository:
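From the project directory created earlier:

    cd openvino-backend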
This example runs an ONNX model that is fetched from the internet. Run the script in the current directory to download it to the model storage location for the openvino model:
Run the following command to mount the model repository and run the server:
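Again, a sketch with the same command shape; the OpenVINO backend targets CPU execution, so no GPU flag is needed:

    docker run --rm -it \
      -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v "$PWD":/models \
      cgr.dev/ORGANIZATION/tritonserver-fips:latest \
      tritonserver --model-repository=/models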
You should see output detailing the running Triton Inference Server process. Included in this output should be the status of the openvino model:
Then run the client script:
If the test is successful, you should receive output similar to the following:
This shows that the client successfully connected to the model server and executed a transformation of a vector into a 1 by 1000 matrix shape.
Change your working directory to the tensorrt-backend directory. This directory will be mounted into the container as the model repository:
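From the project directory created earlier:

    cd tensorrt-backend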
This example requires converting an ONNX model into a TensorRT engine plan. First, fetch the ONNX model from the internet with the following command:
Then convert the model to a model.plan file with the following command, which places the file in your current directory:
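One way to do this is with TensorRT's trtexec tool; this is only a sketch, since the input file name and where trtexec is installed depend on your setup:

    # model.onnx is a placeholder for the ONNX file downloaded in the previous step.
    trtexec --onnx=model.onnx --saveEngine=model.plan

Note that TensorRT engine plans are specific to the GPU and TensorRT version they are built with, so build the plan on the machine that will serve it.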
Move the model to the tensorrt model repository:
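A sketch, assuming Triton's standard <model-name>/<version>/model.plan layout; the model directory name must match the one referenced by the example's config.pbtxt:

    # "tensorrt_model" is a placeholder for the actual model directory name.
    mkdir -p tensorrt_model/1
    mv model.plan tensorrt_model/1/model.plan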
Run the following command to mount the model repository and run the server:
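The same command shape as the earlier examples; here --gpus all is required, since TensorRT engines execute on the GPU:

    docker run --rm -it \
      --gpus all \
      -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v "$PWD":/models \
      cgr.dev/ORGANIZATION/tritonserver-fips:latest \
      tritonserver --model-repository=/models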
You should see output detailing the running Triton Inference Server process. Included in this output should be the status of the tensorrt model:
Then run the client script:
If the test is successful, you should receive output similar to the following:
This shows that the client successfully connected to the model server and executed an element-wise addition of the scalar 1 to all elements of a random matrix.
Chainguard's free-tier Starter container images are built with Wolfi, our minimal Linux undistro.
All other Chainguard Containers are built with Chainguard OS, Chainguard's minimal Linux operating system designed to produce container images that meet the requirements of a more secure software supply chain.
The main features of Chainguard Containers include:
For cases where you need container images with shells and package managers to build or debug, most Chainguard Containers come paired with a development, or -dev, variant.
In all other cases, including Chainguard Containers tagged as :latest or with a specific version number, the container images include only an open-source application and its runtime dependencies. These minimal container images typically do not contain a shell or package manager.
Although the -dev container image variants have similar security features as their more minimal versions, they include additional software that is typically not necessary in production environments. We recommend using multi-stage builds to copy artifacts from the -dev variant into a more minimal production image.
To improve security, Chainguard Containers include only essential dependencies. Need more packages? Chainguard customers can use Custom Assembly to add packages, either through the Console, chainctl, or API.
To use Custom Assembly in the Chainguard Console: navigate to the image you'd like to customize in your Organization's list of images, and click on the Customize image button at the top of the page.
Refer to our Chainguard Containers documentation on Chainguard Academy. Chainguard also offers VMs and Libraries — contact us for access.
This software listing is packaged by Chainguard. The trademarks set forth in this offering are owned by their respective companies, and use of them does not imply any affiliation, sponsorship, or endorsement by such companies.
Chainguard's container images contain software packages that are direct or transitive dependencies. The following licenses were found in the "latest" tag of this image:
GPL-2.0-only
Apache-2.0
BSD-1-Clause
BSD-2-Clause
BSD-3-Clause
BSD-4-Clause-UC
For a complete list of licenses, please refer to this Image's SBOM.
Software license agreement
Chainguard Containers are SLSA Level 3 compliant, with detailed metadata and documentation about how each image was built. We generate build provenance and a Software Bill of Materials (SBOM) for each release, with complete visibility into the software supply chain.
SLSA compliance at Chainguard
This image helps reduce the time and effort needed to establish PCI DSS 4.0 compliance, with low-to-no CVEs.
PCI DSS at Chainguard
This is a FIPS-validated image for FedRAMP compliance.
This image is STIG hardened and scanned against the DISA General Purpose Operating System SRG with reports available.
Learn more about STIGs
Get started with STIGs