vLLM is a high-throughput and memory-efficient inference engine for Large Language Models (LLMs). This FIPS-validated variant provides OpenSSL FIPS 140-3 compliance for secure, production LLM deployments.
Asynchronous data replication for Kubernetes volumes.
Minimal image with the VirusTotal CLI, vt-cli.
Minimal wait-for-it-fips image for testing whether a service is listening on an address/port combination.
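The wait-for-it tool itself is a shell script that polls a TCP endpoint until it accepts connections or a timeout expires. As an illustration only (not the image's actual implementation), the same check can be sketched in Python; the function name `wait_for_it` and its timeouts are assumptions for this example:

```python
import socket
import time

def wait_for_it(host: str, port: int, timeout: float = 15.0) -> bool:
    """Poll host:port until a TCP connection succeeds or `timeout`
    seconds elapse. Returns True if the service became reachable."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means something is listening.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            # Connection refused or timed out; back off briefly and retry.
            time.sleep(0.5)
    return False
```

A typical use is gating a dependent service's startup, e.g. `wait_for_it("db.internal", 5432)` before opening application connections.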