
Search. Pull. Build.

lmcache-vllm-openai

LMCache is an LLM serving engine extension that stores and reuses KV caches across requests to reduce time-to-first-token (TTFT) and increase throughput. It integrates with vLLM to provide GPU-accelerated inference with shared KV cache management.

Latest tag: v0.4.4 (+13 more tags)
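To try it locally, here is a minimal sketch. The registry path, entrypoint flags, and model name are assumptions (Chainguard images are typically published under cgr.dev/chainguard/); check the image's documentation for the exact invocation and GPU requirements.

  # Pull the image (registry path is an assumption)
  docker pull cgr.dev/chainguard/lmcache-vllm-openai:v0.4.4

  # Serve a model behind vLLM's OpenAI-compatible API; 8000 is vLLM's default
  # port, and the --model flag assumes the entrypoint forwards args to vLLM
  docker run --gpus all -p 8000:8000 \
    cgr.dev/chainguard/lmcache-vllm-openai:v0.4.4 \
    --model meta-llama/Llama-3.1-8B-Instruct

Once the server is up, any OpenAI-compatible client can point at http://localhost:8000/v1; repeated requests sharing a prompt prefix should see lower TTFT as LMCache reuses the stored KV cache.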
ollama

Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language models.

Latest tag: v0.20.7 (+251 more tags)
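A minimal sketch for running this image, with the same caveat: the cgr.dev/chainguard/ollama path is an assumption, and the model pulled in the last step is only an example.

  # Pull and start the Ollama server; 11434 is Ollama's default API port
  docker pull cgr.dev/chainguard/ollama:v0.20.7
  docker run -d --name ollama -p 11434:11434 cgr.dev/chainguard/ollama:v0.20.7

  # Download and chat with a model through the container's CLI
  docker exec -it ollama ollama run llama3.3

The server then answers Ollama's REST API at http://localhost:11434, so existing Ollama clients work unchanged.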
