Security Advisories

CVE-2025-48944

NVD

https://nvd.nist.gov/vuln/detail/CVE-2025-48944

Severity

6.5 (Medium, CVSS v3)

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend serving the OpenAI-compatible /v1/chat/completions endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, so a single request can crash the inference worker, and the worker remains down until it is restarted. Version 0.9.0 fixes the issue.
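As an illustration of the failure mode (not taken from the advisory), the sketch below shows one way a caller or gateway could pre-validate the "pattern" and "type" fields of a tool's JSON Schema before forwarding a request to an unpatched vLLM instance. The allowed type list, the recursive walk, and the use of re.compile as a stand-in validity check are assumptions based on the standard OpenAI-style tools format, not details confirmed by the advisory.

```python
import re

# Assumed set of JSON Schema primitive types; adjust to the schema dialect
# your deployment actually accepts.
ALLOWED_TYPES = {"object", "array", "string", "number", "integer", "boolean", "null"}


def validate_tool_schema(schema: dict) -> list[str]:
    """Recursively check 'type' and 'pattern' fields in a tool's JSON Schema.

    Returns a list of problems found; an empty list means nothing obviously
    malformed was seen and the request can be forwarded.
    """
    problems: list[str] = []

    def walk(node, path="$"):
        if not isinstance(node, dict):
            return
        if "type" in node and node["type"] not in ALLOWED_TYPES:
            problems.append(f"{path}.type: unexpected value {node['type']!r}")
        if "pattern" in node:
            try:
                re.compile(node["pattern"])  # reject patterns that fail to compile
            except (re.error, TypeError) as exc:
                problems.append(f"{path}.pattern: {exc}")
        for key, value in node.items():
            if isinstance(value, dict):
                walk(value, f"{path}.{key}")
            elif isinstance(value, list):
                for i, item in enumerate(value):
                    walk(item, f"{path}.{key}[{i}]")

    walk(schema)
    return problems


# Hypothetical request body whose tool parameters carry a malformed regex pattern.
request_body = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "look up an order"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "lookup_order",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string", "pattern": "[unterminated"},
                },
            },
        },
    }],
}

for tool in request_body.get("tools", []):
    issues = validate_tool_schema(tool.get("function", {}).get("parameters", {}))
    if issues:
        print("rejecting request:", issues)
```

Upgrading to vLLM 0.9.0 or later remains the fix; a check like this only reduces exposure for deployments that cannot upgrade immediately.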

References

  • https://images.chainguard.dev/security/CGA-h4cp-mqqw-m8g3

Affected packages

