CVSS v3: 8.8
vLLM affected by RCE via auto_map dynamic module loading during model initialization
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face auto_map dynamic modules during model resolution without gating on trust_remote_code, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
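The root cause is that a model repository's config.json can declare auto_map entries pointing at Python modules shipped alongside the weights; importing those modules executes repository-provided code. A minimal, hypothetical server-side guard is sketched below; the helper name and check are assumptions for illustration only and do not reflect vLLM's actual patch in 0.14.0.

```python
# Illustrative mitigation sketch (not vLLM's actual fix): before handing a model
# path to the loader, refuse directories whose config.json declares an auto_map
# (custom code) unless the operator explicitly opted in via trust_remote_code.
# The function name and policy here are hypothetical.
import json
from pathlib import Path


def ensure_no_untrusted_auto_map(model_path: str, trust_remote_code: bool) -> None:
    """Raise if the model's config.json requests dynamic module loading
    (auto_map) and trust_remote_code was not explicitly enabled."""
    config_file = Path(model_path) / "config.json"
    if not config_file.is_file():
        return  # nothing to inspect for a weight-only or purely remote path

    config = json.loads(config_file.read_text())
    auto_map = config.get("auto_map")
    if auto_map and not trust_remote_code:
        raise RuntimeError(
            f"Model at {model_path!r} declares auto_map entries {list(auto_map)}; "
            "loading them would execute repository-provided Python code. "
            "Enable trust_remote_code only for repositories you trust."
        )


# Example: a malicious repo might ship a config.json such as
#   {"auto_map": {"AutoModelForCausalLM": "modeling_evil.EvilModel"}, ...}
# together with modeling_evil.py, whose top-level code runs on import.
# ensure_no_untrusted_auto_map("/models/suspect-repo", trust_remote_code=False)
```

The key design point is that the check runs before any dynamic import, so repository code never gets a chance to execute during model resolution unless the operator has explicitly accepted that risk.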