First published: Wed Apr 23 2025
## Description

https://github.com/vllm-project/vllm/security/advisories/GHSA-rh4j-5rhw-hr54 reported a vulnerability where loading a malicious model could result in code execution on the vLLM host. The fix, which passed `weights_only=True` to calls to `torch.load()`, did not solve the problem on PyTorch versions prior to 2.6.0. PyTorch has issued a new CVE about this problem: https://github.com/advisories/GHSA-53q9-r3pm-6pq6. This means that versions of vLLM using PyTorch before 2.6.0 remain vulnerable.

## Background Knowledge

When users install vLLM according to the official manual, the PyTorch version is pinned in the requirements.txt file, so a default installation of vLLM pulls in PyTorch 2.5.1. CVE-2025-24357 was patched by passing `weights_only=True`, but that is not sufficient: using `weights_only=True` with PyTorch 2.5.1 and earlier is still unsafe, and the report demonstrates this interface being abused to execute code (see the sketch after the Credit section below).

## Fix

Update the PyTorch version to 2.6.0.

## Credit

This vulnerability was found by Ji'an Zhou and Li'shuo Song.
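To make the Background Knowledge section concrete, here is a minimal, hypothetical sketch of the pattern at issue. The `load_checkpoint` helper, the version guard, and the use of the `packaging` library are illustrative assumptions, not vLLM's actual loading code:

```python
# Illustrative sketch only: torch.load(weights_only=True) is not a safe
# barrier against untrusted checkpoints on PyTorch < 2.6.0
# (GHSA-53q9-r3pm-6pq6), because the restricted unpickler could be bypassed.
import torch
from packaging import version


def load_checkpoint(path: str):
    """Hypothetical loader that refuses the vulnerable configuration."""
    if version.parse(torch.__version__) < version.parse("2.6.0"):
        # On older PyTorch, weights_only=True can be bypassed, so do not
        # load files from untrusted sources at all.
        raise RuntimeError(
            "PyTorch < 2.6.0: torch.load(weights_only=True) can be bypassed "
            "(GHSA-53q9-r3pm-6pq6); upgrade PyTorch before loading "
            "untrusted checkpoints."
        )
    # On PyTorch >= 2.6.0, weights_only=True restricts unpickling to
    # tensors and primitive types, which is the intended safe behavior.
    return torch.load(path, map_location="cpu", weights_only=True)
```

Weight formats that carry no executable code, such as safetensors, sidestep the pickle attack surface entirely and are generally preferable when loading weights from untrusted sources.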
| Affected Software | Affected Version | How to fix |
|---|---|---|
| pip/vllm | <0.8.0 | Upgrade to 0.8.0 |
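The table's fix version, together with the PyTorch requirement above, can be verified in a deployment environment. The following check is a sketch, assuming both packages were installed via pip:

```python
# Sketch: confirm the environment includes both fixes from this advisory,
# i.e. vLLM >= 0.8.0 and PyTorch >= 2.6.0.
from importlib.metadata import version as installed
from packaging.version import parse

for pkg, minimum in (("vllm", "0.8.0"), ("torch", "2.6.0")):
    if parse(installed(pkg)) < parse(minimum):
        raise SystemExit(
            f"{pkg} {installed(pkg)} is vulnerable; upgrade to >= {minimum}"
        )
print("vllm and torch are at patched versions")
```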
- **Severity:** GHSA-ggpf-24jw-3fcw is rated critical due to the potential for code execution on the vLLM host.
- **Mitigation:** Upgrade to vLLM version 0.8.0 or later.
- **Impact:** Loading a malicious model can lead to execution of arbitrary code.
- **Affected users:** Users of the vLLM library prior to version 0.8.0.
- **Exploitation status:** As of now, there is no public information indicating that GHSA-ggpf-24jw-3fcw is actively exploited.