CWE: CWE-770 (Allocation of Resources Without Limits or Throttling)
EPSS: 0.050%

CVE-2025-29770: vLLM denial of service via outlines unbounded cache on disk

First published: Wed Mar 19 2025

### Impact

The [outlines](https://dottxt-ai.github.io/outlines/latest/) library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional on-disk cache for its compiled grammars, and this cache has been on by default in vLLM. Outlines is also available by default through the OpenAI-compatible API server.

The affected code in vLLM is [vllm/model_executor/guided_decoding/outlines_logits_processors.py](https://github.com/vllm-project/vllm/blob/53be4a863486d02bd96a59c674bbec23eec508f6/vllm/model_executor/guided_decoding/outlines_logits_processors.py), which unconditionally uses the cache from outlines. vLLM should have this cache off by default and allow administrators to opt in, due to the potential for abuse.

A malicious user can send a stream of very short decoding requests with unique schemas, adding a cache entry for each request. This can result in a denial of service if the filesystem runs out of space. Note that even if vLLM is configured to use a different backend by default, it is still possible to choose outlines on a per-request basis using the `guided_decoding_backend` key of the `extra_body` field of the request (see the client sketch below).

This issue applies to the V0 engine only. The V1 engine is not affected.

### Patches

* https://github.com/vllm-project/vllm/pull/14837

The fix is to disable this cache by default, since it does not provide an option to limit its size. If you want to use the cache anyway, you may set the `VLLM_V0_USE_OUTLINES_CACHE` environment variable to `1`.

### Workarounds

There is no way to work around this issue in existing versions of vLLM other than preventing untrusted access to the OpenAI-compatible API server.
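To make the per-request backend selection concrete, here is a minimal client sketch. The server address and the model name `my-model` are placeholders not taken from the advisory; it assumes a pre-0.8.0 vLLM OpenAI-compatible server. In affected versions, every structurally new schema sent this way caused outlines to compile and cache another grammar on disk:

```python
# Minimal sketch of per-request backend selection via extra_body.
# Assumptions (not from the advisory): a pre-0.8.0 vLLM OpenAI-compatible
# server at http://localhost:8000 serving a model named "my-model".
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# A structurally unique JSON schema; in affected versions, each new schema
# compiled by outlines added an entry to its unbounded on-disk cache.
schema = {
    "type": "object",
    "properties": {"answer": {"type": "string", "maxLength": 1}},
    "required": ["answer"],
}

completion = client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "Reply with a single letter."}],
    max_tokens=5,
    extra_body={
        "guided_json": schema,
        # Selects outlines even if the server's default backend differs.
        "guided_decoding_backend": "outlines",
    },
)
print(completion.choices[0].message.content)
```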

Credit: security-advisories@github.com

| Affected Software | Affected Version | How to Fix |
| --- | --- | --- |
| vLLM | >=0.0, <0.8.0 | Upgrade to 0.8.0 |
| pip/vllm | <0.8.0 | Upgrade to 0.8.0 |
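After upgrading, operators who still want the on-disk grammar cache must opt in explicitly via the environment variable named in the patch. A minimal sketch, assuming the variable is set before the vLLM server process reads its configuration (exporting it in the shell before launch works the same way):

```python
# Sketch: opting back into the outlines on-disk cache on vLLM >= 0.8.0
# (V0 engine only). The cache stays disabled unless this is set to "1".
import os

# Set before launching or importing vLLM in this process; the shell
# equivalent is: VLLM_V0_USE_OUTLINES_CACHE=1 vllm serve <model>
os.environ["VLLM_V0_USE_OUTLINES_CACHE"] = "1"
```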


Frequently Asked Questions

  • What is the severity of CVE-2025-29770?

    CVE-2025-29770 is a denial-of-service vulnerability: a user of vLLM's OpenAI-compatible API server can exhaust the host's disk space through the unbounded outlines grammar cache.

  • How do I fix CVE-2025-29770?

    To fix CVE-2025-29770, users should upgrade to vLLM version 0.8.0 or higher immediately.

  • What effect does CVE-2025-29770 have on the vLLM library?

    CVE-2025-29770 can lead to a denial of service: the outlines on-disk grammar cache grows without bound, so a stream of requests with unique schemas can fill the server's filesystem.

  • Which versions of vLLM are affected by CVE-2025-29770?

    CVE-2025-29770 affects all versions of vLLM from 0.0 up to, but not including, 0.8.0.

  • Is there a way to mitigate the risks associated with CVE-2025-29770 without upgrading?

    In affected versions the cache is used unconditionally and cannot be disabled, so the only mitigation short of upgrading is to prevent untrusted access to the OpenAI-compatible API server.
