Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution.
Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34 released on May 7, 2024.
Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices.
At its core, the issue stems from insufficient input validation, resulting in a path traversal flaw that an attacker could exploit to overwrite arbitrary files on the server and ultimately achieve remote code execution.
The shortcoming requires the threat actor to send specially crafted HTTP requests to the Ollama API server for successful exploitation.
It specifically takes advantage of the API endpoint "/api/pull" – which is used to download a model from the official registry or from a private repository – to provide a malicious model manifest file that contains a path traversal payload in the digest field.
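The class of bug at play can be sketched with a small path check. Assuming, hypothetically, that a server joins a client-supplied digest onto its blob storage directory, a digest containing `../` sequences escapes that directory; the names and paths below are illustrative, not Ollama's actual code:

```python
from pathlib import Path

def is_safe_digest(base_dir: str, digest: str) -> bool:
    """Return True only if base_dir/digest resolves inside base_dir.

    Hypothetical validation illustrating the traversal class behind
    CVE-2024-37032 -- not Ollama's actual implementation.
    """
    base = Path(base_dir).resolve()
    # Resolve the joined path, collapsing any ".." components.
    target = (base / digest).resolve()
    # The target must be the base itself or a descendant of it.
    return base == target or base in target.parents

# A well-formed digest stays inside the blob store.
print(is_safe_digest("/var/lib/blobs", "sha256-abc123"))            # True
# A traversal payload in the digest field escapes it.
print(is_safe_digest("/var/lib/blobs", "../../etc/ld.so.preload"))  # False
```

A server that skips this kind of resolution-and-containment check will happily write attacker-controlled content to whatever path the digest traverses to.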
This issue could be abused not only to corrupt arbitrary files on the system, but also to achieve remote code execution by overwriting a configuration file ("/etc/ld.so.preload") associated with the dynamic linker ("ld.so") so that it references a rogue shared library, which is then loaded prior to executing any program.
While the risk of remote code execution is greatly reduced in default Linux installations, where the API server binds to localhost, the same is not true of Docker deployments, in which the API server is publicly exposed.
"This issue is extremely severe in Docker installations, as the server runs with `root` privileges and listens on `0.0.0.0` by default – which enables remote exploitation of this vulnerability," security researcher Sagi Tzadik said.
Compounding matters further is Ollama's inherent lack of authentication, which allows an attacker to exploit a publicly accessible server to steal or tamper with AI models and compromise self-hosted AI inference servers.
Mitigating this requires securing such services behind middleware, such as reverse proxies that enforce authentication. Wiz said it identified over 1,000 exposed Ollama instances hosting numerous AI models without any protection.
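As a hedged sketch of that mitigation, an nginx reverse proxy can place HTTP basic authentication in front of a localhost-only Ollama instance; the hostname, credential file path, and TLS setup here are assumptions, though 11434 is Ollama's default port:

```nginx
# Illustrative nginx reverse proxy: require credentials before any
# request reaches the Ollama API bound to localhost.
server {
    listen 443 ssl;
    server_name ollama.example.com;   # placeholder hostname

    location / {
        auth_basic           "Ollama";
        auth_basic_user_file /etc/nginx/.htpasswd;  # assumed credential file
        proxy_pass           http://127.0.0.1:11434;
    }
}
```

The key points are that the upstream binds only to 127.0.0.1, so the API is never directly reachable, and that every proxied request must first pass authentication.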
"CVE-2024-37032 is an easy-to-exploit remote code execution that affects modern AI infrastructure," Tzadik said. "Despite the codebase being relatively new and written in modern programming languages, classic vulnerabilities such as path traversal remain an issue."
The development comes as AI security company Protect AI warned of over 60 security defects affecting various open-source AI/ML tools, including critical issues that could lead to information disclosure, access to restricted resources, privilege escalation, and complete system takeover.
The most severe of these vulnerabilities is CVE-2024-22476 (CVSS score 10.0), an SQL injection flaw in Intel Neural Compressor software that could allow attackers to download arbitrary files from the host system. It was addressed in version 2.5.0.
Source: https://thehackernews.com/2024/06/critical-rce-vulnerability-discovered.html