Be careful if you are running an #Ollama web server
According to this article, running Ollama as a web server carries real risks. In that setup you run an LLM locally on your server or home computer, but you open a web portal to it so people in your organization or household can connect and ask the LLM questions. The Ollama web server is apparently full of security holes. The article mentions three problems:
- If the API is reachable from the public Internet, it can leave your computer vulnerable to DDoS-style request floods
- The push/pull feature for uploading and downloading models may be vulnerable to man-in-the-middle attacks (at least, that is my understanding)
- DeepSeek is not a security issue in and of itself, but because it is so easy for hobbyists to run, it is drawing many more people to Ollama, which enlarges the pool of potentially exposed servers.
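The exposure usually comes down to one setting. Ollama's documented default is to listen only on loopback (`127.0.0.1:11434`); it only becomes reachable from other machines, or the public Internet, when `OLLAMA_HOST` is set to a wildcard address. A minimal sketch of that check (the helper function is mine, not part of Ollama):

```python
import os

def classify(bind: str) -> str:
    """Return a warning if the bind address exposes Ollama beyond loopback."""
    if bind.startswith(("0.0.0.0", "::", "[::")):
        return f"WARNING: Ollama would listen on all interfaces ({bind})"
    return f"OK: Ollama bound to {bind} (loopback only)"

# Ollama's documented default applies when OLLAMA_HOST is unset:
print(classify(os.environ.get("OLLAMA_HOST", "127.0.0.1:11434")))
print(classify("0.0.0.0:11434"))  # the risky, network-facing setting
```

If you do need remote access, the usual mitigation is to keep Ollama on loopback and put an authenticating reverse proxy in front of it rather than binding to all interfaces.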
Quoting the article:
> the API can be exposed to the public internet; its functions to push, pull, and delete models can put data at risk and unauthenticated users can also bombard models with requests, potentially causing costs for cloud computing resource owners. Existing vulnerabilities within Ollama could also be exploited.
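To make the quote concrete, these are the kinds of unauthenticated calls an exposed server answers. The sketch below only *builds* the requests (nothing is sent, and the host address is a made-up example), but each path is a real Ollama API endpoint that requires no credentials by default:

```python
import json
import urllib.request

BASE = "http://203.0.113.5:11434"  # hypothetical exposed Ollama server

# Real Ollama endpoints; an attacker who can reach the port needs no auth:
calls = [
    ("POST", "/api/pull", {"name": "llama3"}),        # download a model
    ("POST", "/api/push", {"name": "user/llama3"}),   # upload a model
    ("DELETE", "/api/delete", {"name": "llama3"}),    # delete a model
    ("POST", "/api/generate",                         # flood with requests
     {"model": "llama3", "prompt": "spam"}),
]

reqs = [
    urllib.request.Request(BASE + path,
                           data=json.dumps(body).encode(),
                           method=method)
    for method, path, body in calls
]
for r in reqs:
    print(r.get_method(), r.full_url)
```

The delete and generate calls are what the quote is pointing at: data loss on one hand, and request floods that burn compute (and money, on metered cloud hosts) on the other.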