#Ollama #Models #LLM #Hosting #remembering #Smarter I found a way to make my Ollama environment with multiple LLMs a lot smarter. And even better ... Ollama no longer forgets anything! Article: lkjp.me/78m
The #ollama #opensource #software that makes it easy to run #Llama3, #DeepSeekR1, #Gemma3, and other large language models (#LLM) is out with its newest release. Ollama leverages the llama.cpp back-end for running a variety of LLMs while offering convenient integration with other desktop software.
The new ollama 0.6.2 release features support for #AMD #StrixHalo, a.k.a. the #RyzenAI Max+ laptop / SFF desktop SoC.
https://www.phoronix.com/news/ollama-0.6.2
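For anyone curious what that "convenient integration" looks like in practice: a running Ollama instance exposes a small unauthenticated HTTP API on localhost. A minimal Python sketch, assuming the default endpoint on port 11434 and that a model such as gemma3 (a placeholder here) has already been pulled:

```python
import json
import urllib.request

# Ask a locally running Ollama instance for a single, non-streamed completion.
# Assumes the default API endpoint; "gemma3" stands in for any model you have pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "gemma3",
    "prompt": "Explain what llama.cpp is in one sentence.",
    "stream": False,  # return one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())

print(answer["response"])
```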
So, following a blog post[1] from @webology and some guides about #VSCode plugins online, I set up #ollama with a number of models and connected it to the Continue plugin.
My goal: see if local-laptop #llm code assistants are viable.
My results: staggeringly underwhelming, mostly in terms of speed. I tried gemma3, qwen2.5, and deepseek-r1; none of them performed fast enough to be a true help for coding.
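The slowness is at least easy to quantify. Here's a rough benchmark sketch against the local Ollama API; the model tag gemma3 is just a placeholder for whichever model you pulled, and the prompt is arbitrary:

```python
import json
import time
import urllib.request

# Rough latency check for a local Ollama model: one prompt, non-streamed,
# then tokens/second computed from the timing fields Ollama reports.
URL = "http://localhost:11434/api/generate"
MODEL = "gemma3"  # placeholder: use whatever model tag you actually pulled

payload = json.dumps({
    "model": MODEL,
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,
}).encode("utf-8")

start = time.perf_counter()
req = urllib.request.Request(URL, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
wall = time.perf_counter() - start

# eval_count and eval_duration (nanoseconds) come from the API response itself.
tps = result["eval_count"] / (result["eval_duration"] / 1e9)
print(f"wall clock: {wall:.1f}s, generation speed: {tps:.1f} tokens/s")
```

On CPU-only laptop hardware it's common to see generation speeds in the single digits of tokens per second, which feels far too slow for an inline coding assistant.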
So, I did it. I hooked up the #HomeAssistant Voice to my #Ollama instance. As @ianjs suggested, it's much better at recognizing the intent of my requests. As @chris_hayes suggested, I'm using the new #Gemma3 model. It now knows that "How's the weather" and "What's the weather" are the same thing, and I get an answer for both. Responses are a little slower than without the LLM, but honestly the difference is pretty negligible. It's a tiny bit slower again if I use local #Piper instead of HA's cloud service.
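(I have no idea how the Home Assistant integration implements this internally, but the basic idea of letting an LLM collapse different phrasings onto one intent is easy to sketch against a local Ollama instance. The intent names and model tag below are made up for illustration.)

```python
import json
import urllib.request

# Toy intent classifier: ask a local Ollama model to map free-form speech
# to one of a fixed set of intents. Purely illustrative; not how the actual
# Home Assistant / Ollama integration works.
URL = "http://localhost:11434/api/chat"
INTENTS = ["get_weather", "turn_on_lights", "set_timer", "unknown"]

def classify(utterance: str) -> str:
    payload = json.dumps({
        "model": "gemma3",  # placeholder model tag
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "Reply with exactly one of: " + ", ".join(INTENTS)},
            {"role": "user", "content": utterance},
        ],
    }).encode("utf-8")
    req = urllib.request.Request(URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["message"]["content"].strip()
    return reply if reply in INTENTS else "unknown"

# "How's the weather" and "What's the weather" should land on the same intent.
print(classify("How's the weather"), classify("What's the weather"))
```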
Testing out the newly released #Gemma3 model locally on #ollama. This is one of the more frustrating aspects of these LLMs. It must be said that LLMs are fine for what they are, and what they are is a glorified autocomplete. They have their uses (just like autocomplete does), but if you try to use them outside of their strengths, your results are going to be less than reliable.
Your data. Your computer. Your choice #devonthink #chatgpt #claude #gemini #mistral #ollama #lmstudio #gpt4all #comingsoon
Remember Aaron Swartz, because Zucc won't be getting any jail time for this.
https://www.youtube.com/watch?v=bBa5TO_nBJ0
Meta leeched over 80 terabytes of books off of torrents for commercial purposes. Not personal use. They made sure not to seed the books, to cover their behinds.
When you do it, it's 30 years in prison; when they do it, it's a fine.
According to this article, if you run Ollama as a web server (that is, you run an LLM locally on your own server or home computer but expose a web portal so people in your organization or household can connect and ask the LLM questions), the Ollama web server is apparently full of security holes. The article mentions three problems:
Quoting the article:
the API can be exposed to the public internet; its functions to push, pull, and delete models can put data at risk and unauthenticated users can also bombard models with requests, potentially causing costs for cloud computing resource owners. Existing vulnerabilities within Ollama could also be exploited.
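If you want to check whether your own instance is reachable from outside, the lack of authentication makes it trivial to probe. A quick sketch, with the hostname obviously a placeholder:

```python
import json
import urllib.request

# Probe an Ollama endpoint the way a scanner would: /api/tags lists every
# installed model and requires no authentication. Replace HOST with the
# address your server is reachable at from the outside.
HOST = "http://your-public-hostname:11434"  # placeholder

try:
    with urllib.request.urlopen(f"{HOST}/api/tags", timeout=5) as resp:
        models = json.loads(resp.read()).get("models", [])
    print("EXPOSED: anyone can list (and pull/push/delete) these models:")
    for m in models:
        print(" -", m["name"])
except OSError:
    print("No unauthenticated Ollama API reachable at", HOST)
```

The simple mitigation is to leave Ollama bound to loopback (the default OLLAMA_HOST of 127.0.0.1:11434) and put any remote access behind a reverse proxy that handles authentication.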
New: Exposed #Ollama APIs, impacting AI model owners & cloud costs. Over 7,000 IPs are affected, with #DeepSeek models widely used. The highest concentrations? China, the US & Germany.
Read: https://hackread.com/exposed-ollama-apis-leave-deepseek-ai-models-attack/
LLM performance under Termux is not great: it takes anywhere from four to seven minutes to generate a response on this supposedly top-of-the-line tablet.
After spending far too much time trying to manually install ollama on an Android tablet, I came across a comment on Reddit noting that Termux has ollama in its repo.
It doesn't have to be manually installed at all.
@skykiss That's why you download the model and use it on your local system only. #deepseek #ollama #ollamacuda #docker #openwebui
I was sooo inspired by just HOW easy it was to set up a local LLM with a nice web UI on #nixos that I made a quick video.
Seriously, well done #ollama #nixos #openwebui
@quixoticgeek The #deepseek LLM is quite happy to discuss these topics while running within #ollama. I would assume the bias, a.k.a. the guidelines, is implemented within the DeepSeek applications rather than in the model itself.