I’ve become a big fan of using a locally installed instance of Ollama, a tool for running large language models (LLMs) on your own computer. Part of the reason is how much energy AI consumes when it’s used via the standard cloud-hosted services.
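If you haven’t tried it, a local session is a single command. Here’s a minimal sketch; the model name is only an example, and any model from the Ollama library works the same way:

```bash
# Pull (if not already downloaded) and start an interactive chat with a
# model, entirely on your own machine. "llama3" is just an example name.
ollama run llama3
```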
For a while, I was using Ollama on my desktop machine, but I discovered a few reasons why that wasn’t optimal. First, Ollama was consuming too many resources, which led to slowdowns on the desktop. Second, I was limited to using Ollama only on my desktop, unless I wanted to SSH…