LocalAI
LocalAI: run AI models locally on your machine
What I learned
Today I was looking for an alternative to Ollama for running LLM models locally that is lightweight and easy to use. Ollama is great; however, deploying it to a server, for example, can require fairly heavy resources, and I need something lighter if possible.
From my research, LocalAI (https://github.com/mudler/LocalAI) looks like a good alternative. It claims it doesn't need a GPU to run and seems to be more lightweight than Ollama. I will give it a try and see how it goes.
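LocalAI exposes an OpenAI-compatible API, so trying it out should be straightforward once an instance is running (for example via its Docker image). Below is a minimal sketch of how I'd query it, assuming the default port 8080 from the project's docs; the model name is a placeholder for whatever model the instance actually has loaded.

```python
# Minimal sketch: query a running LocalAI instance through its
# OpenAI-compatible chat completions endpoint.
# Assumptions: LocalAI listens on localhost:8080 (its documented default),
# and a model is already loaded; "gpt-4" below is a placeholder name.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-4",  # placeholder: substitute a model your instance serves
        "messages": [
            {"role": "user", "content": "Say hello from my local machine."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If it behaves the same way the OpenAI API does, swapping it into existing tooling should mostly be a matter of changing the base URL.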