Ollama
Ollama is a lightweight tool for running LLMs locally, with a simple installation and configuration process.
In a nutshell, it is well suited for local testing and development. If you need a fully fledged self-hosted LLM solution, refer to LocalAI instead.
To install and configure it, refer to the Ollama documentation.
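As a minimal sketch, assuming a standard Ollama installation, you can pull the model used in the example configuration below and make sure the server is running (by default it listens on localhost:11434):

```bash
# Pull the model referenced in the unctl config below
ollama pull codellama:13b

# Start the Ollama server if it is not already running
ollama serve
```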
Once you have Ollama up and running, configure unctl:
```yaml
llm_config:
  - provider: Ollama
    models:
      - name: codellama:13b
        config:
          temperature: 0.01
          tokenizer_type: llama
llm_provider: Ollama
llm_model: codellama:13b
```
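Before running unctl, you can sanity-check that the local Ollama API answers with the configured model. A quick check, assuming the default endpoint on localhost:11434:

```bash
# Request a single non-streamed completion from the configured model
curl http://localhost:11434/api/generate \
  -d '{"model": "codellama:13b", "prompt": "Say hello", "stream": false}'
```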
Also, note the model name: it must exactly match the name of a model installed in Ollama.
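To see the exact names of the models installed locally, list them:

```bash
# The NAME column is what goes into llm_model
ollama list
```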