AI Providers
Currently, we support three AI providers:
OpenAI - hosted ChatGPT models
LocalAI - locally running LLM provider (must be installed and configured by the user)
Ollama - locally running LLM provider (must be installed and configured by the user)
By default, you'll get the OpenAI provider.
Context summarization
Self-hosted LLM providers support automatic context summarization. It works around model token limits by summarizing the chat history into a single, smaller message.
It is disabled by default. To enable it, set llm_summarization_enabled to True in the config file.
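As a sketch, assuming a YAML-style config file (the exact file format and location are assumptions, not confirmed by this page), enabling summarization would look like:

```yaml
# Enable automatic context summarization for self-hosted providers
llm_summarization_enabled: True
```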
By default, unCtl uses whatever model is defined in the llm_model config section for summarization. If you want to use a different model for summarization, set the desired model in llm_summarization_model.
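For illustration, here is a hedged sketch of overriding the summarization model, again assuming a YAML-style config; the model names below are placeholder values, not recommendations:

```yaml
llm_model: gpt-4o                 # model used for regular chat (example value)
llm_summarization_model: llama3   # smaller model used only for summarization (example value)
```

Using a smaller, faster model for summarization can reduce cost and latency, since summaries run in the background alongside normal chat requests.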