Config file

By default, unCtl looks for the config file at ~/.config/unctl/config.yaml. If the file is not found, default values are used.

To specify the path to the config file, use the --config option:

unctl --config <path to file> {provider} ...

Note: CLI options override values in the config file.

Anonymisation

This section controls how data sent to third-party services is manipulated in order to hide PII or other sensitive data:

  • masks - a list of rules for masking data. The list can be extended with additional regex patterns. By default, two rules are present in the config: email and ip_address.

Example:

anonymisation:
  masks:
    - name: email
      pattern: \b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b
    - name: ip_address
      pattern: \b(?:\d{1,3}\.){3}\d{1,3}\b
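The masks list can be extended with custom rules. A hypothetical sketch (the rule name and pattern below are illustrative assumptions, not built-in defaults):

```yaml
anonymisation:
  masks:
    # hypothetical custom rule, added alongside the built-in email and ip_address masks
    - name: aws_access_key
      pattern: \bAKIA[0-9A-Z]{16}\b
```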

Filter

This section allows you to set filters for your scan options:

  • failed_only - when true, only failed checks are shown in the report. Default is false

  • checks - list of check IDs to run the scan for specific checks only. If the value is [] or missing, the filter is considered disabled. Default is []

  • categories - list of check categories to run the scan for. If the value is [] or missing, the filter is considered disabled. Default is []

  • services - list of check services to run the scan for. If the value is [] or missing, the filter is considered disabled. Default is []

  • k8s.namespaces - Kubernetes-specific filter: a list of namespaces used to filter cluster resources. If the value is [] or missing, the filter is considered disabled. Default is []

Example:

filter:
  failed_only: false
  checks:
    - K8S101
    - K8S203
  categories:
    - Health
  services:
    - pod
    - deployment
  k8s:
    namespaces:
      - my_namespace

Interactive mode

This section is responsible for interactive mode configuration:

  • prompt - defines whether to prompt the user to enter interactive mode after the report is generated. Default is true

Example:

interactive:
  prompt: true

Muting

This section takes care of muting particular checks and objects:

  • checks - list of check IDs that will be ignored by the scan. If the value is [] or missing, the filter is considered disabled. Default is []

  • objects - a mapping of object names to lists of check IDs that should be ignored for those objects. If the value is {} or missing, the filter is considered disabled. Default is {}

Example:

ignore:
  checks:
    - K8S101
  objects:
    "some-object-name-1": [] # ignore object for all checks
    "some-object-name-2":    # ignore object for the specific checks
      - K8S203

LLM

This section covers all the LLM-related configuration options:

  • provider - LLM provider to use. LocalAI, Ollama, and OpenAI are available.

  • models - a list of model configurations for specific LLM providers.

    • name - the model name to use. For LocalAI or Ollama, this should be set to the model name that was used when installing the model.

    • config - model-specific configuration.

      • temperature - sets the model temperature.

      • tokenizer_type - selects the model tokenization type. Three types are available:

        • gpt - to use with GPT-based models.

        • llama - to use with llama-based models, e.g. llama2 or codellama.

        • mixtral - a tokenizer specific to mistral and mixtral models.

  • config - global LocalAI-related configuration.

    • endpoint - URL of LocalAI instance.
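Taken together, these options might look like the following (a hypothetical sketch: the top-level key name, model name, and values are assumptions, not taken from this page):

```yaml
llm:                        # top-level key name is an assumption
  provider: LocalAI
  models:
    - name: llama2          # the name used when installing the model (illustrative)
      config:
        temperature: 0.7    # illustrative value
        tokenizer_type: llama
  config:
    endpoint: http://localhost:8080   # URL of the LocalAI instance (illustrative)
```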

Globally available options:

  • llm_provider - sets global LLM provider.

  • llm_model - sets global LLM model.

  • llm_debug - enables global LLM debug mode.

  • llm_summarization_enabled - enables context summarization (in case we're exceeding the model token limit).

  • llm_summarizing_model - model to use for summarization; by default, the same model as llm_model is used.
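The global options might be set as follows (a hypothetical sketch; it assumes these are top-level keys in the config file, and all values shown are illustrative):

```yaml
# hypothetical sketch; assumes globals are top-level config keys
llm_provider: LocalAI
llm_model: llama2
llm_debug: false
llm_summarization_enabled: true
llm_summarizing_model: llama2   # defaults to llm_model when omitted
```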

For examples, refer to AI Providers documentation.
