Start with LMStudio

FYI: this config pattern works with any LLM inference server that supports the OpenAI-compatible /chat/completions API.

If you have LMStudio running with its local server started, create a file named enkaidu.yaml, copy in the following configuration, and then run enkaidu from the command line in the same folder as the config file.

session:
  model: qwen3                       # <--- must match a model name defined below
auto_load:
  toolsets:
    - DateAndTime
llms:
  my_lmstudio:
    provider: openai
    env:
      OPENAI_ENDPOINT: 'http://localhost:1234'
      OPENAI_API_KEY: n/a
    models:
      - name: qwen3                 # <--- alias referenced by session.model above
        model: qwen/qwen3-4b        # <--- model ID as it appears in LMStudio
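
Because LMStudio speaks the same protocol, you can sanity-check the endpoint before involving enkaidu at all. Here is a minimal sketch using the official openai Python package, assuming LMStudio's default port 1234 and the qwen/qwen3-4b model ID from the config above (note the /v1 suffix the client expects on the base URL):

# pip install openai
from openai import OpenAI

# LMStudio ignores the API key, but the client requires a non-empty value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="n/a")

response = client.chat.completions.create(
    model="qwen/qwen3-4b",  # must match a model loaded in LMStudio
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)

If this prints a reply, enkaidu should be able to reach the same server with the configuration above.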

If you’re using a different local model, change the values pointed to by the # <--- comments: name is the alias that session.model refers to, and model must match the model identifier LMStudio reports.
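
If you’re unsure what identifier LMStudio is serving a model under, you can ask the server itself via the OpenAI-compatible /v1/models endpoint. A minimal sketch, again assuming the default port:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="n/a")

# Each ID printed here is a candidate for the model: value in enkaidu.yaml.
for m in client.models.list():
    print(m.id)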