Version: 0.16

AI Models

Stalwart can call out to large language models for tasks such as spam classification, threat detection, and message categorization. Any endpoint that exposes an OpenAI-compatible chat or text-completion API is supported, whether hosted by a provider such as OpenAI or Anthropic or run locally through a tool such as LocalAI. This choice lets operators balance cost, latency, and privacy according to the deployment.
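In practice, "OpenAI-compatible" means the endpoint accepts the standard chat-completion request shape. A minimal example of such a request body (the prompt text here is purely illustrative):

{
  "model": "gpt-4",
  "messages": [
    { "role": "system", "content": "Classify the following message as spam or legitimate." },
    { "role": "user", "content": "(message text)" }
  ],
  "temperature": 0.7
}

Any server that understands this format, whether a hosted provider or a local runtime, can be used as a model backend.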

Enterprise feature

This feature is available exclusively in the Enterprise Edition of Stalwart and is not included in the Community Edition.

Configuration

Each AI endpoint is represented by an AiModel object (found in the WebUI under Settings › AI). The relevant fields are:

  • name: short identifier for the model within Stalwart.
  • url: full URL of the OpenAI-compatible endpoint (for example https://api.openai.com/v1/chat/completions).
  • model: the model name to send to the endpoint, such as gpt-4.
  • modelType: Chat for chat completions or Text for text completions. Default Chat.
  • timeout: maximum time to wait for a response. Default "2m".
  • temperature: randomness of the response, in the range 0.0 to 1.0. Default 0.7.
  • allowInvalidCerts: whether to accept invalid TLS certificates. Default false. Recommended only for local or self-signed endpoints.
  • httpAuth: authentication method, either Unauthenticated, Basic, or Bearer.
  • httpHeaders: additional HTTP headers sent with every request.

For example, a chat endpoint authenticated with a bearer token and an extra custom header:

{
  "name": "chat",
  "url": "https://api.openai.com/v1/chat/completions",
  "model": "gpt-4",
  "modelType": "Chat",
  "timeout": "2m",
  "temperature": 0.7,
  "allowInvalidCerts": false,
  "httpAuth": {
    "@type": "Bearer",
    "bearerToken": {
      "@type": "Value",
      "secret": "my-secret-token"
    }
  },
  "httpHeaders": {
    "X-My-Header": "my-value"
  }
}
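A locally hosted model might instead use a text-completion endpoint without authentication. The following sketch assumes a LocalAI-style server on localhost with a self-signed certificate; the URL and model name are placeholders, and allowInvalidCerts is enabled only because the certificate is self-signed:

{
  "name": "local-llm",
  "url": "https://localhost:8080/v1/completions",
  "model": "mistral-7b",
  "modelType": "Text",
  "timeout": "2m",
  "temperature": 0.5,
  "allowInvalidCerts": true,
  "httpAuth": {
    "@type": "Unauthenticated"
  }
}

Keeping allowInvalidCerts set to false for any endpoint reachable over the public internet avoids exposing the bearer token or message content to a man-in-the-middle.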