AI Models
Stalwart can call out to large language models for tasks such as spam classification, threat detection, and message categorization. Any endpoint that exposes an OpenAI-compatible chat or text-completion API is supported, whether hosted by a provider such as OpenAI or Anthropic or run locally through a tool such as LocalAI. This choice lets operators balance cost, latency, and privacy according to the deployment.
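"OpenAI-compatible" here means the endpoint accepts the standard chat-completions request shape. As an illustration (the model name and prompt are placeholders, not values Stalwart sends), a compatible endpoint must accept a body like:

```json
{
  "model": "gpt-4",
  "messages": [
    { "role": "user", "content": "Classify the following message as spam or legitimate." }
  ]
}
```

Any server that understands this request format, whether a hosted provider or a local gateway, can be plugged in.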
Enterprise feature
This feature is available exclusively in the Enterprise Edition of Stalwart and is not included in the Community Edition.
Configuration
Each AI endpoint is represented by an AiModel object (found in the WebUI under Settings › AI). The relevant fields are:
- `name`: short identifier for the model within Stalwart.
- `url`: full URL of the OpenAI-compatible endpoint (for example `https://api.openai.com/v1/chat/completions`).
- `model`: the model name to send to the endpoint, such as `gpt-4`.
- `modelType`: `Chat` for chat completions or `Text` for text completions. Default: `Chat`.
- `timeout`: maximum time to wait for a response. Default: `"2m"`.
- `temperature`: randomness of the response, in the range `0.0` to `1.0`. Default: `0.7`.
- `allowInvalidCerts`: whether to accept invalid TLS certificates. Default: `false`. Recommended only for local or self-signed endpoints.
- `httpAuth`: authentication method, either `Unauthenticated`, `Basic`, or `Bearer`.
- `httpHeaders`: additional HTTP headers sent with every request.
For example, a chat endpoint authenticated with a bearer token and an extra custom header:
```json
{
  "name": "chat",
  "url": "https://api.openai.com/v1/chat/completions",
  "model": "gpt-4",
  "modelType": "Chat",
  "timeout": "2m",
  "temperature": 0.7,
  "allowInvalidCerts": false,
  "httpAuth": {
    "@type": "Bearer",
    "bearerToken": {
      "@type": "Value",
      "secret": "my-secret-token"
    }
  },
  "httpHeaders": {
    "X-My-Header": "my-value"
  }
}
```
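For a locally hosted model, the same fields apply; a sketch of a text-completion endpoint served by a tool such as LocalAI might look like the following. The URL, model name, and token are placeholders for illustration, and `allowInvalidCerts` is enabled only because a local endpoint often uses a self-signed certificate:

```json
{
  "name": "local-text",
  "url": "https://localhost:8080/v1/completions",
  "model": "mistral-7b-instruct",
  "modelType": "Text",
  "timeout": "2m",
  "temperature": 0.7,
  "allowInvalidCerts": true,
  "httpAuth": {
    "@type": "Bearer",
    "bearerToken": {
      "@type": "Value",
      "secret": "local-api-key"
    }
  }
}
```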