AI Nodes
AI nodes bring intelligence to your workflows. Use them to classify messages, route conversations to the right branch, and generate natural language responses.
Agentix uses the OpenAI Responses API with the GPT-4.1 model family by default.
ai.router
Classify incoming messages and route to different branches.
The AI Router uses structured output (JSON mode) to deterministically classify a message into one of your defined categories. Each category becomes an output handle, so you can build branching logic based on AI classification.
Configuration
| Option | Type | Required | Description |
|---|---|---|---|
| categories | array | Yes | Classification categories, each with a label and description |
| categories[].label | string | Yes | Short identifier for the category (e.g., “billing”, “support”, “sales”) |
| categories[].description | string | Yes | Description that helps the AI understand what belongs in this category |
| model | string | No | Model to use (default: gpt-4.1-nano — fast and cost-effective for classification) |
| systemPrompt | string | No | Additional system instructions for classification context |
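Put together, the options above map onto a configuration object along these lines. This is an illustrative sketch: only the field names and defaults come from the table, while the Category and RouterConfig type names and the example category set are assumptions, not part of the Agentix API.

```typescript
// Illustrative shape for an ai.router configuration.
// Type names (Category, RouterConfig) are assumptions for this sketch.
interface Category {
  label: string;       // short identifier; becomes an output handle
  description: string; // guides the AI's classification decision
}

interface RouterConfig {
  categories: Category[];
  model?: string;        // default: gpt-4.1-nano
  systemPrompt?: string; // extra classification context
}

const routerConfig: RouterConfig = {
  categories: [
    { label: "billing", description: "Questions about invoices, charges, or refunds" },
    { label: "support", description: "Technical issues or product problems" },
    { label: "sales", description: "Pricing, plans, and purchase inquiries" },
  ],
  model: "gpt-4.1-nano",
};
```

Each label in `categories` becomes one output handle on the node, so the three categories above would produce three outgoing branches.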
Outputs
One output handle per category. The AI selects exactly one category per message.
| Output | Description |
|---|---|
| [category label] | One handle per defined category |
Connections
- Inputs: Single input handle receiving message context
- Outputs: One handle per category (mutually exclusive routing)
Example Use Case
Define categories like “pricing_question”, “technical_support”, “complaint”, and “general_inquiry”. The router reads the inbound message, classifies it, and routes to specialized AI Respond nodes — each with a tailored system prompt for that topic.
The AI Router uses JSON schema structured output, ensuring the response is always a valid category label. This makes routing deterministic — no string parsing or fuzzy matching needed.
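In JSON-schema terms, "always a valid category label" can be enforced by restricting the model's output to an enum of the defined labels. The sketch below shows one way such a schema could be built from the category labels; the helper name and exact schema wrapper are assumptions, not Agentix's internal implementation.

```typescript
// Build a JSON schema that constrains the model's answer to one of the
// defined category labels, which is what makes routing deterministic.
// The function name and schema wrapper are illustrative assumptions.
function classificationSchema(labels: string[]) {
  return {
    type: "object",
    properties: {
      category: { type: "string", enum: labels }, // only defined labels are valid
    },
    required: ["category"],
    additionalProperties: false,
  };
}

const schema = classificationSchema([
  "pricing_question",
  "technical_support",
  "complaint",
  "general_inquiry",
]);
```

Because the output is schema-validated, the router can switch directly on the returned label instead of parsing free-form text.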
ai.respond
Generate a natural language response.
The AI Respond node is the core content generation node. It takes the conversation context, applies your system prompt, and produces a response that can be sent back to the customer via a WhatsApp Send Message node.
Configuration
| Option | Type | Required | Description |
|---|---|---|---|
| systemPrompt | string | Yes | Instructions that define the AI’s personality, knowledge, and behavior |
| model | string | No | Model to use (default: gpt-4.1-mini — balanced quality and cost) |
| temperature | number | No | Response randomness, 0.0 to 2.0 (default: 0.7). Lower = more deterministic |
| maxTokens | number | No | Maximum tokens in the generated response. Setting a limit is recommended |
| tools | array | No | Optional tool definitions for tool-augmented generation |
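The options above compose into a configuration object along these lines. Only the field names and defaults come from the table; the RespondConfig type name and the example values are illustrative assumptions.

```typescript
// Illustrative ai.respond configuration; the RespondConfig type name
// is an assumption for this sketch, not part of the Agentix API.
interface RespondConfig {
  systemPrompt: string;  // required: defines personality and behavior
  model?: string;        // default: gpt-4.1-mini
  temperature?: number;  // 0.0 to 2.0, default 0.7
  maxTokens?: number;    // cap response length (and cost)
  tools?: object[];      // optional tool definitions
}

const respondConfig: RespondConfig = {
  systemPrompt:
    "You are a helpful support agent for Acme Corp. Be concise and friendly.",
  model: "gpt-4.1-mini",
  temperature: 0.3, // lower than the default for more consistent support answers
  maxTokens: 500,   // always set a limit to control token spend
};
```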
AI Respond nodes can be augmented with tools for more capable responses:
| Tool Type | Description |
|---|---|
| Web Search | OpenAI built-in web search for real-time information |
| File Search | Search through uploaded knowledge bases and documents |
| HTTP Tools | Call external APIs during response generation |
When tools are configured, the AI can decide to use them during generation to fetch information before composing its response.
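As a sketch, the three tool types might be declared together in a node's tools array like this. The type values and field names below are assumptions for illustration: OpenAI's built-in web search and file search tools are typically referenced by type alone, while an HTTP tool would carry its own endpoint details.

```typescript
// Hypothetical tool definitions for tool-augmented generation.
// All "type" values and field names here are illustrative assumptions,
// not a documented Agentix schema.
const tools = [
  { type: "web_search" },                             // built-in real-time web search
  { type: "file_search", knowledgeBaseId: "kb_faq" }, // search uploaded documents
  {
    type: "http",                                     // call an external API mid-generation
    name: "lookup_order",
    method: "GET",
    url: "https://api.example.com/orders/{orderId}",  // placeholder endpoint
  },
];
```

At generation time, the model decides per message whether any of these tools is worth invoking before it composes the final response.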
Outputs
| Output | Type | Description |
|---|---|---|
| response.text | string | The generated response text |
| response.tokensUsed | number | Tokens consumed by this generation |
Connections
- Inputs: Single input handle receiving conversation context
- Outputs: Single output handle passing the generated response to the next node
Example Use Case
Configure a customer support AI with a system prompt like: “You are a helpful support agent for Acme Corp. You know about our products, pricing, and return policy. Be concise and friendly. If you cannot help, suggest the customer speak to a human agent.” Connect the output to a wa.send_message node to deliver the response.
Token costs: AI response generation consumes tokens billed by OpenAI. Always set a maxTokens limit to control costs. The platform enforces a per-run safety cap of 150,000 tokens to prevent runaway agent loops.
Multi-Turn Conversations
AI Respond nodes automatically leverage the OpenAI Conversations API for persistent multi-turn memory. Each WhatsApp contact gets a dedicated conversation thread, so the AI remembers previous messages in the same thread without you needing to manage context manually.