AI Nodes

AI nodes bring intelligence to your workflows. Use them to classify messages, route conversations to the right branch, and generate natural language responses. Agentix uses the OpenAI Responses API with the GPT-4.1 model family by default.

ai.router

Classify incoming messages and route to different branches. The AI Router uses structured output (JSON mode) to deterministically classify a message into one of your defined categories. Each category becomes an output handle, so you can build branching logic based on AI classification.

Configuration

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| categories | array | Yes | Classification categories, each with a label and description |
| categories[].label | string | Yes | Short identifier for the category (e.g., “billing”, “support”, “sales”) |
| categories[].description | string | Yes | Description that helps the AI understand what belongs in this category |
| model | string | No | Model to use (default: gpt-4.1-nano — fast and cost-effective for classification) |
| systemPrompt | string | No | Additional system instructions for classification context |
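As an illustration, a router node configured with three categories might look like the following. The field names match the options above, but the surrounding workflow file format (the `type`/`config` wrapper) is an assumption, not the documented Agentix schema:

```json
{
  "type": "ai.router",
  "config": {
    "categories": [
      { "label": "billing", "description": "Invoices, payments, refunds, or account charges" },
      { "label": "support", "description": "Technical problems or help using the product" },
      { "label": "sales", "description": "Pricing, plans, and pre-sales questions" }
    ],
    "model": "gpt-4.1-nano",
    "systemPrompt": "Messages come from WhatsApp customers of an online store."
  }
}
```

Each of the three labels (`billing`, `support`, `sales`) would appear as its own output handle on the node.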

Outputs

One output handle per category. The AI selects exactly one category per message.
| Output | Description |
| --- | --- |
| [category label] | One handle per defined category |

Connections

  • Inputs: Single input handle receiving message context
  • Outputs: One handle per category (mutually exclusive routing)

Example Use Case

Define categories like “pricing_question”, “technical_support”, “complaint”, and “general_inquiry”. The router reads the inbound message, classifies it, and routes to specialized AI Respond nodes — each with a tailored system prompt for that topic.
The AI Router uses JSON schema structured output, ensuring the response is always a valid category label. This makes routing deterministic — no string parsing or fuzzy matching needed.
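Structured output of this kind is typically enforced with a JSON schema whose only allowed value is an enum of the category labels, so the model cannot return anything outside your defined set. A sketch in the style of OpenAI's structured-output schema format (the exact schema Agentix sends is internal; this is illustrative):

```json
{
  "name": "classification",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "category": {
        "type": "string",
        "enum": ["pricing_question", "technical_support", "complaint", "general_inquiry"]
      }
    },
    "required": ["category"],
    "additionalProperties": false
  }
}
```

Because the `enum` constrains the output, the response is guaranteed to be one of the four labels — which is what makes the routing deterministic.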

ai.respond

Generate a natural language response. The AI Respond node is the core content generation node. It takes the conversation context, applies your system prompt, and produces a response that can be sent back to the customer via a WhatsApp Send Message node.

Configuration

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| systemPrompt | string | Yes | Instructions that define the AI’s personality, knowledge, and behavior |
| model | string | No | Model to use (default: gpt-4.1-mini — balanced quality and cost) |
| temperature | number | No | Response randomness, 0.0 to 2.0 (default: 0.7). Lower = more deterministic |
| maxTokens | number | No | Maximum tokens in the generated response. Recommended to set a limit |
| tools | array | No | Optional tool definitions for tool-augmented generation |
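A minimal respond node configuration, using the options above — again, the `type`/`config` wrapper is an assumed workflow format, not the documented schema:

```json
{
  "type": "ai.respond",
  "config": {
    "systemPrompt": "You are a concise, friendly support agent for Acme Corp.",
    "model": "gpt-4.1-mini",
    "temperature": 0.7,
    "maxTokens": 500
  }
}
```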

Tools (Optional)

AI Respond nodes can be augmented with tools for more capable responses:
| Tool Type | Description |
| --- | --- |
| Web Search | OpenAI built-in web search for real-time information |
| File Search | Search through uploaded knowledge bases and documents |
| HTTP Tools | Call external APIs during response generation |
When tools are configured, the AI can decide to use them during generation to fetch information before composing its response.
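A `tools` array combining the three types might look like this. The web search and file search entries follow OpenAI's built-in tool conventions; the HTTP tool entry is entirely hypothetical (Agentix's HTTP tool fields are not documented here), and the vector store ID and URL are placeholders:

```json
"tools": [
  { "type": "web_search" },
  { "type": "file_search", "vector_store_ids": ["vs_YOUR_STORE_ID"] },
  {
    "type": "http",
    "name": "lookup_order",
    "description": "Fetch order status from the store API by order ID",
    "method": "GET",
    "url": "https://api.example.com/orders/{orderId}"
  }
]
```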

Outputs

| Output | Type | Description |
| --- | --- | --- |
| response.text | string | The generated response text |
| response.tokensUsed | number | Tokens consumed by this generation |

Connections

  • Inputs: Single input handle receiving conversation context
  • Outputs: Single output handle passing the generated response to the next node

Example Use Case

Configure a customer support AI with a system prompt like: “You are a helpful support agent for Acme Corp. You know about our products, pricing, and return policy. Be concise and friendly. If you cannot help, suggest the customer speak to a human agent.” Connect the output to a wa.send_message node to deliver the response.
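The wiring for that use case could be sketched as follows. The node IDs, edge format, and the `{{...}}` output-reference syntax are all illustrative assumptions, not the documented Agentix workflow schema:

```json
{
  "nodes": [
    {
      "id": "respond_1",
      "type": "ai.respond",
      "config": {
        "systemPrompt": "You are a helpful support agent for Acme Corp. Be concise and friendly.",
        "maxTokens": 500
      }
    },
    {
      "id": "send_1",
      "type": "wa.send_message",
      "config": { "text": "{{respond_1.response.text}}" }
    }
  ],
  "edges": [
    { "from": "respond_1", "to": "send_1" }
  ]
}
```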
Token costs: AI response generation consumes tokens billed by OpenAI. Always set a maxTokens limit to control costs. The platform enforces a per-run safety cap of 150,000 tokens to prevent runaway agent loops.

Multi-Turn Conversations

AI Respond nodes automatically leverage the OpenAI Conversations API for persistent multi-turn memory. Each WhatsApp contact gets a dedicated conversation thread, so the AI remembers previous messages in the same thread without you needing to manage context manually.