
Overview

Webhook tools enable your AI persona to call external HTTP endpoints during conversations. This allows integration with any REST API, enabling your persona to:
  • Check order or shipment status from your e-commerce system
  • Create support tickets in your helpdesk software
  • Update CRM records based on conversation context
  • Fetch real-time data (weather, stock prices, availability)
  • Trigger workflows in external systems
  • Send notifications or alerts
  • Log conversation events
Webhook tools run server-side, keeping your API credentials secure and allowing the LLM to use response data in its answers.
Beta Feature: Tool calling is currently in beta. You may encounter some issues as we continue to improve the feature. Please report any feedback or issues to help us make it better.
Response time requirements: Webhooks should respond quickly to maintain natural conversation flow:
  • Ideal: Under 1 second
  • Maximum: 5 seconds
  • Timeout: 60 seconds (hard limit)
For operations taking longer than 5 seconds, use the split pattern: one webhook to start the process, another to check status later.
Webhooks are perfect for implementing complex agentic workflows. Beyond simple data retrieval, use them to orchestrate multi-step processes, integrate with third-party services, and build sophisticated AI-driven automation.

Creating a Webhook Tool

Stateful (Database-Saved)
Create webhook tools in the Anam Lab that can be reused across personas:
Step 1: Create the tool

Navigate to /tools in the Anam Lab:
  1. Click Create Tool
  2. Select Webhook Tool
  3. Fill in the configuration:
    • Name: check_order_status (snake_case)
    • Description: When the LLM should call this webhook
    • URL: Your API endpoint
    • Method: GET, POST, PUT, PATCH, or DELETE
    • Headers: Authentication and content-type headers
    • Parameters: JSON Schema for request body
    • Await Response: Whether to wait for the response
The tool is now saved and can be attached to any persona.
Step 2: Attach to persona

Navigate to /build/{personaId}:
  1. Scroll to the Tools section
  2. Click Add Tool
  3. Select your webhook tool from the dropdown
  4. Save the persona
Step 3: Use in session

Create a session with the persona ID:
const response = await fetch("https://api.anam.ai/v1/auth/session-token", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    personaConfig: {
      personaId: "your-persona-id",
    },
  }),
});
Stateful tools are ideal for organization-wide integrations where the same webhook is used by multiple personas.

Tool Configuration

Required Fields

type
string
required
Must be "server" for webhook tools
subtype
string
required
Must be "webhook" for HTTP endpoint tools
name
string
required
Unique identifier for the tool. Must be 1-64 characters, snake_case format.
Examples: check_order_status, create_ticket, update_crm_contact
description
string
required
Describes when the LLM should invoke this webhook (1-1024 characters). Be specific about:
  • What triggers the webhook call
  • What data the webhook provides
  • When NOT to use it
Example: "Check order status when customer mentions an order number or asks about delivery, tracking, or shipping. Use the order ID from the conversation."
url
string
required
The HTTP endpoint to call. Must be a valid HTTPS URL (HTTP allowed for development).
Example: https://api.yourcompany.com/v1/orders/status
method
string
required
HTTP method to use. Supported values:
  • GET: Retrieve data
  • POST: Create or query data (most common)
  • PUT: Update an entire resource
  • PATCH: Partial update
  • DELETE: Remove a resource

Optional Fields

headers
object
HTTP headers to include in the request. Common headers:
  • Authorization: API keys or bearer tokens
  • Content-Type: Usually application/json
  • X-API-Key: Alternative auth header
  • Custom headers specific to your API
Example:
{
  "Authorization": "Bearer sk_live_abc123",
  "Content-Type": "application/json",
  "X-Organization-ID": "org_xyz"
}
parameters
object
JSON Schema defining the request body structure. The LLM extracts these values from the conversation.
Example:
{
  "type": "object",
  "properties": {
    "orderId": {
      "type": "string",
      "description": "Customer order ID"
    },
    "includeDetails": {
      "type": "boolean",
      "description": "Whether to include detailed tracking info"
    }
  },
  "required": ["orderId"]
}
awaitResponse
boolean
default: true
Whether to wait for the webhook response before continuing:
  • true (default): LLM waits for response and uses it in the answer
  • false: Fire-and-forget, useful for logging or notifications
Timeout: 60 seconds maximum

How Webhook Tools Work

Step 1: User mentions relevant information

User: “What’s the status of my order ORD-12345?”
Step 2: LLM extracts parameters

The LLM recognizes this requires the check_order_status webhook and extracts parameters:
{
  "toolName": "check_order_status",
  "arguments": {
    "orderId": "ORD-12345"
  }
}
Step 3: Server makes HTTP request

Anam’s servers call your webhook endpoint:
POST https://api.yourcompany.com/orders/status
Authorization: Bearer YOUR_API_SECRET
Content-Type: application/json

{
  "orderId": "ORD-12345"
}
The request originates from Anam’s servers, not the client. Your API credentials stay secure.
Step 4: Your API responds

Your endpoint returns the order data:
{
  "orderId": "ORD-12345",
  "status": "shipped",
  "trackingNumber": "1Z999AA10123456784",
  "estimatedDelivery": "2024-03-15",
  "carrier": "UPS"
}
Step 5: LLM generates response

The LLM receives the webhook response and incorporates it into a natural language answer:
“Your order ORD-12345 has been shipped! It’s currently in transit with UPS (tracking number 1Z999AA10123456784) and should arrive by March 15th.”
The user receives real-time, accurate information from your systems.
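The endpoint on your side of this flow (steps 3 and 4) can be sketched as a plain handler function. Everything here is illustrative: the `ORDERS` lookup table and the `handleOrderStatus` name are assumptions that mirror the example request and response above, not a required shape.

```javascript
// Illustrative order store — replace with your database or fulfilment API.
const ORDERS = {
  "ORD-12345": {
    status: "shipped",
    trackingNumber: "1Z999AA10123456784",
    estimatedDelivery: "2024-03-15",
    carrier: "UPS",
  },
};

// Handles the webhook body Anam's servers send ({ "orderId": "..." })
// and returns the JSON the LLM will read in step 5.
function handleOrderStatus(body) {
  const order = ORDERS[body.orderId];
  if (!order) {
    return {
      error: true,
      message: `Order ${body.orderId} not found`,
      suggestion: "Please verify the order number",
    };
  }
  return { orderId: body.orderId, ...order };
}
```

A found order returns the concise success shape shown above; an unknown ID returns a structured error the LLM can relay naturally.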

Common Use Cases

E-commerce: Order Status

{
  "type": "server",
  "subtype": "webhook",
  "name": "check_order_status",
  "description": "Check customer order status when they ask about delivery, shipping, or tracking",
  "url": "https://api.yourstore.com/orders/status",
  "method": "POST",
  "headers": {
    "Authorization": "Bearer YOUR_API_KEY"
  },
  "parameters": {
    "type": "object",
    "properties": {
      "orderId": {
        "type": "string",
        "description": "Order ID or order number"
      },
      "email": {
        "type": "string",
        "description": "Customer email for verification"
      }
    },
    "required": ["orderId"]
  },
  "awaitResponse": true
}

Support: Create Ticket

{
  "type": "server",
  "subtype": "webhook",
  "name": "create_support_ticket",
  "description": "Create a support ticket when customer reports a bug, issue, or problem that needs follow-up",
  "url": "https://api.zendesk.com/api/v2/tickets",
  "method": "POST",
  "headers": {
    "Authorization": "Basic BASE64_CREDENTIALS",
    "Content-Type": "application/json"
  },
  "parameters": {
    "type": "object",
    "properties": {
      "subject": {
        "type": "string",
        "description": "Brief summary of the issue"
      },
      "description": {
        "type": "string",
        "description": "Detailed description of the problem"
      },
      "priority": {
        "type": "string",
        "enum": ["low", "normal", "high", "urgent"],
        "description": "Ticket priority based on severity"
      }
    },
    "required": ["subject", "description"]
  },
  "awaitResponse": true
}

CRM: Update Contact

{
  "type": "server",
  "subtype": "webhook",
  "name": "update_contact_info",
  "description": "Update customer contact information when they provide new email, phone, or address",
  "url": "https://api.salesforce.com/services/data/v55.0/sobjects/Contact",
  "method": "PATCH",
  "headers": {
    "Authorization": "Bearer SALESFORCE_TOKEN",
    "Content-Type": "application/json"
  },
  "parameters": {
    "type": "object",
    "properties": {
      "contactId": {
        "type": "string",
        "description": "Salesforce contact ID"
      },
      "email": {
        "type": "string",
        "description": "New email address"
      },
      "phone": {
        "type": "string",
        "description": "New phone number"
      }
    },
    "required": ["contactId"]
  },
  "awaitResponse": false
}

Real-time Data: Weather

{
  "type": "server",
  "subtype": "webhook",
  "name": "get_weather",
  "description": "Get current weather conditions when user asks about weather for a specific location",
  "url": "https://api.openweathermap.org/data/2.5/weather",
  "method": "GET",
  "headers": {
    "X-API-Key": "YOUR_WEATHER_API_KEY"
  },
  "parameters": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "City name"
      },
      "units": {
        "type": "string",
        "enum": ["metric", "imperial"],
        "description": "Temperature units"
      }
    },
    "required": ["city"]
  },
  "awaitResponse": true
}

Availability Check

{
  "type": "server",
  "subtype": "webhook",
  "name": "check_product_availability",
  "description": "Check if a product is in stock when customer asks about availability",
  "url": "https://api.yourstore.com/inventory/check",
  "method": "POST",
  "headers": {
    "Authorization": "Bearer YOUR_API_KEY"
  },
  "parameters": {
    "type": "object",
    "properties": {
      "productId": {
        "type": "string",
        "description": "Product SKU or ID"
      },
      "location": {
        "type": "string",
        "description": "Store location or warehouse"
      }
    },
    "required": ["productId"]
  },
  "awaitResponse": true
}

Best Practices

Security

Always use HTTPS (not HTTP) for production webhooks to encrypt data in transit.
// ✅ Good
"url": "https://api.yourcompany.com/endpoint"

// ❌ Bad (except for local development)
"url": "http://api.yourcompany.com/endpoint"
Never hardcode production credentials in ephemeral tools. Use environment variables or secret management:
// ✅ Good - Use environment variables
headers: {
  'Authorization': `Bearer ${process.env.API_SECRET}`
}

// ❌ Bad - Hardcoded credentials
headers: {
  'Authorization': 'Bearer sk_live_abc123xyz_real_key'
}
For stateful tools created in the UI, credentials are stored encrypted in the database.
Your API should validate requests and return appropriate error messages:
// Good error response
{
  "error": "Order not found",
  "orderId": "ORD-12345",
  "suggestion": "Please check the order number and try again"
}
The LLM will communicate errors naturally to the user.
Implement rate limiting on your endpoints to prevent abuse:
  • Track requests per API key
  • Return 429 Too Many Requests when limit exceeded
  • Include Retry-After header
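One minimal way to implement this is a fixed-window counter keyed by API key. This is a sketch only: the window size and limit are arbitrary, and the in-memory `Map` would typically be replaced by shared storage (e.g. Redis) in production.

```javascript
// Fixed-window rate limiter keyed by API key (in-memory sketch).
const WINDOW_MS = 60_000; // 1-minute window (illustrative)
const LIMIT = 100;        // max requests per window (illustrative)
const counters = new Map(); // apiKey -> { windowStart, count }

function allowRequest(apiKey, now = Date.now()) {
  const entry = counters.get(apiKey);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // New window: reset the counter.
    counters.set(apiKey, { windowStart: now, count: 1 });
    return true;
  }
  entry.count += 1;
  // When this returns false, respond 429 with a Retry-After header.
  return entry.count <= LIMIT;
}
```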

Performance

Webhooks should respond quickly to maintain conversation flow:
  • Target: Under 1 second for best experience
  • Maximum recommended: 5 seconds
  • Hard timeout: 60 seconds
Optimization strategies:
  • Use caching for frequently requested data
  • Defer slow operations to background jobs (use split pattern)
  • Return partial data quickly rather than waiting for complete results
  • Database queries should be indexed and optimized
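The caching strategy above can be as simple as a TTL map in front of your slow lookup. The 30-second TTL and function names here are illustrative choices, not Anam requirements.

```javascript
// Simple TTL cache for webhook responses (sketch).
const TTL_MS = 30_000; // cache entries for 30 seconds (illustrative)
const cache = new Map(); // key -> { value, expires }

function cached(key, now = Date.now()) {
  const hit = cache.get(key);
  return hit && hit.expires > now ? hit.value : undefined;
}

function store(key, value, now = Date.now()) {
  cache.set(key, { value, expires: now + TTL_MS });
}
```

In a handler you would check `cached(orderId)` first and only hit the database on a miss, keeping repeat questions well under the 1-second target.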
Set awaitResponse: false for operations that don’t need to return data:
{
  "name": "log_conversation_event",
  "awaitResponse": false // Don't wait, just log it
}
This improves response time since the LLM doesn’t wait for confirmation.
Keep webhook responses concise. The LLM only needs relevant information:
// ✅ Good - Concise, relevant data
{
  "status": "shipped",
  "trackingNumber": "1Z999AA10123456784",
  "estimatedDelivery": "2024-03-15"
}

// ❌ Too verbose - Unnecessary data
{
  "orderId": "ORD-12345",
  "customerId": "CUST-789",
  "items": [...100 items],
  "internalNotes": "...",
  "warehouseData": {...},
  // ... lots of irrelevant data
}
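A small whitelist function is enough to produce the concise shape from a full internal record. The field list is an example; choose whichever fields your persona should actually see.

```javascript
// Trim a full internal record down to the fields the LLM needs.
const LLM_FIELDS = ["status", "trackingNumber", "estimatedDelivery"];

function trimForLlm(record) {
  const out = {};
  for (const field of LLM_FIELDS) {
    if (field in record) out[field] = record[field];
  }
  return out;
}
```

A whitelist is safer than a blacklist here: new internal fields (notes, warehouse data) stay hidden by default instead of leaking into the LLM's context.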

Error Handling

Help the LLM understand what went wrong:
// ✅ Good error response
{
  "error": true,
  "message": "Order ORD-12345 not found in our system",
  "suggestion": "Please verify the order number or contact support at support@company.com"
}

// ❌ Bad error response
{
  "error": "NOT_FOUND"
}
Your API should handle missing or invalid parameters:
{
  "error": true,
  "message": "Missing required parameter: orderId",
  "requiredParameters": ["orderId"],
  "example": {
    "orderId": "ORD-12345"
  }
}
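A helper like the following can build that structured error body before your handler does any real work. The function name is hypothetical; the response shape matches the example above.

```javascript
// Build a structured error body when required parameters are missing,
// or return null when the request is complete.
function missingParamError(required, body) {
  const missing = required.filter((key) => !(key in body));
  if (missing.length === 0) return null;
  return {
    error: true,
    message: `Missing required parameter${missing.length > 1 ? "s" : ""}: ${missing.join(", ")}`,
    requiredParameters: required,
  };
}
```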
Return correct status codes for different scenarios:
  • 200 OK: Success
  • 400 Bad Request: Invalid parameters
  • 401 Unauthorized: Invalid API key
  • 404 Not Found: Resource doesn’t exist
  • 429 Too Many Requests: Rate limit exceeded
  • 500 Internal Server Error: Server error

Testing Webhook Tools

Local Development

For local testing, use tunneling services to expose your localhost:
# Using ngrok
ngrok http 3000

# Use the ngrok URL in your webhook tool
# Example: https://abc123.ngrok.io/api/orders

Test Mode

Create a test version of your webhook tool:
{
  "name": "check_order_status_test",
  "url": "https://api-staging.yourcompany.com/orders/status",
  "headers": {
    "Authorization": "Bearer TEST_API_KEY"
  }
}

Mock Responses

For development without a backend, use mock API services:
{
  "name": "check_order_status_mock",
  "url": "https://run.mocky.io/v3/your-mock-id",
  "method": "GET"
}

Logging and Debugging

Add logging to your webhook endpoint to debug issues:
const express = require("express");

const app = express();
app.use(express.json()); // parse JSON webhook bodies

app.post("/api/orders/status", async (req, res) => {
  console.log("Webhook called:", {
    timestamp: new Date().toISOString(),
    body: req.body,
    headers: req.headers,
    source: "anam",
  });

  try {
    const result = await checkOrderStatus(req.body.orderId);
    console.log("Webhook success:", result);
    res.json(result);
  } catch (error) {
    console.error("Webhook error:", error);
    res.status(500).json({
      error: true,
      message: error.message,
    });
  }
});

Troubleshooting

Webhook not being called

Possible causes:
  • Description too vague for LLM to understand when to use it
  • System prompt doesn’t mention the webhook capability
  • Missing required parameters in conversation
Solutions:
  1. Make description more specific with trigger words
  2. Update system prompt to mention the webhook
  3. Test with explicit parameter values in conversation
  4. Check server logs to see if request was made
Webhook times out

Possible causes:
  • Endpoint takes >60 seconds to respond
  • Network issues or slow database queries
  • Endpoint not accessible from Anam servers
Solutions:
  1. Optimize endpoint performance
  2. Check endpoint is publicly accessible (not localhost in production)
  3. Verify firewall rules allow Anam’s IP range
  4. Use fire-and-forget (awaitResponse: false) for slow operations
Authentication failures

Possible causes:
  • Incorrect API key or token
  • Expired credentials
  • Wrong authentication header format
Solutions:
  1. Verify credentials are current and valid
  2. Check header format matches API requirements
  3. Test endpoint directly with same credentials using curl/Postman
  4. Check for typos in header names (case-sensitive)
Incorrect parameter values

Possible causes:
  • LLM extracting wrong values from conversation
  • Parameter descriptions unclear
  • User providing ambiguous information
Solutions:
  1. Improve parameter descriptions with examples
  2. Update system prompt with parameter extraction guidance
  3. Ask users to confirm values before making webhook call

Advanced Patterns

Chaining Multiple Webhooks

The LLM can call multiple webhooks in sequence.
User: “Check my order ORD-12345 and create a ticket if there’s a problem”
LLM flow:
  1. Calls check_order_status → finds delay
  2. Calls create_support_ticket → creates ticket
  3. Responds: “I checked your order and it’s delayed. I’ve created support ticket #789 to investigate.”

Conditional Webhooks

Use system prompt to guide conditional webhook usage:
If the order status is "delayed" or "problem", automatically create a support ticket.
If the user asks about weather and mentions travel, also check flight status.

Dynamic Parameters

Extract multiple pieces of information from complex requests:
{
  "parameters": {
    "type": "object",
    "properties": {
      "orderId": { "type": "string" },
      "action": {
        "type": "string",
        "enum": ["check_status", "cancel", "modify"],
        "description": "What the user wants to do with the order"
      },
      "reason": {
        "type": "string",
        "description": "Reason for cancellation or modification"
      }
    }
  }
}
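On the backend, a request built from this schema might be routed by its `action` value. The handler functions here are placeholders for your real order operations; only the `action` enum comes from the schema above.

```javascript
// Dispatch on the "action" enum extracted by the LLM.
// Handlers are placeholders — wire them to your real order operations.
const handlers = {
  check_status: (args) => ({ action: "check_status", orderId: args.orderId }),
  cancel: (args) => ({ action: "cancel", orderId: args.orderId, reason: args.reason }),
  modify: (args) => ({ action: "modify", orderId: args.orderId, reason: args.reason }),
};

function dispatchOrderAction(args) {
  const handler = handlers[args.action];
  if (!handler) {
    return { error: true, message: `Unknown action: ${args.action}` };
  }
  return handler(args);
}
```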
Handling Long-Running Operations (Split Pattern)

For backend processes that take longer than 5 seconds (like generating a report or running a batch job), the conversation can feel stalled. To keep the interaction fluid, we recommend a “split pattern” where you create two separate webhooks.
This is a design pattern that you implement on your backend. You are responsible for creating the two endpoints and managing the state of the job. The Anam agent’s role is simply to call the webhooks you provide.
Step 1: Create a 'start' webhook

This webhook initiates the long-running process. It should immediately return a unique jobId and an estimated completion time.
{
  "name": "start_report_generation",
  "description": "Starts generating a custom report. Returns a job ID.",
  "url": "https://api.example.com/reports/start",
  "awaitResponse": true,
  "parameters": {
    "type": "object",
    "properties": {
      "reportType": { "type": "string" },
      "dateRange": { "type": "string" }
    }
  }
}
Step 2: Create a 'check status' webhook

This webhook checks the status of the job using the jobId. It should return the status and, if complete, the final result (e.g., a download URL).
{
  "name": "check_report_status",
  "description": "Checks the status of a report generation job.",
  "url": "https://api.example.com/reports/status",
  "awaitResponse": true,
  "parameters": {
    "type": "object",
    "properties": {
      "jobId": {
        "type": "string",
        "description": "The job ID returned from start_report_generation"
      }
    }
  }
}
Step 3: Guide the AI with a system prompt

Instruct the AI on how to use this two-step process.
When a user asks for a report, first call `start_report_generation`. Inform the user that the report is being generated and tell them the estimated wait time. After waiting, proactively call `check_report_status` to see if it's ready.
Example Conversation Flow:
  1. User: “Can you generate a sales report for last month?”
  2. AI:
    • Calls your start_report_generation webhook.
    • Your backend starts the job, saves its state (e.g., status: 'PENDING') in a database, and immediately returns { "jobId": "job_123", "estimatedTime": "30 seconds" }.
    • Responds: “I’ve started generating your sales report. It should be ready in about 30 seconds. I can let you know when it’s done.”
  3. (AI continues the conversation on other topics…)
  4. AI (after ~30 seconds):
    • Proactively calls your check_report_status webhook with jobId: "job_123".
    • Your backend checks the job’s state. If it’s done, it returns { "status": "COMPLETE", "url": "https://.../report.pdf" }.
    • Responds: “Great news! That sales report you asked for is ready. You can download it here: [link].”
This pattern works for any long-running operation: data exports, batch processing, AI model inference, video processing, etc. The key is providing immediate feedback that the process has started, then checking back later.
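The backend side of this pattern boils down to a job store shared by both endpoints. This sketch keeps jobs in memory and uses hypothetical function names; a real service would persist jobs in a database, as the flow above notes.

```javascript
// In-memory job store backing both split-pattern webhooks (sketch).
const jobs = new Map();
let nextId = 0;

// "start" webhook handler: register the job and return immediately.
function startReport(params) {
  const jobId = `job_${++nextId}`;
  jobs.set(jobId, { status: "PENDING", params, url: null });
  return { jobId, estimatedTime: "30 seconds" };
}

// Called by your background worker when the long-running task finishes.
function completeJob(jobId, url) {
  const job = jobs.get(jobId);
  if (job) Object.assign(job, { status: "COMPLETE", url });
}

// "check status" webhook handler.
function checkReport(jobId) {
  const job = jobs.get(jobId);
  if (!job) return { error: true, message: `Unknown job: ${jobId}` };
  return { status: job.status, url: job.url };
}
```

The start handler never blocks on the actual work — it only records that the job exists, which is what keeps the conversation moving.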
