Overview
Webhook tools enable your AI persona to call external HTTP endpoints during conversations. This allows integration with any REST API, enabling your persona to:
- Check order or shipment status from your e-commerce system
- Create support tickets in your helpdesk software
- Update CRM records based on conversation context
- Fetch real-time data (weather, stock prices, availability)
- Trigger workflows in external systems
- Send notifications or alerts
- Log conversation events
Beta Feature: Tool calling is currently in beta. You may encounter some issues as we continue to improve the feature. Please report any feedback or issues to help us make it better.
Response time requirements: Webhooks should respond quickly to maintain natural conversation flow:
- Ideal: Under 1 second
- Maximum: 5 seconds
- Timeout: 60 seconds (hard limit)
Webhooks are perfect for implementing complex agentic workflows. Beyond simple
data retrieval, use them to orchestrate multi-step processes, integrate with
third-party services, and build sophisticated AI-driven automation.
Creating a Webhook Tool
Stateful (Database-Saved)
Create webhook tools in the Anam Lab that can be reused across personas:
1. Create the tool
Navigate to /tools in the Anam Lab:
- Click Create Tool
- Select Webhook Tool
- Fill in the configuration:
  - Name: check_order_status (snake_case)
  - Description: When the LLM should call this webhook
  - URL: Your API endpoint
  - Method: GET, POST, PUT, PATCH, or DELETE
  - Headers: Authentication and content-type headers
  - Parameters: JSON Schema for request body
  - Await Response: Whether to wait for the response
The tool is now saved and can be attached to any persona.
2. Attach to persona
Navigate to /build/{personaId}:
- Scroll to the Tools section
- Click Add Tool
- Select your webhook tool from the dropdown
- Save the persona
3. Use in session
Create a session with the persona ID:
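A minimal sketch of what session creation can look like. The endpoint path and payload fields shown here are assumptions for illustration; check the Anam API reference for the exact shape.

```ts
// Sketch: request a session token for a persona that has the tool attached.
// The URL and field names below are assumptions — verify against the API docs.
const resp = await fetch("https://api.anam.ai/v1/auth/session-token", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ANAM_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    personaConfig: { personaId: "your-persona-id" },
  }),
});
const { sessionToken } = await resp.json();
```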
Stateful tools are ideal for organization-wide integrations where the same webhook is used by multiple personas.
Tool Configuration
Required Fields
- Type: Must be "server" for webhook tools.
- Subtype: Must be "webhook" for HTTP endpoint tools.
- Name: Unique identifier for the tool. Must be 1-64 characters, snake_case format. Examples: check_order_status, create_ticket, update_crm_contact.
- Description: Describes when the LLM should invoke this webhook (1-1024 characters). Be specific about:
  - What triggers the webhook call
  - What data the webhook provides
  - When NOT to use it
  Example: "Check order status when customer mentions an order number or asks about delivery, tracking, or shipping. Use the order ID from the conversation."
- URL: The HTTP endpoint to call. Must be a valid HTTPS URL (HTTP is allowed for development). Example: https://api.yourcompany.com/v1/orders/status
- Method: HTTP method to use. Supported values:
  - GET: Retrieve data
  - POST: Create or query data (most common)
  - PUT: Update an entire resource
  - PATCH: Partial update
  - DELETE: Remove a resource
Optional Fields
- Headers: HTTP headers to include in the request. Common headers:
  - Authorization: API keys or bearer tokens
  - Content-Type: Usually application/json
  - X-API-Key: Alternative auth header
  - Custom headers specific to your API
- Parameters: JSON Schema defining the request body structure. The LLM extracts these values from the conversation. Example:
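For instance, a schema for the check_order_status tool might look like this (the property names are illustrative):

```ts
// Illustrative JSON Schema for the check_order_status request body.
// The LLM fills these values in from the conversation.
const parameters = {
  type: "object",
  properties: {
    orderId: {
      type: "string",
      description: "The order ID mentioned by the customer, e.g. ORD-12345",
    },
  },
  required: ["orderId"],
};
```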
- Await Response: Whether to wait for the webhook response before continuing:
  - true (default): the LLM waits for the response and uses it in the answer
  - false: fire-and-forget, useful for logging or notifications
How Webhook Tools Work
1. User mentions relevant information
User: “What’s the status of my order ORD-12345?”
2. LLM extracts parameters
The LLM recognizes this requires the check_order_status webhook and extracts parameters:
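For the utterance above, the extracted arguments would look something like:

```ts
// Arguments the LLM derives from "What's the status of my order ORD-12345?"
const args = { orderId: "ORD-12345" };
```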
3. Server makes HTTP request
Anam’s servers call your webhook endpoint:
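The request looks roughly like this; the exact headers depend on your tool configuration:

```ts
// Approximate request made on your behalf:
//
//   POST https://api.yourcompany.com/v1/orders/status
//   Content-Type: application/json
//   Authorization: <whatever you configured in the tool's headers>
//
// with the extracted parameters as the JSON body:
const body = { orderId: "ORD-12345" };
```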
The request originates from Anam’s servers, not the client. Your API credentials stay secure.
4. Your API responds
Your endpoint returns the order data:
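For example (field names are illustrative; the LLM works with whatever JSON you return):

```ts
// Example webhook response for the order above.
const response = {
  orderId: "ORD-12345",
  status: "shipped",
  carrier: "UPS",
  trackingNumber: "1Z999AA10123456784",
  estimatedDelivery: "March 15",
};
```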
5. LLM generates response
The LLM receives the webhook response and incorporates it into a natural language answer:
“Your order ORD-12345 has been shipped! It’s currently in transit with UPS (tracking number 1Z999AA10123456784) and should arrive by March 15th.”
The user receives real-time, accurate information from your systems.
Common Use Cases
E-commerce: Order Status
Support: Create Ticket
CRM: Update Contact
Real-time Data: Weather
Availability Check
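As a concrete illustration of the first use case, here is a minimal backend sketch for an order-status webhook, written with Express. The route, field names, and the findOrder lookup are all hypothetical stand-ins for your own system:

```ts
import express from "express";

const app = express();
app.use(express.json());

// Stub lookup — replace with your own database query.
async function findOrder(orderId: string) {
  if (orderId !== "ORD-12345") return null;
  return {
    id: orderId,
    status: "shipped",
    carrier: "UPS",
    trackingNumber: "1Z999AA10123456784",
    estimatedDelivery: "March 15",
  };
}

// Hypothetical endpoint backing the check_order_status tool.
app.post("/v1/orders/status", async (req, res) => {
  const { orderId } = req.body ?? {};
  if (!orderId) {
    res.status(400).json({ error: "missing_parameters", message: "orderId is required" });
    return;
  }
  const order = await findOrder(orderId);
  if (!order) {
    res.status(404).json({ error: "order_not_found", message: `No order ${orderId}` });
    return;
  }
  res.json({
    orderId: order.id,
    status: order.status,
    carrier: order.carrier,
    trackingNumber: order.trackingNumber,
    estimatedDelivery: order.estimatedDelivery,
  });
});

app.listen(3000);
```

The other use cases follow the same shape: accept the parameters the LLM extracted, do the work, and return a small JSON payload.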
Best Practices
Security
Use HTTPS endpoints
Always use HTTPS (not HTTP) for production webhooks to encrypt data in transit.
Secure API credentials
Never hardcode production credentials in ephemeral tools. Use environment variables or secret management. For stateful tools created in the UI, credentials are stored encrypted in the database.
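A minimal sketch, assuming the secret lives in an environment variable named CRM_API_KEY:

```ts
// Read credentials from the environment instead of hardcoding them.
const apiKey = process.env.CRM_API_KEY;
if (!apiKey) throw new Error("CRM_API_KEY is not set");

// Reference the secret when building the tool's headers (shape illustrative):
const headers = { Authorization: `Bearer ${apiKey}` };
```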
Validate webhook responses
Your API should validate requests and return appropriate error messages; the LLM will communicate errors naturally to the user.
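Continuing the Express sketch above, with an assumed ORD-12345-style ID format:

```ts
// Validate the body and return a clear, machine-readable error.
app.post("/v1/orders/status", (req, res) => {
  const { orderId } = req.body ?? {};
  if (typeof orderId !== "string" || !/^ORD-\d+$/.test(orderId)) {
    res.status(400).json({
      error: "invalid_order_id",
      message: "orderId must look like ORD-12345",
    });
    return;
  }
  // ... look up the order and respond as usual
});
```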
Rate limiting
Implement rate limiting on your endpoints to prevent abuse:
- Track requests per API key
- Return 429 Too Many Requests when the limit is exceeded
- Include a Retry-After header
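One common option in the Node ecosystem is the express-rate-limit package; a sketch:

```ts
import rateLimit from "express-rate-limit";

// Limit each API key (falling back to IP) to 60 requests per minute.
app.use(rateLimit({
  windowMs: 60_000,
  max: 60,
  keyGenerator: (req) => req.header("X-API-Key") ?? req.ip ?? "",
  standardHeaders: true, // emit standard rate-limit headers on responses
}));
```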
Performance
Keep responses fast
Webhooks should respond quickly to maintain conversation flow:
- Target: Under 1 second for best experience
- Maximum recommended: 5 seconds
- Hard timeout: 60 seconds
- Use caching for frequently requested data
- Defer slow operations to background jobs (use split pattern)
- Return partial data quickly rather than waiting for complete results
- Database queries should be indexed and optimized
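To illustrate the caching point, a tiny in-memory cache helper (a sketch; use a shared cache like Redis if you run multiple instances):

```ts
// Tiny in-memory cache for frequently requested, slowly changing data.
const cache = new Map<string, { value: unknown; expires: number }>();

async function cached<T>(key: string, ttlMs: number, load: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value as T;
  const value = await load();
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}

// Usage: serve weather lookups from cache for five minutes.
// const weather = await cached(`weather:${city}`, 300_000, () => fetchWeather(city));
```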
Use fire-and-forget when appropriate
Set awaitResponse: false for operations that don’t need to return data. This improves response time since the LLM doesn’t wait for confirmation.
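An illustrative tool config fragment for a logging webhook (field names other than awaitResponse are assumptions):

```ts
// Fire-and-forget: the LLM doesn't wait for this call to finish.
const logEventTool = {
  name: "log_conversation_event",
  description: "Log notable conversation events for analytics.",
  url: "https://api.yourcompany.com/v1/events",
  method: "POST",
  awaitResponse: false,
};
```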
Return only necessary data
Keep webhook responses concise. The LLM only needs relevant information:
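A quick contrast, as a sketch:

```ts
// Good: compact, directly relevant fields.
const concise = {
  status: "shipped",
  carrier: "UPS",
  estimatedDelivery: "March 15",
};

// Avoid: raw database rows, audit logs, deeply nested internal objects —
// they slow responses down and add noise the LLM has to wade through.
```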
Error Handling
Return meaningful error messages
Help the LLM understand what went wrong:
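For example, an error payload the LLM can turn into a helpful reply (error codes here are illustrative):

```ts
// Instead of an opaque 500, explain the failure so the LLM can relay it.
res.status(404).json({
  error: "order_not_found",
  message: "No order with that ID exists. Ask the customer to double-check the number.",
});
```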
Handle missing parameters gracefully
Your API should handle missing or invalid parameters:
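A sketch for a hypothetical ticket-creation endpoint:

```ts
// Validate up front and name exactly which fields are missing.
app.post("/v1/tickets", (req, res) => {
  const required = ["subject", "customerEmail"];
  const missing = required.filter((field) => !req.body?.[field]);
  if (missing.length > 0) {
    res.status(400).json({
      error: "missing_parameters",
      message: `Missing required fields: ${missing.join(", ")}`,
    });
    return;
  }
  // ... create the ticket and return its ID
});
```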
Use appropriate HTTP status codes
Return correct status codes for different scenarios:
- 200 OK: Success
- 400 Bad Request: Invalid parameters
- 401 Unauthorized: Invalid API key
- 404 Not Found: Resource doesn’t exist
- 429 Too Many Requests: Rate limit exceeded
- 500 Internal Server Error: Server error
Testing Webhook Tools
Local Development
For local testing, use a tunneling service (such as ngrok or Cloudflare Tunnel) to expose your localhost endpoint over a public HTTPS URL.
Test Mode
Create a test version of your webhook tool.
Mock Responses
For development without a backend, use mock API services.
Logging and Debugging
Add logging to your webhook endpoint to debug issues:
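A sketch using Express middleware (assumes express.json() is already installed on the app):

```ts
// Log every incoming webhook call so you can see exactly what the LLM sent.
app.use((req, _res, next) => {
  console.log(
    `[webhook] ${new Date().toISOString()} ${req.method} ${req.path}`,
    JSON.stringify(req.body),
  );
  next();
});
```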
Troubleshooting
Webhook not being called
Possible causes:
- Description too vague for the LLM to understand when to use it
- System prompt doesn’t mention the webhook capability
- Missing required parameters in conversation
Solutions:
- Make the description more specific with trigger words
- Update the system prompt to mention the webhook
- Test with explicit parameter values in conversation
- Check server logs to see if the request was made
Webhook timing out
Possible causes:
- Endpoint takes more than 60 seconds to respond
- Network issues or slow database queries
- Endpoint not accessible from Anam servers
Solutions:
- Optimize endpoint performance
- Check the endpoint is publicly accessible (not localhost in production)
- Verify firewall rules allow Anam’s IP range
- Use fire-and-forget (awaitResponse: false) for slow operations
Authentication errors
Possible causes:
- Incorrect API key or token
- Expired credentials
- Wrong authentication header format
Solutions:
- Verify credentials are current and valid
- Check the header format matches the API’s requirements
- Test the endpoint directly with the same credentials using curl or Postman
- Check for typos in header names (case-sensitive)
Incorrect parameter values
Possible causes:
- LLM extracting wrong values from the conversation
- Parameter descriptions unclear
- User providing ambiguous information
Solutions:
- Improve parameter descriptions with examples
- Update the system prompt with parameter-extraction guidance
- Ask users to confirm values before making the webhook call
Advanced Patterns
Chaining Multiple Webhooks
The LLM can call multiple webhooks in sequence:
User: “Check my order ORD-12345 and create a ticket if there’s a problem”
LLM flow:
- Calls check_order_status → finds delay
- Calls create_support_ticket → creates ticket
- Responds: “I checked your order and it’s delayed. I’ve created support ticket #789 to investigate.”
Conditional Webhooks
Use the system prompt to guide conditional webhook usage:
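An illustrative system-prompt fragment:

```text
When the customer asks about an order, call check_order_status only if they
have given you an order number. If they haven't, ask for the number first
instead of calling the webhook.
```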
Dynamic Parameters
Extract multiple pieces of information from complex requests:
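For instance, a schema that lets one utterance fill several parameters at once (property names are illustrative):

```ts
// Illustrative schema for a create_support_ticket tool: a single request like
// "Open a high-priority ticket — I was double-charged, my email is jane@example.com"
// can populate several parameters in one call.
const parameters = {
  type: "object",
  properties: {
    subject: { type: "string", description: "Short summary of the issue" },
    description: { type: "string", description: "Details of the problem" },
    customerEmail: { type: "string", description: "Customer's email address" },
    priority: { type: "string", enum: ["low", "normal", "high"] },
  },
  required: ["subject", "customerEmail"],
};
```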
Long-Running Processes (A Recommended Pattern)
For backend processes that take longer than 5 seconds (like generating a report or running a batch job), the conversation can feel stalled. To keep the interaction fluid, we recommend a “split pattern” where you create two separate webhooks. This is a design pattern that you implement on your backend. You are responsible for creating the two endpoints and managing the state of the job. The Anam agent’s role is simply to call the webhooks you provide.
Step 1: Create a 'start' webhook
This webhook initiates the long-running process. It should immediately return a unique jobId and an estimated completion time.
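A sketch of such a 'start' endpoint; the route, job store, and report logic are hypothetical:

```ts
// Minimal in-memory job store for illustration — use a database in production.
type Job = { status: "PENDING" | "COMPLETE"; url?: string };
const jobs = new Map<string, Job>();

// 'Start' endpoint: reply immediately, do the slow work in the background.
app.post("/v1/reports/start", (req, res) => {
  const jobId = `job_${Date.now()}`;
  jobs.set(jobId, { status: "PENDING" });
  generateReport(jobId).catch(console.error); // deliberately not awaited
  res.json({ jobId, estimatedTime: "30 seconds" });
});

async function generateReport(jobId: string) {
  // ... the slow work happens here, then the job is marked complete:
  jobs.set(jobId, { status: "COMPLETE", url: "https://example.com/report.pdf" });
}
```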
Step 2: Create a 'check status' webhook
This webhook checks the status of the job using the jobId. It should return the status and, if complete, the final result (e.g., a download URL).
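A matching 'check status' sketch, built on the same hypothetical job store:

```ts
// 'Check status' endpoint: return progress, plus the result once complete.
app.get("/v1/reports/status", (req, res) => {
  const job = jobs.get(String(req.query.jobId));
  if (!job) {
    res.status(404).json({ error: "unknown_job", message: "No job with that jobId" });
    return;
  }
  res.json(job); // { status: "PENDING" } or { status: "COMPLETE", url: "..." }
});
```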
Step 3: Guide the AI with a system prompt
Instruct the AI on how to use this two-step process.
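For example, a prompt fragment along these lines:

```text
To generate reports: first call start_report_generation and tell the user
roughly how long it will take. After the estimated time has passed, call
check_report_status with the jobId you received. If the status is COMPLETE,
share the download link; if it is still PENDING, tell the user and check
again shortly.
```

A conversation using this pattern might then play out like the flow below.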
- User: “Can you generate a sales report for last month?”
- AI:
  - Calls your start_report_generation webhook.
  - Your backend starts the job, saves its state (e.g., status: 'PENDING') in a database, and immediately returns { "jobId": "job_123", "estimatedTime": "30 seconds" }.
  - Responds: “I’ve started generating your sales report. It should be ready in about 30 seconds. I can let you know when it’s done.”
- (AI continues the conversation on other topics…)
- AI (after ~30 seconds):
  - Proactively calls your check_report_status webhook with jobId: "job_123".
  - Your backend checks the job’s state. If it’s done, it returns { "status": "COMPLETE", "url": "https://.../report.pdf" }.
  - Responds: “Great news! That sales report you asked for is ready. You can download it here: [link].”
This pattern works for any long-running operation: data exports, batch
processing, AI model inference, video processing, etc. The key is providing
immediate feedback that the process has started, then checking back later.

