Build your own AI conversation logic with OpenAI, Llama, and other language models
### Disable Default Brain

The `llmId: "CUSTOMER_CLIENT_V1"` setting in the session token request disables Anam’s default AI, allowing you to handle all conversation logic.
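A minimal sketch of that request from your own backend, assuming the standard Anam session token endpoint and an `ANAM_API_KEY` environment variable (the avatar and voice IDs are placeholders):

```javascript
// Server-side helper (Node 18+ for global fetch): mint a session token
// with Anam's default brain disabled so your code owns the conversation.
async function createSessionToken() {
  const response = await fetch("https://api.anam.ai/v1/auth/session-token", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.ANAM_API_KEY}`,
    },
    body: JSON.stringify({
      personaConfig: {
        name: "Assistant",
        avatarId: "YOUR_AVATAR_ID",  // placeholder
        voiceId: "YOUR_VOICE_ID",    // placeholder
        llmId: "CUSTOMER_CLIENT_V1", // hand conversation logic to your code
      },
    }),
  });
  const { sessionToken } = await response.json();
  return sessionToken;
}
```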
### Listen for User Input

The `MESSAGE_HISTORY_UPDATED` event fires when the user finishes speaking, providing the complete conversation history including the new user message.
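A sketch of the listener in the browser, assuming the `@anam-ai/js-sdk` client created from your session token (loaded via a bundler or ESM CDN); treat the `addListener` call and role names as assumptions and verify them against the SDK reference:

```javascript
import { createClient, AnamEvent } from "@anam-ai/js-sdk";

const anamClient = createClient(sessionToken); // token fetched from your server

// When the newest entry in the history is the user's, hand the full
// conversation to your custom LLM pipeline.
anamClient.addListener(AnamEvent.MESSAGE_HISTORY_UPDATED, (messages) => {
  const latest = messages[messages.length - 1];
  if (latest?.role === "user") {
    handleUserMessage(messages); // defined in the next step
  }
});
```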
### Process with Custom LLM

Send the conversation history to your own LLM (OpenAI, Llama, or any other model) to generate a response.

### Stream to Persona

Stream the LLM’s reply back to the persona with `createTalkMessageStream()` for natural text-to-speech conversion.
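A browser-side sketch covering both steps: it posts the history to the `/api/chat-stream` backend route (defined later) and forwards each text chunk to the persona. The `streamMessageChunk(text, isLastChunk)` signature is an assumption; check the SDK reference for the exact talk-message-stream API.

```javascript
// Forward the conversation to your backend, then stream the reply to the persona.
async function handleUserMessage(messages) {
  const response = await fetch("/api/chat-stream", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });

  const talkStream = anamClient.createTalkMessageStream();
  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    talkStream.streamMessageChunk(decoder.decode(value), false); // speak as chunks arrive
  }
  talkStream.streamMessageChunk("", true); // mark the end of the message
}
```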
### Create project directory

Create a new folder for your project and change into it.

### Initialize Node.js project

```bash
npm init -y
```

This creates a `package.json` file for managing dependencies.
### Create public directory

```bash
mkdir public
```

The `public` folder will contain your HTML and JavaScript files that are served to the browser.
### Install dependencies

```bash
npm install express dotenv openai
```
### Configure environment variables

Create a `.env` file in your project root to store your API keys securely:
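The variable names below are assumptions; use whatever names your server code reads, and keep the file out of version control:

```
ANAM_API_KEY=your_anam_api_key
OPENAI_API_KEY=your_openai_api_key
```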
The key setting is `llmId: "CUSTOMER_CLIENT_V1"`, which disables Anam’s default AI and enables custom LLM integration. The `/api/chat-stream` endpoint handles the actual AI conversation logic.
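A minimal sketch of that endpoint with Express and the official `openai` package, streaming plain text back to the browser. The role mapping and port are assumptions chosen to match the browser code above:

```javascript
// server.js: serves the public folder and streams LLM output as plain text.
require("dotenv").config();
const express = require("express");
const OpenAI = require("openai");

const app = express();
app.use(express.json());
app.use(express.static("public"));

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.post("/api/chat-stream", async (req, res) => {
  const { messages = [] } = req.body;

  // Anam's history may use roles such as "persona"; map anything that is not
  // "user" to "assistant" so the OpenAI API accepts it (assumed role names).
  const chatMessages = messages.map((m) => ({
    role: m.role === "user" ? "user" : "assistant",
    content: m.content,
  }));

  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: chatMessages,
    stream: true,
  });

  res.setHeader("Content-Type", "text/plain; charset=utf-8");
  for await (const chunk of stream) {
    const text = chunk.choices[0]?.delta?.content;
    if (text) res.write(text); // forward each token as soon as it arrives
  }
  res.end();
});

app.listen(8000, () => console.log("Listening on http://localhost:8000"));
```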
llmId: "CUSTOMER_CLIENT_V1"
is set in session tokenMESSAGE_HISTORY_UPDATED
event listener is properly connected/api/chat-stream
endpoint is responding correctlyStreaming Performance Issues
gpt-4o-mini
instead of gpt-4