Build a production-ready web application that integrates your own custom language model with Anam’s real-time personas. This comprehensive guide walks you through creating an interactive AI assistant that uses OpenAI’s GPT-4o-mini as the brain, with streaming responses, real-time conversation handling, and production-grade security practices.

How to use this guide

This guide takes an additive approach, covering the essential patterns for custom LLM integration with Anam. Each section builds on the previous ones, so you end up with increasingly sophisticated functionality: a conversational experience comparable to ChatGPT’s, delivered through a realistic AI persona.

We intentionally provide detailed, complete code examples to make integration with AI coding assistants seamless.

After completing the initial setup (Steps 1-5), you can extend this foundation by adding features like conversation memory, different LLM providers, custom system prompts, or specialized AI behaviors.

What You’ll Build

By the end of this guide, you’ll have a sophisticated persona application featuring:

  • Custom AI Brain powered by your own language model (OpenAI GPT-4o-mini)
  • Streaming Responses with real-time text-to-speech conversion
  • Turn-taking Management that handles conversation flow naturally
  • Message History Integration that maintains conversation context
  • Production Security with proper API key handling and rate limiting
  • Error Handling & Recovery for robust user experience
  • Modern UI/UX with loading states and real-time feedback

This guide uses OpenAI’s GPT-4o-mini as an example custom LLM for demonstration purposes. In your actual application, you would replace the OpenAI integration with calls to your specific LLM provider. The core integration pattern remains the same regardless of your LLM choice.

Prerequisites

  • Node.js (version 18 or higher) and npm installed
  • Understanding of modern JavaScript/TypeScript and streaming APIs
  • An Anam API key (get one here)
  • An OpenAI API key (get one here)
  • Basic knowledge of Express.js and modern web development
  • A microphone and speakers for voice interaction

Understanding the Custom LLM Flow

Before diving into the implementation, it’s important to understand how custom LLM integration works with Anam personas. Regardless of your custom LLM provider, the implementation pattern will always follow these steps:

1. Disable Default Brain

The brainType: "CUSTOMER_CLIENT_V1" setting in the session token request disables Anam’s default AI, allowing you to handle all conversation logic.

2. Listen for User Input

The MESSAGE_HISTORY_UPDATED event fires when the user finishes speaking, providing the complete conversation history including the new user message.

3. Process with Custom LLM

Your server endpoint receives the conversation history and generates a streaming response using your chosen LLM (OpenAI in this example).

4. Stream to Persona

The LLM response is streamed back to the client and forwarded to the persona using createTalkMessageStream() for natural text-to-speech conversion.
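Taken together, the client side of this flow fits in a single event handler. Below is a minimal sketch, assuming an anamClient created with a CUSTOMER_CLIENT_V1 session token, the AnamEvent enum from the SDK, and the /api/chat-stream endpoint built in Step 1; the complete, error-handled version is built in Step 3.

// Sketch only; see Steps 1-3 for the full implementation
anamClient.addListener(AnamEvent.MESSAGE_HISTORY_UPDATED, async (messages) => {
  // Step 2: fires when the user finishes speaking
  if (messages[messages.length - 1]?.role !== "user") return;

  // Step 3: ask your own server-side LLM endpoint for a streaming reply
  // (the full version in Step 3 also maps Anam roles to OpenAI roles here)
  const response = await fetch("/api/chat-stream", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });

  // Step 4: forward each text chunk to the persona for text-to-speech
  const talkStream = anamClient.createTalkMessageStream();
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  for (let r = await reader.read(); !r.done; r = await reader.read()) {
    for (const line of decoder.decode(r.value).split("\n").filter(Boolean)) {
      talkStream.streamMessageChunk(JSON.parse(line).content, false);
    }
  }
  talkStream.endMessage();
});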

Using these core concepts, we’ll build a simple web application that allows you to chat with your custom LLM-powered persona.

Basic Setup

Let’s start by building the foundation with custom LLM integration. This setup creates a web application with four main components:

anam-custom-llm-app/
├── server.js              # Express server with streaming LLM endpoint
├── package.json           # Node.js dependencies
├── public/                # Static files served to the browser
│   ├── index.html         # Main HTML page with video element
│   └── script.js          # Client-side JavaScript for persona control
└── .env                   # Environment variables

1. Create project directory

mkdir anam-custom-llm-app
cd anam-custom-llm-app
2. Initialize Node.js project

npm init -y

This creates a package.json file for managing dependencies.

3. Create public directory

mkdir public

The public folder will contain your HTML and JavaScript files that are served to the browser.

4. Install dependencies

npm install express dotenv openai

We’re installing Express for the server, dotenv for environment variables, and the OpenAI SDK for custom LLM integration. The Anam SDK will be loaded directly from a CDN in the browser.
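In the browser, loading from the CDN is just an ESM import by URL; this exact line appears again in Step 3:

import { createClient } from "https://esm.sh/@anam-ai/js-sdk@latest";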

5. Configure environment variables

Create a .env file in your project root to store your API keys securely:

.env
ANAM_API_KEY=your-anam-api-key-here
OPENAI_API_KEY=your-openai-api-key-here

Replace the placeholder values with your actual API keys. Never commit this file to version control.
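If you use git, a minimal .gitignore keeps the file (and node_modules) out of your repository:

.gitignore
.env
node_modules/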

Step 1: Set up your server with LLM streaming

Create an Express server that handles both session token generation and LLM streaming:

server.js
require('dotenv').config();
const express = require('express');
const OpenAI = require('openai');

const app = express();

// Initialize OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

app.use(express.json());
app.use(express.static('public'));

// Session token endpoint with custom brain configuration
app.post('/api/session-token', async (req, res) => {
  try {
    const response = await fetch("https://api.anam.ai/v1/auth/session-token", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.ANAM_API_KEY}`,
      },
      body: JSON.stringify({
        personaConfig: {
          name: "Cara",
          avatarId: "30fa96d0-26c4-4e55-94a0-517025942e18",
          voiceId: "6bfbe25a-979d-40f3-a92b-5394170af54b",
          // This disables Anam's default brain and enables custom LLM integration
          brainType: "CUSTOMER_CLIENT_V1",
        },
      }),
    });
    
    const data = await response.json();
    res.json({ sessionToken: data.sessionToken });
  } catch (error) {
    console.error('Session token error:', error);
    res.status(500).json({ error: 'Failed to create session' });
  }
});

// Custom LLM streaming endpoint
app.post('/api/chat-stream', async (req, res) => {
  try {
    const { messages } = req.body;

    // Create a streaming response from OpenAI
    const stream = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content: "You are Cara, a helpful AI assistant. Be friendly, concise, and conversational in your responses. Keep responses under 100 words unless specifically asked for detailed information."
        },
        ...messages
      ],
      stream: true,
      temperature: 0.7,
    });

    // Set headers for streaming response
    res.setHeader('Content-Type', 'text/event-stream');
    res.setHeader('Cache-Control', 'no-cache');
    res.setHeader('Connection', 'keep-alive');

    // Process the OpenAI stream and forward to client
    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content || "";
      if (content) {
        // Send each chunk as JSON
        res.write(JSON.stringify({ content }) + '\n');
      }
    }

    res.end();
  } catch (error) {
    console.error('LLM streaming error:', error);
    // If streaming already started, the status line has been sent; just end the response
    if (res.headersSent) {
      res.end();
    } else {
      res.status(500).json({ error: 'An error occurred while streaming response' });
    }
  }
});

app.listen(8000, () => {
  console.log('Server running on http://localhost:8000');
  console.log('Custom LLM integration ready!');
});

The key difference from a standard Anam setup is the brainType: "CUSTOMER_CLIENT_V1" setting, which disables Anam’s default AI and enables custom LLM integration. The /api/chat-stream endpoint handles the actual conversation logic.
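If you want to sanity-check the streaming endpoint before wiring up the persona, a small throwaway script works well. This is a hypothetical helper, not part of the app; it assumes the server above is running on port 8000 (run it with node test-chat-stream.mjs):

test-chat-stream.mjs
// Hypothetical helper for manual testing; not required by the application
const response = await fetch("http://localhost:8000/api/chat-stream", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Say hello in five words." }],
  }),
});

const decoder = new TextDecoder();
let buffer = "";

// The endpoint writes one JSON object per line, e.g. {"content":"Hel"}
for await (const chunk of response.body) {
  buffer += decoder.decode(chunk, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop(); // keep any incomplete trailing line for the next chunk
  for (const line of lines.filter((l) => l.trim())) {
    process.stdout.write(JSON.parse(line).content);
  }
}
console.log();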

Step 2: Set up your HTML

Create a simple HTML page with video element and conversation display:

public/index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Custom LLM Persona - Anam Integration</title>
</head>
<body style="font-family: Arial, sans-serif; margin: 20px; background-color: #f5f5f5;">
    <div style="max-width: 1000px; margin: 0 auto;">
        <h1 style="text-align: center; color: #333;">🤖 Custom LLM Persona</h1>
        
        <div style="display: grid; grid-template-columns: 1fr 1fr; gap: 20px; margin-bottom: 20px;">
            <!-- Persona Panel -->
            <div style="background: white; padding: 20px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
                <video id="persona-video" autoplay playsinline muted 
                       style="width: 100%; max-width: 400px; border-radius: 8px; background: #000; display: block; margin: 0 auto;"></video>
                
                <div id="status" style="text-align: center; margin: 15px 0; font-weight: bold;">Ready to connect</div>
                
                <div style="text-align: center;">
                    <button id="start-button" 
                            style="background: #007bff; color: white; border: none; padding: 10px 20px; border-radius: 4px; cursor: pointer; margin: 5px; font-size: 14px;">
                        Start Conversation
                    </button>
                    <button id="stop-button" disabled
                            style="background: #dc3545; color: white; border: none; padding: 10px 20px; border-radius: 4px; cursor: pointer; margin: 5px; font-size: 14px; opacity: 0.6;">
                        Stop Conversation
                    </button>
                </div>
            </div>

            <!-- Chat Panel -->
            <div style="background: white; padding: 20px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
                <h2 style="margin-top: 0; color: #333;">💬 Conversation</h2>
                <div id="chat-history" 
                     style="height: 400px; overflow-y: auto; padding: 10px; border: 1px solid #ddd; border-radius: 4px; background: #fafafa;">
                    <div style="font-style: italic; color: #666; text-align: center;">Start a conversation to see your chat history...</div>
                </div>
            </div>
        </div>
    </div>

    <script type="module" src="script.js"></script>
</body>
</html>

Step 3: Implement the client-side custom LLM integration

Create the client-side JavaScript that handles the custom LLM integration:

public/script.js
import { createClient } from "https://esm.sh/@anam-ai/js-sdk@latest";
import { AnamEvent } from "https://esm.sh/@anam-ai/js-sdk@latest/dist/module/types";

let anamClient = null;

// Get DOM elements
const startButton = document.getElementById("start-button");
const stopButton = document.getElementById("stop-button");
const videoElement = document.getElementById("persona-video");
const statusElement = document.getElementById("status");
const chatHistory = document.getElementById("chat-history");

// Status management
function updateStatus(message, type = 'normal') {
  statusElement.textContent = message;
  const colors = {
    loading: '#f39c12',
    connected: '#28a745',
    error: '#dc3545',
    normal: '#333'
  };
  statusElement.style.color = colors[type] || colors.normal;
}

// Chat history management
function updateChatHistory(messages) {
  if (!chatHistory) return;
  
  chatHistory.innerHTML = '';
  
  if (messages.length === 0) {
    chatHistory.innerHTML = '<div style="font-style: italic; color: #666; text-align: center;">Start a conversation to see your chat history...</div>';
    return;
  }

  messages.forEach(message => {
    const messageDiv = document.createElement('div');
    const isUser = message.role === 'user';
    messageDiv.style.cssText = `
      margin-bottom: 10px; 
      padding: 8px 12px; 
      border-radius: 8px; 
      max-width: 85%; 
      background: ${isUser ? '#e3f2fd' : '#f1f8e9'};
      ${isUser ? 'margin-left: auto; text-align: right;' : ''}
    `;
    // Build the message with textContent instead of innerHTML so message text can't inject markup
    const label = document.createElement('strong');
    label.textContent = `${isUser ? 'You' : 'Cara'}: `;
    messageDiv.appendChild(label);
    messageDiv.appendChild(document.createTextNode(message.content));
    chatHistory.appendChild(messageDiv);
  });

  // Scroll to bottom
  chatHistory.scrollTop = chatHistory.scrollHeight;
}

// Custom LLM response handler
async function handleUserMessage(messageHistory) {
  // Only respond to user messages
  if (messageHistory.length === 0 || messageHistory[messageHistory.length - 1].role !== "user") {
    return;
  }
  
  if (!anamClient) return;

  try {
    console.log('🧠 Getting custom LLM response for:', messageHistory);
    
    // Convert Anam message format to OpenAI format
    const openAIMessages = messageHistory.map(msg => ({
      role: msg.role === "user" ? "user" : "assistant",
      content: msg.content
    }));

    // Create a streaming talk session
    const talkStream = anamClient.createTalkMessageStream();
    
    // Call our custom LLM streaming endpoint
    const response = await fetch("/api/chat-stream", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages: openAIMessages }),
    });

    if (!response.ok) {
      throw new Error(`LLM request failed: ${response.status}`);
    }

    const reader = response.body?.getReader();
    if (!reader) {
      throw new Error("Failed to get response stream reader");
    }

    const textDecoder = new TextDecoder();
    console.log('🎤 Streaming LLM response to persona...');

    // Stream the response chunks to the persona
    while (true) {
      const { done, value } = await reader.read();

      if (done) {
        console.log('✅ LLM streaming complete');
        if (talkStream.isActive()) {
          talkStream.endMessage();
        }
        break;
      }

      if (value) {
        const text = textDecoder.decode(value);
        const lines = text.split("\n").filter(line => line.trim());

        for (const line of lines) {
          try {
            const data = JSON.parse(line);
            if (data.content && talkStream.isActive()) {
              talkStream.streamMessageChunk(data.content, false);
            }
          } catch (parseError) {
            // Ignore parse errors in streaming
          }
        }
      }
    }
  } catch (error) {
    console.error('❌ Custom LLM error:', error);
    if (anamClient) {
      anamClient.talk("I'm sorry, I encountered an error while processing your request. Please try again.");
    }
  }
}

async function startConversation() {
  try {
    startButton.disabled = true;
    updateStatus("Connecting...", "loading");

    // Get session token from server
    const response = await fetch("/api/session-token", {
      method: "POST",
    });
    
    if (!response.ok) {
      throw new Error('Failed to get session token');
    }
    
    const { sessionToken } = await response.json();

    // Create Anam client
    anamClient = createClient(sessionToken);

    // Set up event listeners
    anamClient.addListener(AnamEvent.SESSION_READY, () => {
      console.log('🎯 Session ready!');
      updateStatus("Connected - Custom LLM active", "connected");
      startButton.disabled = true;
      stopButton.disabled = false;
      
      // Send initial greeting
      anamClient.talk("Hello! I'm Cara, powered by a custom AI brain. How can I help you today?");
    });

    anamClient.addListener(AnamEvent.CONNECTION_CLOSED, () => {
      console.log('🔌 Connection closed');
      stopConversation();
    });

    // This is the key event for custom LLM integration
    anamClient.addListener(AnamEvent.MESSAGE_HISTORY_UPDATED, handleUserMessage);

    // Update chat history in real-time
    anamClient.addListener(AnamEvent.MESSAGE_HISTORY_UPDATED, (messages) => {
      updateChatHistory(messages);
    });

    // Handle stream interruptions
    anamClient.addListener(AnamEvent.TALK_STREAM_INTERRUPTED, () => {
      console.log('🛑 Talk stream interrupted by user');
    });

    // Start streaming to video element
    await anamClient.streamToVideoElement("persona-video");

    console.log("🚀 Custom LLM persona started successfully!");
  } catch (error) {
    console.error("❌ Failed to start conversation:", error);
    updateStatus(`Error: ${error.message}`, "error");
    startButton.disabled = false;
  }
}

function stopConversation() {
  if (anamClient) {
    anamClient.stopStreaming();
    anamClient = null;
  }

  // Reset UI
  videoElement.srcObject = null;
  updateChatHistory([]);
  updateStatus("Disconnected", "normal");
  startButton.disabled = false;
  stopButton.disabled = true;

  console.log("🛑 Conversation stopped");
}

// Add event listeners
startButton.addEventListener("click", startConversation);
stopButton.addEventListener("click", stopConversation);

// Cleanup on page unload
window.addEventListener('beforeunload', stopConversation);

Step 4: Test your custom LLM integration

  1. Start your server:

node server.js

  2. Open http://localhost:8000 in your browser

  3. Click “Start Conversation” to begin chatting with your custom LLM-powered persona!

You should see Cara appear and greet you, powered by your custom OpenAI integration. Try having a conversation - your voice will be transcribed, sent to OpenAI’s GPT-4o-mini, and the response will be streamed back through the persona’s voice and video.

Advanced Features

Enhanced Error Handling

Add robust error handling to improve user experience:

// Replace the handleUserMessage function in public/script.js with this retry-enabled version
async function handleUserMessage(messageHistory) {
  if (messageHistory.length === 0 || messageHistory[messageHistory.length - 1].role !== "user") {
    return;
  }
  
  if (!anamClient) return;

  const maxRetries = 3;
  let retryCount = 0;

  while (retryCount < maxRetries) {
    try {
      // ... existing LLM call code ...
      return; // Success, exit retry loop
    } catch (error) {
      retryCount++;
      console.error(`❌ Custom LLM error (attempt ${retryCount}):`, error);
      
      if (retryCount >= maxRetries) {
        // Final fallback response
        if (anamClient) {
          anamClient.talk("I'm experiencing some technical difficulties. Please try rephrasing your question or try again in a moment.");
        }
      } else {
        // Wait before retry
        await new Promise(resolve => setTimeout(resolve, 1000 * retryCount));
      }
    }
  }
}
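
Rate Limiting

The feature list at the top of this guide also mentions rate limiting, which the sample server does not include. One common option on the Express side is the express-rate-limit middleware. A minimal sketch, assuming you also run npm install express-rate-limit (the limits shown are arbitrary examples):

// Add near the top of server.js
const rateLimit = require('express-rate-limit');

// Cap each IP at 30 chat requests per minute to protect your LLM spend
const chatLimiter = rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 30,             // requests per window per IP
});

// Apply only to the LLM streaming endpoint
app.use('/api/chat-stream', chatLimiter);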

What You’ve Built

Congratulations! You’ve successfully integrated a custom language model with Anam’s persona system. Your application now features:

Custom AI Brain: Complete control over your persona’s intelligence using OpenAI’s GPT-4o-mini, with the ability to customize personality, knowledge, and behavior.

Real-time Streaming: Responses stream naturally from your LLM through the persona’s voice, creating fluid conversations without noticeable delays.

Conversation Context: Full conversation history is maintained and provided to your LLM for contextually aware responses.

Production Ready: Error handling, retry logic, and proper API key management make this suitable for real-world deployment.

Extensible Architecture: The modular design allows you to easily swap LLM providers, add custom logic, or integrate with other AI services.
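
For example, swapping providers only requires keeping the endpoint’s contract stable: accept { messages } in the request body and write newline-delimited JSON chunks of the form {"content": "..."}. A provider-agnostic sketch, where streamFromMyLLM is a hypothetical placeholder for whatever streaming call your LLM SDK exposes:

// Sketch: same contract as /api/chat-stream, different LLM behind it
app.post('/api/chat-stream', async (req, res) => {
  const { messages } = req.body;

  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');

  // streamFromMyLLM is hypothetical; replace with your provider's streaming API
  for await (const textChunk of streamFromMyLLM(messages)) {
    res.write(JSON.stringify({ content: textChunk }) + '\n');
  }

  res.end();
});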

Troubleshooting