Overview
The sendUserMessage() method allows you to programmatically send text messages on behalf of the user. This is useful for sending contextual information or triggering specific persona responses without actual user input.
Requires SDK version 3.3.0 or higher
Basic Usage
Send a message as if the user typed or spoke it.

Important Considerations

Messages sent via sendUserMessage() are not automatically added to the transcript. To maintain an accurate conversation history, you must manually add these messages to your transcript display.

The sendUserMessage() method differs from regular user messages in that:
- It does not trigger message events that would normally update your UI
- The message is sent directly to the persona without going through the normal message pipeline
- You need to handle transcript updates separately in your application
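Since the original snippet is not shown in this extract, here is a minimal sketch of the basic pattern, assuming a client object that exposes sendUserMessage(). The PersonaClient interface, TranscriptEntry type, and sendAndRecord helper are illustrative, not part of the documented API:

```typescript
// Illustrative types; the real SDK client shape may differ.
interface TranscriptEntry {
  role: "user" | "persona";
  text: string;
}

interface PersonaClient {
  sendUserMessage(text: string): void;
}

// Local transcript store, updated manually because sendUserMessage()
// does not emit the usual message events.
const transcript: TranscriptEntry[] = [];

function sendAndRecord(client: PersonaClient, text: string): void {
  client.sendUserMessage(text);            // goes straight to the persona
  transcript.push({ role: "user", text }); // manual transcript update
}

// Example with a stub client standing in for the real SDK client:
const stubClient: PersonaClient = { sendUserMessage: () => {} };
sendAndRecord(stubClient, "Hello there!");
```

Wrapping the call in a helper like this keeps the manual transcript update in one place, so it cannot be forgotten at individual call sites.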
Use Cases
Simulating User Input
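The snippet for this section appears to be missing from the extract. One hedged sketch of the idea: wiring a quick-reply button so that clicking it sends a predefined prompt as if the user had typed it (the PersonaClient interface and onQuickReplyClick helper are illustrative):

```typescript
interface PersonaClient {
  sendUserMessage(text: string): void;
}

// Illustrative: send a predefined prompt when the user clicks a
// "Tell me a joke" quick-reply button, as if they had typed it.
function onQuickReplyClick(client: PersonaClient, prompt: string): string {
  client.sendUserMessage(prompt); // delivered as ordinary user input
  return prompt;                  // returned so callers can update the transcript
}

// Usage with a stub client standing in for the real SDK client:
const sentMessages: string[] = [];
const stub: PersonaClient = { sendUserMessage: (t) => sentMessages.push(t) };
onQuickReplyClick(stub, "Tell me a joke");
```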
Providing Context to the LLM
A powerful use case is providing contextual information about user actions that the LLM can respond to.

When providing context to the LLM, prefix your messages with “Note to AI:” or similar to make the intent clear. This helps the LLM understand that it’s receiving contextual information rather than a direct user question.
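A hedged sketch of the prefix convention described above, with an illustrative sendContext helper (the helper and the PersonaClient interface are assumptions, not part of the documented API):

```typescript
interface PersonaClient {
  sendUserMessage(text: string): void;
}

// Illustrative helper: wrap contextual events in a "Note to AI:" prefix
// so the LLM can tell context apart from direct user questions.
function sendContext(client: PersonaClient, event: string): void {
  client.sendUserMessage(`Note to AI: ${event}`);
}

// Usage with a stub client standing in for the real SDK client:
const sentContext: string[] = [];
const ctxStub: PersonaClient = { sendUserMessage: (t) => sentContext.push(t) };
sendContext(ctxStub, "The user just added a blue T-shirt to their cart.");
```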
Triggering Specific Flows
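This section's snippet also seems to have been lost in extraction. One possible hedged sketch: starting a scripted flow, such as an onboarding walkthrough, by sending a message the persona's system prompt is written to recognize as a trigger (function name and trigger phrase are illustrative):

```typescript
interface PersonaClient {
  sendUserMessage(text: string): void;
}

// Illustrative: kick off a scripted flow by sending a trigger message
// the persona's prompt is designed to recognize.
function startOnboardingFlow(client: PersonaClient): void {
  client.sendUserMessage("Note to AI: Begin the onboarding walkthrough now.");
}

// Usage with a stub client standing in for the real SDK client:
const sentTriggers: string[] = [];
startOnboardingFlow({ sendUserMessage: (t) => sentTriggers.push(t) });
```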
Custom Client-Side Transcription
The sendUserMessage() method enables you to implement your own client-side speech-to-text transcription. You can capture and transcribe audio using your preferred service, then send the transcribed messages directly to the persona.
This approach gives you full control over the transcription process, allowing
you to use specialized transcription models, handle multiple languages, or
implement custom preprocessing of the audio before sending it to the persona.
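A hedged sketch of this pipeline, where transcribe() is a placeholder for whatever speech-to-text service you use (it returns a hard-coded string here; both it and handleAudioChunk are illustrative names):

```typescript
interface PersonaClient {
  sendUserMessage(text: string): void;
}

// Placeholder for your preferred speech-to-text service; a real
// implementation would call out to that service with the audio.
async function transcribe(audio: ArrayBuffer): Promise<string> {
  return "What is on the menu today?"; // hard-coded for the sketch
}

// Capture audio however you like, transcribe it yourself, then forward
// the resulting text to the persona as a user message.
async function handleAudioChunk(
  client: PersonaClient,
  audio: ArrayBuffer,
): Promise<string> {
  const text = await transcribe(audio);
  client.sendUserMessage(text);
  return text;
}
```

Keeping the transcription step behind a single async function makes it easy to swap in a different provider, add language detection, or preprocess the audio without touching the sending logic.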
Complete Example
Here’s a complete example showing how to use sendUserMessage() with proper transcript management.
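The example code itself appears to be missing from this extract, so here is a hedged end-to-end sketch. It assumes a client with sendUserMessage() and some event for persona replies; the onPersonaMessage event name, the PersonaClient interface, and the ConversationView class are all illustrative:

```typescript
interface TranscriptEntry {
  role: "user" | "persona";
  text: string;
}

interface PersonaClient {
  sendUserMessage(text: string): void;
  onPersonaMessage(handler: (text: string) => void): void;
}

class ConversationView {
  readonly transcript: TranscriptEntry[] = [];

  constructor(private client: PersonaClient) {
    // Persona replies still arrive through normal events...
    client.onPersonaMessage((text) =>
      this.transcript.push({ role: "persona", text }),
    );
  }

  // ...but messages sent via sendUserMessage() must be recorded manually,
  // since they bypass the normal message pipeline.
  sendUserMessage(text: string): void {
    this.transcript.push({ role: "user", text });
    this.client.sendUserMessage(text);
  }
}

// Usage with a stub client standing in for the real SDK; the stub
// replies synchronously so the full round trip is visible.
let personaHandler: (text: string) => void = () => {};
const sdkStub: PersonaClient = {
  sendUserMessage: () => personaHandler("Hi! How can I help?"),
  onPersonaMessage: (h) => { personaHandler = h; },
};
const view = new ConversationView(sdkStub);
view.sendUserMessage("Hello!");
```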
Future Improvements
In future SDK versions, we plan to introduce a dedicated
sendSystemMessage()
method that will be better suited for sending contextual information and
system-level messages to the LLM. This will provide clearer separation between
user messages and system context.
