This project integrates Agent Voice Response with OpenAI Assistant, enabling the application to handle dynamic conversations in real-time using OpenAI's API. It sets up an Express.js server that receives a stream of prompt messages from Agent Voice Response Core, sends them to the OpenAI API, and streams back responses as Server-Sent Events (SSE).
To set up and run this project, you will need:
- Node.js and npm installed.
- An OpenAI API Key.
- An OpenAI Assistant ID to use the OpenAI Assistant feature.
```bash
git clone https://github.com/agentvoiceresponse/avr-llm-openai-assistant.git
cd avr-llm-openai-assistant
npm install
```
Create a `.env` file in the root of your project and set the required environment variables:

```bash
OPENAI_API_KEY=your_openai_api_key
OPENAI_ASSISTANT_ID=your_assistant_id
PORT=6004
OPENAI_WAITING_MESSAGE=Loading...
OPENAI_WAITING_TIMEOUT=2000
```
- `OPENAI_API_KEY`: Your OpenAI API key.
- `OPENAI_ASSISTANT_ID`: The unique ID of the OpenAI Assistant you are integrating with.
- `OPENAI_WAITING_MESSAGE`: (Optional) A message shown to the user if the response takes longer than expected.
- `OPENAI_WAITING_TIMEOUT`: (Optional) Time (in milliseconds) before sending the waiting message (default: 2000).
- `PORT`: The port the server will listen on (default: 6004).
To start the application:

```bash
node index.js
```

The server will start and listen on the port specified in the `.env` file, or default to `6004`.
The application allows clients to send a sequence of prompt messages to the `/prompt-stream` endpoint. These messages are then processed by the OpenAI Assistant, and the response is streamed back to the client in real-time using Server-Sent Events (SSE).
- Express.js Server: Handles incoming requests from clients and streams responses back.
- OpenAI API Integration: Uses OpenAI's Assistant API to process user prompts and generate intelligent responses.
- Server-Sent Events (SSE): Enables real-time streaming of responses from OpenAI back to the client.
- Function Management: The application now supports managing OpenAI functions, allowing for more complex interactions and operations.
- Function Call Handling: You can define and handle specific function calls within the prompt messages, enabling the assistant to perform tasks like calculations, data retrieval, and more.
- Receiving Client Prompts: The server listens for POST requests containing the `messages` array (a sequence of user inputs).
- OpenAI API Communication: It uses the OpenAI API to create a thread and stream responses back to the client.
- Response Streaming: Responses from the OpenAI Assistant are streamed back to the client as a series of events, allowing for dynamic, real-time interaction.
- Waiting Message: If the assistant is delayed, a waiting message is sent to the client.
- Function Calls: The server can now handle specific function calls defined in the prompt messages, enhancing the assistant's capabilities.
This endpoint accepts a JSON payload containing the user's prompt messages and streams the responses back in real-time.
```json
{
  "messages": [
    { "role": "user", "content": "What is the current weather?" },
    { "role": "system", "content": "Assist the user with weather information." }
  ]
}
```
The response will be streamed back in chunks via Server-Sent Events (SSE).
You can test the endpoint with `curl`:
```bash
curl -X POST http://localhost:6004/prompt-stream \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Tell me about Agent Voice Response."}]}'
```
The response will be streamed back as an event stream, where each chunk of text will be sent as it is generated by the OpenAI Assistant.
A new directory named `avr_functions` has been added to store default functions that developers can use.
The `avr_transfer` function can be used to transfer a call from one internal extension to another on Asterisk.
The `avr_hangup` function can be used to hang up a call on Asterisk.
Developers can create custom functions by creating a `functions` directory and storing their JavaScript functions there. Each function should return a JSON object in the following format:
```json
{
  "data": {
    "status": "failure",
    "message": "Failed to do something."
  }
}
```
Here is an example of a custom function to collect information during a call:
```javascript
const fs = require('fs');

module.exports = async function (args) {
  console.log("Collect info", args);
  try {
    // Append the collected arguments to a per-call file, keyed by the call uuid.
    fs.appendFileSync(`files/${args.uuid}.txt`, JSON.stringify(args));
    return { data: { status: "success", message: "Information stored successfully." } };
  } catch (error) {
    console.log(error);
    return { data: { status: "failure", message: "Failed to store information." } };
  }
};
```
It is crucial that the function is exported using the following syntax:

```javascript
module.exports = async function (args) {};
```
In addition to the arguments configured through the assistant, the function will also receive the `uuid` of the call. It is equally important that the function returns an object of the form:

```json
{
  "data": {}
}
```

The `data` object should contain the information resulting from the function's operation.
To use functions with OpenAI, you need to configure them in the Assistants section. Specifically, in the functions section, you must declare the functions and their structure, including the parameters to be passed.
To use the default `avr_transfer` and `avr_hangup` functions, declare them as follows:
```json
{
  "name": "avr_hangup",
  "description": "Ends the conversation once the maintenance is booked or if no availability is found.",
  "strict": false,
  "parameters": {
    "type": "object",
    "properties": {},
    "required": []
  }
}
```

```json
{
  "name": "avr_transfer",
  "description": "Transfers a customer based on the bill type.",
  "strict": false,
  "parameters": {
    "type": "object",
    "properties": {
      "transfer_extension": {
        "type": "integer",
        "description": "The transfer extension for the bill type (600 for phone, 601 for gas, 602 for electricity)."
      },
      "transfer_context": {
        "type": "string",
        "description": "The context for the transfer. Default is 'demo'.",
        "default": "demo"
      },
      "transfer_priority": {
        "type": "integer",
        "description": "The priority level of the transfer. Default is 1.",
        "default": 1
      }
    },
    "required": [
      "transfer_extension"
    ]
  }
}
```
If you decide to use custom functions while running the application with Docker, you need to mount the volume containing the `functions` directory to `/usr/src/app/functions`. Here is an example of how to configure the volume in your Docker setup:
```yaml
volumes:
  - ./functions:/usr/src/app/functions
```
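For context, a fuller Docker Compose service using this mount might look like the sketch below; the image name and port mapping are illustrative assumptions, so adjust them to your own setup.

```yaml
services:
  avr-llm-openai-assistant:
    # Illustrative image name; use the image you build or pull for this project.
    image: agentvoiceresponse/avr-llm-openai-assistant
    ports:
      - "6004:6004"
    env_file:
      - .env
    volumes:
      - ./functions:/usr/src/app/functions
```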