Agent Voice Response - OpenAI Assistant Integration

This project integrates Agent Voice Response with OpenAI Assistant, enabling the application to handle dynamic conversations in real-time using OpenAI's API. It sets up an Express.js server that receives a stream of prompt messages from Agent Voice Response Core, sends them to the OpenAI API, and streams back responses as Server-Sent Events (SSE).

Prerequisites

To set up and run this project, you will need:

  1. Node.js and npm installed.
  2. An OpenAI API Key.
  3. The ID of an OpenAI Assistant already created in your OpenAI account.

Setup

1. Clone the Repository

git clone https://github.com/agentvoiceresponse/avr-llm-openai-assistant.git
cd avr-llm-openai-assistant

2. Install Dependencies

npm install

3. Configure Environment Variables

Create a .env file in the root of your project and set the required environment variables:

OPENAI_API_KEY=your_openai_api_key
OPENAI_ASSISTANT_ID=your_assistant_id
PORT=6004
OPENAI_WAITING_MESSAGE=Loading...
OPENAI_WAITING_TIMEOUT=2000
  • OPENAI_API_KEY: Your OpenAI API key.
  • OPENAI_ASSISTANT_ID: The unique ID of the OpenAI Assistant you are integrating with.
  • PORT: The port the server will listen on (default: 6004).
  • OPENAI_WAITING_MESSAGE: (Optional) A message sent to the user if the response takes longer than expected.
  • OPENAI_WAITING_TIMEOUT: (Optional) Time in milliseconds to wait before sending the waiting message (default: 2000).

4. Running the Application

To start the application:

node index.js

The server will start and listen on the port specified in the .env file, or on 6004 by default.

How It Works

The application allows clients to send a sequence of prompt messages to the /prompt-stream endpoint. These messages are then processed by the OpenAI Assistant, and the response is streamed back to the client in real-time using Server-Sent Events (SSE).

Key Components

  1. Express.js Server: Handles incoming requests from clients and streams responses back (see the sketch after this list).
  2. OpenAI API Integration: Uses OpenAI's Assistant API to process user prompts and generate intelligent responses.
  3. Server-Sent Events (SSE): Enables real-time streaming of responses from OpenAI back to the client.
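
To make the flow concrete, here is a minimal sketch of the server skeleton. It is simplified: the payload shape of each SSE event is an assumption, and the real index.js also manages threads, the waiting message, and function calls.

const express = require('express');

const app = express();
app.use(express.json());

app.post('/prompt-stream', async (req, res) => {
  // Stream the response back to the client as Server-Sent Events.
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  const { messages } = req.body;
  // ... forward `messages` to the OpenAI Assistant and write each text
  // chunk back as an SSE event (see the streaming sketch further below) ...
  res.write(`data: ${JSON.stringify({ text: 'Hello!' })}\n\n`);
  res.end();
});

app.listen(process.env.PORT || 6004);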

New Features

  1. Function Management: The application now supports managing OpenAI functions, allowing for more complex interactions and operations.
  2. Function Call Handling: You can define and handle specific function calls within the prompt messages, enabling the assistant to perform tasks like calculations, data retrieval, and more (a rough dispatch sketch follows this list).
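
As an illustration of how such a dispatch can work, the sketch below resolves a tool call to a module in the functions directory and builds the output to hand back to the run. The helper name and exact wiring are assumptions, not the project's literal code.

const path = require('path');

// Hypothetical dispatcher: resolve a tool call to a module in ./functions
// (the default avr_functions work the same way) and produce its output.
async function handleToolCall(toolCall, uuid) {
  const fn = require(path.join(__dirname, 'functions', `${toolCall.function.name}.js`));
  const args = JSON.parse(toolCall.function.arguments || '{}');
  const result = await fn({ ...args, uuid }); // the call's uuid is always passed along
  return { tool_call_id: toolCall.id, output: JSON.stringify(result.data) };
}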

Example Code Overview

  1. Receiving Client Prompts: The server listens for POST requests containing the messages (a sequence of user inputs).
  2. OpenAI API Communication: It uses the OpenAI API to create a thread and stream responses back to the client (see the sketch after this list).
  3. Response Streaming: Responses from the OpenAI Assistant are streamed back to the client as a series of events, allowing for dynamic, real-time interaction.
  4. Waiting Message: If the assistant is delayed, a waiting message is sent to the client.
  5. Function Calls: The server can now handle specific function calls defined in the prompt messages, enhancing the assistant's capabilities.
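
Items 2 through 4 can be sketched with the openai Node SDK's Assistants streaming helpers. The event wiring and waiting-message logic below are illustrative, assuming the SSE payload shape from the earlier sketch:

const OpenAI = require('openai');

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function streamAssistantReply(messages, res) {
  const thread = await openai.beta.threads.create({ messages });

  // If no text arrives within OPENAI_WAITING_TIMEOUT ms, send the
  // waiting message so the caller is not left in silence.
  const timer = setTimeout(() => {
    const text = process.env.OPENAI_WAITING_MESSAGE || 'Loading...';
    res.write(`data: ${JSON.stringify({ text })}\n\n`);
  }, Number(process.env.OPENAI_WAITING_TIMEOUT) || 2000);

  const run = openai.beta.threads.runs.stream(thread.id, {
    assistant_id: process.env.OPENAI_ASSISTANT_ID,
  });

  run.on('textDelta', (delta) => {
    clearTimeout(timer);
    res.write(`data: ${JSON.stringify({ text: delta.value })}\n\n`);
  });
  run.on('end', () => res.end());
}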

API Endpoints

POST /prompt-stream

This endpoint accepts a JSON payload containing the user's prompt messages and streams the responses back in real-time.

Request

{
  "messages": [
    { "role": "user", "content": "What is the current weather?" },
    { "role": "system", "content": "Assist the user with weather information." }
  ]
}

Response

The response will be streamed back in chunks via Server-Sent Events (SSE).

Example Usage with curl

You can test the endpoint with curl:

curl -X POST http://localhost:6004/prompt-stream \
     -H "Content-Type: application/json" \
     -d '{"messages": [{"role": "user", "content": "Tell me about Agent Voice Response."}]}' 

The response will be streamed back as an event stream, where each chunk of text will be sent as it is generated by the OpenAI Assistant.
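
For a programmatic client, the same request can be made from Node 18+ with the built-in fetch, reading the event stream as it arrives. This assumes each data: line carries a JSON payload, as in the sketches above:

// Node 18+ (built-in fetch). Prints the SSE stream as it arrives.
async function main() {
  const response = await fetch('http://localhost:6004/prompt-stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: 'Tell me about Agent Voice Response.' }],
    }),
  });

  const decoder = new TextDecoder();
  for await (const chunk of response.body) {
    process.stdout.write(decoder.decode(chunk, { stream: true }));
  }
}

main();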

Function Management

Default Functions

A new directory named avr_functions has been added to store default functions that developers can use.

avr_transfer

The avr_transfer function can be used to transfer a call from one internal extension to another on Asterisk. The bundled implementation lives in avr_functions; the sketch below shows the rough shape such a function might take (the returned data contract is an assumption inferred from the parameter schema shown later):
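
// Hypothetical sketch; the bundled avr_functions/avr_transfer.js may differ.
module.exports = async function (args) {
    console.log("Transfer call", args.uuid, "to extension", args.transfer_extension);
    // How the transfer is actually executed on Asterisk is left to the AVR
    // core; here we only return the data object describing the transfer.
    return {
        data: {
            status: "transfer",
            transfer_extension: args.transfer_extension,
            transfer_context: args.transfer_context || "demo",
            transfer_priority: args.transfer_priority || 1
        }
    };
};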

avr_hangup

The avr_hangup function can be used to hang up a call on Asterisk. Again, the sketch below is only a rough shape of such a function (the data contract is an assumption):
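
// Hypothetical sketch; the bundled avr_functions/avr_hangup.js may differ.
module.exports = async function (args) {
    console.log("Hangup call", args.uuid);
    return { data: { status: "hangup", message: "The call has been terminated." } };
};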

Custom Functions

Developers can create custom functions by creating a functions directory and storing their JavaScript functions there. Each function should return a JSON object in the following format:

{
  "data": {
    "status": "failure",
    "message": "Failed to do something."
  }
}

Example Custom Function

Here is an example of a custom function to collect information during a call:

const fs = require('fs');

module.exports = async function (args) {
    console.log("Collect info", args);
    try {
        // Make sure the target directory exists before writing.
        fs.mkdirSync('files', { recursive: true });
        // Persist the collected arguments, keyed by the call's uuid.
        fs.appendFileSync(`files/${args.uuid}.txt`, JSON.stringify(args));
        return { data: { status: "success", message: "Information stored successfully." } };
    } catch (error) {
        console.log(error);
        return { data: { status: "failure", message: "Failed to store information." } };
    }
};

Important Notes for Function Implementation

It is crucial that the function is exported using the following syntax:

module.exports = async function (args) {};

Besides the parameters configured in the assistant's function definition, the function's arguments will also include the uuid of the call. It is equally important that the function returns an object of the form:

{
  "data": {}
}

The data object should carry whatever the function needs to report back, such as a status, a human-readable message, and any task-specific fields.

Configuring OpenAI Functions

To use functions with OpenAI, you need to configure them in the Assistants section of the OpenAI platform. Specifically, in the Functions section of the assistant, you must declare each function and its schema, including the parameters to be passed.

Default Functions Configuration

To use the default avr_transfer and avr_hangup functions, declare the functions as follows:

avr_hangup
{
  "name": "avr_hangup",
  "description": "Ends the conversation once the maintenance is booked or if no availability is found.",
  "strict": false,
  "parameters": {
    "type": "object",
    "properties": {},
    "required": []
  }
}
avr_transfer
{
  "name": "avr_transfer",
  "description": "Transfers a customer based on the bill type.",
  "strict": false,
  "parameters": {
    "type": "object",
    "properties": {
      "transfer_extension": {
        "type": "integer",
        "description": "The transfer extension for the bill type (600 for phone, 601 for gas, 602 for electricity)."
      },
      "transfer_context": {
        "type": "string",
        "description": "The context for the transfer. Default is 'demo'.",
        "default": "demo"
      },
      "transfer_priority": {
        "type": "integer",
        "description": "The priority level of the transfer. Default is 1.",
        "default": 1
      }
    },
    "required": [
      "transfer_extension"
    ]
  }
}

Using Custom Functions with Docker

If you decide to use custom functions while running the application with Docker, you need to mount your functions directory into the container at /usr/src/app/functions. Here is an example of how to configure the volume in your Docker setup:

volumes:
  - ./functions:/usr/src/app/functions
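
For example, as part of a docker-compose service (the image name here is an assumption; adjust it to your own build or registry):

services:
  avr-llm-openai-assistant:
    image: agentvoiceresponse/avr-llm-openai-assistant
    environment:
      - OPENAI_API_KEY=your_openai_api_key
      - OPENAI_ASSISTANT_ID=your_assistant_id
      - PORT=6004
    ports:
      - "6004:6004"
    volumes:
      - ./functions:/usr/src/app/functions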
