| IdentityRAG | Live Demo | LinkedIn |
|---|---|---|

*(Demo video: IdentityRAGDemo.mp4)*
IdentityRAG is a retrieval-augmented generation system that integrates identity resolution capabilities to provide accurate, context-aware responses about specific customers. It retrieves unified customer data across disparate sources to create a comprehensive golden record before generating LLM responses, ensuring answers are based on an accurate, deduplicated view of the customer.
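The flow described above (retrieve, unify into a golden record, then generate) can be sketched in a few lines of plain Python. Everything below is illustrative: the function names and record shapes are hypothetical stand-ins, not the repository's actual API.

```python
# Illustrative sketch of the IdentityRAG flow: retrieve scattered records,
# consolidate them into a golden record, then ground the LLM prompt in it.
# All names here are hypothetical -- see the repository code for the real API.

def retrieve_records(query: str) -> list[dict]:
    """Stand-in for identity search across disparate sources."""
    return [
        {"name": "Sophie Muller", "email": "s.muller@example.com"},
        {"name": "Sophie Müller", "phone": "+49 30 1234567"},
    ]

def build_golden_record(records: list[dict]) -> dict:
    """Merge duplicate records into one unified view of the customer."""
    golden: dict = {}
    for record in records:
        for field, value in record.items():
            golden.setdefault(field, value)  # keep the first non-missing value
    return golden

def build_prompt(query: str, golden: dict) -> str:
    """Ground the LLM request in the deduplicated customer view."""
    context = ", ".join(f"{k}: {v}" for k, v in sorted(golden.items()))
    return f"Answer using only this customer record ({context}). Question: {query}"

prompt = build_prompt("search for Sophie Muller",
                      build_golden_record(retrieve_records("Sophie Muller")))
```

The point of the ordering is that deduplication happens *before* generation, so the LLM only ever sees one consistent view of the customer.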
| LangChain Integration |
|---|
- Unify - bring data together from various sources.
- Search - find and retrieve all relevant customer data with fuzzy matching.
- Consolidate - combine it meaningfully by creating a golden record.
- Disambiguate - resolve conflicts/unclear matches.
- Deduplicate - remove redundant records that add no extra value.
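The search and consolidate steps above can be sketched with the standard library alone, assuming a tiny in-memory record list. Tilores performs these steps server-side at scale; the matching logic here is only illustrative.

```python
# Toy sketch of the search / consolidate / deduplicate steps using only the
# standard library. Function names, threshold, and matching logic are
# illustrative; Tilores implements these steps server-side.
from difflib import SequenceMatcher

RECORDS = [
    {"id": 1, "name": "Sophie Muller", "city": "Berlin"},
    {"id": 2, "name": "Sophie Müller", "city": "Berlin"},
    {"id": 3, "name": "Stephan Meier", "city": "Hamburg"},
]

def fuzzy_search(query: str, records: list[dict], threshold: float = 0.8) -> list[dict]:
    """Search: keep records whose name is similar enough to the query."""
    return [
        r for r in records
        if SequenceMatcher(None, query.lower(), r["name"].lower()).ratio() >= threshold
    ]

def consolidate(matches: list[dict]) -> dict:
    """Consolidate/deduplicate: merge matched records into one golden record."""
    golden: dict = {}
    for record in matches:
        for field, value in record.items():
            golden.setdefault(field, value)  # first value wins in this toy merge
    return golden

matches = fuzzy_search("Sophie Muller", RECORDS)  # catches the "Müller" variant
golden = consolidate(matches)
```

Note how the umlaut variant "Sophie Müller" is picked up by the fuzzy match while the unrelated "Stephan Meier" is not; that is the disambiguation problem in miniature.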
| Multiple Customer Data Sources / Knowledge Bases |
|---|
If you don't want to use your own LLM keys, give the hosted live demo below a try.
| Live Demo |
|---|
- Clone this repository
- Install dependencies:
  ```
  pip install -r requirements.txt
  ```

- Set up your environment variables (see Configuration section)
- Run the demo server:
  ```
  chainlit run chat.py -w
  ```

- Open http://localhost:8000 in your browser
- Try asking "search for Sophie Muller"
```
export TILORES_API_URL='https://8edvhd7rqb.execute-api.eu-central-1.amazonaws.com'
export TILORES_TOKEN_URL='https://saas-umgegwho-tilores.auth.eu-central-1.amazoncognito.com/oauth2/token'
export TILORES_CLIENT_ID='3l3i0ifjurnr58u4lgf0eaeqa3'
export TILORES_CLIENT_SECRET='1c0g3v0u7pf1bvb7v65pauqt6s0h3vkkcf9u232u92ov3lm4aun2'
```

To use OpenAI:

```
export OPENAI_API_KEY='your OpenAI key'
export OPENAI_MODEL_NAME='gpt-4o-mini'
```

Alternatively, to use AWS Bedrock:

```
export LLM_PROVIDER='Bedrock'
export BEDROCK_CREDENTIALS_PROFILE_NAME='your configured AWS profile name with bedrock access'
export BEDROCK_REGION='us-east-1'
export BEDROCK_MODEL_ID='anthropic.claude-3-5-sonnet-20240620-v1:0'
```

> [!IMPORTANT]
> The AWS profile needs access to the model with the action `InvokeModelWithResponseStream`. Also make sure the model is enabled in the Bedrock console and in the correct region.
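That permission can be granted with an IAM policy statement along these lines. This is a sketch, not taken from the repository: the action name and the standard Bedrock foundation-model ARN format are real, but adjust the region and model ID to your own setup.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModelWithResponseStream"],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
    }
  ]
}
```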
To use your own data, you will need to create a Tilores instance and get your free Tilores API credentials. Here's how to do that:
- Visit app.tilores.io and sign up for free.
- Click on "Switch to Instance View" on the bottom right.
- Select the "Upload Data File" option and proceed. The CSV file format is recommended.
- If the file has well-named headers, the matching will be configured automatically and you can proceed with instance creation without further changes. The deployment will take around 3-5 minutes.
- Once the deployment is done, navigate to "Manage Instance" -> "Integration" -> "GraphQL API"
- The first URL is the `TILORES_GRAPHQL_API` and the second is the `TILORES_TOKEN_URL`. You will need to export these two values as shown in the Configuration section.
- Then click `CREATE NEW CREDENTIALS` and store both values. Export each one into its corresponding environment variable, `TILORES_CLIENT_ID` and `TILORES_CLIENT_SECRET`.
- Now run `chainlit run chat.py -w` and ask it to search for one of the records in your data.
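For context, the `TILORES_TOKEN_URL` plus the client ID and secret configure a standard OAuth2 client-credentials exchange. Here is a minimal stdlib sketch of how such a token request is built; it is illustrative only, since the chatbot's own client performs this exchange for you, and the URL and credentials shown are placeholders.

```python
# Sketch of the OAuth2 client-credentials token request that the
# TILORES_TOKEN_URL / TILORES_CLIENT_ID / TILORES_CLIENT_SECRET values
# configure. Illustrative only; the project's own client handles this.
import base64
import urllib.parse
import urllib.request

def build_token_request(token_url: str, client_id: str, client_secret: str) -> urllib.request.Request:
    """Build (but do not send) a client-credentials token request."""
    credentials = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    body = urllib.parse.urlencode({"grant_type": "client_credentials"}).encode()
    return urllib.request.Request(
        token_url,
        data=body,
        headers={
            "Authorization": f"Basic {credentials}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

req = build_token_request(
    "https://example.auth.eu-central-1.amazoncognito.com/oauth2/token",
    "my-client-id", "my-client-secret")
# Sending req with urllib.request.urlopen(req) would return a JSON body whose
# "access_token" is then used as a Bearer token on GraphQL API calls.
```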
If you want to test the automatic lookup from PDFs, you must also have poppler-utils installed:
```
sudo apt-get install poppler-utils
```
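As an illustration of what poppler-utils provides, here is a sketch of calling its `pdftotext` CLI from Python. This is not the repository's own PDF pipeline, just a demonstration of the kind of tooling the package installs.

```python
# Illustrative: invoking poppler's pdftotext from Python via subprocess.
# Not the repository's actual PDF code; only a demonstration of poppler-utils.
import subprocess

def pdf_to_text_command(pdf_path: str) -> list[str]:
    """Build the pdftotext invocation; '-' sends extracted text to stdout."""
    return ["pdftotext", "-layout", pdf_path, "-"]

def extract_text(pdf_path: str) -> str:
    """Run pdftotext and return the extracted plain text (requires poppler-utils)."""
    result = subprocess.run(pdf_to_text_command(pdf_path),
                            capture_output=True, text=True, check=True)
    return result.stdout
```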


