The chat system architecture leverages Apache Camel's aggregation pattern. Incoming messages are buffered for a configurable timeout, measured in seconds, and then aggregated by chat ID. This aggregation step reduces the number of Firestore write operations: the aggregated message data is dispatched to Firestore in batches, with the primary goal of minimizing Firestore billing costs.
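The buffering-and-aggregation idea can be sketched in plain Python. This is a minimal sketch of the concept, not the actual Camel route; the message field names are illustrative:

```python
from collections import defaultdict

def aggregate_messages(buffered):
    """Group one timeout window's worth of messages by chat_id,
    so each chat needs only one Firestore write per window."""
    grouped = defaultdict(list)
    for msg in buffered:
        grouped[msg["chat_id"]].append(msg["text"])
    return dict(grouped)

window = [
    {"chat_id": "a1", "text": "hi"},
    {"chat_id": "a1", "text": "how are you?"},
    {"chat_id": "b2", "text": "hello"},
]
# One write per chat instead of one per message:
# → {'a1': ['hi', 'how are you?'], 'b2': ['hello']}
print(aggregate_messages(window))
```

In the real system, Camel's Aggregate EIP plays the role of this function, with the timeout acting as the completion interval for each window.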
Chat pagination uses Cassandra instead of Firestore. This strategic decision serves a dual purpose:
- It alleviates the load on Firestore's read operations.
- It harnesses Cassandra's strengths for partitioning, using `chat_id` as the partition key.
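A sketch of what the table and page query might look like under this partitioning scheme. The table and column names are assumptions for illustration, not the project's actual schema:

```python
# Hypothetical layout: one partition per chat, messages clustered by
# timeuuid so a single partition read returns newest-first pages.
SCHEMA = """
CREATE TABLE messages_by_chat (
    chat_id    uuid,
    created_at timeuuid,
    body       text,
    PRIMARY KEY ((chat_id), created_at)
) WITH CLUSTERING ORDER BY (created_at DESC);
"""

def page_query(chat_id, before=None, quantity=10):
    """Build the CQL and bind parameters for one page of messages,
    paging backwards from the timeuuid of the last message shown."""
    cql = "SELECT body, created_at FROM messages_by_chat WHERE chat_id = %s"
    params = [chat_id]
    if before is not None:
        cql += " AND created_at < %s"
        params.append(before)
    cql += f" LIMIT {quantity}"
    return cql, params
```

Because `chat_id` is the partition key, each page is a single-partition read, and the timeuuid clustering column gives ordered pagination for free.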
Demo video: ChatVideo1.mov
The rest of this document explains the system using an Architectural Feature Mapping approach.
Goal: create a chat that makes spam difficult, optimizes availability, and manages Firestore read and write operations to reduce expenses.
1. Create a user
2. Log in
3. Get the list of users
4. A user sends messages to another user
5. Create a new chat room between two users: user A finds user B and starts a chat
6. Users get real-time message updates
- Feature Mapping 1: User management and authentication (features 1, 2, and 3) will be handled by a microservice called Auth Microservice.
- The microservice will have its own store.
- The microservice will use JWT for authentication.
- Feature Mapping 2: Chat creation and message creation/retrieval (features 4 and 5) will be handled by a microservice called Chat Microservice.
- The microservice will have its own store.
- Feature Mapping 3: The Chat Microservice will send messages to a queue for processing, in order to satisfy feature 6.
Instead of using Firestore or another cloud-based solution for the complete chat, we use Firestore only to update the UI with the aggregated new messages.
"Aggregated new messages" refers to a set of messages sent by a user. Instead of displaying each message one by one, a server groups all related messages sent by a user into a list. This list is then retrieved for the chat.
This is how it looks in Firestore:
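The original screenshot is not reproduced here, but the shape of one aggregated entry would plausibly be something like the following. The field names are assumptions for illustration, not the project's actual schema:

```python
# Illustrative shape of one aggregated-messages document in Firestore:
# a whole burst of messages from one sender stored as a single list,
# costing one write instead of three.
aggregated_doc = {
    "chat_id": "a1b2",
    "sender_id": "alice",
    "messages": [
        "hi",
        "how are you?",
        "are you there?",
    ],
}
```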
- The auth and chat microservices will be written in Python using FastAPI.
- The message aggregation process will be handled by Apache Camel using Spring Boot on Java.
- The user database will be Postgres.
- The chat database will be Cassandra, giving a fast partition for each chat and fast pagination based on Time UUIDs.
- The aggregated new messages will be saved into Firestore and retrieved only for real-time updates.
- Chat message queue processing will be handled by Kafka with a Python Kafka consumer.
- Auth validation in the Python microservices will be handled by a shared Python library called `auth-validator`.
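The shared library itself is not shown in this document; what follows is a minimal sketch of the kind of JWT validation such a helper typically performs, using only the standard library. The function name, claim names, and HS256 assumption are all illustrative:

```python
import base64
import hashlib
import hmac
import json

def _b64url_decode(part: str) -> bytes:
    # JWT segments are base64url without padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def validate_token(token: str, secret: bytes) -> dict:
    """Verify an HS256 JWT's signature and return its claims.
    Raises ValueError if the signature does not match."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    return json.loads(_b64url_decode(payload_b64))
```

A production validator would additionally check the `exp` claim and the header's `alg` field; real projects usually delegate all of this to a library such as PyJWT.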
The chat client will send an HTTP request to the Chat Microservice, `GET /chat/<id>?time=<time-uuid>&quantity=<n>`, to retrieve a set of messages.
- The query parameter `quantity` is an integer giving the maximum number of messages to fetch.
- The query parameter `time` is the Time UUID of the last message the chat already has; messages before that time are returned.
- Both query parameters are optional: without `quantity` you get 10 messages, and without `time` you get the latest messages.
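The endpoint and its defaulting rules can be sketched from the client side as follows (the helper name is illustrative; the path and parameters come from the text above):

```python
from typing import Optional
from urllib.parse import urlencode

def messages_url(base: str, chat_id: str,
                 time: Optional[str] = None,
                 quantity: Optional[int] = None) -> str:
    """Build the GET URL for one page of chat messages. Both query
    parameters are optional: the server falls back to 10 messages,
    and to the latest messages, when they are omitted."""
    params = {}
    if time is not None:
        params["time"] = time
    if quantity is not None:
        params["quantity"] = quantity
    url = f"{base}/chat/{chat_id}"
    if params:
        url += "?" + urlencode(params)
    return url

# First page: no parameters, server returns the 10 latest messages.
print(messages_url("http://0.0.0.0:9000", "a1"))
# Next page: ask for 5 messages older than the last timeuuid we saw.
print(messages_url("http://0.0.0.0:9000", "a1", time="t1", quantity=5))
```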
- We are going to use Tilt to run the infrastructure on Kubernetes for development purposes.
- There is a Docker Compose setup for development purposes.
- For Python you need Pipenv and Python 3.9.7.
- For Java you need:

```
java 17.0.2 2022-01-18 LTS
Java(TM) SE Runtime Environment (build 17.0.2+8-LTS-86)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.2+8-LTS-86, mixed mode, sharing)
```
```
docker-compose rm -svf && docker-compose up
```

It should create 9 containers:
1.) chat-firebase-apache-camel_zookeeper_1
2.) chat-firebase-apache-camel_cassandra_1
3.) chat-firebase-apache-camel_auth_db_1
4.) chat-firebase-apache-camel_kafka_1
5.) chat-firebase-apache-camel_consumer_1
6.) chat-firebase-apache-camel_auth_microservice_1
7.) chat-firebase-apache-camel_apache_camel_microservices_1
8.) chat-firebase-apache-camel_chat_microservices_1
9.) front_end_next_js_1
Then you can open http://localhost:3000/ and start using the chat.
```
mvn test -X
```

- Chat Microservice:

```
cd chat-microservice && pipenv run server
```

- Auth Microservice:

```
cd auth-microservice && pipenv run server
```

- You will need Maven to run the Apache Camel Spring Boot server:

```
cd apache-camel-service-bus && mvn package && mvn spring-boot:run
```

```
docker-compose rm -svf && docker-compose up
```

In order to use the auth microservice you must run the Alembic migration to create the user table. Access the container and run:
```
pipenv shell
alembic upgrade head
```

It should print something like:

```
dict_keys(['user'])
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> fe1a63894533, create user table
INFO  [alembic.runtime.migration] Running upgrade fe1a63894533 -> d3c6204b9f3c, username unique
```

Create a topic:
```
docker exec -it CONTAINER_ID kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 3 --topic charlytest
```

List of topics:

```
docker exec -it CONTAINER_ID kafka-topics.sh --bootstrap-server localhost:9092 --list
```

Get topic information:

```
docker exec -it CONTAINER_ID kafka-topics.sh --bootstrap-server localhost:9092 --describe
```

Console producer using a key and acks. In this case `12345` is the key and `hello` the message content (useful to put all the messages in the same partition using the user_id):

```
docker exec -it CONTAINER_ID kafka-console-producer.sh --bootstrap-server localhost:9092 --topic charlytest --producer-property acks=all --property parse.key=true --property key.separator=:
>> 12345:hello
```

Access Cassandra with cqlsh:

```
docker exec -it CONTAINER_ID cqlsh
```

Find the project's Docker network:

```
docker network ls | grep "camel"
```

You need to download the service_account_key.json from your Firebase project and add it to the aggregated-messages-consumer folder.
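Keying every message with the user_id is what keeps one user's messages together: Kafka maps equal keys to the same partition, which preserves their ordering. A sketch of the property (Kafka's default partitioner actually uses murmur2, not crc32; crc32 is used here only to illustrate the idea):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Deterministically map a message key to a partition index.
    Equal keys always land on the same partition, so messages
    sharing a key are consumed in the order they were produced."""
    return zlib.crc32(key) % num_partitions

# Every message keyed with user_id 12345 goes to one partition:
p = partition_for(b"12345", 3)
assert all(partition_for(b"12345", 3) == p for _ in range(100))
```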
Also, you need to update the .env file in the frontend-dev folder to look like:
```
# MICROSERVICES
NEXT_PUBLIC_AUTH_MICROSERVICE = 'http://0.0.0.0:8000'
NEXT_PUBLIC_CHAT_MICROSERVICE = 'http://0.0.0.0:9000'
# FIREBASE
NEXT_PUBLIC_API_KEY="..."
NEXT_PUBLIC_AUTH_DOMAIN="..."
NEXT_PUBLIC_PROJECT_ID="..."
NEXT_PUBLIC_STORAGE_BUCKET="..."
NEXT_PUBLIC_MESSAGING_SENDER_ID="..."
NEXT_PUBLIC_APP_ID="..."
```

Required secrets:
- DOCKERHUB_TOKEN
- DOCKERHUB_USERNAME
- GCP_PROJECT
- GKE_SA_KEY
- GKE_ZONE
- GKE_CLUSTER
- GOOGLE_CREDENTIALS
- Create project
```
gcloud auth application-default set-quota-project $PROJECT_NAME
gcloud config set project $PROJECT_NAME
terraform init
```

- Comment out `gke_secrets.tf` and `kafka.tf`:

```
terraform apply
gcloud container clusters get-credentials chat2-405718-gke --region us-central1
```

- Set `gke_config_context` in `terraform.tfvars`
- Uncomment `gke_secrets.tf` and `kafka.tf`:

```
terraform apply
```

- `./apply_k8s_configs.sh`
- Set `NEXT_PUBLIC_AUTH_MICROSERVICE` and `NEXT_PUBLIC_CHAT_MICROSERVICE` in a .env using the load balancer's external IP
- Go to Firebase, create the project and a web app for this GCP project, and fill in the .env
- Run `kubectl create secret generic frontend-secrets --from-env-file=.env`
- Follow https://github.com/google-github-actions/auth/blob/main/docs/EXAMPLES.md to set the GOOGLE_CREDENTIALS secret
- GKE_SA_KEY: `gcloud iam service-accounts keys create key.json --iam-account=SA_EMAIL`
- `export GKE_SA_KEY=$(cat key.json | base64)`
- Use key.json in the GKE_SA_KEY secret
Apache License 2.0

