Develop a full-stack application that integrates NestJS, MongoDB, ReactJS, and AWS (using LocalStack only for S3), employing TypeScript, Serverless Framework (for Lambda), and Docker.
The scope should be achievable within 4 hours.
YouTube test link: https://www.youtube.com/watch?v=LcUjX-c-HEA
This guide explains how to configure and run all services locally using Docker Compose. The setup includes:
- Back-end (NestJS API)
- Front-end (Vite + React)
- MongoDB (Database)
- LocalStack (Mock AWS services)
- Lambda-like Notification Service (Express server simulating AWS Lambda behavior)
Example `.env` for the LocalStack and Lambda-like notification services:

```env
NODE_ENV=development
MONGO_URI=mongodb://mongo:27017/nouslatam
SNS_TOPIC_ARN=arn:aws:sns:us-east-1:000000000000:test-sns-topic
SNS_TOPIC_NAME=test-sns-topic
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
AWS_REGION=us-east-1
LOCALSTACK_HOST=localstack
LOCALSTACK_ENDPOINT=http://localstack:4566
SERVICES=sns,s3,lambda
DEBUG=1
```
This configuration connects to the local MongoDB and LocalStack services inside Docker and simulates AWS services like SNS, S3, and Lambda.
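As an illustration of how these variables are consumed, the sketch below assembles AWS SDK v3 client options that point at LocalStack. The helper name and defaults are illustrative (not part of the challenge spec); the returned object would be passed to, e.g., `new SNSClient(...)` from `@aws-sdk/client-sns`.

```typescript
// Illustrative helper: turn the .env values above into AWS SDK v3 client
// options that target LocalStack instead of real AWS.
interface LocalAwsClientOptions {
  region: string;
  endpoint: string;
  credentials: { accessKeyId: string; secretAccessKey: string };
}

export function buildLocalAwsOptions(
  env: Record<string, string | undefined>
): LocalAwsClientOptions {
  return {
    region: env.AWS_REGION ?? "us-east-1",
    // Inside the Docker network, LocalStack is reached by its service name.
    endpoint: env.LOCALSTACK_ENDPOINT ?? "http://localstack:4566",
    credentials: {
      // LocalStack accepts any non-empty dummy credentials.
      accessKeyId: env.AWS_ACCESS_KEY_ID ?? "test",
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY ?? "test",
    },
  };
}
```

In the application, `buildLocalAwsOptions(process.env)` would configure the SNS (or S3) client.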
Example `.env` for the back-end:

```env
HOST=localhost
PORT=3000
HTTPS=false
API_FRONTEND=http://localhost:5173
AWS_REGION=us-east-1
S3_ENDPOINT=http://localstack:4566
S3_ENDPOINT_HOST_MY_BUCKET=http://localhost:4566
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
MONGO_URI=mongodb://mongo:27017/nouslatam
API_LAMBDA=http://localstack-lambda-app:4000/dev/order/notification
```
This sets up the back-end to run locally and integrate with LocalStack, MongoDB, and the Lambda-like service.
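One hedged sketch of consuming these variables is a loader that validates required values at startup and fails fast with a clear message (the names are illustrative; NestJS projects often use `@nestjs/config` for this instead):

```typescript
// Illustrative startup check: read the back-end's env vars into a typed
// config object, throwing early when a required variable is missing.
export interface BackendConfig {
  host: string;
  port: number;
  mongoUri: string;
  s3Endpoint: string;
  apiLambda: string;
}

export function loadBackendConfig(
  env: Record<string, string | undefined>
): BackendConfig {
  const required = ["MONGO_URI", "S3_ENDPOINT", "API_LAMBDA"] as const;
  for (const key of required) {
    if (!env[key]) throw new Error(`Missing required env var: ${key}`);
  }
  return {
    host: env.HOST ?? "localhost",
    port: Number(env.PORT ?? 3000),
    mongoUri: env.MONGO_URI!,
    s3Endpoint: env.S3_ENDPOINT!,
    apiLambda: env.API_LAMBDA!,
  };
}
```

At boot, `loadBackendConfig(process.env)` would run once before the server starts listening.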
Example `.env` for the front-end:

```env
VITE_API_URL=http://localhost:3000
```
This points the front-end app to the local back-end API.
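In a Vite app this variable is read at build time via `import.meta.env.VITE_API_URL`. A tiny illustrative helper (name assumed) keeps URL joining in one place:

```typescript
// Illustrative: resolve an API path against the configured base URL.
// The base URL would come from import.meta.env.VITE_API_URL in the app.
export function apiUrl(baseUrl: string, path: string): string {
  return new URL(path, baseUrl).toString();
}

// e.g. fetch(apiUrl(import.meta.env.VITE_API_URL, "/products"))
```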
Use Docker Compose to spin up all services:
```sh
docker-compose up --build
```
This will:
- Build and run all containers
- Connect services through a shared Docker network
- Initialize MongoDB with data (if configured)
- Make all services available on your local machine
| Service | URL |
| --- | --- |
| Front-end | http://localhost:5173 |
| Back-end API | http://localhost:3000 |
| Lambda Service | http://localhost:4000 |
| LocalStack | http://localhost:4566 |
| MongoDB | mongodb://localhost:27017 |
- Make sure Docker is running before starting.
- You can stop the services with `Ctrl + C` or `docker-compose down`.
- Changes to `.env` files require a container rebuild: `docker-compose down && docker-compose up --build`
A `.env` file stores environment variables that configure your application. These variables typically hold sensitive information, such as API keys, database connection strings, and other settings that differ between development, staging, and production environments.
For a front-end application, such as a React app, you may need to set environment variables that configure API endpoints and other settings.
Example of a front-end `.env` file:

```env
VITE_API_URL=http://localhost:3000
```

- `VITE_API_URL`: the API endpoint the front-end application will communicate with. It could point to your back-end server, usually a local development server (e.g., `http://localhost:3000`) during development.
For a back-end application, such as an Express server, environment variables help configure important services, ports, and other server-related settings.
Example of a back-end `.env` file:

```env
HOST=localhost
PORT=3000
HTTPS=false
API_FRONTEND=http://localhost:5173
AWS_REGION=us-east-1
S3_ENDPOINT=http://localhost:4566
S3_ENDPOINT_HOST_MY_BUCKET=http://localhost:4566
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
MONGO_URI=mongodb://127.0.0.1:27017/nouslatam
API_LAMBDA=http://localhost:4000/dev/order/notification
```
When running a back-end application like an Express server inside a Docker container, the values of certain environment variables need to reflect the internal Docker network. Instead of using `localhost`, you reference other services by their container names defined in `docker-compose.yml`.

Here's how the `.env` file would look inside the container:

```env
HOST=localhost
PORT=3000
HTTPS=false
API_FRONTEND=http://localhost:5173
AWS_REGION=us-east-1
S3_ENDPOINT=http://localstack:4566
S3_ENDPOINT_HOST_MY_BUCKET=http://localhost:4566
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
MONGO_URI=mongodb://mongo:27017/nouslatam
API_LAMBDA=http://localstack-lambda-app:4000/dev/order/notification
```
- `S3_ENDPOINT=http://localstack:4566` → uses the container name `localstack` instead of `localhost`, because that's how it's accessed from within the Docker network.
- `MONGO_URI=mongodb://mongo:27017/nouslatam` → the hostname `mongo` refers to the MongoDB container, not the local machine.
- `API_LAMBDA=http://localstack-lambda-app:4000/dev/order/notification` → refers to another container named `localstack-lambda-app`.
Using container names allows the services to correctly locate and communicate with each other within the isolated Docker network.
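The distinction above can be captured in a small illustrative helper (the `mongo` service name is the one used by this guide's docker-compose setup; the function name is an assumption):

```typescript
// Illustrative: pick the right MongoDB host depending on where the code runs.
// Inside the compose network the service name "mongo" resolves; from the
// host machine, only localhost/127.0.0.1 does.
export function mongoUri(insideDocker: boolean, db: string = "nouslatam"): string {
  const host = insideDocker ? "mongo" : "127.0.0.1";
  return `mongodb://${host}:27017/${db}`;
}
```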
- `HOST`: the hostname or IP address on which the back-end server will listen. Here it's `localhost`, meaning the server will only be accessible from the machine it's running on.
- `PORT`: the port on which the back-end server will listen. Here, `3000`.
- `HTTPS`: whether the server should use HTTPS. Here, `false` (non-HTTPS).
- `API_FRONTEND`: the URL of the front-end application. In this example, it points to the local development server (`http://localhost:5173`).
- `AWS_REGION`: the AWS region where services (like S3 or Lambda) are configured. Here, `us-east-1`.
- `S3_ENDPOINT`: the endpoint for AWS S3. When using a mock service like LocalStack, this typically points to a local address (e.g., `http://localhost:4566`).
- `S3_ENDPOINT_HOST_MY_BUCKET`: the host-style endpoint for accessing an S3 bucket, often used when running a local mock like LocalStack. For example, this might be `http://localhost:4566` to simulate access to `http://localhost:4566/my-bucket/`.
- `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`: the AWS credentials required to authenticate with AWS services. For local development, dummy values are typically sufficient.
- `MONGO_URI`: the MongoDB connection string. Here it connects to a local MongoDB instance (`mongodb://127.0.0.1:27017/nouslatam`).
- `API_LAMBDA`: the URL endpoint for invoking a Lambda-like service (such as one running on LocalStack). For example, `http://localhost:4000/dev/order/notification` points to a locally hosted server that simulates a Lambda function.
1. Create the file: in the root directory of your project, create a new file named `.env` (without any file extension).
2. Add your variables: open the `.env` file in a text editor and add your environment variables in `KEY=VALUE` format, one pair per line.
3. Use the variables in your code: access them via `process.env.KEY_NAME`. For example, in Node.js:

   ```js
   const apiUrl = process.env.VITE_API_URL;
   ```

4. Never commit the `.env` file to source control (Git): it often contains sensitive information, so add it to your `.gitignore` to prevent it from being committed:

   ```
   # .gitignore
   .env
   ```
This is how you can create and use a `.env` file to store and manage environment-specific configuration settings for your application.
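Under the hood, loaders like `dotenv` do little more than parse `KEY=VALUE` lines into an object. A simplified, illustrative version (the real library also handles quoting, comments, and expansion):

```typescript
// Simplified sketch of what a dotenv-style loader does: read KEY=VALUE
// lines into a plain object, skipping anything that doesn't match.
export function parseEnv(contents: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of contents.split(/\r?\n/)) {
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (m) out[m[1]] = m[2];
  }
  return out;
}
```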
- Product
  - Fields:
    - `id` (string or ObjectId)
    - `name` (string)
    - `description` (string)
    - `price` (number)
    - `categoryIds` (array of Category IDs) — many-to-many relationship with Category
    - `imageUrl` (string) — points to the file in S3
- Category
  - Fields:
    - `id` (string or ObjectId)
    - `name` (string)
- Order
  - Fields:
    - `id` (string or ObjectId)
    - `date` (Date)
    - `productIds` (array of Product IDs)
    - `total` (number)
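The three models can be sketched as TypeScript interfaces. The `orderTotal` helper is an assumption about how `total` might be derived from the associated products; adjust it to the real business rule.

```typescript
// Sketch of the entities above as plain TypeScript types.
export interface Category {
  id: string;
  name: string;
}

export interface Product {
  id: string;
  name: string;
  description: string;
  price: number;
  categoryIds: string[]; // many-to-many with Category
  imageUrl: string;      // points to the file in S3
}

export interface Order {
  id: string;
  date: Date;
  productIds: string[];
  total: number;
}

// Assumed rule: an order's total is the sum of its products' prices.
export function orderTotal(products: Pick<Product, "price">[]): number {
  return products.reduce((sum, p) => sum + p.price, 0);
}
```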
- Product CRUD:
- Create, List, Update, Delete
- Category CRUD:
- Create, List, Update, Delete
- Order CRUD:
- Create, List, Update, Delete
- Dashboard:
- Display aggregated sales data with filters for category, product, and period.
- Implement aggregate queries for sales metrics (e.g., total orders, average value, etc.).
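The aggregate queries could look like the illustrative pipeline below (field names follow the Order model; the function name is an assumption). The returned array would be passed to the MongoDB driver's `collection.aggregate(...)`:

```typescript
// Illustrative dashboard query: group Orders by day within a period and
// compute total orders, total revenue, and average order value.
export function salesByDayPipeline(from: Date, to: Date): Record<string, unknown>[] {
  return [
    // Restrict to the requested period.
    { $match: { date: { $gte: from, $lte: to } } },
    // One bucket per calendar day.
    {
      $group: {
        _id: { $dateToString: { format: "%Y-%m-%d", date: "$date" } },
        totalOrders: { $sum: 1 },
        totalRevenue: { $sum: "$total" },
        averageValue: { $avg: "$total" },
      },
    },
    // Chronological output.
    { $sort: { _id: 1 } },
  ];
}
```

Filters for category or product would add a `$match` (or a `$lookup` into Products) before the `$group` stage.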
- Associate Products with Categories (many-to-many).
- Associate Products with Orders.
- Create a script (e.g., CLI in Node.js) to populate MongoDB with fictitious data:
- Products (with price variations and categories)
- Categories (different types)
- Orders (varied combinations of products, dates, and totals)
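The generation core of such a script might look like the sketch below (all names and values are fictitious by design; a real CLI would insert the result with the MongoDB driver or Mongoose):

```typescript
// Illustrative seed-data generator: fabricates categories, products with
// price variations, and orders with varied product combinations and dates.
export function seedData(productCount: number, orderCount: number) {
  const categories = ["Electronics", "Books", "Clothing"].map((name, i) => ({
    id: `cat-${i}`,
    name,
  }));
  const products = Array.from({ length: productCount }, (_, i) => ({
    id: `prod-${i}`,
    name: `Product ${i}`,
    description: `Fictitious product ${i}`,
    price: 10 + (i % 5) * 7.5, // deterministic price variation
    categoryIds: [categories[i % categories.length].id],
    imageUrl: "",
  }));
  const orders = Array.from({ length: orderCount }, (_, i) => {
    // Pick one or two adjacent products per order.
    const start = i % products.length;
    const picked = products.slice(start, start + 2);
    return {
      id: `order-${i}`,
      date: new Date(2024, i % 12, (i % 28) + 1),
      productIds: picked.map((p) => p.id),
      total: picked.reduce((sum, p) => sum + p.price, 0),
    };
  });
  return { categories, products, orders };
}
```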
- Use DTOs (Data Transfer Objects) to validate each route, ensuring data integrity and security.
- Properly manage deletions (e.g., when deleting a Category, make sure Products do not become orphaned).
- Implement error handling (correct status codes and JSON response structure).
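In NestJS, route validation is typically done with class-validator decorators on a DTO class, enforced by the global `ValidationPipe`. The dependency-free sketch below mirrors the same rules for a hypothetical `CreateProductDto`, so the idea is runnable on its own:

```typescript
// Hypothetical DTO for creating a Product; in NestJS this would be a class
// with @IsString/@IsNumber/@IsArray decorators from class-validator.
export interface CreateProductDto {
  name: string;
  description: string;
  price: number;
  categoryIds: string[];
}

// Returns a list of validation errors; an empty array means the input is valid.
export function validateCreateProduct(input: unknown): string[] {
  const errors: string[] = [];
  const dto = input as Partial<CreateProductDto> | null;
  if (typeof dto?.name !== "string" || dto.name.length === 0)
    errors.push("name must be a non-empty string");
  if (typeof dto?.description !== "string")
    errors.push("description must be a string");
  if (typeof dto?.price !== "number" || dto.price < 0)
    errors.push("price must be a non-negative number");
  if (!Array.isArray(dto?.categoryIds))
    errors.push("categoryIds must be an array of Category ids");
  return errors;
}
```

A controller would reject the request with a 400 status and the error list as JSON when validation fails.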
- Create a function using the Serverless Framework to perform a background task, such as:
- Processing sales reports (based on Orders)
- Sending notifications when a new Order is created
- Any other relevant background functionality for the business
- Explain how the Lambda function could be triggered (event, cron, HTTP request).
- If applicable, demonstrate how the Lambda can read data from MongoDB or integrate with the NestJS backend.
Note: No need for LocalStack for Lambda. The Lambda function should be configured only via the Serverless Framework.
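A minimal sketch of such a function, assuming an HTTP trigger and reducing the API Gateway event to the fields actually used (all names here are illustrative, not prescribed by the challenge):

```typescript
// Sketch of a Serverless Framework handler that reacts to a new Order.
// The event type is a reduced stand-in for the API Gateway event object.
interface HttpEvent {
  body: string | null;
}

interface HttpResult {
  statusCode: number;
  body: string;
}

export async function orderNotification(event: HttpEvent): Promise<HttpResult> {
  const order = event.body ? JSON.parse(event.body) : null;
  if (!order?.id) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: "Missing order id" }),
    };
  }
  // Here the function could publish to SNS, read/write MongoDB, or call the
  // NestJS API; for this sketch it only acknowledges the notification.
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Notification sent for order ${order.id}` }),
  };
}
```

In `serverless.yml`, the same handler could be wired to an `http` event (triggered by a request), a `schedule` event (cron), or another event source, which covers the "how is the Lambda triggered" question above.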
- Products
  - List, Create, Edit, Delete.
  - Image upload (stored in S3).
- Categories
  - List, Create, Edit, Delete.
- Orders
  - List, Create, Edit, Delete.
- Display sales metrics about Orders, such as:
- Total number of orders
- Average order value
- Total revenue
- Orders by period (daily, weekly, monthly, etc.)
- Use Storybook to document at least 2 main components:
- Table (for listing Products, Orders, or Categories)
- Form (for creating/editing)
- Use LocalStack only to simulate S3 in the local environment.
- Upload Product images to the simulated S3 bucket.
- Configure Docker (using docker-compose or similar) to spin up both LocalStack and your NestJS and React applications.
- Ensure the application can upload files to the S3 bucket and retrieve the image URL (to display in the front-end).
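The upload itself would go through `PutObjectCommand` from `@aws-sdk/client-s3` pointed at the LocalStack endpoint; the dependency-free sketch below only derives the object key and the path-style URL the front-end can display (the bucket name and key layout are assumptions):

```typescript
// Illustrative: build the S3 object key for a product image and the
// path-style URL LocalStack serves it under (<endpoint>/<bucket>/<key>).
export function productImageUpload(
  hostEndpoint: string, // e.g. the value of S3_ENDPOINT_HOST_MY_BUCKET
  bucket: string,
  productId: string,
  fileName: string
): { key: string; publicUrl: string } {
  const key = `products/${productId}/${fileName}`;
  return { key, publicUrl: `${hostEndpoint}/${bucket}/${key}` };
}
```

Note the two endpoints in the back-end `.env`: the container-internal one (`http://localstack:4566`) would be used for the upload itself, while the host-facing one (`http://localhost:4566`) is what the browser can reach, which is presumably why `S3_ENDPOINT_HOST_MY_BUCKET` exists.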
This challenge involves setting up both the back-end and front-end in a cohesive manner, ensuring proper integration with AWS services, and handling critical aspects like validation, error handling, and background processing with the Serverless Framework.