Personality assessment platform for Dr. Brent Roberts, a Professor of Psychology at the University of Illinois at Urbana-Champaign, and his research lab.
"Brent W. Roberts is a Professor of Psychology and at the University of Illinois at Urbana-Champaign, he holds the Gutsgell Endowed Professorship at the University of Illinois, is designated as a Health Innovation Professor in the Carle-Illinois College of Medicine, and is a Distinguished Guest Professor at the Hector Research Institute of Education Sciences and Psychology at the University of Tübingen, Germany."
— University of Illinois at Urbana-Champaign
- 0. General Information
- 1. Local development
- 2. Launch a new AWS EC2 instance on the AWS console
- 3. Attach an IAM Role to a newly created EC2 instance
- 4. Deploying the Next.js app to AWS EC2 instance and serving it with Caddy
- 5. What to do if the SSH key ever gets lost, deleted, or corrupted
- 6. What to do if you want to use a new Elastic IP address?
- 7. Working with screen to view Next.js and Caddy logs separately
- 8. Auth0
- 9. Working with Docker containers
- 10. Configure IAM role for GitHub Actions scripts
- 11. Deploying Next.js app to AWS EC2 instance and viewing it with NGINX
- 12. Set up Amazon Elastic Container Registry (ECR)
- 13. Deleting all GitHub Action workflow results at once
- 14. Accessing a localhost server on another device
- 15. Valkey cache on Aiven
- 16.0 Updating the kernel of an EC2 Amazon Linux 2023 server
To build a modular personality assessment platform.
The personality assessment platform has or is being modeled after the following websites or platforms:
0.1.1.1 yourPersonality.net
This is a site of Dr. R. Chris Fraley, a Professor at the Department of Psychology at the University of Illinois at Urbana-Champaign.
Longitudinal Data Visualization:
- Incorporate a visualization of the user's test/assessment results over time to clearly show a user's changes to their personality profile over time.
0.1.1.2. what-is-my-personality.com
This is a site of Dr. Brent Roberts.
Default Available Assessments:
- Incorporate cohesively as many of the various assessments offered on this site as possible.
Structure of Assessments:
- Reuse components to simplify the integration of the handful of assessments
0.1.1.3. The BESSI
This is another site of Dr. Brent Roberts.
Default Available Assessments:
- Incorporate the scoring information and various forms of the BESSI
- Incorporate the German, Italian, and Spanish translations of the BESSI
0.1.1.4. Qualtrics
Modularity:
- Building a survey from scratch or from a template in Qualtrics should be used to partially model the study or assessment builder in personality-lab.
- personality-lab lays out a broad list of personality measures and allows administrators to select from the list to create an assessment for their study.
0.1.1.5. Prolific
Structure of Assessments:
- The creation and management of a research study should be modeled after Prolific's study feature.
Modularity:
- Invite-only to each study
- Participant invitations to a study should be modeled after Prolific, ensuring that a respondent is invited before they participate in completing an assessment for a study.
 
- Downloadable participant data
- Participants' test/assessment data may be downloaded as a .csv file by a study's administrators.
 
- Human-readable study IDs
- Each study is represented by a human-readable ID. Each study ID is generated using a strong and custom form of string compression.
- Examples:
- YouTube:
- URL:
- https://www.youtube.com/watch?v=Xg_9F2h89TE
 
- URL slugs, separators, and identifiers:
- /watch: slug to indicate that the user is viewing a video
- ?v=: query string separator used to identify a unique video ID
- Xg_9F2h89TE: represents the unique ID of the video being watched
 
 
- Spotify:
- URL:
https://open.spotify.com/track/7KA4W4McWYRpgf0fWsJZWB?si=7f7ae9e0d8b74484
- URL slugs, separators, and identifiers:
- track/: slug to indicate that the user is viewing a track
- 7KA4W4McWYRpgf0fWsJZWB: represents the unique ID of the specific track's page
- ?si=: query string separator used to indicate the referral method to the specific track's page
- 7f7ae9e0d8b74484: represents the unique ID of the referral method
 
 
 
0.1.1.6. LinkedIn
Social:
- Share test/assessment results on social media:
- Respondents may share a visualization of their test/assessment results to LinkedIn to attract more users to the platform.
 
- Rate others' test/assessment results
- Respondents may share a short URL on LinkedIn to invite others to view and/or rate their test/assessment results.
 
- Credential or Certificate of Completion
- Respondents may share a credential or certificate of completion on LinkedIn that provides evidence of their completion of multiple tests/assessments on personality-lab.
 
0.1.1.7. Kahoot
Kahoot's short and gamified quizzes can be used to model shareable personality quizzes on personality-lab.
Social:
- Individual gamified personality quizzes
- Respondents may share a link to their results as a shareable URL for friends and family to access their results and simply rate them or play a short personality quiz.
 
- Gamified studies
- Administrators may enable the sharing of the test/assessment results of a study's participants
 
0.1.1.8. Inclivio
Platform:
- Longitudinal research
- Model the ability to create longitudinal research designs and conduct them
 
Pricing:
- Price per invite or price per license:
- Use their pricing model as a reference to build the pricing/licensing model for personality-lab
 
0.1.1.9 In8ness
User engagement:
- Engage users after completion of test/assessment
- Use the Pop Personality Profiles and Character Personality Comparison as a reference to create fictional characters
- Create personality profiles and test/assessment results of fictional characters using generative AI
- Offer a user the ability to compare and visualize their test/assessment results and personality profile against fictional characters
- Offer a user the ability to ask an AI chatbot which fictional characters they would like to compare their personality profile to
 
 
Data visualization:
- Create and display similar grouped bar plots and radar plots
Platform:
- Personality Profile Report
- Reference In8ness's "Dynamic Web Reports" and "Downloadable PDF Reports" to generate a detailed report of a user's personality profile and/or test/assessment results
 
.
├── u-websocket
├── next-app
Make sure to set up the following for local development:
Create an .env.local file by running the following command on the CLI:
cp .env-example.local .env.local
Then fill in the environment variables with the appropriate values:
- Get the AUTH0_* variables from your Auth0 Applications settings, under the Basic Information menu.
  - As a pre-requisite, make sure to update the Allowed Callback URLs and Allowed Logout URLs in your Auth0 Applications settings to include localhost:
    - Allowed Callback URLs: https://example.com/api/auth/callback, https://localhost:3000/api/auth/callback
    - Allowed Logout URLs: https://example.com, https://localhost:3000
  - Then, update the AUTH0_BASE_URL to localhost in your .env.local file, like so:
    AUTH0_BASE_URL='https://localhost:3000'
The encryptCompressEncode() and decodeDecompressDecrypt() functions of the CSCrypto (i.e. Client-Side Crypto) class are used to encrypt the shareable ID string, which is used in the shareable link to share a user's assessment results. To encrypt strings on the client, create an initialization vector, i.e. iv, and a symmetric encryption key:
- You will need an iv and key to encrypt the str argument.
- Note that we are generating a 128-bit key length because it results in a shorter shareable ID that we place in the shareable URL. (You can generate a key with a 256-bit key length by using a 32-byte initialization vector, i.e. iv.):
  // 1. Set the size of the key to 16 bytes
  const bytesSize = new Uint8Array(16)
  // 2. Create an initialization vector of 128 bit-length
  const iv = crypto.getRandomValues(bytesSize).toString()
  console.log(`iv:`, iv)
  // 3. Generate a new AES-GCM key
  const key = await crypto.subtle.generateKey(
    { name: 'AES-GCM', length: 128 },
    true,
    ['encrypt', 'decrypt']
  )
  // 4. Export the `CryptoKey`
  const jwk = await crypto.subtle.exportKey('jwk', key)
  const serializedJwk = JSON.stringify(jwk)
  console.log(`serializedJwk:`, serializedJwk)
- Copy the logged iv and serializedJwk values.
- Set these values in your .env.local like so:
  // The values below are merely an example
  NEXT_PUBLIC_SHARE_RESULTS_ENCRYPTION_KEY="{"alg":"A128GCM","ext":true,"k":"8_kB0wHsI43JNuUhoXPu5g","key_ops":["encrypt","decrypt"],"kty":"oct"}"
  NEXT_PUBLIC_SHARE_RESULTS_ENCRYPTION_IV="129,226,226,155,222,189,77,19,14,94,116,195,86,198,192,117"
- For cloud development, make sure to add the NEXT_PUBLIC_SHARE_RESULTS_ENCRYPTION_KEY and NEXT_PUBLIC_SHARE_RESULTS_ENCRYPTION_IV variables as GitHub Secrets to the GitHub repository.
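  For example, one way to add these as repository secrets is with the GitHub CLI (this is a sketch; it assumes gh is installed and authenticated, and the values shown are the example placeholders from above, not real secrets):
  # Add the shareable-results encryption key and IV as GitHub Actions secrets
  gh secret set NEXT_PUBLIC_SHARE_RESULTS_ENCRYPTION_KEY --body '{"alg":"A128GCM","ext":true,"k":"8_kB0wHsI43JNuUhoXPu5g","key_ops":["encrypt","decrypt"],"kty":"oct"}'
  gh secret set NEXT_PUBLIC_SHARE_RESULTS_ENCRYPTION_IV --body '129,226,226,155,222,189,77,19,14,94,116,195,86,198,192,117'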
 
- Get AWS credentials from your AWS access portal. They look something like the following:
[ROLE_NAME]
aws_access_key_id=ASIA4ID7IBTSMB46LGC4
aws_secret_access_key=6Mu1cPzUz+9yGP12dUBo7HsdVi89bea39YfWUblj
aws_session_token=IQoJb3JpZ2luX2VjEJn//////////wEaCXVzLWVhc3QtMSJIMEYCIQDQoeNwakI4MRGcm110T8pff5htfppKbf7fqUdPMX5e/AIhAIVrV4QclSkNQaajoZklUTX8nNwb2o5auK5mgJV6OVSvKucCCDEQABoMODQyMDgwNDU1OTA4Igwt7f5xGprEfuSyfAwqxALhwKJY/KDqMjaau6Zx4AlpWLIS2rU1vNHg8ADUKI+Lchw2S574eIH7bab2HrtIZIwDRKbOL1ykGeDLkDOvwYiOW4SsnBgLMOcqhmz8ORrgIpQE+NQZkCIezVUaQs8uOTK0jah6Q48Apwc3LRFx5s0hEzr+dV2Vlnyt/qHHskD9SSeGfCFmhTxhyKZdlAiLjRvuMe6D9Aue+vHQadoY4xC3Zx0GtVFeTJZkMPOsaUxKz3WEjZTBIhPO0EoKtafIM5EyfH4TM079XD+69x39BcdMlprLE/AWfeUGdTOzwTUZE5yRMHBrGavUsoqcO/5wP3PF3nfSQWxdYXP64pJniehxp6nFk6vlJmJfi5RzaPdUR1dBYtIc4ex7sT+klLX4qOVA/8Yx5UBiEk1NN49e31yu663303lVj7G/EPbeTdMN4pRnk3kwr6WXswY6pgGxGMrDyA5QxW2yUaDR4nStfNRojzocfeLhvTbxDztGfRU/QtI0p7qPDig3FCetgvrcYFDiFPwQ9iddtaDb418y7mjKMHHYenVSaHY55Iz4rfozMAuHf6jqWMAlBrqMYObP7WDQpSAiFogDAeconLw5Ti/DsS/S9bRF1lhXsckayfPsX6jUOrh0iiDQIxQLqTALgCCXSPUszfUVOhD8PFYq6nRH8loP
Copy and paste these values to src/utils/aws/constants/index.ts as shown in the example below:
NOTE: NEVER commit AWS credentials!
const credentials_ = {
  aws_access_key_id: `ASIA4ID7IBTSMB46LGC4`,
  aws_secret_access_key: `6Mu1cPzUz+9yGP12dUBo7HsdVi89bea39YfWUblj`,
  aws_session_token: `IQoJb3JpZ2luX2VjEJn//////////wEaCXVzLWVhc3QtMSJIMEYCIQDQoeNwakI4MRGcm110T8pff5htfppKbf7fqUdPMX5e/AIhAIVrV4QclSkNQaajoZklUTX8nNwb2o5auK5mgJV6OVSvKucCCDEQABoMODQyMDgwNDU1OTA4Igwt7f5xGprEfuSyfAwqxALhwKJY/KDqMjaau6Zx4AlpWLIS2rU1vNHg8ADUKI+Lchw2S574eIH7bab2HrtIZIwDRKbOL1ykGeDLkDOvwYiOW4SsnBgLMOcqhmz8ORrgIpQE+NQZkCIezVUaQs8uOTK0jah6Q48Apwc3LRFx5s0hEzr+dV2Vlnyt/qHHskD9SSeGfCFmhTxhyKZdlAiLjRvuMe6D9Aue+vHQadoY4xC3Zx0GtVFeTJZkMPOsaUxKz3WEjZTBIhPO0EoKtafIM5EyfH4TM079XD+69x39BcdMlprLE/AWfeUGdTOzwTUZE5yRMHBrGavUsoqcO/5wP3PF3nfSQWxdYXP64pJniehxp6nFk6vlJmJfi5RzaPdUR1dBYtIc4ex7sT+klLX4qOVA/8Yx5UBiEk1NN49e31yu663303lVj7G/EPbeTdMN4pRnk3kwr6WXswY6pgGxGMrDyA5QxW2yUaDR4nStfNRojzocfeLhvTbxDztGfRU/QtI0p7qPDig3FCetgvrcYFDiFPwQ9iddtaDb418y7mjKMHHYenVSaHY55Iz4rfozMAuHf6jqWMAlBrqMYObP7WDQpSAiFogDAeconLw5Ti/DsS/S9bRF1lhXsckayfPsX6jUOrh0iiDQIxQLqTALgCCXSPUszfUVOhD8PFYq6nRH8loP`,
}
export const CREDENTIALS = {
  accessKeyId: credentials_.aws_access_key_id,
  secretAccessKey: credentials_.aws_secret_access_key,
  sessionToken: credentials_.aws_session_token
}
Then, use CREDENTIALS in src/utils/aws/dynamodb/index.ts and src/utils/aws/systems-manager/index.ts like so:
/* src/utils/aws/dynamodb/index.ts */
import { 
  REGION,
  CREDENTIALS,
} from '../constants'
// const ddbClient = new DynamoDBClient({ region: REGION })
const ddbClient = new DynamoDBClient({ 
  region: REGION,
  credentials: CREDENTIALS
})

/* src/utils/aws/systems-manager/index.ts */
import { 
  REGION,
  CREDENTIALS,
  AWS_PARAMETER_NAMES,
} from '../constants'
// export const ssmClient = new SSMClient({ region: REGION })
export const ssmClient = new SSMClient({ 
  region: REGION,
  credentials: CREDENTIALS,
})
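Before starting the dev server, it can help to confirm that the temporary credentials you pasted are still valid (they expire after a few hours). A minimal check, assuming the AWS CLI is installed locally:
# Should print the account, user ID, and ARN for the assumed role
aws sts get-caller-identity
If this call fails with an expired-token error, copy a fresh set of credentials from the AWS access portal and update src/utils/aws/constants/index.ts again.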
To start the Next.js web application, run the following command on the CLI:
npx turbo dev
- When launching a new EC2 instance on the AWS console, be sure to use one of the Amazon Machine Images (AMIs). I chose the Amazon Linux 2023 AMI because it is the fastest.
- Avoid using the 64-bit (Arm) CPU architecture because most software libraries and packages have been tested primarily on x86 CPU architectures and not on Arm.
Select t2.micro because it uses the lowest vCPU count (1 vCPU) and memory (1 GiB).
- Specify a key-pair which is used later for remotely SSH'ing in to the EC2 instance from your machine.
- Make sure to select Create a new key pair whenever you launch a new instance; otherwise, you risk getting confused when you use the same key for more than one EC2 instance, and you have to manage the keys in the appropriate SSH configuration files (e.g. .ssh, authorized_hosts, known_hosts, etc.).
Save the generated .pem file at the top-most level of the directory of the local GitHub repository.
You will use this to ssh into the remote EC2 instance later, both locally within your own machine and remotely through GitHub Actions CI/CD scripts.
Create a new security group, or use an existing security group, that has the following three options enabled:
- Allow SSH traffic from (this is defaulted to Anywhere 0.0.0.0/0 -- keep the default setting enabled)
- Allow HTTPS traffic from the internet (this is disabled by default)
- Allow HTTP traffic from the internet (this is disabled by default)
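If you prefer the AWS CLI over the console, the equivalent inbound rules can be added to an existing security group like this (a rough sketch; the security group ID is a placeholder):
# SSH from anywhere (matches the console default)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
# HTTP and HTTPS from the internet
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0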
The EC2 instance uses several AWS services and thus requires AWS credentials to make API calls to leverage each of these services. However, managing these AWS credentials and passing them on to an application is tedious and can come with a ton of security risks. Thankfully, AWS provides a solution to this issue by allowing the creation of an AWS IAM Role specifically for an EC2 instance to use.
More information on "IAM roles for Amazon EC2" can be found on the AWS's official documentation
Before you create a new IAM role that will be used specifically for the EC2 instance, make sure you have the following pre-requisites:
- To create a new IAM role, make sure you have the proper permissions under your organization.
- Identify the AWS services, and the respective permissions per each AWS service, that your EC2 instance will require to make API calls to.
Once you have the pre-requisites, you can create a new IAM role by following the steps below:
- To create a new IAM role, go to the IAM service and, under Access Management, click on Roles.
- Click the orange Create role button on the top right of the Roles settings page.
- On the Select trusted entity page, under the Trusted entity type menu, select AWS service.
- Under the Use case menu, click the Service or use case select-dropdown menu to choose a service. Select EC2.
- Leave the default radio option, EC2, toggled and click Next.
For the next-app Next.js project, we use 4 AWS services:
- DynamoDB
- EC2
- Systems Manager (for Parameter Store)
- Elastic Container Registry (ECR)
For simplicity and to save time configuring granular custom permissions policies, we selected broad-general permissions for each of these services:
- AmazonDynamoDBFullAccess
- AmazonEC2FullAccess
- AmazonSSMFullAccess
- AmazonEC2ContainerRegistryFullAccess - This managed policy is used to provide the necessary permissions to authenticate with Amazon ECR, which is used to pull images from ECR when SSH'ing into the EC2 instance.
 
Add each of the four permissions policies listed above.
Then, click the orange Next button on the bottom right.
- 
Enter a short and unique role name for the EC2 instance. This role name will be used later when you need to SSH in to your EC2 instance, so make sure it is short. We named it ec2-user.
- 
Give it a detailed and concise description of what the new IAM role is for. We went with the default description, Allows EC2 instances to call AWS services on your behalf.
- 
Click the orange Create role button at the bottom right to finally create the new IAM role.
Lastly, we want to have the EC2 instance use this newly created IAM role. To do that, we need to modify the EC2 instance's IAM Role in its settings.
- Go to the EC2 service
- Click on the Instances tab on the left-side menu
- Click on your target EC2 instance's ID, highlighted in blue
- Click on the select-dropdown menu titled, Actions
- Click on the Security select-dropdown menu
- Click on Modify IAM role
- On the Modify IAM role page, under the IAM role menu, click on the select-dropdown menu and select the IAM role that you created earlier in steps 2 through 4.
- Click the orange Update IAM role button to associate the IAM role with the EC2 instance.
Now, whenever your EC2 instance is running, it will always have AWS credentials with specific permissions to make API calls to the services specified in the permissions policies you selected when creating the IAM role.
This saves you time and the headache of always having to add the AWS credentials manually or having to write your own system for managing and distributing the AWS credentials to your EC2 instance(s).
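Once the role is attached, one quick way to confirm that the instance is picking up the role's credentials is to run the AWS CLI from inside the instance (the AWS CLI ships with Amazon Linux 2023); the returned ARN should reference the IAM role you just attached:
# Run this from within the EC2 instance (e.g., over SSH)
aws sts get-caller-identity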
4.1. Set up a Caddy server
ssh -i key-pair-name.pem AWS_EC2_USERNAME@AWS_EC2_HOSTNAME
Please follow the instructions from Caddy's official documentation.
The instructions below are mostly taken from there:
- 
Use curl to download the latest version of Caddy from GitHub:
curl -o caddy.tar.gz -L "https://github.com/caddyserver/caddy/releases/download/v2.8.4/caddy_2.8.4_linux_amd64.tar.gz"
- 
Extract the downloaded .tar file:
tar -zxvf caddy.tar.gz
- 
Move the caddy binary into your PATH at /usr/bin:
sudo mv caddy /usr/bin/
- 
Make the binary executable: chmod +x /usr/bin/caddy 
- 
Verify the installation: caddy version 
- 
Move the downloaded files to ./caddy-download:
mkdir caddy-download && mv {LICENSE,README.md,caddy.tar.gz} ./caddy-download
- 
Create a group named caddy:
sudo groupadd --system caddy
- 
Create a user named caddy with a writeable home directory:
sudo useradd --system \
  --gid caddy \
  --create-home \
  --home-dir /var/lib/caddy \
  --shell /usr/sbin/nologin \
  --comment "Caddy web server" \
  caddy
If using a config file, be sure it is readable by the caddy user you just created.
- 
Next, choose a systemd unit file based on your use case.
NOTE: This involves copying the file contents from caddy.service and then pasting them into the /etc/systemd/system/caddy.service file. To ensure that you can write to the file, be sure to use sudo vim, like so:
sudo vim /etc/systemd/system/caddy.service
Then, paste in the file contents from dist/init/caddy.service and save the file.
Double-check the ExecStart and ExecReload directives. Make sure the binary's location and command line arguments are correct for your installation! For example: if using a config file, change your --config path if it is different from the defaults.
The usual place to save the service file is: /etc/systemd/system/caddy.service
After saving your service file, you can start the service for the first time with the usual systemctl dance:
sudo systemctl daemon-reload
sudo systemctl enable --now caddy
NOTE: If you have not created a Caddyfile and you run the command above, you will get the following error:
Job for caddy.service failed because the control process exited with error code. See "systemctl status caddy.service" and "journalctl -xeu caddy.service" for details.
You resolve this error by completing step 4.2 Working with Caddy server and then re-running that command again:
sudo systemctl enable --now caddy
Verify that it is running:
systemctl status caddy
Now you're ready to use the service!
4.2. Working with Caddy server
A Caddy server uses a Caddyfile to configure it, similar to how a NGINX server uses a .conf file, usually located somewhere like /etc/nginx/conf.d/<APP_NAME>.conf.
For Caddy, we will store Caddyfiles under /etc/caddy.
Let's create a Caddyfile.
- 
Create the /etc/caddy directory to store Caddyfiles:
sudo mkdir /etc/caddy
- 
Edit a Caddyfile:
Next, we can run the command below to begin editing a Caddyfile:
sudo vim /etc/caddy/Caddyfile
Paste in the following to ensure that it works to reverse proxy requests made to our Next.js application, which will run on port 3000 from our Docker container:
example.com {
  encode gzip
  header {
    Strict-Transport-Security "max-age=31536000;"
    Access-Control-Allow-Origin "*"
  }
  reverse_proxy localhost:3000
}
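Before starting or reloading the service, you can optionally check that the Caddyfile parses cleanly; both of these are standard Caddy subcommands:
# Normalize the Caddyfile's formatting in place
sudo caddy fmt --overwrite /etc/caddy/Caddyfile
# Validate the configuration without running the server
caddy validate --config /etc/caddy/Caddyfile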
- 
Using the Caddy Service: Please follow Caddy's official documentation for this section on how to use the Caddy service. This section closely follows that section, as of the time of writing.
If using a Caddyfile, you can edit your configuration with nano, vi, or your preferred editor:
sudo nano /etc/caddy/Caddyfile
You can place your static site files in either /var/www/html or /srv. Make sure the caddy user has permission to read the files.
- 
To verify that the service is running:
systemctl status caddy
The status command will also show the location of the currently running service file.
- 
When running with our official service file, Caddy's output will be redirected to journalctl. To read your full logs and to avoid lines being truncated:
journalctl -u caddy --no-pager | less +G
- 
If using a config file, you can gracefully reload Caddy after making any changes: sudo systemctl reload caddy 
- 
You can stop the service with: sudo systemctl stop caddy 
- 
Do not stop the service to change Caddy's configuration. Stopping the server will incur downtime. Use the reload command instead. 
- 
The Caddy process will run as the caddy user, which has its $HOME set to /var/lib/caddy. This means that:
- The default data storage location (for certificates and other state information) will be in /var/lib/caddy/.local/share/caddy.
- The default config storage location (for the auto-saved JSON config, primarily useful for the caddy-api service) will be in /var/lib/caddy/.config/caddy.
 
 
- 
You can configure multiple domains in a single Caddyfile for different use cases.
- 
To serve the same website over multiple domains, your Caddyfile would need to be configured like so:
https://example.com, http://example.com, https://example2.com {
  encode gzip
  header {
    Strict-Transport-Security "max-age=31536000;"
    Access-Control-Allow-Origin "*"
  }
  reverse_proxy localhost:3000
}
- 
If you want to serve different websites at different domains, it would look something like this:
example.com {
  root /www/example.com
}
sub.example.com {
  root /www/sub.example.com
  gzip
  log ../access.log
}
Assuming you are working under a single domain in a single Caddyfile, you can configure your Caddyfile to proxy your WebSocket server through Caddy with a simple configuration as shown below:
https://example.com  {
    # WebSocket
    @ws {
        header Connection *Upgrade*
        header Upgrade websocket
    }
    reverse_proxy @ws localhost:3001
}
This way, Caddy terminates all TLS for you.
To upgrade Caddy, simply stop all running Caddy servers and run the following command:
sudo caddy upgrade
This will replace the current Caddy binary with the latest version from Caddy's download page with the same modules installed, including all third-party plugins that are registered on the Caddy website.
Upgrades do not interrupt running servers; currently, the command only replaces the binary on disk. This might change in the future.
The upgrade process is fault tolerant; the current binary is backed up first (copied beside the current one) and automatically restored if anything goes wrong. If you wish to keep the backup after the upgrade process is complete, you may use the --keep-backup option.
This command may require elevated privileges if your user does not have permission to write to the executable file.
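After the upgrade completes, you can confirm which binary is now on disk and which modules it includes; both commands are built into Caddy:
caddy version
caddy list-modules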
- 
To locate Caddy's TLS certificate and private key, start by inspecting the Caddy service logs using the following command:
journalctl -u caddy --no-pager | less +G
This lets you read Caddy's full service logs and avoids lines being truncated.
- 
Next, in the logs, search for the use of */caddy/.config/*.
- 
Navigate to this location within the EC2 instance. You can do this in several different ways, but since I found this path to be /var/lib/caddy/.config/caddy, I had permissions issues that prevented me from navigating to it directly.
So, I first navigated to /var/lib/ using:
cd /var/lib
From there, I then used vim with sudo privileges to enter the caddy directory:
sudo vim caddy
Running sudo vim caddy opened up a Netrw Directory Listing.
From here, I navigated to .local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/<DOMAIN_NAME> to finally find the .crt and .key files that I needed.
- 
You can confirm that you found the correct TLS certificate and key files by double-checking the Caddy service logs. Again, run the command to review the Caddy service logs:
journalctl -u caddy --no-pager | less +G
From the logs, locate the following log entry:
"logger": "tls.obtain", "msg": "certificate obtained successfully"
At the end of this line, you should see the issuer match the acme filepath:
"issuer":"acme-v02.api.letsencrypt.org-directory"
To view the status of the current running Caddy service, run systemctl with sudo:
sudo systemctl status caddy
The Caddy service will display a color code next to the caddy.service - Caddy text at the top of the output returned from running the command: either a green circle (active) or an empty circle (inactive).
sudo yum install docker -y
This step is required to pull Docker images from AWS's Elastic Container Registry (ECR) from within the EC2 instance.
Why is it required? Because the ECR will use IAM role permissions to allow image pulling.
You need to add your current user to the docker group.
This will give your user the necessary permissions to interact with the Docker daemon.
To run docker commands without sudo, do the following:
- 
First, before starting this step, check if the docker group exists on your EC2 instance by using the following command:
getent group docker
If the docker group exists, this command will output information about the group, such as:
docker:x:1234:
If the docker group does not exist, the command will not produce any output.
- 
Create the docker group if it doesn't exist:
sudo groupadd docker
- 
Add your user to the docker group. For example, since our user is ec2-user:
sudo usermod -aG docker ec2-user
- 
Log out and log back in: After adding your user to the docker group, you need to log out of your session and log back in for the group changes to take effect. Alternatively, you can run the following command to apply the new group permissions without logging out:
newgrp docker
- 
Verify the setup: Now you should be able to run Docker commands without sudo:
docker ps
If you see the list of running containers or an empty list without any permission errors, you have successfully configured Docker to run without sudo.
Now you should be able to run docker pull <ECR_IMAGE_URL> within the EC2 instance without the no basic auth credentials error.
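Note that pulling from a private ECR repository still requires authenticating Docker against the registry first. A typical sequence, assuming the instance's IAM role has ECR pull permissions and using us-east-1 plus placeholder account and repository values:
# Authenticate Docker to the private ECR registry using the instance role's credentials
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com
# Pull the image that the GitHub Actions workflow pushed
docker pull <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/<AWS_ECR_REPOSITORY_NAME>:latest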
sudo systemctl restart docker
Make sure to routinely prune all data from Docker running on the AWS EC2 instance. Before doing so, ALWAYS make sure that you are still able to pull new copies of your desired images from AWS ECR.
To prune all Docker data, run the following command:
docker system prune -a
sudo systemctl restart docker
Make sure to specify the correct port number that is exposed in the Dockerfile.
Also, make sure to specify the <IMAGE_ID> and not the image name of the image that was pulled from the Docker repository.
docker run -d -it -p 3000:3000 <IMAGE_ID>
Make sure to include the -d flag to run the container in "detached" mode, so that we can run other commands while the container is running.
Finally, we can serve the dockerized version of our Next.js app by starting the Caddy server. Run the command below to start or reload the Caddy server.
Start caddy:
sudo systemctl start caddy
or reload caddy:
sudo systemctl reload caddy
Then, navigate to your domain to see the hosted site.
Do this from the AWS Console in the browser.
RSA is not as secure as ED25519, so select ED25519 as the encryption method.
- 
Open an SSH client. 
- 
Locate your private key file that was created when you launched this instance. For example, next-app.pem
- 
Run this command, if necessary, to ensure your key is not publicly viewable. chmod 400 "key-pair-name.pem"
- 
Connect to your instance using its Public Elastic IP: ELASTIC_IP.compute-1.amazonaws.com, where ELASTIC_IP.compute-1.amazonaws.com is equal to AWS_EC2_HOSTNAME.
Example:
ssh -i "key-pair-name.pem" AWS_EC2_USERNAME@AWS_EC2_HOSTNAMEwhere AWS_EC2_HOSTNAME the formatted like so:
ec2-51-139-011-930.compute-1.amazonaws.com
Release the old Elastic IP address from the AWS console.
On the AWS console, on the EC2 service, under the "Network & Security" tab and under "Elastic IPs", click the orange, "Allocate Elastic IP address" button.
- Toggle the checkbox on the far left of the row of the newly allocated Elastic IP address.
- Then, click the "Actions" dropdown menu, and select "Associate Elastic IP Address".
- From this menu, for the Resource type, keep "Instance" selected.
- For the Instance, select the EC2 instance to associate the Elastic IP address to.
- Toggle the checkbox to allow reassociation.
- Finally, click the orange, "Associate" button.
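The same association can also be done from the AWS CLI if you prefer; the instance ID and allocation ID below are placeholders:
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0 \
  --allow-reassociation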
Prerequisites for the screen example below:
- You have an SSH key (.pem file) to SSH in to the EC2 instance
- The latest version of both docker and caddy are installed on the EC2 instance
- You have pulled the Next.js Docker image to run as a container
- You have a Caddyfile on the EC2 instance to start the Caddy server from
- You want to work with two screens: one for next-app and another for caddy
- 
Start the screen session: screen -S next-app 
- 
Run the Docker container: docker run -it -p 3000:3000 <IMAGE_ID> You can leave this container running since we can detach from this screen without interrupting the container. 
- 
Detach from the next-app screen session: Press Ctrl+A then D.
- 
Switch to a new screen session for the Caddy server: screen -S caddy 
- 
List all screen sessions to verify that both the next-app and caddy sessions have been created:
screen -ls
- 
Start or reload the Caddy server:
Start caddy:
sudo systemctl start caddy
or reload caddy:
sudo systemctl reload caddy
You can leave the Caddy server running since we can detach from this screen without interrupting the Caddy server.
- 
View the current screen you are on: echo $STY This will return the screen ID and name to stdout. 
- 
Detach from the caddy session: Press Ctrl+A then D.
- 
List all screen sessions:
screen -ls
You should see that both sessions are in the Detached state, as opposed to the Attached state.
- 
In your browser, go to the domain where Caddy is serving the Next.js app. Click around several pages.
- 
In the EC2 instance, re-attach to the next-app session:
screen -r next-app
You should see the logs from the Next.js server that correspond to the pages you navigated to in your browser.
- 
Make code changes to a file, save, commit and push them 
- 
Stop the Caddy server and the Docker container running the Next.js app - 
As the GitHub Action from step 12 is being run to build, push, and pull the latest version of the Next.js Docker image to the EC2 instance, first stop the Caddy server that is serving the Next.js Docker container:
screen -r caddy
Then, press Ctrl+C to stop the Caddy server.
Then, detach from this screen by running:
screen -d
Obviously, you can only run screen -d to detach from a screen when you have access to the CLI and are not blocked by a running process.
- 
Next, stop the running Next.js Docker container by going back to the next-app screen session and then pressing Ctrl+C to interrupt the container.
Enter the next-app screen session:
screen -r next-app
- 
Interrupt the container by pressing Ctrl+C.
- 
Exit the next-app screen session by running:
screen -d
 
- 
- 
After the latest Docker image has been pulled, restart the Caddy server and Next.js Docker container:
Re-enter the caddy screen session:
screen -r caddy
Then, start or reload the Caddy server:
Start caddy:
sudo systemctl start caddy
or reload caddy:
sudo systemctl reload caddy
Then, detach from the caddy screen session, leaving the Caddy server running, by pressing Ctrl+A and then D.
- 
Re-enter the next-app screen session:
screen -r next-app
Then, start the Next.js Docker container:
docker run -it -p 3000:3000 <IMAGE_ID>
Then, detach from the next-app screen session to work in a separate screen to start other processes or run other commands, or proceed to step 3.
- 
From your browser, navigate to the domain where the Caddy server is serving the Next.js Docker container. After browsing pages on the domain, go back to the EC2 instance's next-app screen session:
screen -r next-app
You should see Next.js returning logs from the page(s) you viewed in your browser.
 
- 
 
- 
- 
To kill a detached session, run the following command:
screen -X -S <SESSION_ID_YOU_WANT_TO_KILL> quit
For example, after running:
screen -ls
You may see:
There are screens on:
  3662307.caddy (Detached)
  82273.next-app (Detached)
  82242.caddy (Detached)
To kill the session with the ID 82273.next-app, you can run either of the following commands:
screen -X -S 82273 quit
or
screen -X -S 82273.next-app quit
Either of these commands kills that session.
Follow this Medium post, which walks through step-by-step how to set up a Database Connection in Auth0 for DynamoDB, from start to finish.
When creating the custom permission policy for the Auth0DynamoDBUser IAM User, make sure to copy and paste the ARN of the DynamoDB accounts table into the Resource property of the permissions policy's JSON.
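If you need to look up that ARN, one way to fetch it from the CLI (a sketch; it assumes the table is named accounts and your credentials can read it) is:
aws dynamodb describe-table --table-name accounts --query 'Table.TableArn' --output text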
As a reference, here are the relevant and modified Database Action Scripts that I made, using the tutorial as a guide:
function login (email, password, callback) {
  const AWS = require('aws-sdk');
  const { createHash, pbkdf2Sync } = require('crypto');
  const ITERATIONS = 1000000; // 1_000_000
  const KEY_LENGTH = 128;
  const HASH_ALGORITHM = 'sha512';
  AWS.config.update({
    accessKeyId: configuration.accessKeyId,
    secretAccessKey: configuration.secretAccessKey,
    region: configuration.region
  });
  const docClient = new AWS.DynamoDB.DocumentClient();
  const params = {
    TableName: configuration.dynamoDBTable,
    KeyConditionExpression: 'email = :email',
    ExpressionAttributeValues: {
      ':email': email
    }
  };
  docClient.query(params, function (err, data) {
    if (err) {
      return callback(err);
    } else {
      if (data.Items.length === 0) {
        return callback(new WrongUsernameOrPasswordError(email));
      } else {
        const account = data.Items[0];
        // Use `crypto` to compare the provided password with the hashed 
        // password stored in DynamoDB
        try {
          const salt = account.password.salt;
          const hashToVerify = pbkdf2Sync(
            password,
            salt,
            ITERATIONS,
            KEY_LENGTH,
            HASH_ALGORITHM
          ).toString('hex');
  
          const hash = account.password.hash;
          const isMatch = hashToVerify === hash;
  
          // Passwords match, return the user profile
          if (isMatch) {
            const email_ = account.email;
            // Depending on your user schema, you might want to use a different
            // unique identifier
            const user_id = createHash(
              'shake256', 
              { outputLength: 16 }
            ).update(account.email).digest('hex');
            const userProfile = {
              user_id,
              email: email_,
              // Add additional user profile information here as needed
            };
            
            return callback(null, userProfile);
          } else {
            return callback(new WrongUsernameOrPasswordError(email));
          }
        } catch (error) {
          return callback(error);
        }
      }
    }
  });
}

function create (user, callback) {
  const AWS = require('aws-sdk');
  const { randomBytes, pbkdf2Sync } = require('crypto');
  AWS.config.update({
    accessKeyId: configuration.accessKeyId,
    secretAccessKey: configuration.secretAccessKey,
    region: configuration.region
  });
  const ACCOUNT_ADMINS = [
    {
      email: '[email protected]',
      name: 'Brent Roberts',
    },
    {
      email: '[email protected]',
      name: 'Jack Winfield',
    },
  ];
  const isGlobalAdmin = ACCOUNT_ADMINS.some(
    admin => admin.email === user.email
  );
  // Generate a salt and hash the password
  const salt = randomBytes(16).toString('hex');
  const ITERATIONS = 1000000; // 1_000_000
  const KEY_LENGTH = 128;
  const HASH_ALGORITHM = 'sha512';
  const hash = pbkdf2Sync(
    user.password,
    salt,
    ITERATIONS,
    KEY_LENGTH,
    HASH_ALGORITHM
  ).toString('hex');
  const email_ = user.email;
  const password = { hash, salt };
  const updatedAtTimestamp = 0;
  const hasVerifiedEmail = false;
  const createdAtTimestamp = Date.now();
  const docClient = new AWS.DynamoDB.DocumentClient();
  const params = {
    TableName: configuration.dynamoDBTable,
    Item: {
      email: email_,
      password,
      isGlobalAdmin,
      hasVerifiedEmail,
      updatedAtTimestamp,
      createdAtTimestamp,
      // Add any other user attributes here
    },
    // Ensure the user does not already exist
    ConditionExpression: 'attribute_not_exists(email)'
  };
  docClient.put(params, function (err, data) {
    if (err) {
      if (err.code === 'ConditionalCheckFailedException') {
        // This error means the user already exists
        callback(new Error('User already exists'));
      } else {
        callback(err);
      }
    } else {
      // User was created successfully
      callback(null);
    }
  });
}

function verify (email, callback) {
  const AWS = require('aws-sdk');
  AWS.config.update({
    accessKeyId: configuration.accessKeyId,
    secretAccessKey: configuration.secretAccessKey,
    region: configuration.region
  });
  const docClient = new AWS.DynamoDB.DocumentClient();
  const queryParams = {
    TableName: configuration.dynamoDBTable,
    KeyConditionExpression: 'email = :email',
    ExpressionAttributeValues: {
      ':email': email
    },
  };
  docClient.query(queryParams, function (err, data) {
    if (err) {
      return callback(err);
    } else {
      if (data.Items.length === 0) {
        return callback(new WrongUsernameOrPasswordError(email));
      } else {
        const account = data.Items[0];
        const createdAtTimestamp = account.createdAtTimestamp;
        const updateParams = {
          TableName: configuration.dynamoDBTable,
          Key: {
            email,
            createdAtTimestamp,
          },
          UpdateExpression: 'set hasVerifiedEmail = :hasVerifiedEmail',
          ExpressionAttributeValues: {
            ':hasVerifiedEmail': true
          },
          ReturnValues: 'UPDATED_NEW',
          // Check if user email exists
          ConditionExpression: 'attribute_exists(email)'
        };
        docClient.update(updateParams, function (err, data) {
          if (err) {
            callback(err);
          } else {
            callback(null, data);
          }
        });
      }
    }
  });
}

function changePassword (email, newPassword, callback) {
  const AWS = require('aws-sdk');
  const { randomBytes, pbkdf2Sync } = require('crypto');
  AWS.config.update({
    accessKeyId: configuration.accessKeyId,
    secretAccessKey: configuration.secretAccessKey,
    region: configuration.region
  });
  const docClient = new AWS.DynamoDB.DocumentClient();
  // First, hash the new password
  const salt = randomBytes(16).toString('hex');
  const ITERATIONS = 1000000; // 1_000_000
  const KEY_LENGTH = 128;
  const HASH_ALGORITHM = 'sha512';
  const hash = pbkdf2Sync(
    newPassword,
    salt,
    ITERATIONS,
    KEY_LENGTH,
    HASH_ALGORITHM
  ).toString('hex');
  const updatedPassword = { hash, salt };
  const queryParams = {
    TableName: configuration.dynamoDBTable,
    KeyConditionExpression:  'email = :email',
    ExpressionAttributeValues: {
      ':email': email
    },
  };
  // Perform query to get the `createdAtTimestamp` which is required for 
  // Put, Post, and Update operations on the `accounts` table
  docClient.query(queryParams, function (err, data) {
    if (err) {
      return callback(err);
    } else {
      if (data.Items.length === 0) {
        return callback(new WrongUsernameOrPasswordError(email));
      } else {
        const account = data.Items[0];
        const createdAtTimestamp = account.createdAtTimestamp;
        const updateParams = {
          TableName: configuration.dynamoDBTable,
          Key: {
            email,
            createdAtTimestamp,
          },
          UpdateExpression: 'set password = :password',
          ExpressionAttributeValues: {
            ':password': updatedPassword
          },
          ReturnValues: 'UPDATED_NEW'
        };
        // Next, update the old password
        docClient.update(updateParams, function (err, data) {
          if (err) {
            callback(err);
          } else {
            callback(null, data);
          }
        });
      }
    }
  });
}

function getByEmail(email, callback) {
  const AWS = require('aws-sdk');
  const { createHash } = require('crypto');
  AWS.config.update({
    accessKeyId: configuration.accessKeyId,
    secretAccessKey: configuration.secretAccessKey,
    region: configuration.region
  });
  const docClient = new AWS.DynamoDB.DocumentClient();
  const params = {
    TableName: configuration.dynamoDBTable,
    KeyConditionExpression: 'email = :email',
    ExpressionAttributeValues: {
      ':email': email
    }
  };
  docClient.query(params, function (err, data) {
    if (err) {
      callback(err);
    } else {
      if (data.Items.length === 0) {
        callback(null);
      } else {
        const account = data.Items[0];
        const email_ = account.email;
        const email_verified = account.email_verified;
        // Use a unique identifier for the user_id
        const user_id = createHash(
          'shake256', 
          { outputLength: 16 }
        ).update(account.email).digest('hex');
        
        const userProfile = {
          user_id,
          email: email_,
          email_verified,
          // Add other user attributes here as needed
        };
        // Return the user profile. Adjust the returned attributes as needed.
        callback(null, userProfile);
      }
    }
  });
}

function deleteUser(email, callback) {
  const AWS = require('aws-sdk');
  AWS.config.update({
    accessKeyId: configuration.accessKeyId,
    secretAccessKey: configuration.secretAccessKey,
    region: configuration.region
  });
  const docClient = new AWS.DynamoDB.DocumentClient();
  const params = {
    TableName: configuration.dynamoDBTable,
    Key: {
      email: email
    },
    ConditionExpression: 'attribute_exists(email)'
  };
  docClient.delete(params, function (err, data) {
    if (err) {
      callback(err);
    } else {
      // Successfully deleted the user
      callback(null);
    }
  });
}
Under the Settings section of your Auth0 Application, update the Application URIs inputs with the following:
- 
Application Login URI https://example.com/api/auth/login 
- 
Allowed Callback URLs: https://example.com/api/auth/callback, https://localhost:3000/api/auth/callback
Things to keep in mind for these callback URLs:
- For local development, whenever the local base URL, i.e. https://localhost:3000, changes, remember to also update the value for the AUTH0_BASE_URL variable in your .env.local file.
- For remote development, whenever the remote base URL, i.e. https://example.com, changes, remember to also update the value for the AUTH0_BASE_URL GitHub Actions secret in your GitHub repository's security settings. The page for this can be found under your repository by going to Settings → Security → Secrets and variables → Actions.
 
- 
Allowed Logout URLs https://example.com, https://localhost:3000 
Then, click Save Changes to save the changes for your Auth0 Application.
To debug this error, make sure to inspect the logs for your Auth0 application and view the URL that is being used in the callback.
This URL must be the exact same that is configured in your Application URIs settings of your Auth0 application.
To find the logs for your application, click on Monitoring, then click on Logs.
Next, navigate to your application's URL.
Once you receive the Callback URL Mismatch. error, go back to the logs, click on the Failed Login or Failed Logout log entry, read the object field named description, and take note of the URL that is being used.
If the URL that is shown is different than what is configured in your Application URIs settings, then make the necessary changes to resolve the error.
NOTE:
Double-check whether your domain uses HTTP or HTTPS. Otherwise, you may be confused to find in your Application's logs that your domain's URL uses either http or https.
Sometimes you may want to work within a Docker container. To do that, you will want to access the Docker container's shell by running the following commands:
- 
Start the Docker container in detached mode (assumes your app is running on port 3000): docker run -d -it -p 3000:3000 <IMAGE_ID> 
- 
Get the newly running container's ID: docker ps Copy the ID value to use in the next step. 
- 
Step inside of the container's shell:
docker exec -it <CONTAINER_ID> /bin/sh
Then make your changes within the container. Make sure to restart any services or apps within the container.
docker container stop <CONTAINER_ID>
Then restart the container normally without the detached-mode, -d, flag.
When working inside of an AWS EC2 instance and running your containers within it, instead of waiting for time-consuming CI/CD builds to complete, you may just want to debug your code within the container itself by stepping inside of it and making the necessary changes.
Stepping inside of a Docker container's shell and making quick changes to debug your container may help to speed up your workflow.
Make sure to routinely prune all data from Docker running on the AWS EC2 instance. Before doing so, ALWAYS make sure that you are still able to pull new copies of your desired images from AWS ECR.
To prune all Docker data, run the following command:
docker system prune -a
Create a Custom Docker Network: Create a custom Docker network so that the containers can communicate by their container names.
docker network create personality-lab-network
- 
Start the Next.js container:
docker run -d --network personality-lab-network --name next-app -p 3000:3000 <IMAGE_ID>
- 
Start the WebSocket container:
docker run -d --network personality-lab-network --name u-websocket -p 3001:3001 <IMAGE_ID>
- 
Verify Connectivity: Ensure both containers are running and connected to the same network:
docker network inspect personality-lab-network
You should see both the u-websocket and next-app containers listed under the same network.
- 
Test the WebSocket Connection: Navigate to your Next.js application (http://localhost:3000) and check if the WebSocket connection is established successfully with the wss://u-websocket:3001 endpoint.
- 
Check WebSocket Server Logs: Make sure the WebSocket server is listening on all interfaces (0.0.0.0), not just localhost. This ensures it can accept connections from outside its container.
docker logs u-websocket
- 
Check Environment Variables. Verify that the WebSocket URL is correctly set in your Next.js app. 
- 
Test Network Connectivity: Enter the next-app container and try to curl the WebSocket server to ensure connectivity:
docker exec -it next-app /bin/sh
curl u-websocket:3001
This should show a response if the connection is successful.
The GitHub Actions scripts for both the next-app and u-websocket repositories run automated deployments on each commit, whether via a pull request or a push to any branch.
Each GitHub Actions script:
- Builds the Docker image for that repository.
- Pushes the Docker image to the ECR.
- Pulls the Docker image from the ECR onto the EC2 instance.
- Uses the following environment variables, which need to be individually added to each repository's list of GitHub Secrets (one way to add them from the CLI is sketched after this list):
- SSH_KEY: the full contents of the .pem file. This .pem file must be the same private key that was generated when you created the EC2 instance.
- AWS_EC2_USERNAME: The name of the IAM role of the EC2 instance (e.g. ec2-user).
- AWS_EC2_HOSTNAME: The hostname of the EC2 instance. It is usually the Elastic IP address (e.g. 54.493.101.393)
- AWS_REGION: The region where the ECR repository is located (e.g., us-east-1).
- AWS_ECR_REPOSITORY_NAME: The name of the ECR repository.
- AWS_ACCOUNT_ID: The full Account ID for the AWS account (e.g. 0001-0002-0003).
- AWS_ACCESS_KEY_ID: Part of the AWS authentication credentials. It is created under the IAM role. See 10.1. Getting an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- AWS_SECRET_ACCESS_KEY: Part of the AWS authentication credentials. It is created under the IAM role. See 10.1. Getting an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
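For reference, a minimal way to add these secrets with the GitHub CLI (a sketch; it assumes gh is installed and authenticated against the target repository, and every value below is a placeholder):
gh secret set SSH_KEY < key-pair-name.pem
gh secret set AWS_EC2_USERNAME --body "ec2-user"
gh secret set AWS_EC2_HOSTNAME --body "54.493.101.393"
gh secret set AWS_REGION --body "us-east-1"
gh secret set AWS_ECR_REPOSITORY_NAME --body "jackw/next-app"
gh secret set AWS_ACCOUNT_ID --body "000100020003"
gh secret set AWS_ACCESS_KEY_ID --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "..."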
 
First, you need an IAM user that is used specifically for GitHub Actions. If you already have a GitHub Actions user, move to the next step, 10.2.2 Creating new access keys.
If you don't have one yet, create a new IAM user using the steps outlined below:
- Go to the IAM service in the AWS console.
- Under the "Access management" dropdown menu on the left, click on "Users".
- Click the orange "Create user" button to start the process of creating a new IAM user.
- Enter a name for the new IAM user, e.g. ec2-user-github-actions.
- Leave "access to the AWS Management Console" unchecked.
- Click Next.
- On the "Set permissions" step, select "Attach policies directly".
6.1. Add the AmazonEC2ContainerRegistryPowerUser managed policy.
- Click Next.
- On the Review and create step, click the orange "Create user" button to create the new IAM user.
Next, you will create an access key that will be used for your GitHub repository's GitHub Actions script.
- Go to the IAM service in the AWS console.
- Under the "Access management" dropdown menu on the left, click on "Users".
- Click on the IAM user that you are using specifically for GitHub Actions.
- Click on the "Security credentials" tab.
- Scroll down to the "Access keys" section and click the white "Create access key" button.
- On "Access key best practices & alternatives" step, the select the "Application running outside AWS" option and click the Next.
- Enter a useful description tag value for this secret key. For example, using the name of the repository, e.g. next-app, is a great choice.
- Click the orange "Create access key" button.
After completing the last step, make sure to copy both the Access key ID and the Secret access key values.
You will copy and paste each of these values into the Secrets page for your repository's GitHub Actions Secrets, using the Secret access key as the AWS_SECRET_ACCESS_KEY and the Access key ID as the AWS_ACCESS_KEY_ID.
Steps for how to add Secrets to a GitHub repository can be found on GitHub's official documentation for Creating secrets for a repository.
ssh -i key-pair-name.pem AWS_EC2_USERNAME@AWS_EC2_HOSTNAME
Follow the instructions to install mainline NGINX packages for Amazon Linux 2023 here.
The commands to run are copied below for convenience but may be out-of-date, so always refer to the official documentation on the link shared above.
- 
Install the prerequisites: sudo yum install yum-utils 
- 
To set up the yum repository for Amazon Linux 2023, create the file named /etc/yum.repos.d/nginx.repo with the following contents:
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/amzn/2023/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
priority=9

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/amzn/2023/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
priority=9
- 
By default, the repository for stable nginx packages is used. If you would like to use mainline nginx packages, run the following command: sudo yum-config-manager --enable nginx-mainline 
- 
To install nginx, run the following command:
sudo yum install nginx
In stdout, you should see the Repository used is now nginx-mainline.
- 
When prompted to accept the GPG key, verify that the fingerprint matches 573B FD6B 3D8F BC64 1079 A6AB ABF5 BD82 7BD9 BF62, and if so, accept it.
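Once the package is installed, you can confirm the installed version and start the service before wiring up the app's config (standard commands, shown here as a quick sanity check):
nginx -v
sudo systemctl enable --now nginx
systemctl status nginx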
sudo service nginx restart
sudo vim "/etc/nginx/conf.d/<APP_NAME>.conf"
server {
    listen 80;
    server_name AWS_EC2_HOSTNAME; 
  
    location / {
        proxy_pass http://127.0.0.1:3000/;
    }
}
sudo service nginx restart
Follow this article as a tutorial.
- Make sure to replace anywhere you see apt-get with yum.
- Replace where you see example.com with your custom domain.
- Use your valid email address.
As per F5's guide on getting an NGINX SSL/TLS certificate with Certbot, below is an example of what your NGINX configuration file should look like before generating an SSL certificate with Certbot.
Notice that, to enable access to the Next.js app at the server_name, we add the location block to proxy requests to localhost:3000.
server {    
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    server_name example.com www.example.com;
    location / {
        proxy_pass http://localhost:3000;
    }
}
Below is an example of a configuration file for NGINX after generating an SSL certificate using Certbot.
server {
    root /var/www/html;
    server_name example.com www.example.com;
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    
    location / {
        proxy_pass http://localhost:3000;
    }
}
server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}
Here are changes to enable QUIC and HTTP/3 for the NGINX server:
server {
    root /var/www/html;
    server_name example.com www.example.com;
    listen [::]:443 quic reuseport default_server; # QUIC and HTTP/3 only
    listen 443 quic reuseport default_server; # QUIC and HTTP/3 only
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    gzip on;             # QUIC and HTTP/3 only 
    http2 on;            # HTTP/2 only  
    http3 on;            # QUIC and HTTP/3 only
    http3_hq on;         # QUIC and HTTP/3 only
    quic_retry on;       # QUIC and HTTP/3 only
    ssl_early_data on;   # QUIC and HTTP/3 only
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    location / {
        proxy_pass http://localhost:3000;
        add_header Alt-svc 'h3=":$server_port"; ma=3600'; # QUIC and HTTP/3 only
        add_header x-quic 'h3'; # QUIC and HTTP/3 only
        add_header Cache-Control 'no-cache,no-store'; # QUIC and HTTP/3 only
        add_header X-protocol $server_protocol always; # QUIC and HTTP/3 only
        proxy_set_header Early-Data $ssl_early_data; # QUIC and HTTP/3 only
    }
}
server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}

To ensure that our NGINX server can receive UDP packets, we need to add another security group rule to our EC2 instance's Security Group.
- Under the EC2 service, go to the Security Groups settings.
- Click on the security group that your EC2 instance is using.
- Click the Add Rule button at the bottom to add a new rule.
- For Type, select Custom UDP.
- For Port Range, enter 443, which is the port that our NGINX server will be listening on for HTTP/3 requests.
- For Source, enter 0.0.0.0/0 to allow requests from all IP addresses.
- Click Save to save the changes. (An AWS CLI equivalent is sketched below.)
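If you prefer the command line, the same rule can be added with the AWS CLI, and the HTTP/3 setup can then be spot-checked with curl. This is a sketch under assumptions: the security group ID is a placeholder, and the curl check requires a curl build compiled with HTTP/3 support, which many stock builds lack.

```bash
# Allow inbound UDP on port 443 (QUIC/HTTP/3); replace the placeholder group ID
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol udp \
  --port 443 \
  --cidr 0.0.0.0/0

# Verify that the server answers over HTTP/3 (needs an HTTP/3-enabled curl)
curl --http3 -sI https://example.com | head -n 1
```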
If crontab is not found on your Amazon Linux 2023 EC2 instance, it means that the cronie package, which provides the crontab command, is not installed by default. You can install it with the following steps:
- Install cronie: Use the package manager to install cronie:

  sudo yum install cronie -y

- Start and enable the crond service: After installing cronie, start the crond service and enable it to start on boot:

  sudo systemctl start crond
  sudo systemctl enable crond

- Open the crontab editor: Now you can use the crontab command as expected:

  crontab -e

- Verify the crontab installation: To verify the installation and that the crond service is running, use:

  systemctl status crond

  This should show that the crond service is active and running.
Let’s Encrypt certificates expire after 90 days. We encourage you to renew your certificates automatically. Here we add a cron job to an existing crontab file to do this.
- Open the crontab file:

  crontab -e

- Add the certbot command to run daily. In this example, we run the command every day at noon. The command checks to see whether the certificate on the server will expire within the next 30 days, and renews it if so. The --quiet directive tells certbot not to generate output.

  0 12 * * * /usr/bin/certbot renew --quiet

- Save and close the file. All installed certificates will be automatically renewed and reloaded. (See the note below about reloading NGINX.)
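NGINX only picks up a renewed certificate when it reloads its configuration. If your setup does not already reload NGINX on renewal, one option is Certbot's --deploy-hook, which runs a command only when a certificate was actually renewed. A sketch, assuming NGINX is managed by systemd:

```
# Renew daily at noon and reload NGINX only when a renewal actually happened
0 12 * * * /usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"
```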
Create an ECR repository for each application and/or service of the system.
- Click the orange "Create repository" button.
- Enter a namespace/repo-name. For example, jackw/next-app.
- Under "Encryption settings", select "AWS KMS".
- Click the orange "Create" button.
Make sure to save the namespace/repo-name as a GitHub Actions Secret named AWS_ECR_REPOSITORY_NAME for your specific GitHub repository.
For example, for the ECR with the name jackw/next-app, save this as a GitHub Actions Secret under the next-app GitHub repository, with AWS_ECR_REPOSITORY_NAME as the Secret's name, and jackw/next-app as the Secret's value.
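If you prefer to script these two steps, the equivalent CLI commands look roughly like the following. This is a sketch: the repository name, region, and GitHub repository are the examples above (GH_USERNAME is a placeholder), and gh secret set assumes an authenticated GitHub CLI.

```bash
# Create the private ECR repository with AWS KMS encryption
aws ecr create-repository \
  --repository-name jackw/next-app \
  --encryption-configuration encryptionType=KMS \
  --region us-east-1

# Save the repository name as a GitHub Actions secret on the next-app repository
gh secret set AWS_ECR_REPOSITORY_NAME --repo GH_USERNAME/next-app --body "jackw/next-app"
```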
- Under the "Private registry" menu on the left panel, select "Features & Settings", then select "Permissions".
- Click the orange "Generate statement" button.
- For "Policy type", select the "Pull through cache - scoping" option.
- For "Statement id", select enter a useful description for the permissions. For example, ec2-user-pull-only
- For "IAM entities", select the IAM role that is tied to your EC2 instance. For example, ec2-user.
- Click the orange "Save" button.
- On the "Registry permissions" page, click the "Edit" button for the newly created permissions.
- Under the Resource field, enter the full ARN of each of the private repositories that you created in 12.1 Create a Private Repository.
- Click the orange "Save" button.
Here's a possible error you may receive when calling docker pull from a GitHub Actions script:
Error response from daemon: pull access denied for _**.dkr.ecr.**_.amazonaws.com/***, repository does not exist or may require 'docker login': denied: Your authorization token has expired. Reauthenticate and try again.
To resolve this error, SSH into the EC2 instance and run the aws ecr get-login-password command, piping its output to docker login, as sketched below.
This re-authenticates the Docker client so that docker pull can be used against repositories on AWS ECR.
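A minimal sketch of that re-authentication, assuming the instance's IAM role has ECR pull permissions; the region and account ID are placeholders:

```bash
# Fetch a fresh ECR auth token and feed it to docker login
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
```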
See the StackOverflow post for a complete explanation.
You will find the latest gh version here.
Run the following command.
user=GH_USERNAME repo=REPO_NAME; gh api repos/$user/$repo/actions/runs \
--paginate -q '.workflow_runs[] | select(.head_branch != "master") | "\(.id)"' | \
xargs -n1 -I % gh api repos/$user/$repo/actions/runs/% -X DELETE

Replace GH_USERNAME and REPO_NAME with the desired GitHub username and repo name, respectively.
This will delete all the old workflow runs that aren't on the master branch. You can further tweak this to do what you need.
- You may have to run gh auth login if this is your first time using the GitHub CLI.
- You may change the command to gh api --silent if you prefer not to see the verbose output.
- For the final xargs part of the command chain, the original used -J instead of -I, which is not supported by GNU xargs. -J results in a single command, whereas -I executes the command once for each record, so it's a bit slower.
For example, a public network like public WiFi.
When you have multiple devices on the same public WiFi network, use ngrok, a cross-platform secure-ingress application that exposes local web servers to the internet through a reverse TCP tunnel secured with SSL/TLS (when using HTTPS).
Before you begin to integrate the ngrok software package, create a free account with ngrok at https://ngrok.com.
By using ngrok, you can:
- Maintain a local development environment without deploying your app to a remote server.
- Test real-time features across multiple devices, even over networks that restrict direct device communication like public Wi-Fi.
- Ensure secure connections through ngrok's tunneling, which encrypts traffic between clients and your local server.
For running the next-app on ngrok with HTTPS, you need to do two things:
- Start the ngrok server. In a new terminal window, start the ngrok server by running the following command:

  ngrok http https://localhost:3000

- Go to the website served through https://ngrok.com. On your devices (a maximum of 2 on ngrok's free plan), navigate to the "Forwarding" URL. The Forwarding URL is found in the terminal logs under the "Web Interface" row, and usually looks something like: https://e894-98-227-236-100.ngrok-free.app

- (Optional) Use the free static domain. Using the free static domain saves you from having to copy and paste different Forwarding URLs into your various devices; see the example command after the tips below.
- Shut down ngrok when done:
  - Close the ngrok terminal window after testing to stop the tunnel.
- Monitor traffic:
  - ngrok provides a web interface at http://127.0.0.1:4040 where you can inspect requests and debug issues.
- Upgrade if necessary:
  - The free ngrok plan has limitations (e.g., only 2 connections/devices per URL, tunnels expire after a certain time).
  - Consider upgrading if you need persistent tunnels or additional features.
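A sketch of the optional static-domain variant mentioned in the steps above. The domain is a placeholder claimed on the ngrok dashboard, and the flag name is an assumption that may vary between ngrok v3 releases (recent versions accept --domain or --url):

```bash
# Tunnel the local HTTPS Next.js dev server through a reserved static domain
ngrok http --domain=your-subdomain.ngrok-free.app https://localhost:3000
```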
 
Visual Studio Code has a native Port Forwarding feature that lets you forward a port to access your locally running services over the internet.
This allows you to expose any port over either HTTP or HTTPS, and either publicly or privately.
VS Code's port forwarding feature can be found by pressing CMD + J on a Mac. This shortcut opens the panel containing the TERMINAL, which also includes the PORTS tab along the row of tabs.
For example, to expose next-app to the internet for other devices to connect to your localhost server, do the following:
- Press CMD + J and then click on the "PORTS" tab.
- In the package.json for next-app, make sure that the Node script named dev includes the --experimental-https flag.
- Start the Next.js server by going to the "TERMINAL" tab, and then running:

  npx turbo dev

- Go back to the "PORTS" tab and click on the "Forward a Port" button.
- Enter port 3000 for the port number, which is the default port the Next.js server runs on.
- Press ENTER.
- (Optional) Right-click on the row for the newly forwarded port, go to "Change Port Protocol", and select "HTTPS" to match the protocol that the next-app server uses when running the dev script.
- (Optional) Right-click on the same row again and change "Port Visibility" from "Private" to "Public". This allows the API request to /api/v1/auth/log-in to be made without issues, since the cookie is only set if the URL's protocol is HTTPS, as required by the secure: true option used when setting the user-auth cookie:

  cookies().set(COOKIE_NAME, token, {
    httpOnly: true,
    secure: true,
    sameSite: 'strict',
    path: '/',
  })

- Go to the "Forwarded Address" on any device and use your application like normal.
With Tailscale, you can expose ports for access by other devices that are on the same tailnet.
- First, create an account with Tailscale and create a new tailnet.
- Once you've created a new tailnet, on the main admin dashboard, go to the "Settings" tab.
- Under "Device Approval", toggle on "Manually approve new devices". This will ensure that new devices must be approved by admins of your tailnet before they can access it.
- Under the "Machines" tab, add devices to the new tailnet. These devices could include your phone, Raspberry Pi, Arduino, or other laptops and desktops.
- (Optional) If necessary, invite users to your new tailnet.
- In DNS settings, enable "MagicDNS" and "HTTPS Certificates".
- (Optional) Under the "Settings" tab, enable "Services Collection" to collect and display information about devices running on your tailnet in the admin panel.
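Once a device has joined the tailnet, its tailnet IP address (used as target_device_ip_address in the steps below) can be found in the "Machines" tab of the admin console or, as a sketch, with the Tailscale CLI on the device itself:

```bash
# List the devices on the tailnet along with their tailnet IP addresses
tailscale status

# Print this device's own tailnet IPv4 address
tailscale ip -4
```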
- Turn off any firewalls. On macOS, make sure that the Mac's native firewall, or any other firewall, is turned off. Similarly, on a non-Mac device, make sure that firewalls are turned off.
- Test connectivity to your host device on the tailnet by testing an HTTP connection.
  - Set up a simple Python HTTP server:
    - Open Terminal on the target device.
    - Navigate to a directory you want to serve. For example, to serve your Mac's Desktop:

      cd ~/Desktop

    - Start the HTTP server:

      python3 -m http.server 8000

    - The server will start, listening on port 8000.
  - From another device on your tailnet, using a web browser:
    - Open a web browser.
    - Enter the following URL: http://target_device_ip_address:8000/
    - Replace target_device_ip_address with the target device's tailnet IP address.
  - Verify the connection:
    - Web browser: You should see a directory listing of the served folder. If you have an index.html file in the directory, it will display that page.
    - Command line: The content of the directory or index.html file will be displayed in the terminal (see the curl example after this list).
  - Stop the HTTP server:
    - Return to the Terminal window on the device where the server is running.
    - Press Ctrl + C to stop the server.
- Connect to the target application running locally on the target machine.
  - Start the app. For example, for a Node.js server:

    npm run dev

  - Verify the connection. Just as in the web-browser test above, open a web browser and navigate to the target device's IP address with the port number appended: http://target_device_ip_address:3000/
- Make sure to close the connection once you are finished.
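The command-line check mentioned in the list above can be done with curl from any other device on the tailnet; a sketch, where target_device_ip_address is the tailnet IP found earlier:

```bash
# Fetch the directory listing served by the Python HTTP server
curl http://target_device_ip_address:8000/

# Or hit the locally running Node.js app on its port
curl http://target_device_ip_address:3000/
```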
Aiven is a cloud platform that offers a basket of useful managed cloud services. With offerings that run on Google Cloud, AWS, Microsoft Azure, Oracle Cloud, DigitalOcean, and UpCloud, it streamlines integration of some of the services that these mainstream cloud platforms offer.
One of these integrations is a Valkey cache. Valkey is an open-source (BSD) high-performance key/value datastore that supports a variety of workloads such as caching and message queues, and can also act as a primary database. Valkey can run as either a standalone daemon or in a cluster, with options for replication and high availability.
personality-lab runs a standalone Valkey cache on Aiven.
For documentation on using Valkey on Aiven, view Aiven's official documentation
To create a Valkey cache, follow these steps:
- Create a free account with Aiven.
- On Aiven, create a new project.
- On Aiven, begin the process to create a new Valkey service.
- Select the "Free plan".
- Select "AWS" as the cloud provider.
- Select aws-us-east-1 as the service region.
- For service plan, select "Free-1" from the free tier.
- Give the service a useful name, like my-app-valkey.
- Click "Create free service".
Integrate the Valkey cache with Node.js by following Aiven's quickstart instructions.
Simply copy the SSL URL and use it with the node-redis Node package.
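Before wiring the cache into the app with node-redis, you can sanity-check connectivity from the command line with redis-cli (or valkey-cli), which accepts the same rediss:// URI. This is a sketch: the URI is a placeholder copied from the Aiven console, and it assumes a redis-cli build with TLS support.

```bash
# Quick TLS connectivity check against the Aiven Valkey service
redis-cli -u 'rediss://default:PASSWORD@my-app-valkey-myproject.aivencloud.com:12345' PING
```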
The Valkey cache can be used just as simply as a Redis OSS cache. However, Valkey offers a greater number of performance-enhancing features, which can be reviewed in Valkey's blog post on the first release candidate of Valkey 8.0.
Sometimes the kernel of the EC2 instance needs to be updated. On Amazon Linux 2023, you may receive an update notice that requires the dnf command to be run to install the kernel update without a system reboot.
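A hedged sketch of what that dnf run can look like; check-update and upgrade are standard dnf subcommands, but whether a given kernel update can truly be applied without a reboot depends on the update itself:

```bash
# List pending security updates, then apply them
sudo dnf check-update --security
sudo dnf upgrade --security -y
```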
Other times, you will see a security notice as shown in the image below:
If you see a security notice as shown in the image above, then, to install the security update, you must perform a system reboot by running the following command:
sudo reboot

Once the EC2 instance reboots, you may SSH back into the server and use it as you normally would, assuming there aren't any other issues or security updates.



