# starter-monorepo
This is a base monorepo starter template to kick-start your beautifully organized project, whether it's a fullstack project, a monorepo of multiple libraries and applications, or even just one API server and its related infrastructure deployment and utilities.
Out of the box, the included apps form a fullstack project: a `frontend` Nuxt 4 app, a main `backend` using Hono, and a `backend-convex` Convex app.
- General APIs, such as authentication, are handled by the main `backend`, which is designed to be serverless-compatible and can be deployed anywhere, allowing for the best possible latency, performance, and cost according to your needs.
- `backend-convex` is an optional, modular add-in backend used to power components like AI Chat.
It is recommended to use an AI Agent (Roo Code recommended) to help you set up the monorepo according to your needs; see Utilities.
⏩ This template is powered by Turborepo.
😊 Out-of-the-box, this repo is configured for an SSG frontend Nuxt app and a backend Hono app that serves as the main API, to optimize for cost and simplicity.
- The starter kit is still configured for 100% SSR support; simply change `apps/frontend`'s build script to `nuxt build` to enable SSR builds.
🌩️ SST Ion, an Infrastructure-as-Code solution, with powerful Live development.
- SST is 100% opt-in: you run the `sst` CLI commands yourself, like `sst dev`. Simply remove the `sst` dependency and `sst.config.ts` if you want to use another solution.
- Currently only the `backend` app is configured, which will deploy a Lambda with Function URL enabled (see the sketch below).
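For orientation, a minimal `sst.config.ts` along these lines would deploy the backend as a Lambda with a Function URL; the app name and handler path below are assumptions, so check the actual config in the repo:

```ts
/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app(input) {
    return {
      name: 'starter-monorepo', // assumption: your app name
      removal: input?.stage === 'production' ? 'retain' : 'remove',
      home: 'aws',
    }
  },
  async run() {
    // Deploy the Hono backend as a single Lambda with a public Function URL.
    new sst.aws.Function('Backend', {
      handler: 'apps/backend/src/index.handler', // assumption: the real handler path may differ
      url: true,
    })
  },
})
```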
🔐 Comes with authentication boilerplate via WorkOS AuthKit, see: `/apps/backend/api/auth`
- Add your env variables, DONE!
- Please note that by default `backend` comes with a cookies-based session manager, which has great DX and security and does not require an external database (which also means great performance). However, since the `backend` is decoupled from Nuxt's SSR server, it will not work well with SSR (the session/auth state is not shared). So, if you use SSR, you should implement another auth solution.
- If you have a good session manager implementation, a PR is greatly appreciated!
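To illustrate the idea, here is a minimal sketch of a cookie-based session in Hono using its built-in signed-cookie helpers; the routes, cookie name, and user shape are hypothetical and not the template's actual implementation:

```ts
import { Hono } from 'hono'
import { getSignedCookie, setSignedCookie } from 'hono/cookie'

const app = new Hono()
// Assumption: the signing secret is provided via an env variable.
const SESSION_SECRET = process.env.SESSION_SECRET ?? 'dev-only-secret'

app.get('/api/auth/callback', async (c) => {
  // ...exchange the WorkOS AuthKit code for a user profile here...
  const user = { id: 'user_123', email: 'me@example.com' } // placeholder
  // Store the session in a signed, HTTP-only cookie: no external database needed.
  await setSignedCookie(c, 'session', JSON.stringify(user), SESSION_SECRET, {
    httpOnly: true,
    secure: true,
    sameSite: 'Lax',
    path: '/',
  })
  return c.redirect('/')
})

app.get('/api/auth/me', async (c) => {
  // Verify and read the signed cookie on every request.
  const session = await getSignedCookie(c, SESSION_SECRET, 'session')
  if (!session)
    return c.json({ user: null }, 401)
  return c.json({ user: JSON.parse(session) })
})

export default app
```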
💯 JS is always TypeScript where possible.
Work started on 2025-06-12 for the T3 Chat Cloneathon competition, with no prior AI SDK or chat-streaming experience, but I think I did an amazing job 🫡!
The focus of the project is broader adoption, prioritizing easy-to-access UI/UX; bleeding-edge features like workflows are a low priority, though advanced per-model capabilities and fine-tuning are still expected to be elegantly supported via each model's interface. #48
A super efficient and powerful, yet friendly LLM Chat system, featuring:
- Business-ready: supports a `hosted` provider that you control the billing of.
- Supports other add-in BYOK providers, like `OpenAI`, `OpenRouter`, ...
- Seamless authentication integration with the main `backend`.
- Beautiful syntax highlighting 🌈.
- Thread branching, freezing, and sharing.
- Real-time, multi-agent, multi-user support ¹.
- Invite your family and friends, and play with the Agents together in real-time.
- Or maybe invite your colleagues, and brainstorm together with the help and power of AI.
- Resumable, multi-stream support ¹.
  - Ask follow-up questions while the previous one isn't done; the model is able to pick up what's currently available 🍳🍳.
  - Multiple users can send messages at the same time 😲😲.
- Easy and private: guest, anonymous usage supported.
- Your dad can join and chat with just a shared link 😉, no setup needed.
- Mobile-friendly.
- Fully internationalized, with AI-powered translations and smooth switching between languages.
- Blazingly fast ⚡ with local caching and optimistic updates.
- Designed to be scalable.
  - Things are isolated, and common interfaces are defined and used where possible; there are no tightly coupled hacks that prevent future scaling, things just work, elegantly.
  - Any AI provider compatible with the `@ai-sdk` interface can be added in a few lines of code (see the sketch below); I just don't want to bloat the UI by adding all of them.
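As a sketch of how little code a new provider needs (the model id, env variable, and helper name are illustrative assumptions):

```ts
import { createOpenAI } from '@ai-sdk/openai'
import { streamText } from 'ai'

// Any provider exposing the AI SDK language-model interface plugs in the same way,
// e.g. OpenRouter via its own @ai-sdk-compatible provider package.
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY })

export function streamChatReply(prompt: string) {
  const result = streamText({
    model: openai('gpt-4o-mini'), // assumption: pick whichever model id you expose in the UI
    prompt,
  })
  return result.toTextStreamResponse()
}
```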
- *1: Currently the "stream" received when resuming, or seen by other real-time users in the same thread, is implemented via a custom polling mechanism, not SSE. This was intentionally chosen for a more minimal infrastructure setup and wider hosting support, so smaller user groups can easily host their own version; it is still very performant and efficient (a sketch follows below).
  - There is boilerplate code for SSE resume support; you can simply add a pub-sub to the backend and switch to using SSE resume in the `ChatInterface` component.
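A rough sketch of what such a polling loop can look like on the client; the endpoint path, query parameter, and response shape are purely hypothetical:

```ts
// Poll the thread for new chunks until the backend reports the stream is done.
export async function pollThreadUpdates(
  threadId: string,
  onChunk: (text: string) => void,
  intervalMs = 1000,
) {
  let cursor = 0
  while (true) {
    const res = await fetch(`/api/chat/threads/${threadId}/updates?after=${cursor}`)
    if (!res.ok)
      break
    const { chunks, done } = await res.json() as {
      chunks: { seq: number, text: string }[]
      done: boolean
    }
    for (const chunk of chunks) {
      onChunk(chunk.text)
      cursor = Math.max(cursor, chunk.seq)
    }
    if (done)
      break
    await new Promise(resolve => setTimeout(resolve, intervalMs))
  }
}
```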
- By default, the frontend's `/api/*` routes are proxied to the `backendUrl`.
- The `rpcApi` plugin will call the `/api/*` proxy if the frontend and backend are on the same domain but different ports (e.g. 127.0.0.1).
  - This mimics a production environment where the static frontend and the backend live on the same domain at `/api`, which is the most efficient configuration for CloudFront + Lambda Function URL, or Cloudflare Workers.
- If the `frontend` and `backend` are on different domains, the backend will be called directly without the proxy.
- This can be configured in the frontend's `app.config.ts` (a sketch follows below).
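A minimal sketch of that configuration, assuming a single `backendUrl` key (check the real `apps/frontend/app.config.ts` for the actual option names):

```ts
// apps/frontend/app.config.ts (sketch)
export default defineAppConfig({
  // The rpcApi plugin / proxy decide whether to call this URL directly
  // or go through the same-domain /api/* proxy.
  backendUrl: 'https://api.example.com',
})
```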
- `backend-convex`: a Convex app.
- `@local/locales`: a shared central locales/i18n data library powered by spreadsheet-i18n.
  - 🌐✨🤖 AUTOMATIC localization with AI, powered by lingo.dev, just `pnpm run i18n`.
  - 🔄️ Hot-reload and automatic reload supported, changes are reflected in apps (`frontend`, `backend`) instantly.
- `@local/common`: a shared library that can contain constants, functions, types.
- `@local/common-vue`: a shared library that can contain components, constants, functions, types for Vue-based apps.
- `tsconfig`: `tsconfig.json`s used throughout the monorepo.
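Consuming the shared libraries from an app is just a workspace import; the exported names below are hypothetical:

```ts
// Hypothetical exports, for illustration only.
import { someSharedConstant, someSharedHelper } from '@local/common'
import { SomeSharedButton } from '@local/common-vue'
```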
This Turborepo has some additional tools already set up for you:
- 🧐 ESLint + stylistic formatting rules (antfu)
- 📦📢 `repo-release` script: easily generate a changelog, bump the version, create a GitHub release, and publish your packages to npm.
- 📚 A few more goodies like:
- lint-staged pre-commit hook
- 🤖 Initialization prompt for AI Agents to modify the monorepo according to your needs.
  - To start, open a chat with your AI Agent and include the `INIT_PROMPT.md` file in your prompt.
To build all apps and packages, run the following command:
pnpm run build
If you just want a quick look without having to set up anything, you can use `pnpm run dev:noConvex`. This skips `backend-convex`, which is the only component that requires initial setup; this of course means related features (AI Chat) are disabled.
To develop all apps and packages, run the following command:
pnpm run dev
For local development environment variables / secrets, create a copy of `.env.dev` as `.env.dev.local`.
Guide to set up local development for Cloudflare workerd runtime testing:
- (Optional) Run the Convex dev server if you use Convex.
- Configure `wrangler.jsonc`, specifically the `vars` block, so that it properly targets the local env.
- (Optional) Configure `apps/frontend/.env.workerd.dev` if you use Convex or a different IP/port.
- Build a new static dist for local dev:
  - Start the local `backend` server.
  - Run the `build:workerdLocal` script for `frontend`.
- Run `pnpm dlx wrangler dev` to start the wrangler dev server.
  - `workerd` does not work with Alpine Linux, so if you use the included Dev Container, change the base image to another distro.
- You can add your custom deploy instructions in the `deploy` script and `scripts/deploy.sh` in each app; it could be a full script that deploys to a platform, or the necessary actions before some platform integration deploys it. `frontend` will only start building and deploying after all backends are deployed, so it has context for SSG.
- The repo also contains some deployment preset samples:
  - Action to deploy the frontend to GitHub Pages.
  - Wrangler configured to deploy the fullstack to Cloudflare: just run `npx wrangler deploy` or connect and deploy it through the Cloudflare Dashboard.
    - Wrangler will deploy `backend` and `frontend` at the same time, which might cause `frontend` to have old context for SSG; you should trigger a redeploy in such a case.
  - Deploy the backend to Lambda via SST.
- Some more deployment notes:
  - To enable deploying with Convex in production, simply rename the `_deploy` script to `deploy` in the `backend-convex` app, run the deploy script once manually to get Convex's production URL, and set it as the `NUXT_PUBLIC_CONVEX_URL` env in `frontend`'s `.env.prod` file or as a CI / build machine env variable.
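For reference, `NUXT_PUBLIC_*` env variables override Nuxt's public runtime config, so the frontend can pick up the Convex URL roughly like this (the `convexUrl` key is an assumption, check the frontend's `nuxt.config.ts` for the real key):

```ts
// nuxt.config.ts (sketch)
export default defineNuxtConfig({
  runtimeConfig: {
    public: {
      // Overridden by NUXT_PUBLIC_CONVEX_URL at build/run time.
      convexUrl: '',
    },
  },
})
```

It can then be read anywhere in the app via `useRuntimeConfig().public.convexUrl`.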
Imports should not be separated by empty lines, and they are sorted automatically by ESLint.
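For example (module names are illustrative):

```ts
import { computed, ref } from 'vue'
import { someSharedHelper } from '@local/common'
import { useChatThread } from '~/composables/useChatThread'
```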
The project comes with a localcert SSL certificate at `locals/common/dev` to enable HTTPS for local development (some 3rd-party services require local dev to run over HTTPS, so it is enabled by default). It was generated with mkcert; you can install mkcert and generate your own certificate to replace it, or install `localcert.crt` into your trusted CAs to remove the untrusted SSL warning.
Turborepo can use a technique known as Remote Caching to share cache artifacts across machines, enabling you to share build caches with your team and CI/CD pipelines.
By default, Turborepo will cache locally. To enable Remote Caching you will need an account with Vercel. If you don't have an account you can create one, then enter the following command:
npx turbo login
This will authenticate the Turborepo CLI with your Vercel account.
Next, you can link your Turborepo to your Remote Cache by running the following command from the root of your Turborepo:
npx turbo link
Learn more about the tech used in this repo: