EdgeTTS is a highly scalable TTS API that is compatible with the OpenAI API.
It is built with Hono and designed to be deployed on Cloudflare Workers.
- OpenAI Compatible: Follows the `/v1/audio/speech` endpoint structure.
- High-Quality Voices: Access to Microsoft Edge's extensive library of neural voices.
- Scalable: Runs on Cloudflare's global edge network, ensuring low latency and high availability.
- Secure: Protected by API key authentication.
- Streaming Support: Streams audio data in real time.
Clone the repository and install dependencies.

```bash
git clone https://github.com/taowang1993/edgetts.git
cd edgetts
npm install
```
Configure `wrangler.toml` with your desired API key.

```bash
cp wrangler.toml.template wrangler.toml
```
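The exact setting depends on the template; below is a minimal sketch of what the API key entry might look like (the `API_KEY` variable name here is hypothetical; check `wrangler.toml.template` for the real one):

```toml
# wrangler.toml — illustrative sketch only; mirror the structure of
# wrangler.toml.template. The API_KEY variable name is hypothetical.
name = "edgetts"

[vars]
API_KEY = "your-secret-api-key"
```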
Deploy to your Cloudflare account.

```bash
npm run deploy
```
```bash
curl --request POST \
  --url https://edgetts.{YOUR_ACCOUNT}.workers.dev/v1/audio/speech \
  --header 'Authorization: Bearer {API_KEY}' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "tts-1",
    "input": "Hello, world! This is a test of the API.",
    "voice": "en-US-EmmaMultilingualNeural"
  }' \
  --output test.mp3
```
This command saves the synthesized audio to a file named `test.mp3`.
- `model` (string, optional): The TTS model. Can be `tts-1` or `tts-1-hd`.
- `input` (string, required): The text to synthesize.
- `voice` (string, required): The voice to use for synthesis. Here are the best multilingual voices:
  - `en-US-AvaMultilingualNeural`
  - `en-US-AndrewMultilingualNeural`
  - `en-US-EmmaMultilingualNeural`
  - `en-US-BrianMultilingualNeural`
  - `fr-FR-VivienneMultilingualNeural`
  - `de-DE-SeraphinaMultilingualNeural`

For more voices, see `voices.yaml`.
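Because the endpoint mirrors the OpenAI request shape, it is easy to call from code as well. Here is a minimal TypeScript sketch using the standard `fetch` API; the worker URL and `API_KEY` placeholders are the same ones used in the curl example and must be replaced with your own values:

```typescript
// Minimal client sketch for the OpenAI-compatible endpoint.
// ENDPOINT and API_KEY are placeholders; substitute your own values.
const ENDPOINT = 'https://edgetts.{YOUR_ACCOUNT}.workers.dev/v1/audio/speech';
const API_KEY = '{API_KEY}';

async function synthesize(input: string, voice: string): Promise<ArrayBuffer> {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ model: 'tts-1', input, voice }),
  });
  if (!res.ok) throw new Error(`TTS request failed: ${res.status}`);
  return res.arrayBuffer();
}

// Usage (Node.js 18+): fetch audio and write it to disk.
const audio = await synthesize('Hello, world!', 'en-US-EmmaMultilingualNeural');
const fs = await import('fs/promises');
await fs.writeFile('test.mp3', Buffer.from(audio));
```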
This is a universal TypeScript conversion of the Python `edge-tts` library. It allows you to use Microsoft Edge's online text-to-speech service from Node.js, browsers, and any JavaScript environment.
- Multiple Entry Points: Choose between Node.js, browser, or isomorphic APIs
- Cross-Platform: Works in Node.js, browsers, Deno, Bun, and edge runtimes
- Tree-Shakable: Import only what you need for optimal bundle size
- Isomorphic: Same API works across all environments
- Zero Dependencies: Browser builds have no external dependencies
- Type Safe: Full TypeScript support with comprehensive type definitions
This package provides high fidelity to the original Python implementation, replicating the specific headers and WebSocket communication necessary to interact with Microsoft's service.
```bash
npm install edge-tts-universal
# or
yarn add edge-tts-universal
```
This package provides four entry points for maximum compatibility:

```typescript
// Main entry point (Node.js)
import {
  EdgeTTS,
  Communicate,
  IsomorphicCommunicate,
} from 'edge-tts-universal';
```

```typescript
// Browser-only build
import { EdgeTTS, Communicate } from 'edge-tts-universal/browser';
```

```typescript
// Isomorphic build (Node.js and browsers)
import { EdgeTTS, Communicate } from 'edge-tts-universal/isomorphic';
```

```typescript
// Web Worker build
import {
  EdgeTTS,
  Communicate,
  postAudioMessage,
} from 'edge-tts-universal/webworker';
```
```html
<!-- Via unpkg -->
<script type="module">
  import { EdgeTTS } from 'https://unpkg.com/edge-tts-universal/dist/browser.js';
</script>

<!-- Via jsdelivr -->
<script type="module">
  import { EdgeTTS } from 'https://cdn.jsdelivr.net/npm/edge-tts-universal/dist/browser.js';
</script>
```
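Once imported, the browser build can synthesize and play audio entirely client-side. A minimal sketch, assuming `result.audio` is a Blob as in the Node.js examples below (where it is read with `arrayBuffer()`):

```typescript
import { EdgeTTS } from 'edge-tts-universal/browser';

// One-shot synthesis in the browser; result.audio is assumed to be
// a Blob, matching the simple API shown in the Node.js examples below.
const tts = new EdgeTTS('Hello from the browser!', 'en-US-EmmaMultilingualNeural');
const result = await tts.synthesize();

// Play the synthesized audio via an object URL.
const url = URL.createObjectURL(result.audio);
await new Audio(url).play();
```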
Choose the right entry point for optimal bundle size:

| Entry Point | Bundle Size* | Use Case | Dependencies |
| --- | --- | --- | --- |
| `edge-tts-universal` | ~46KB | Node.js apps | All deps |
| `edge-tts-universal/browser` | ~30KB | Browser apps | Zero deps |
| `edge-tts-universal/isomorphic` | ~36KB | Universal apps | Isomorphic deps |
| `edge-tts-universal/webworker` | ~36KB | Web Workers | Isomorphic deps |

\* Minified + gzipped estimates
```typescript
import { EdgeTTS } from 'edge-tts-universal';
import fs from 'fs/promises';

// Simple one-shot synthesis
const tts = new EdgeTTS('Hello, world!', 'en-US-EmmaMultilingualNeural');
const result = await tts.synthesize();

// Save audio file
const audioBuffer = Buffer.from(await result.audio.arrayBuffer());
await fs.writeFile('output.mp3', audioBuffer);
```
```typescript
import { Communicate } from 'edge-tts-universal';
import fs from 'fs/promises';

const communicate = new Communicate('Hello, world!', {
  voice: 'en-US-EmmaMultilingualNeural',
});

const buffers: Buffer[] = [];
for await (const chunk of communicate.stream()) {
  if (chunk.type === 'audio' && chunk.data) {
    buffers.push(chunk.data);
  }
}

await fs.writeFile('output.mp3', Buffer.concat(buffers));
```
```typescript
import { IsomorphicCommunicate } from 'edge-tts-universal';

// Works in both Node.js and browsers (subject to CORS policy)
const communicate = new IsomorphicCommunicate('Hello, universal world!', {
  voice: 'en-US-EmmaMultilingualNeural',
});

const audioChunks: Buffer[] = [];
for await (const chunk of communicate.stream()) {
  if (chunk.type === 'audio' && chunk.data) {
    audioChunks.push(chunk.data);
  }
}

// Environment-specific handling
const isNode = typeof process !== 'undefined' && process.versions?.node;
if (isNode) {
  // Node.js - save to file
  const fs = await import('fs/promises');
  await fs.writeFile('output.mp3', Buffer.concat(audioChunks));
} else {
  // Browser - create audio element
  const audioBlob = new Blob(audioChunks, { type: 'audio/mpeg' });
  const audioUrl = URL.createObjectURL(audioBlob);
  // Use audioUrl with <audio> element
}
```
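The `webworker` entry point supports the same streaming pattern off the main thread. A minimal sketch; the `{ type, chunks }` message shape posted back to the main thread is our own convention, not a library API:

```typescript
// worker.ts — run synthesis off the main thread (illustrative sketch)
import { Communicate } from 'edge-tts-universal/webworker';

self.onmessage = async (event: MessageEvent<{ text: string; voice: string }>) => {
  const communicate = new Communicate(event.data.text, {
    voice: event.data.voice,
  });

  const chunks: Uint8Array[] = [];
  for await (const chunk of communicate.stream()) {
    if (chunk.type === 'audio' && chunk.data) {
      chunks.push(chunk.data);
    }
  }

  // Hand the audio back to the main thread; the { type, chunks } shape
  // is our own convention, not part of the library.
  self.postMessage({ type: 'audio', chunks });
};
```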
### Simple API Usage (Recommended)

Here's how to use the simple, promise-based API for quick synthesis:
```typescript
// examples/simple-api.ts
import { EdgeTTS, createVTT, createSRT } from 'edge-tts-universal';
import { promises as fs } from 'fs';
import path from 'path';

const TEXT = 'Hello, world! This is a test of the simple edge-tts API.';
const VOICE = 'en-US-EmmaMultilingualNeural';
const OUTPUT_FILE = path.join(__dirname, 'simple-test.mp3');

async function main() {
  // Create TTS instance with prosody options
  const tts = new EdgeTTS(TEXT, VOICE, {
    rate: '+10%',
    volume: '+0%',
    pitch: '+0Hz',
  });

  try {
    // Synthesize speech (one-shot)
    const result = await tts.synthesize();

    // Save audio file
    const audioBuffer = Buffer.from(await result.audio.arrayBuffer());
    await fs.writeFile(OUTPUT_FILE, audioBuffer);

    // Generate subtitle files
    const vttContent = createVTT(result.subtitle);
    const srtContent = createSRT(result.subtitle);
    await fs.writeFile('subtitles.vtt', vttContent);
    await fs.writeFile('subtitles.srt', srtContent);

    console.log(`Audio saved to ${OUTPUT_FILE}`);
    console.log(`Generated ${result.subtitle.length} word boundaries`);
  } catch (error) {
    console.error('Synthesis failed:', error);
  }
}

main().catch(console.error);
```
### Advanced Streaming Usage

Here is an example using the advanced streaming API for real-time processing:
```typescript
// examples/streaming.ts
import { Communicate } from 'edge-tts-universal';
import { promises as fs } from 'fs';
import path from 'path';

const TEXT =
  'Hello, world! This is a test of the new edge-tts Node.js library.';
const VOICE = 'en-US-EmmaMultilingualNeural';
const OUTPUT_FILE = path.join(__dirname, 'test.mp3');

async function main() {
  const communicate = new Communicate(TEXT, { voice: VOICE });

  const buffers: Buffer[] = [];
  for await (const chunk of communicate.stream()) {
    if (chunk.type === 'audio' && chunk.data) {
      buffers.push(chunk.data);
    }
  }

  const finalBuffer = Buffer.concat(buffers);
  await fs.writeFile(OUTPUT_FILE, finalBuffer);
  console.log(`Audio saved to ${OUTPUT_FILE}`);
}

main().catch(console.error);
```
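Because `stream()` yields audio as it arrives, the same loop can feed a live HTTP response instead of a file. A minimal sketch using Node's built-in `http` module; the port and the hard-coded text are arbitrary choices for illustration:

```typescript
import http from 'http';
import { Communicate } from 'edge-tts-universal';

// Stream synthesized audio to clients as it is produced,
// rather than buffering the whole file first.
http
  .createServer(async (req, res) => {
    res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
    const communicate = new Communicate('Streaming straight to the client.', {
      voice: 'en-US-EmmaMultilingualNeural',
    });
    for await (const chunk of communicate.stream()) {
      if (chunk.type === 'audio' && chunk.data) {
        res.write(chunk.data); // forward each chunk immediately
      }
    }
    res.end();
  })
  .listen(3000); // arbitrary port for this sketch
```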
### Listing and Finding Voices

You can list all available voices and filter them by criteria.
```typescript
// examples/listVoices.ts
import { VoicesManager } from 'edge-tts-universal';

async function main() {
  const voicesManager = await VoicesManager.create();

  // Find all English voices
  const voices = voicesManager.find({ Language: 'en' });
  console.log(
    'English voices:',
    voices.map((v) => v.ShortName)
  );

  // Find female US voices
  const femaleUsVoices = voicesManager.find({
    Gender: 'Female',
    Locale: 'en-US',
  });
  console.log(
    'Female US voices:',
    femaleUsVoices.map((v) => v.ShortName)
  );
}

main().catch(console.error);
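A natural follow-up is to pick one of the matching voices and synthesize with it. The sketch below combines `VoicesManager` and `Communicate` from the examples above; the random selection is purely illustrative:

```typescript
import { Communicate, VoicesManager } from 'edge-tts-universal';
import { promises as fs } from 'fs';

const voicesManager = await VoicesManager.create();

// Pick an arbitrary female US voice from the filtered list
const candidates = voicesManager.find({ Gender: 'Female', Locale: 'en-US' });
const voice = candidates[Math.floor(Math.random() * candidates.length)];

const communicate = new Communicate('Chosen at random!', {
  voice: voice.ShortName,
});

const buffers: Buffer[] = [];
for await (const chunk of communicate.stream()) {
  if (chunk.type === 'audio' && chunk.data) {
    buffers.push(chunk.data);
  }
}
await fs.writeFile('random-voice.mp3', Buffer.concat(buffers));
```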
### Streaming with Subtitles (WordBoundary events)

The `stream()` method provides `WordBoundary` events for generating subtitles.
```typescript
// examples/streaming.ts
import { Communicate, SubMaker } from 'edge-tts-universal';

const TEXT = 'This is a test of the streaming functionality, with subtitles.';
const VOICE = 'en-GB-SoniaNeural';

async function main() {
  const communicate = new Communicate(TEXT, { voice: VOICE });
  const subMaker = new SubMaker();

  for await (const chunk of communicate.stream()) {
    if (chunk.type === 'audio' && chunk.data) {
      // Do something with the audio data, e.g., stream it to a client.
      console.log(`Received audio chunk of size: ${chunk.data.length}`);
    } else if (chunk.type === 'WordBoundary') {
      subMaker.feed(chunk);
    }
  }

  // Get the subtitles in SRT format.
  const srt = subMaker.getSrt();
  console.log('\nGenerated Subtitles (SRT):\n', srt);
}

main().catch(console.error);
```
### Complete API Documentation
The main exports of the package are:

Simple API:

- `EdgeTTS` - Simple, promise-based TTS class for one-shot synthesis
- `createVTT` / `createSRT` - Utility functions for subtitle generation

Advanced API:

- `Communicate` - Advanced streaming TTS class for real-time processing
- `VoicesManager` - A class to find and filter voices
- `listVoices` - A function to get all available voices
- `SubMaker` - A utility to generate SRT subtitles from `WordBoundary` events

Isomorphic (Universal) API:

- `IsomorphicCommunicate` - Universal TTS class that works in Node.js and browsers
- `IsomorphicVoicesManager` - Universal voice management with environment detection
- `listVoicesIsomorphic` - Universal voice listing using cross-fetch
- `IsomorphicDRM` - Cross-platform security token generation

Common:

- Exception classes - `NoAudioReceived`, `WebSocketError`, etc. (see the sketch below)
- TypeScript types - Complete type definitions for voices, options, and stream chunks
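As a sketch of how the exception classes might be used, assuming they are thrown by `synthesize()` and exported from the main entry point:

```typescript
import { EdgeTTS, NoAudioReceived, WebSocketError } from 'edge-tts-universal';

const tts = new EdgeTTS('Hello, world!', 'en-US-EmmaMultilingualNeural');

try {
  const result = await tts.synthesize();
  console.log(`Received ${result.subtitle.length} word boundaries`);
} catch (error) {
  // Distinguish library-specific failures from everything else.
  if (error instanceof NoAudioReceived) {
    console.error('The service returned no audio for this input.');
  } else if (error instanceof WebSocketError) {
    console.error('The WebSocket connection to the service failed.');
  } else {
    throw error;
  }
}
```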
All three APIs use the same robust infrastructure, including DRM security handling, error recovery, proxy support, and all Microsoft Edge authentication features. The isomorphic API provides universal compatibility through environment detection and isomorphic packages.
For detailed documentation, examples, and advanced usage patterns, see the comprehensive API guide.