Next.MetadataOutlet

by hammeiam
Kokoro Speech: natural-sounding Kokoro TTS with customizable voices and playback speed — fast, reliable text-to-speech.
Provides text-to-speech capabilities using the Kokoro TTS model, enabling natural-sounding voice output with customizable playback speed and voice selection, backed by robust error handling and temporary file management.
best for
- Adding voice output to applications
- Creating audio content from text
- Accessibility features for reading text aloud
capabilities
- Convert text to speech with multiple voice options
- Adjust speech speed from 0.5x to 2.0x
- List all available TTS voices
- Check TTS model initialization status
what it does
Converts text to natural-sounding speech using the Kokoro TTS model with customizable voice selection and playback speed.
how to install
You can install Kokoro Speech in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
license
MIT
Kokoro Speech is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
readme
Speech MCP Server
A Model Context Protocol server that provides text-to-speech capabilities using the Kokoro TTS model.
Configuration
The server can be configured using the following environment variables:
| Variable | Description | Default | Valid Range |
|---|---|---|---|
| MCP_DEFAULT_SPEECH_SPEED | Default speed multiplier for text-to-speech | 1.1 | 0.5 to 2.0 |
| MCP_DEFAULT_VOICE | Default voice for text-to-speech | af_bella | Any valid voice ID |
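To illustrate how these variables are meant to behave, here is a minimal sketch of reading them with the documented defaults and clamping the speed into the valid 0.5–2.0 range. The helper name `parseSpeechConfig` is hypothetical and not part of speech-mcp-server; this only shows the documented defaults and bounds.

```typescript
// Hypothetical helper illustrating the documented env vars; not part of
// speech-mcp-server itself.
interface SpeechConfig {
  speed: number; // clamped to the documented 0.5–2.0 range
  voice: string;
}

function parseSpeechConfig(env: Record<string, string | undefined>): SpeechConfig {
  const rawSpeed = Number(env.MCP_DEFAULT_SPEECH_SPEED ?? "1.1");
  // Fall back to the documented default when the value is not numeric,
  // otherwise clamp into the valid range.
  const speed = Number.isFinite(rawSpeed)
    ? Math.min(2.0, Math.max(0.5, rawSpeed))
    : 1.1;
  return { speed, voice: env.MCP_DEFAULT_VOICE ?? "af_bella" };
}
```

With no variables set, this yields the documented defaults (speed 1.1, voice af_bella); an out-of-range value such as 5 would be clamped to 2.0.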
In Cursor:
{
"mcpServers": {
"speech": {
"command": "npx",
"args": [
"-y",
"speech-mcp-server"
],
"env": {
"MCP_DEFAULT_SPEECH_SPEED": "1.3",
"MCP_DEFAULT_VOICE": "af_bella"
}
}
}
}
Features
- 🎯 High-quality text-to-speech using Kokoro TTS model
- 🗣️ Multiple voice options available
- 🎛️ Customizable speech parameters (voice, speed)
- 🔌 MCP-compliant interface
- 📦 Easy installation and setup
- 🚀 No API key required
Installation
# Using npm
npm install speech-mcp-server
# Using pnpm (recommended)
pnpm add speech-mcp-server
# Using yarn
yarn add speech-mcp-server
Usage
Run the server:
# Using default configuration
npm start
# With custom configuration
MCP_DEFAULT_SPEECH_SPEED=1.5 MCP_DEFAULT_VOICE=af_bella npm start
The server provides the following MCP tools:
- text_to_speech: Basic text-to-speech conversion
- text_to_speech_with_options: Text-to-speech with customizable speed
- list_voices: List all available voices
- get_model_status: Check the initialization status of the TTS model
Development
# Clone the repository
git clone <your-repo-url>
cd speech-mcp-server
# Install dependencies
pnpm install
# Start development server with auto-reload
pnpm dev
# Build the project
pnpm build
# Run linting
pnpm lint
# Format code
pnpm format
# Test with MCP Inspector
pnpm inspector
Available Tools
1. text_to_speech
Converts text to speech using the default settings.
{
"type": "request",
"id": "1",
"method": "call_tool",
"params": {
"name": "text_to_speech",
"arguments": {
"text": "Hello world",
"voice": "af_bella" // optional
}
}
}
2. text_to_speech_with_options
Converts text to speech with customizable parameters.
{
"type": "request",
"id": "1",
"method": "call_tool",
"params": {
"name": "text_to_speech_with_options",
"arguments": {
"text": "Hello world",
"voice": "af_bella", // optional
"speed": 1.0 // optional (0.5 to 2.0)
}
}
}
3. list_voices
Lists all available voices for text-to-speech.
{
"type": "request",
"id": "1",
"method": "list_voices",
"params": {}
}
4. get_model_status
Check the current status of the TTS model initialization. This is particularly useful when first starting the server, as the model needs to be downloaded and initialized.
{
"type": "request",
"id": "1",
"method": "call_tool",
"params": {
"name": "get_model_status",
"arguments": {}
}
}
Response example:
{
"content": [{
"type": "text",
"text": "Model status: initializing (5s elapsed)"
}]
}
Possible status values:
- uninitialized: Model initialization hasn't started
- initializing: Model is being downloaded and initialized
- ready: Model is ready to use
- error: An error occurred during initialization
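Because the status is embedded in a plain-text response like the example above, a client has to extract the keyword itself. Here is a minimal sketch under that assumption; `parseModelStatus` is a hypothetical helper, not part of speech-mcp-server.

```typescript
// Hypothetical helper: extract the status keyword from a response text
// shaped like "Model status: initializing (5s elapsed)".
type ModelStatus = "uninitialized" | "initializing" | "ready" | "error";

function parseModelStatus(text: string): ModelStatus | null {
  const match = /^Model status: (\w+)/.exec(text);
  const status = match?.[1];
  return status === "uninitialized" || status === "initializing" ||
         status === "ready" || status === "error"
    ? (status as ModelStatus)
    : null;
}
```

A client could poll get_model_status and wait until this returns "ready" before sending text_to_speech requests.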
Testing
You can test the server using the MCP Inspector or by sending raw JSON messages:
# List available tools
echo '{"type":"request","id":"1","method":"list_tools","params":{}}' | node dist/index.js
# List available voices
echo '{"type":"request","id":"2","method":"list_voices","params":{}}' | node dist/index.js
# Convert text to speech
echo '{"type":"request","id":"3","method":"call_tool","params":{"name":"text_to_speech","arguments":{"text":"Hello world","voice":"af_bella"}}}' | node dist/index.js
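Rather than hand-writing the JSON strings above, you can build them programmatically. This sketch mirrors the request shape used in the echo examples; `buildCallToolRequest` is a hypothetical helper, not an exported API of the server.

```typescript
// Hypothetical helper mirroring the raw call_tool request shape used in
// the testing examples above.
function buildCallToolRequest(
  id: string,
  name: string,
  args: Record<string, unknown>,
): string {
  return JSON.stringify({
    type: "request",
    id,
    method: "call_tool",
    params: { name, arguments: args },
  });
}

// Example: the same request as the third echo command above.
const request = buildCallToolRequest("3", "text_to_speech", {
  text: "Hello world",
  voice: "af_bella",
});
```

The resulting string can be piped to `node dist/index.js` exactly like the echo examples.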
Integration with Claude Desktop
To use this server with Claude Desktop, add the following to your Claude Desktop config file (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"speech": {
"command": "npx",
"args": ["@decodershq/speech-mcp-server"]
}
}
}
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT License - see the LICENSE file for details.
Troubleshooting
Model Initialization Issues
The server automatically attempts to download and initialize the TTS model on startup. If you encounter initialization errors:
- The server will automatically retry up to 3 times, with a cleanup between attempts
- Use the get_model_status tool to monitor initialization progress and any errors
- If initialization fails after all retries, try manually removing the model files:
# Remove model files (MacOS/Linux)
rm -rf ~/.npm/_npx/**/node_modules/@huggingface/transformers/.cache/onnx-community/Kokoro-82M-v1.0-ONNX/onnx/model_quantized.onnx
rm -rf ~/.cache/huggingface/transformers/onnx-community/Kokoro-82M-v1.0-ONNX/onnx/model_quantized.onnx
# Then restart the server
npm start
The get_model_status tool also reports retry information in its response:
{
"content": [{
"type": "text",
"text": "Model status: initializing (5s elapsed, retry 1/3)"
}]
}
FAQ
- What is the Next.MetadataOutlet MCP server?
- Next.MetadataOutlet is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
- How do MCP servers relate to agent skills?
- Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
- How are reviews shown for Next.MetadataOutlet?
- This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
Ratings
4.5 ★★★★★ (10 reviews)

- ★★★★★ Shikha Mishra · Oct 10, 2024
Next.MetadataOutlet is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.
- ★★★★★ Piyush G · Sep 9, 2024
We evaluated Next.MetadataOutlet against two servers with overlapping tools; this profile had the clearer scope statement.
- ★★★★★ Chaitanya Patil · Aug 8, 2024
Useful MCP listing: Next.MetadataOutlet is the kind of server we cite when onboarding engineers to host + tool permissions.
- ★★★★★ Sakshi Patil · Jul 7, 2024
Next.MetadataOutlet reduced integration guesswork — categories and install configs on the listing matched the upstream repo.
- ★★★★★ Ganesh Mohane · Jun 6, 2024
I recommend Next.MetadataOutlet for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.
- ★★★★★ Oshnikdeep · May 5, 2024
Strong directory entry: Next.MetadataOutlet surfaces stars and publisher context so we could sanity-check maintenance before adopting.
- ★★★★★ Dhruvi Jain · Apr 4, 2024
Next.MetadataOutlet has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.
- ★★★★★ Rahul Santra · Mar 3, 2024
According to our notes, Next.MetadataOutlet benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.
- ★★★★★ Pratham Ware · Feb 2, 2024
We wired Next.MetadataOutlet into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.
- ★★★★★ Yash Thakker · Jan 1, 2024
Next.MetadataOutlet is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.