ai-ml · developer-tools

Llama.cpp Bridge

by openconstruct

Llama.cpp Bridge connects local llama-server instances to MCP clients, enabling chat, health checks, and configurable generation parameters for llama.cpp models.

Bridges local llama-server instances with MCP clients, providing a chat interface, health monitoring, and configurable generation parameters for integrating llama.cpp models with desktop applications.

github stars

8

  • No cloud API keys required
  • Full conversation support
  • Built-in testing tools

best for

  • AI researchers running local models
  • Privacy-focused users avoiding cloud APIs
  • Developers integrating local LLMs with desktop workflows

capabilities

  • Chat with local llama.cpp models through Claude Desktop
  • Control generation parameters like temperature and max_tokens
  • Monitor llama-server health and status
  • Track performance metrics and token usage
  • Test model capabilities with built-in tools

what it does

Connects Claude Desktop to your local llama.cpp models, letting you chat with local LLMs directly through Claude's interface.
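Under the hood, the bridge is essentially a thin HTTP client in front of llama-server. As a rough TypeScript sketch of that round trip (assuming llama-server's standard /completion endpoint and its prompt/n_predict/temperature fields; the bridge's actual internals may differ), see below:

// Minimal round trip to llama-server (Node 18+, global fetch).
// The default URL and env variable name match this page's README.
const LLAMA_SERVER_URL = process.env.LLAMA_SERVER_URL ?? "http://localhost:8080";

async function complete(prompt: string): Promise<string> {
  const res = await fetch(`${LLAMA_SERVER_URL}/completion`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, n_predict: 256, temperature: 0.7 }),
  });
  if (!res.ok) throw new Error(`llama-server returned ${res.status}`);
  const data = await res.json();
  return data.content; // llama-server puts the generated text in `content`
}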

about

Llama.cpp Bridge is a community-built MCP server published by openconstruct that provides AI assistants with tools and capabilities via the Model Context Protocol. It connects local llama-server instances to MCP clients, enabling chat, health checks, and configurable generation parameters. It is categorized under AI/ML and Developer Tools.

how to install

You can install Llama.cpp Bridge in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

MIT

Llama.cpp Bridge is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

LibreModel MCP Server 🤖

A Model Context Protocol (MCP) server that bridges Claude Desktop with your local LLM instance running via llama-server.

Features

  • 💬 Full conversation support with LibreModel through Claude Desktop
  • 🎛️ Complete parameter control (temperature, max_tokens, top_p, top_k)
  • ✅ Health monitoring and server status checks
  • 🧪 Built-in testing tools for different capabilities
  • 📊 Performance metrics and token usage tracking
  • 🔧 Easy configuration via environment variables

Quick Start

Install from npm:

npm install @openconstruct/llama-mcp-server

Or build from source:


1. Install Dependencies

cd llama-mcp
npm install

2. Build the Server

npm run build

3. Start Your LibreModel

Make sure llama-server is running with your model:

./llama-server -m lm37.gguf -c 2048 --port 8080
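Before wiring up Claude Desktop, it is worth confirming the server answers. A quick probe (this assumes the /health endpoint shipped with recent llama.cpp builds):

// Reachability check for llama-server; run as an ES module on Node 18+.
const res = await fetch("http://localhost:8080/health");
console.log(res.status, await res.text()); // expect 200 and a small JSON status body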

4. Configure Claude Desktop

Add this to your Claude Desktop configuration (~/.config/claude/claude_desktop_config.json):

{
  "mcpServers": {
    "libremodel": {
      "command": "node",
      "args": ["/home/jerr/llama-mcp/dist/index.js"]
    }
  }
}
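Claude Desktop's mcpServers entries also accept an "env" map, so if your llama-server runs on a non-default URL you can add, for example, "env": { "LLAMA_SERVER_URL": "http://localhost:9090" } (port illustrative) to the entry above; the variable itself is documented under Configuration below.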

5. Restart Claude Desktop

Claude will now have access to LibreModel through MCP!

Usage

Once configured, you can use these tools in Claude Desktop:

💬 chat - Main conversation tool

Use the chat tool to ask LibreModel: "What is your name and what can you do?"

🧪 quick_test - Test LibreModel capabilities

Run a quick_test with test_type "creative" to see if LibreModel can write poetry.

🏥 health_check - Monitor server status

Use health_check to see if LibreModel is running properly.

Configuration

Set environment variables to customize behavior:

export LLAMA_SERVER_URL="http://localhost:8080"  # Default llama-server URL
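Inside a Node-based server this is typically resolved once at startup; a one-line sketch of the assumed pattern (not the project's verbatim source):

// Fall back to the documented default when LLAMA_SERVER_URL is unset.
const LLAMA_SERVER_URL = process.env.LLAMA_SERVER_URL ?? "http://localhost:8080";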

Available Tools

Tool           Description                       Parameters
chat           Converse with the model           message, temperature, max_tokens, top_p, top_k, system_prompt
quick_test     Run predefined capability tests   test_type (hello/math/creative/knowledge)
health_check   Check server health and status    None
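When a client invokes chat, the arguments above travel in a standard MCP tools/call request. A hedged TypeScript sketch of that payload shape (values illustrative):

// JSON-RPC 2.0 envelope for an MCP tools/call targeting the chat tool.
const chatRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "chat",
    arguments: {
      message: "What is your name and what can you do?",
      temperature: 0.7,
      max_tokens: 256,
      system_prompt: "You are a concise assistant.",
    },
  },
};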

Resources

  • Configuration: View current server settings
  • Instructions: Detailed usage guide and setup instructions

Development

# Install dependencies
npm install

# Development mode (auto-rebuild)
npm run dev

# Build for production
npm run build

# Start the server directly
npm start

Architecture

Claude Desktop ←→ Llama MCP Server ←→ llama-server API ←→ Local Model

The MCP server acts as a bridge, translating MCP protocol messages into llama-server API calls and formatting responses for Claude Desktop.
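To make that translation concrete, a hypothetical chat handler (a sketch, not the project's actual source) might map MCP arguments onto llama-server's /completion fields like this:

// Sketch: turn MCP `chat` arguments into a llama-server /completion call
// and wrap the reply as MCP text content. The field mapping is assumed;
// note llama-server names the token limit n_predict, not max_tokens.
interface ChatArgs {
  message: string;
  temperature?: number;
  max_tokens?: number;
  top_p?: number;
  top_k?: number;
  system_prompt?: string;
}

async function handleChat(args: ChatArgs, baseUrl: string) {
  const body = {
    prompt: args.system_prompt
      ? `${args.system_prompt}\n\n${args.message}`
      : args.message,
    temperature: args.temperature ?? 0.7,
    n_predict: args.max_tokens ?? 512,
    top_p: args.top_p, // undefined fields are dropped by JSON.stringify
    top_k: args.top_k,
  };
  const res = await fetch(`${baseUrl}/completion`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`llama-server error: ${res.status}`);
  const data = await res.json();
  return { content: [{ type: "text", text: data.content }] }; // MCP tool result shape
}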

Troubleshooting

"Cannot reach LLama server"

  • Ensure llama-server is running on the configured port
  • Check that the model is loaded and responding
  • Verify firewall/network settings

"Tool not found in Claude Desktop"

  • Restart Claude Desktop after configuration changes
  • Check that the path to index.js is correct and absolute
  • Verify the MCP server builds without errors

Poor response quality

  • Adjust temperature and sampling parameters
  • Try different system prompts

License

CC0-1.0 - Public Domain. Use freely!


Built with ❤️ for open-source AI and the LibreModel project, by Claude Sonnet 4.



FAQ

What is the Llama.cpp Bridge MCP server?
Llama.cpp Bridge is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both: skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for Llama.cpp Bridge?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). The average score is about 4.5 out of 5; verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra · Oct 10, 2024

    Llama.cpp Bridge is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G · Sep 9, 2024

    We evaluated Llama.cpp Bridge against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil · Aug 8, 2024

    Useful MCP listing: Llama.cpp Bridge is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil · Jul 7, 2024

    Llama.cpp Bridge reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane · Jun 6, 2024

    I recommend Llama.cpp Bridge for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep · May 5, 2024

    Strong directory entry: Llama.cpp Bridge surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain · Apr 4, 2024

    Llama.cpp Bridge has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra · Mar 3, 2024

    According to our notes, Llama.cpp Bridge benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware · Feb 2, 2024

    We wired Llama.cpp Bridge into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker · Jan 1, 2024

    Llama.cpp Bridge is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.