ai-ml · developer-tools

AI Hub

by feiskyer


Provides unified access to 100+ AI providers through LiteLLM integration, enabling seamless switching between OpenAI, Anthropic, Google, Azure, AWS Bedrock, and other services via YAML-based configuration, with tools for chatting, listing models, and retrieving model information.

GitHub stars: 7

  • 100+ AI providers supported
  • Single unified interface for all models
  • YAML-based configuration

best for

  • Developers wanting to test multiple AI models without separate integrations
  • Teams comparing AI provider performance and capabilities
  • Applications requiring fallback options across AI services

capabilities

  • Chat with 100+ AI models from different providers
  • List available models across all providers
  • Retrieve detailed model information and capabilities
  • Switch between AI providers using YAML configuration
  • Access custom endpoints and proxy servers

what it does

Provides unified access to 100+ AI providers (OpenAI, Anthropic, Google, AWS Bedrock, etc.) through a single interface using LiteLLM integration.

about

AI Hub is a community-built MCP server published by feiskyer that provides AI assistants with tools and capabilities via the Model Context Protocol. AI Hub offers unified access to 100+ AI providers via LiteLLM, enabling seamless switching between providers through YAML-based configuration. It is categorized under ai-ml and developer-tools.

how to install

You can install AI Hub in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

MIT

AI Hub is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

MCP AI Hub


A Model Context Protocol (MCP) server that provides unified access to various AI providers through LiteLLM. Chat with OpenAI, Anthropic, and 100+ other AI models using a single, consistent interface.

🌟 Overview

MCP AI Hub acts as a bridge between MCP clients (like Claude Desktop/Code) and multiple AI providers. It leverages LiteLLM's unified API to provide seamless access to 100+ AI models without requiring separate integrations for each provider.

Key Benefits:

  • Unified Interface: Single API for all AI providers
  • 100+ Providers: OpenAI, Anthropic, Google, Azure, AWS Bedrock, and more
  • MCP Protocol: Native integration with Claude Desktop and Claude Code
  • Flexible Configuration: YAML-based configuration with Pydantic validation
  • Multiple Transports: stdio, SSE, and HTTP transport options
  • Custom Endpoints: Support for proxy servers and local deployments

Quick Start

1. Install

Choose your preferred installation method:

# Option A: Install from PyPI
pip install mcp-ai-hub

# Option B: Install with uv (recommended)
uv tool install mcp-ai-hub

# Option C: Install from source
pip install git+https://github.com/your-username/mcp-ai-hub.git

Installation Notes:

  • uv is a fast Python package installer and resolver
  • The package requires Python 3.10 or higher
  • All dependencies are automatically resolved and installed

2. Configure

Create a configuration file at ~/.ai_hub.yaml with your API keys and model configurations:

model_list:
  - model_name: gpt-4  # Friendly name you'll use in MCP tools
    litellm_params:
      model: openai/gpt-4  # LiteLLM provider/model identifier
      api_key: "sk-your-openai-api-key-here"  # Your actual OpenAI API key
      max_tokens: 2048  # Maximum response tokens
      temperature: 0.7  # Response creativity (0.0-1.0)

  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-anthropic-api-key-here"
      max_tokens: 4096
      temperature: 0.7

Configuration Guidelines:

  • API Keys: Replace placeholder keys with your actual API keys
  • Model Names: Use descriptive names you'll remember (e.g., gpt-4, claude-sonnet)
  • LiteLLM Models: Use LiteLLM's provider/model format (e.g., openai/gpt-4, anthropic/claude-3-5-sonnet-20241022)
  • Parameters: Configure max_tokens, temperature, and other LiteLLM-supported parameters
  • Security: Keep your config file secure with appropriate file permissions (chmod 600)
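
The required shape of the config can be checked before starting the server. Below is a minimal validation sketch — not the server's actual Pydantic models, just a hypothetical helper that verifies a parsed ~/.ai_hub.yaml dict has the fields the guidelines above call for:

```python
# Hypothetical config check mirroring the guidelines above.
# The real server validates with Pydantic; field names are taken
# from the README's YAML examples.

def validate_config(config: dict) -> list[str]:
    """Return a list of human-readable problems; empty means the config looks OK."""
    problems = []
    models = config.get("model_list")
    if not isinstance(models, list) or not models:
        return ["model_list must be a non-empty list"]
    for i, entry in enumerate(models):
        if "model_name" not in entry:
            problems.append(f"model_list[{i}]: missing model_name")
        params = entry.get("litellm_params", {})
        if "model" not in params:
            problems.append(f"model_list[{i}]: litellm_params.model is required")
        if "api_key" not in params:
            problems.append(f"model_list[{i}]: litellm_params.api_key is required")
    return problems

# A dict standing in for the parsed YAML above:
config = {
    "model_list": [
        {"model_name": "gpt-4",
         "litellm_params": {"model": "openai/gpt-4", "api_key": "sk-..."}},
        {"model_name": "broken", "litellm_params": {}},
    ]
}
for problem in validate_config(config):
    print(problem)
```

Running this against your own parsed config surfaces missing fields before the server rejects them at startup.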

3. Connect to Claude Desktop

Configure Claude Desktop to use MCP AI Hub by editing your configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "ai-hub": {
      "command": "mcp-ai-hub"
    }
  }
}

4. Connect to Claude Code

claude mcp add -s user ai-hub mcp-ai-hub

Advanced Usage

CLI Options and Transport Types

MCP AI Hub supports multiple transport mechanisms for different use cases:

Command Line Options:

# Default stdio transport (for MCP clients like Claude Desktop)
mcp-ai-hub

# Server-Sent Events transport (for web applications)
mcp-ai-hub --transport sse --host 0.0.0.0 --port 3001

# Streamable HTTP transport (for direct API calls)
mcp-ai-hub --transport http --port 8080

# Custom config file and debug logging
mcp-ai-hub --config /path/to/config.yaml --log-level DEBUG

Transport Type Details:

Transport | Use Case                          | Default Host:Port                     | Description
stdio     | MCP clients (Claude Desktop/Code) | N/A                                   | Standard input/output, default for MCP
sse       | Web applications                  | localhost:3001                        | Server-Sent Events for real-time web apps
http      | Direct API calls                  | localhost:3001 (override with --port) | HTTP transport with streaming support

CLI Arguments:

  • --transport {stdio,sse,http}: Transport protocol (default: stdio)
  • --host HOST: Host address for SSE/HTTP (default: localhost)
  • --port PORT: Port number for SSE/HTTP (default: 3001; override if you need a different port)
  • --config CONFIG: Custom config file path (default: ~/.ai_hub.yaml)
  • --log-level {DEBUG,INFO,WARNING,ERROR}: Logging verbosity (default: INFO)
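
The flag surface above can be sketched with argparse. This is an illustrative reconstruction of the CLI, not the server's actual parser — it just reproduces the flags, choices, and defaults listed above:

```python
import argparse

# Illustrative parser mirroring the documented CLI flags and defaults.
def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="mcp-ai-hub")
    p.add_argument("--transport", choices=["stdio", "sse", "http"], default="stdio",
                   help="Transport protocol")
    p.add_argument("--host", default="localhost",
                   help="Host address for SSE/HTTP")
    p.add_argument("--port", type=int, default=3001,
                   help="Port number for SSE/HTTP")
    p.add_argument("--config", default="~/.ai_hub.yaml",
                   help="Custom config file path")
    p.add_argument("--log-level", choices=["DEBUG", "INFO", "WARNING", "ERROR"],
                   default="INFO", help="Logging verbosity")
    return p

args = build_parser().parse_args(["--transport", "sse", "--port", "3001"])
print(args.transport, args.port)  # sse 3001
```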

Usage

Once MCP AI Hub is connected to your MCP client, you can interact with AI models using these tools:

MCP Tool Reference

Primary Chat Tool:

chat(model_name: str, message: str | list[dict]) -> str
  • model_name: Name of the configured model (e.g., "gpt-4", "claude-sonnet")
  • message: String message or OpenAI-style message list
  • Returns: AI model response as string

Model Discovery Tools:

list_models() -> list[str]
  • Returns: List of all configured model names
get_model_info(model_name: str) -> dict
  • model_name: Name of the configured model
  • Returns: Model configuration details including provider, parameters, etc.

Configuration

MCP AI Hub supports 100+ AI providers through LiteLLM. Configure your models in ~/.ai_hub.yaml with API keys and custom parameters.

System Prompts

You can define system prompts at two levels:

  • global_system_prompt: Applied to all models by default
  • Per-model system_prompt: Overrides the global prompt for that model

Precedence: model-specific prompt > global prompt. If a model's system_prompt is set to an empty string, it disables the global prompt for that model.

global_system_prompt: "You are a helpful AI assistant. Be concise."

model_list:
  - model_name: gpt-4
    system_prompt: "You are a precise coding assistant."
    litellm_params:
      model: openai/gpt-4
      api_key: "sk-your-openai-api-key"

  - model_name: claude-sonnet
    # Empty string disables the global prompt for this model
    system_prompt: ""
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-anthropic-api-key"

Notes:

  • The server prepends the configured system prompt to the message list it sends to providers.
  • If you pass an explicit message list that already contains a system message, both system messages will be included in order (configured prompt first).

Supported Providers

Major AI Providers:

  • OpenAI: GPT-4, GPT-3.5-turbo, GPT-4-turbo, etc.
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
  • Google: Gemini Pro, Gemini Pro Vision, Gemini Ultra
  • Azure OpenAI: Azure-hosted OpenAI models
  • AWS Bedrock: Claude, Llama, Jurassic, and more
  • Together AI: Llama, Mistral, Falcon, and open-source models
  • Hugging Face: Various open-source models
  • Local Models: Ollama, LM Studio, and other local deployments

Configuration Parameters:

  • api_key: Your provider API key (required)
  • max_tokens: Maximum response tokens (optional)
  • temperature: Response creativity 0.0-1.0 (optional)
  • api_base: Custom endpoint URL (for proxies/local servers)
  • Additional: All LiteLLM-supported parameters

Configuration Examples

Basic Configuration:

global_system_prompt: "You are a helpful AI assistant. Be concise."

model_list:
  - model_name: gpt-4
    system_prompt: "You are a precise coding assistant."  # overrides global
    litellm_params:
      model: openai/gpt-4
      api_key: "sk-your-actual-openai-api-key"
      max_tokens: 2048
      temperature: 0.7

  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-actual-anthropic-api-key"
      max_tokens: 4096
      temperature: 0.7

Custom Parameters:

model_list:
  - model_name: gpt-4-creative
    litellm_params:
      model: openai/gpt-4
      api_key: "sk-your-openai-key"
      max_tokens: 4096
      temperature: 0.9  # Higher creativity
      top_p: 0.95
      frequency_penalty: 0.1
      presence_penalty: 0.1

  - model_name: claude-analytical
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: "sk-ant-your-anthropic-key"
      max_tokens: 8192
      temperature: 0.3  # Lower creativity for analytical tasks
      stop_sequences: ["\n\n", "Human:"]

Local LLM Server Configuration:

model_list:
  - model_name: local-llama
    litellm_params:
      model: openai/llama-2-7b-chat
      api_key: "dummy-key"  # Local servers often accept any API key
      api_base: "http://localhost:8080/v1"  # Local OpenAI-compatible server
      max_tokens: 2048
      temperature: 0.7
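
To see what the api_base setting implies, here is a hedged sketch of the request an OpenAI-compatible client would build against the local server configured above. The endpoint path and payload follow the OpenAI chat-completions convention; adjust both to whatever your local server actually serves:

```python
import json
import urllib.request

# Builds (but does not send) a chat-completions request against a local
# OpenAI-compatible server. Assumes the standard /chat/completions path.
def build_request(api_base: str, model: str, prompt: str) -> urllib.request.Request:
    url = api_base.rstrip("/") + "/chat/completions"
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer dummy-key"},  # local servers often ignore this
    )

req = build_request("http://localhost:8080/v1", "llama-2-7b-chat", "Hi")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
# urllib.request.urlopen(req) would send it once the local server is running.
```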

For more providers, please refer to the LiteLLM docs: https://docs.litellm.ai/docs/providers.

Development

Setup:

# Install all dependencies including dev dependencies
uv sync

# Install package in development mode
uv pip install -e ".[dev]"

# Add new runtime dependencies
uv add package_name

# Add new development dependencies
uv add --dev package_name

# Update dependencies
uv sync --upgrade

Running and Testing:

# Run the MCP server
uv run mcp-ai-hub

# Run with custom configuration
uv run mcp-ai-hub --config ./custom_config.yaml --log-level DEBUG

# Run with different transport
uv run mcp-ai-hub --transport sse --port 3001

# Run tests (when test suite is added)
uv run pytest

# Run tests with coverage
uv run pytest --cov=src/mcp_

---

FAQ

What is the AI Hub MCP server?
AI Hub is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for AI Hub?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra· Oct 10, 2024

    AI Hub is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G· Sep 9, 2024

    We evaluated AI Hub against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil· Aug 8, 2024

    Useful MCP listing: AI Hub is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil· Jul 7, 2024

    AI Hub reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane· Jun 6, 2024

    I recommend AI Hub for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep· May 5, 2024

    Strong directory entry: AI Hub surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain· Apr 4, 2024

    AI Hub has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra· Mar 3, 2024

    According to our notes, AI Hub benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware· Feb 2, 2024

    We wired AI Hub into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker· Jan 1, 2024

    AI Hub is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.