ai-ml, analytics-data

Vectara

by vectara

Leverage Vectara for retrieval-augmented generation with AI chatbots that deliver accurate, context-aware responses.

Provides a bridge between conversational interfaces and Vectara's Retrieval-Augmented Generation capabilities, enabling powerful search queries that return both relevant results and generated responses with customizable parameters.

github stars

26

  • Reduced hallucination via trusted RAG
  • Multiple transport modes including secure HTTP
  • Built-in authentication and rate limiting

best for

  • AI assistants needing document search capabilities
  • Applications requiring RAG-powered Q&A
  • Conversational interfaces with knowledge bases
  • Reducing hallucination in AI responses

capabilities

  • Search documents using RAG queries
  • Generate AI responses from search results
  • Customize search parameters and filters
  • Access Vectara's trusted RAG platform
  • Configure authentication and rate limiting
  • Deploy via HTTP, SSE, or STDIO transport

what it does

Connects conversational AI interfaces to Vectara's RAG (Retrieval-Augmented Generation) platform for enhanced search with generated responses. Enables AI assistants to search documents and get both relevant results and AI-generated answers.

about

Vectara is an official MCP server published by vectara that provides AI assistants with tools and capabilities via the Model Context Protocol. It lets AI chatbots leverage Vectara for retrieval-augmented generation to deliver accurate, context-aware responses. It is categorized under ai-ml and analytics-data.

how to install

You can install Vectara in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. When used with desktop clients, this server runs locally on your machine via the STDIO transport; for standalone deployments it defaults to HTTP.

license

Apache-2.0

Vectara is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

Vectara MCP Server


🔌 Compatible with Claude Desktop and any other MCP client!


The Model Context Protocol (MCP) is an open standard that enables AI systems to interact seamlessly with various data sources and tools, facilitating secure, two-way connections.

Vectara-MCP provides any agentic application with access to fast, reliable RAG with reduced hallucination, powered by Vectara's Trusted RAG platform, through the MCP protocol.

Installation

You can install the package directly from PyPI:

pip install vectara-mcp

Quick Start

Secure by Default (HTTP/SSE with Authentication)

# Start server with secure HTTP transport (DEFAULT)
python -m vectara_mcp
# Server running at http://127.0.0.1:8000 with authentication enabled

Local Development Mode (STDIO)

# For Claude Desktop or local development (less secure)
python -m vectara_mcp --stdio
# ⚠️ Warning: STDIO transport is less secure. Use only for local development.

Configuration Options

# Custom host and port
python -m vectara_mcp --host 0.0.0.0 --port 8080

# SSE transport mode
python -m vectara_mcp --transport sse --path /sse

# Disable authentication (DANGEROUS - dev only)
python -m vectara_mcp --no-auth

Transport Modes

HTTP Transport (Default - Recommended)

  • Security: Built-in authentication via bearer tokens
  • Encryption: HTTPS ready
  • Rate Limiting: 100 requests/minute by default
  • CORS Protection: Configurable origin validation
  • Use Case: Production deployments, cloud environments

SSE Transport

  • Streaming: Server-Sent Events for real-time updates
  • Authentication: Bearer token support
  • Compatibility: Works with legacy MCP clients
  • Use Case: Real-time streaming applications

STDIO Transport

  • ⚠️ Security Warning: No transport-layer security
  • Performance: Low latency for local communication
  • Use Case: Local development, Claude Desktop
  • Requirement: Must be explicitly enabled with --stdio flag

Environment Variables

# Required
export VECTARA_API_KEY="your-api-key"

# Optional
export VECTARA_AUTHORIZED_TOKENS="token1,token2"  # Additional auth tokens
export VECTARA_ALLOWED_ORIGINS="http://localhost:*,https://app.example.com"
export VECTARA_TRANSPORT="http"  # Default transport mode
export VECTARA_AUTH_REQUIRED="true"  # Enforce authentication

Authentication

HTTP/SSE Transport

When using HTTP or SSE transport, authentication is required by default:

# Using curl with bearer token
curl -H "Authorization: Bearer $VECTARA_API_KEY" \
     -H "Content-Type: application/json" \
     -X POST http://localhost:8000/call/ask_vectara \
     -d '{"query": "What is Vectara?", "corpus_keys": ["my-corpus"]}'

# Using X-API-Key header (alternative)
curl -H "X-API-Key: $VECTARA_API_KEY" \
     http://localhost:8000/sse
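For clients not written in shell, the same authenticated call can be sketched in Python using only the standard library. The `/call/ask_vectara` path, header names, and payload fields are taken from the curl example above; adjust them if your deployment differs.

```python
import json
import os
from urllib import request

def build_ask_request(base_url, api_key, query, corpus_keys):
    """Build an authenticated POST mirroring the curl example above."""
    payload = json.dumps({"query": query, "corpus_keys": corpus_keys}).encode()
    return request.Request(
        f"{base_url}/call/ask_vectara",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_ask_request(
    "http://localhost:8000",
    os.environ.get("VECTARA_API_KEY", "demo-key"),
    "What is Vectara?",
    ["my-corpus"],
)
# Sending is left to the caller, e.g.:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```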

Disabling Authentication (Development Only)

# ⚠️ NEVER use in production
python -m vectara_mcp --no-auth

Available Tools

API Key Management

  • setup_vectara_api_key: Configure and validate your Vectara API key for the session (one-time setup).

    Args:

    • api_key: str, Your Vectara API key - required.

    Returns:

    • Success confirmation with masked API key or validation error.
  • clear_vectara_api_key: Clear the stored API key from server memory.

    Returns:

    • Confirmation message.
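The setup tool confirms success with a masked API key. The exact masking format is the server's choice; this hypothetical helper illustrates one common approach (hide everything but the last few characters):

```python
def mask_api_key(key: str, visible: int = 4) -> str:
    """Mask an API key for display, keeping only the trailing characters."""
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]

masked = mask_api_key("zqt_abcdefgh1234")  # a made-up key for illustration
```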

Query Tools

  • ask_vectara: Run a RAG query using Vectara, returning search results with a generated response.

    Args:

    • query: str, The user query to run - required.
    • corpus_keys: list[str], List of Vectara corpus keys to use for the search - required.
    • n_sentences_before: int, Number of sentences before the answer to include in the context - optional, default is 2.
    • n_sentences_after: int, Number of sentences after the answer to include in the context - optional, default is 2.
    • lexical_interpolation: float, The amount of lexical interpolation to use - optional, default is 0.005.
    • max_used_search_results: int, The maximum number of search results to use - optional, default is 10.
    • generation_preset_name: str, The name of the generation preset to use - optional, default is "vectara-summary-table-md-query-ext-jan-2025-gpt-4o".
    • response_language: str, The language of the response - optional, default is "eng".

    Returns:

    • The response from Vectara, including the generated answer and the search results.
  • search_vectara: Run a semantic search query using Vectara, without generation.

    Args:

    • query: str, The user query to run - required.
    • corpus_keys: list[str], List of Vectara corpus keys to use for the search - required.
    • n_sentences_before: int, Number of sentences before the answer to include in the context - optional, default is 2.
    • n_sentences_after: int, Number of sentences after the answer to include in the context - optional, default is 2.
    • lexical_interpolation: float, The amount of lexical interpolation to use - optional, default is 0.005.

    Returns:

    • The response from Vectara, including the matching search results.
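The defaults above can be gathered into a small argument builder. The parameter names and default values are copied from the tool descriptions; the helper itself is an illustrative sketch, not part of the package's API.

```python
# Documented defaults for the optional ask_vectara parameters.
ASK_VECTARA_DEFAULTS = {
    "n_sentences_before": 2,
    "n_sentences_after": 2,
    "lexical_interpolation": 0.005,
    "max_used_search_results": 10,
    "generation_preset_name": "vectara-summary-table-md-query-ext-jan-2025-gpt-4o",
    "response_language": "eng",
}

def ask_vectara_args(query, corpus_keys, **overrides):
    """Merge required args, documented defaults, and per-call overrides."""
    unknown = set(overrides) - set(ASK_VECTARA_DEFAULTS)
    if unknown:
        raise TypeError(f"unknown ask_vectara parameters: {sorted(unknown)}")
    args = {"query": query, "corpus_keys": list(corpus_keys), **ASK_VECTARA_DEFAULTS}
    args.update(overrides)
    return args

args = ask_vectara_args("What is Vectara?", ["my-corpus"], response_language="deu")
```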

Analysis Tools

  • correct_hallucinations: Identify and correct hallucinations in generated text using Vectara's VHC (Vectara Hallucination Correction) API.

    Args:

    • generated_text: str, The generated text to analyze for hallucinations - required.
    • documents: list[str], List of source documents to compare against - required.
    • query: str, The original user query that led to the generated text - optional.

    Returns:

    • JSON-formatted string containing corrected text and detailed correction information.
  • eval_factual_consistency: Evaluate the factual consistency of generated text against source documents using Vectara's dedicated factual consistency evaluation API.

    Args:

    • generated_text: str, The generated text to evaluate for factual consistency - required.
    • documents: list[str], List of source documents to compare against - required.
    • query: str, The original user query that led to the generated text - optional.

    Returns:

    • JSON-formatted string containing factual consistency evaluation results and scoring.

Note: An API key must be configured first, either with the setup_vectara_api_key tool or via the VECTARA_API_KEY environment variable.
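Both analysis tools share the same argument shape. A hypothetical payload builder, with field names following the Args lists above, might look like this:

```python
def analysis_payload(generated_text, documents, query=None):
    """Build the shared argument dict for the two analysis tools."""
    if not documents:
        raise ValueError("documents must be a non-empty list of source texts")
    payload = {"generated_text": generated_text, "documents": list(documents)}
    if query is not None:
        payload["query"] = query  # optional: the originating user query
    return payload

p = analysis_payload(
    "The sky is green.",
    ["Observation log: the sky is blue today."],
    query="What color is the sky?",
)
```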

Configuration with Claude Desktop

To use with Claude Desktop, update your configuration to use STDIO transport:

{
  "mcpServers": {
    "Vectara": {
      "command": "python",
      "args": ["-m", "vectara_mcp", "--stdio"],
      "env": {
        "VECTARA_API_KEY": "your-api-key"
      }
    }
  }
}

Or using uv:

{
  "mcpServers": {
    "Vectara": {
      "command": "uv",
      "args": ["tool", "run", "vectara-mcp", "--stdio"]
    }
  }
}

Note: Claude Desktop requires STDIO transport. While less secure than HTTP, it's acceptable for local desktop use.

Usage in Claude Desktop App

Once installation is complete and the Claude Desktop app is configured, you must completely close and re-open the app to see the Vectara MCP server. A hammer icon in the bottom left of the app indicates available MCP tools; click it for more detail on the Vectara tools.

Claude will now have complete access to the Vectara MCP server, including all six Vectara tools.

Secure Setup Workflow

First-time setup (one-time per session):

  1. Configure your API key securely:

     setup-vectara-api-key
     API key: [your-vectara-api-key]

After setup, use any tools without exposing your API key:

Vectara Tool Examples

  1. RAG Query with Generation:

     ask-vectara
     Query: Who is Amr Awadallah?
     Corpus keys: ["your-corpus-key"]

  2. Semantic Search Only:

     search-vectara
     Query: events in NYC?
     Corpus keys: ["your-corpus-key"]

  3. Hallucination Detection & Correction:

     correct-hallucinations
     Generated text: [text to check]
     Documents: ["source1", "source2"]

  4. Factual Consistency Evaluation:

     eval-factual-consistency
     Generated text: [text to evaluate]
     Documents: ["reference1", "reference2"]

Security Best Practices

  1. Always use HTTP transport for production - Never expose STDIO transport to the network
  2. Keep authentication enabled - Only disable with --no-auth for local testing
  3. Use HTTPS in production - Deploy behind a reverse proxy with TLS termination
  4. Configure CORS properly - Set VECTARA_ALLOWED_ORIGINS to restrict access
  5. Rotate API keys regularly - Update VECTARA_API_KEY and VECTARA_AUTHORIZED_TOKENS
  6. Monitor rate limits - Default 100 req/min, adjust based on your needs
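To stay under the default 100 requests/minute limit, a client can pace its own calls. The sketch below uses a minimum-interval approach; this is illustrative only, and the server's actual limiter may use a different algorithm.

```python
import time

class RequestPacer:
    """Space out calls so a client stays under a requests-per-minute cap."""

    def __init__(self, max_per_minute: int = 100):
        self.min_interval = 60.0 / max_per_minute  # seconds between calls
        self._last = float("-inf")

    def wait(self) -> float:
        """Sleep just long enough to honor the rate; return the delay used."""
        now = time.monotonic()
        delay = max(0.0, self._last + self.min_interval - now)
        if delay:
            time.sleep(delay)
        self._last = time.monotonic()
        return delay

pacer = RequestPacer(max_per_minute=6000)  # 10 ms spacing keeps the demo fast
delays = [pacer.wait() for _ in range(3)]  # first call is never delayed
```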

See SECURITY.md for detailed security guidelines.

Support

For issues, questions, or contributions, please visit: https://github.com/vectara/vectara-mcp

FAQ

What is the Vectara MCP server?
Vectara is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for Vectara?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra· Oct 10, 2024

    Vectara is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G· Sep 9, 2024

    We evaluated Vectara against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil· Aug 8, 2024

    Useful MCP listing: Vectara is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil· Jul 7, 2024

    Vectara reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane· Jun 6, 2024

    I recommend Vectara for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep· May 5, 2024

    Strong directory entry: Vectara surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain· Apr 4, 2024

    Vectara has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra· Mar 3, 2024

    According to our notes, Vectara benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware· Feb 2, 2024

    We wired Vectara into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker· Jan 1, 2024

    Vectara is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.