RAG Memory

by ttommyth
Enhance persistent memory with RAG Memory, combining vector search with knowledge graph relationships.
Provides a knowledge graph-enhanced retrieval system that combines vector search with graph-based relationships for persistent memory and contextual information retrieval
best for
- AI agents needing persistent memory across sessions
- Building knowledge bases from document collections
- Researchers organizing and connecting information
- RAG applications requiring contextual retrieval
capabilities
- Store and chunk documents for processing
- Create entities and relationships in knowledge graph
- Perform hybrid search combining vector similarity with graph traversal
- Generate semantic embeddings for documents and entities
- Add observations to continuously enrich entity context
- Extract potential entity terms from documents
what it does
Creates a persistent knowledge graph with vector search that stores documents, entities, and relationships for intelligent information retrieval. Combines traditional graph-based connections with semantic similarity search.
about
RAG Memory is a community-built MCP server published by ttommyth that provides AI assistants with tools and capabilities via the Model Context Protocol. It enhances persistent memory by combining vector search with knowledge graph relationships, and is categorized under AI/ML. This server exposes 20 tools that AI clients can invoke during conversations and coding sessions.
how to install
You can install RAG Memory in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
license
MIT
RAG Memory is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
readme
rag-memory-mcp
An advanced MCP server for RAG-enabled memory through a knowledge graph with vector search capabilities. This server extends the basic memory concepts with semantic search, document processing, and hybrid retrieval for more intelligent memory management.
Inspired by: Knowledge Graph Memory Server from the Model Context Protocol project.
Note: This server is designed to run locally alongside MCP clients (e.g., Claude Desktop, VS Code) and requires local file system access for database storage.
✨ Key Features
- 🧠 Knowledge Graph Memory: Persistent entities, relationships, and observations
- 🔍 Vector Search: Semantic similarity search using sentence transformers
- 📄 Document Processing: RAG-enabled document chunking and embedding
- 🔗 Hybrid Search: Combines vector similarity with graph traversal
- ⚡ SQLite Backend: Fast local storage with sqlite-vec for vector operations
- 🎯 Entity Extraction: Automatic term extraction from documents
Tools
This server provides comprehensive memory management through the Model Context Protocol (MCP):
📚 Document Management
- storeDocument: Store documents with metadata for processing
- chunkDocument: Create text chunks with configurable parameters
- embedChunks: Generate vector embeddings for semantic search
- extractTerms: Extract potential entity terms from documents
- linkEntitiesToDocument: Create explicit entity-document associations
- deleteDocuments: Remove documents and associated data
- listDocuments: View all stored documents with metadata
🧠 Knowledge Graph
- createEntities: Create new entities with observations and types
- createRelations: Establish relationships between entities
- addObservations: Add contextual information to existing entities
- deleteEntities: Remove entities and their relationships
- deleteRelations: Remove specific relationships
- deleteObservations: Remove specific observations from entities
🔍 Search & Retrieval
- hybridSearch: Advanced search combining vector similarity and graph traversal
- searchNodes: Find entities by name, type, or observation content
- openNodes: Retrieve specific entities and their relationships
- readGraph: Get complete knowledge graph structure
📊 Analytics
- getKnowledgeGraphStats: Comprehensive statistics about the knowledge base
Usage Scenarios
This server is ideal for scenarios requiring intelligent memory and document understanding:
- Research and Documentation: Store, process, and intelligently retrieve research papers
- Knowledge Base Construction: Build interconnected knowledge from documents
- Conversational Memory: Remember context across chat sessions with semantic understanding
- Content Analysis: Extract and relate concepts from large document collections
- Intelligent Assistance: Provide contextually aware responses based on stored knowledge
Client Configuration
This section explains how to configure MCP clients to use the rag-memory-mcp server.
Usage with Claude Desktop / Cursor
Add the following configuration to your claude_desktop_config.json (Claude Desktop) or mcp.json (Cursor):
{
"mcpServers": {
"rag-memory": {
"command": "npx",
"args": ["-y", "rag-memory-mcp"]
}
}
}
With specific version:
{
"mcpServers": {
"rag-memory": {
"command": "npx",
"args": ["-y", "rag-memory-mcp@1.0.0"]
}
}
}
With custom database path:
{
"mcpServers": {
"rag-memory": {
"command": "npx",
"args": ["-y", "rag-memory-mcp"],
"env": {
"MEMORY_DB_PATH": "/path/to/custom/memory.db"
}
}
}
}
Usage with VS Code
Add the following configuration to your User Settings (JSON) file or .vscode/mcp.json:
{
"mcp": {
"servers": {
"rag-memory-mcp": {
"command": "npx",
"args": ["-y", "rag-memory-mcp"]
}
}
}
}
Core Concepts
Entities
Entities are the primary nodes in the knowledge graph. Each entity has:
- A unique name (identifier)
- An entity type (e.g., "PERSON", "CONCEPT", "TECHNOLOGY")
- A list of observations (contextual information)
Example:
{
"name": "Machine Learning",
"entityType": "CONCEPT",
"observations": [
"Subset of artificial intelligence",
"Focuses on learning from data",
"Used in recommendation systems"
]
}
Relations
Relations define directed connections between entities, describing how they interact:
Example:
{
"from": "React",
"to": "JavaScript",
"relationType": "BUILT_WITH"
}
Observations
Observations are discrete pieces of information about entities:
- Stored as strings
- Attached to specific entities
- Can be added or removed independently
- Should be atomic (one fact per observation)
Documents & Vector Search
Documents are processed through:
- Storage: Raw text with metadata
- Chunking: Split into manageable pieces
- Embedding: Convert to vector representations
- Linking: Associate with relevant entities
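The chunking step can be sketched in a few lines of TypeScript. This is an illustrative sketch only, not the server's actual implementation; the function name and the size/overlap defaults are assumptions:

```typescript
// Hypothetical fixed-size chunker with overlap, similar in spirit to the
// server's chunkDocument step (illustrative sketch, not the real code).
function chunkText(text: string, size = 200, overlap = 50): string[] {
  if (size <= overlap) throw new Error("size must exceed overlap");
  const chunks: string[] = [];
  // Advance the window by (size - overlap) so consecutive chunks share
  // `overlap` characters, preserving context across chunk boundaries.
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}

// A 500-character document with size 200 / overlap 50 yields three
// chunks starting at offsets 0, 150, and 300.
```

Overlapping windows are a common default in RAG pipelines because they reduce the chance that a fact is split exactly at a chunk boundary.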
This enables hybrid search that combines:
- Vector similarity (semantic matching)
- Graph traversal (conceptual relationships)
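One way to picture this combination is as a weighted score over the two signals. The helper names, the graph-distance bonus, and the 0.3 weight below are illustrative assumptions, not the server's actual ranking formula:

```typescript
// Cosine similarity between two embedding vectors (semantic matching).
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical hybrid ranking: semantic similarity plus a bonus for
// results whose linked entities sit close to the query's entities in
// the graph. Weights and names are illustrative, not the real formula.
function hybridScore(
  queryVec: number[],
  chunkVec: number[],
  graphDistance: number, // hops from a query entity; Infinity if unlinked
  graphWeight = 0.3
): number {
  const semantic = cosine(queryVec, chunkVec);
  const graph = Number.isFinite(graphDistance) ? 1 / (1 + graphDistance) : 0;
  return (1 - graphWeight) * semantic + graphWeight * graph;
}
```

Under this sketch, a chunk that is only moderately similar to the query can still outrank a more similar one if it is directly linked to an entity the query mentions.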
Environment Variables
MEMORY_DB_PATH: Path to the SQLite database file (default: memory.db in the server directory)
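For example, when launching the server manually rather than through a client config, the variable can be set inline (the path shown is illustrative):

```shell
# Run the server with a custom database location (example path).
MEMORY_DB_PATH="$HOME/.local/share/rag-memory/memory.db" npx -y rag-memory-mcp
```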
Development Setup
This section is for developers looking to modify or contribute to the server.
Prerequisites
- Node.js: Check package.json for version compatibility
- npm: Used for package management
Installation (Developers)
- Clone the repository:
git clone https://github.com/ttommyth/rag-memory-mcp.git
cd rag-memory-mcp
- Install dependencies:
npm install
Building
npm run build
Running (Development)
npm run watch # For development with auto-rebuild
Development Commands
- Build: npm run build
- Watch: npm run watch
- Prepare: npm run prepare
Usage Example
Here's a typical workflow for building and querying a knowledge base:
// 1. Store a document
await storeDocument({
id: "ml_intro",
content: "Machine learning is a subset of AI...",
metadata: { type: "educational", topic: "ML" }
});
// 2. Process the document
await chunkDocument({ documentId: "ml_intro" });
await embedChunks({ documentId: "ml_intro" });
// 3. Extract and create entities
const terms = await extractTerms({ documentId: "ml_intro" });
await createEntities({
entities: [
{
name: "Machine Learning",
entityType: "CONCEPT",
observations: ["Subset of artificial intelligence", "Learns from data"]
}
]
});
// 4. Search with hybrid approach
const results = await hybridSearch({
query: "artificial intelligence applications",
limit: 10,
useGraph: true
});
System Prompt Suggestions
For optimal memory utilization, consider using this system prompt:
You have access to a RAG-enabled memory system with knowledge graph capabilities. Follow these guidelines:
1. **Information Storage**:
- Store important documents using the document management tools
- Create entities for people, concepts, organizations, and technologies
- Build relationships between related concepts
2. **Information Retrieval**:
- Use hybrid search for comprehensive information retrieval
- Leverage both semantic similarity and graph relationships
- Search entities before creating duplicates
3. **Memory Maintenance**:
- Add observations to enrich entity context
- Link documents to relevant entities for better discoverability
- Use statistics to monitor knowledge base growth
4. **Processing Workflow**:
- Store → Chunk → Embed → Extract → Link
- Always process documents completely for best search results
Contributing
Contributions are welcome! Please follow standard development practices and ensure all tests pass before submitting pull requests.
License
This project is licensed under the MIT License. See the LICENSE file for details.
Built with: TypeScript, SQLite, sqlite-vec, Hugging Face Transformers, Model Context Protocol SDK
FAQ
- What is the RAG Memory MCP server?
- RAG Memory is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
- How do MCP servers relate to agent skills?
- Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
- How are reviews shown for RAG Memory?
- This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
Ratings
4.5 ★★★★★ · 10 reviews
- ★★★★★ Shikha Mishra · Oct 10, 2024
RAG Memory is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.
- ★★★★★ Piyush G · Sep 9, 2024
We evaluated RAG Memory against two servers with overlapping tools; this profile had the clearer scope statement.
- ★★★★★ Chaitanya Patil · Aug 8, 2024
Useful MCP listing: RAG Memory is the kind of server we cite when onboarding engineers to host + tool permissions.
- ★★★★★ Sakshi Patil · Jul 7, 2024
RAG Memory reduced integration guesswork — categories and install configs on the listing matched the upstream repo.
- ★★★★★ Ganesh Mohane · Jun 6, 2024
I recommend RAG Memory for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.
- ★★★★★ Oshnikdeep · May 5, 2024
Strong directory entry: RAG Memory surfaces stars and publisher context so we could sanity-check maintenance before adopting.
- ★★★★★ Dhruvi Jain · Apr 4, 2024
RAG Memory has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.
- ★★★★★ Rahul Santra · Mar 3, 2024
According to our notes, RAG Memory benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.
- ★★★★★ Pratham Ware · Feb 2, 2024
We wired RAG Memory into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.
- ★★★★★ Yash Thakker · Jan 1, 2024
RAG Memory is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.