ai-ml

AI Memory

by scanadi

AI Memory is a production-ready vector database server that manages and retrieves contextual knowledge with advanced semantic search.

Production-ready semantic memory management server that stores, retrieves, and manages contextual knowledge across sessions. It uses PostgreSQL with pgvector for vector similarity search, and features intelligent caching, multi-user support, memory relationships, automatic clustering, and background job processing for persistent AI memory and knowledge management systems.

github stars

44

  • Local embeddings (no API calls required)
  • Production-ready with PostgreSQL + Redis
  • Graph-based memory relationships

best for

  • AI agents needing persistent memory across conversations
  • Multi-user AI applications with isolated contexts
  • Knowledge management systems with semantic search
  • AI assistants building long-term understanding

capabilities

  • Store contextual knowledge with vector embeddings
  • Retrieve memories using semantic similarity search
  • Create relationships between memories (references, contradicts, supports)
  • Traverse memory graphs with BFS/DFS algorithms
  • Manage memory lifecycle with automatic decay and archiving
  • Isolate memory contexts across multiple users

what it does

A semantic memory system for AI agents that stores and retrieves contextual knowledge across sessions using PostgreSQL vector search. Enables persistent memory with relationships, automatic clustering, and multi-user support.
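
The retrieval side of this can be sketched in TypeScript: embed the query, score each stored memory by cosine similarity, and return the best matches. This is an illustrative sketch of the general technique, not the server's actual code; the `Memory` shape and function names are assumptions, while the default limit (10) and threshold (0.5) mirror the documented settings.

```typescript
// Illustrative sketch of similarity-ranked retrieval, the same principle
// pgvector applies server-side. Names here are assumptions, not the API.
interface Memory {
  id: string;
  content: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Top-k memories above a minimum similarity threshold, mirroring the
// MAX_MEMORIES_PER_QUERY and MIN_SIMILARITY_SCORE settings.
function rankBySimilarity(
  query: number[],
  memories: Memory[],
  limit = 10,
  minScore = 0.5
): Memory[] {
  return memories
    .map((m) => ({ m, score: cosineSimilarity(query, m.embedding) }))
    .filter((r) => r.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((r) => r.m);
}
```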

about

AI Memory is a community-built MCP server published by scanadi that provides AI assistants with tools and capabilities via the Model Context Protocol. It is a production-ready vector database server that manages and retrieves contextual knowledge with advanced semantic search, and is categorized under ai-ml.

how to install

You can install AI Memory in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

MIT

AI Memory is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

MCP AI Memory


A production-ready Model Context Protocol (MCP) server for semantic memory management that enables AI agents to store, retrieve, and manage contextual knowledge across sessions.

📖 System Prompt Available: See SYSTEM_PROMPT.md for a comprehensive guide on how to instruct AI models to use this memory system effectively. This prompt helps models understand when and how to use memory tools, especially for proactive memory retrieval.

Features

Core Capabilities

  • TypeScript - Full type safety with strict mode
  • PostgreSQL + pgvector - Vector similarity search with HNSW indexing
  • Kysely ORM - Type-safe SQL queries
  • Local Embeddings - Uses Transformers.js (no API calls)
  • Intelligent Caching - Redis + in-memory fallback for blazing fast performance
  • Multi-Agent Support - User context isolation
  • Token Efficient - Embeddings removed from responses

Advanced Memory Management

  • Graph Relationships - Rich relationship types (references, contradicts, supports, extends, causes, precedes, etc.)
  • Graph Traversal - BFS/DFS algorithms with depth limits and filtering
  • Memory Decay - Automatic lifecycle management with exponential decay
  • Memory States - Active, dormant, archived, and expired states
  • Preservation - Protect important memories from decay
  • Soft Deletes - Data recovery with deleted_at timestamps
  • Clustering - Automatic memory consolidation
  • Compression - Automatic compression of archived memories
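
The decay lifecycle above can be illustrated with a small sketch: a relevance score shrinks exponentially with age, preserved memories are exempt, and score thresholds map to the active/dormant/archived states. The half-life formula, threshold values, and function names are assumptions for illustration; the server's exact decay function may differ.

```typescript
// Illustrative exponential decay (not the server's exact formula):
// a score halves every `halfLifeDays`.
type MemoryState = "active" | "dormant" | "archived";

function decayScore(initial: number, ageDays: number, halfLifeDays: number): number {
  return initial * Math.pow(0.5, ageDays / halfLifeDays);
}

// Map a decayed score to a lifecycle state; preserved memories never decay.
function classify(score: number, preserved: boolean): MemoryState {
  if (preserved) return "active";
  if (score >= 0.5) return "active";
  if (score >= 0.1) return "dormant";
  return "archived";
}
```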

Prerequisites

  • Node.js 18+ or Bun
  • PostgreSQL with pgvector extension
  • Redis (optional - falls back to in-memory cache if not available)

Installation

NPM Package (Recommended for Claude Desktop)

npm install -g mcp-ai-memory

From Source

  1. Install dependencies:
bun install
  2. Set up PostgreSQL with pgvector:
CREATE DATABASE mcp_ai_memory;
\c mcp_ai_memory
CREATE EXTENSION IF NOT EXISTS vector;
  3. Create environment file:
# Create .env with your database credentials
touch .env
  4. Run migrations:
bun run migrate

Usage

Development

bun run dev

Production

bun run build
bun run start

Troubleshooting

Embedding Dimension Mismatch Error

If you see an error like:

Failed to generate embedding: Error: Embedding dimension mismatch: Model produces 384-dimensional embeddings, but database expects 768

This occurs when the embedding model changes between sessions. To fix:

  1. Option 1: Reset and Re-embed (Recommended for new installations)

    # Clear existing memories and start fresh
    psql -d your_database -c "TRUNCATE TABLE memories CASCADE;"
    
  2. Option 2: Specify a Consistent Model. Add EMBEDDING_MODEL to your Claude Desktop config:

    {
      "mcpServers": {
        "memory": {
          "command": "npx",
          "args": ["-y", "mcp-ai-memory"],
          "env": {
            "MEMORY_DB_URL": "postgresql://...",
            "EMBEDDING_MODEL": "Xenova/all-mpnet-base-v2"
          }
        }
      }
    }
    

    Common models:

    • Xenova/all-mpnet-base-v2 (768 dimensions - default, best quality)
    • Xenova/all-MiniLM-L6-v2 (384 dimensions - smaller/faster)
  3. Option 3: Run Migration for Flexible Dimensions. If you're using the source version:

    bun run migrate
    

    This allows mixing different embedding dimensions in the same database.
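
For context, the error above comes from a sanity check comparing the model's output dimension against the dimension the database column was created with. A minimal sketch of such a check (illustrative only, not the server's actual implementation):

```typescript
// Hypothetical dimension guard: throws the kind of error quoted above
// when the embedding model's output doesn't match the database schema.
function assertDimension(embedding: number[], expected: number): void {
  if (embedding.length !== expected) {
    throw new Error(
      `Embedding dimension mismatch: Model produces ${embedding.length}-dimensional embeddings, but database expects ${expected}`
    );
  }
}
```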

Database Connection Issues

Ensure your PostgreSQL has the pgvector extension:

CREATE EXTENSION IF NOT EXISTS vector;

Claude Desktop Integration

💡 For Best Results: Include the SYSTEM_PROMPT.md content in your Claude Desktop system prompt or initial conversation to help Claude understand how to use the memory tools effectively.

Quick Setup (NPM)

Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "DATABASE_URL": "postgresql://username:password@localhost:5432/memory_db"
      }
    }
  }
}

With Optional Redis Cache

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "mcp-ai-memory"],
      "env": {
        "DATABASE_URL": "postgresql://username:password@localhost:5432/memory_db",
        "REDIS_URL": "redis://localhost:6379",
        "EMBEDDING_MODEL": "Xenova/all-MiniLM-L6-v2",
        "LOG_LEVEL": "info"
      }
    }
  }
}

Environment Variables

  • DATABASE_URL: PostgreSQL connection string (required)
  • REDIS_URL: Redis connection string (optional; default: none, uses in-memory cache)
  • EMBEDDING_MODEL: Transformers.js model (default: Xenova/all-MiniLM-L6-v2)
  • LOG_LEVEL: Logging level (default: info)
  • CACHE_TTL: Cache TTL in seconds (default: 3600)
  • MAX_MEMORIES_PER_QUERY: Max results per search (default: 10)
  • MIN_SIMILARITY_SCORE: Min similarity threshold (default: 0.5)

Available Tools

💡 Token Efficiency: Default limits are set to 10 results to optimize token usage. Increase only when needed.

Core Operations (Most Important)

  • memory_search - SEARCH FIND RECALL - Search stored information using natural language (USE THIS FIRST! Default limit: 10)
  • memory_list - LIST BROWSE SHOW - List all memories chronologically (fallback when search fails, default limit: 10)
  • memory_store - STORE SAVE REMEMBER - Store new information after checking for duplicates
  • memory_update - UPDATE MODIFY EDIT - Update existing memory metadata
  • memory_delete - DELETE REMOVE FORGET - Delete specific memories

Advanced Operations

  • memory_batch - BATCH BULK IMPORT - Store multiple memories efficiently
  • memory_batch_delete - Delete multiple memories at once
  • memory_graph_search - GRAPH RELATED - Search with relationship traversal (alias for memory_traverse)
  • memory_consolidate - MERGE CLUSTER - Group similar memories
  • memory_stats - STATS INFO - Database statistics
  • memory_relate - LINK CONNECT - Create memory relationships
  • memory_unrelate - UNLINK DISCONNECT - Remove relationships
  • memory_get_relations - Show all relationships for a memory

Graph & Decay Operations (New)

  • memory_traverse - TRAVERSE EXPLORE - Traverse memory graph with BFS/DFS algorithms
  • memory_graph_analysis - ANALYZE CONNECTIONS - Analyze graph connectivity and relationship patterns
  • memory_decay_status - DECAY STATUS - Check decay status of a memory
  • memory_preserve - PRESERVE PROTECT - Preserve important memories from decay
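
The graph traversal tools above can be sketched as a breadth-first search with a depth limit and an optional relationship-type filter. Edge shape, function name, and relationship strings here are assumptions for illustration, in the spirit of memory_traverse rather than its actual signature.

```typescript
// Illustrative BFS over a memory relationship graph with a depth limit.
interface Edge {
  from: string;
  to: string;
  type: string; // e.g. "references", "contradicts", "supports"
}

function bfsTraverse(
  start: string,
  edges: Edge[],
  maxDepth: number,
  typeFilter?: string
): string[] {
  const visited = new Set<string>([start]);
  let frontier = [start];
  const order: string[] = [start];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const e of edges) {
        if (e.from !== node) continue;
        if (typeFilter && e.type !== typeFilter) continue;
        if (visited.has(e.to)) continue;
        visited.add(e.to);
        next.push(e.to);
        order.push(e.to);
      }
    }
    frontier = next;
  }
  return order;
}
```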

Resources

  • memory://stats - Database statistics
  • memory://types - Available memory types
  • memory://tags - All unique tags
  • memory://relationships - Memory relationships
  • memory://clusters - Memory clusters

Prompts

  • load-context - Load relevant context for a task
  • memory-summary - Generate topic summaries
  • conversation-context - Load conversation history

Architecture

src/
├── server.ts           # MCP server implementation
├── types/              # TypeScript definitions
├── schemas/            # Zod validation schemas
├── services/           # Business logic
├── database/           # Kysely migrations and client
└── config/             # Configuration management

Environment Variables

# Required
MEMORY_DB_URL=postgresql://user:password@localhost:5432/mcp_ai_memory

# Optional - Caching (falls back to in-memory if Redis unavailable)
REDIS_URL=redis://localhost:6379
CACHE_TTL=3600                  # 1 hour default cache
EMBEDDING_CACHE_TTL=86400       # 24 hours for embeddings
SEARCH_CACHE_TTL=3600           # 1 hour for search results
MEMORY_CACHE_TTL=7200           # 2 hours for individual memories

# Optional - Model & Performance
EMBEDDING_MODEL=Xenova/all-mpnet-base-v2
LOG_LEVEL=info
MAX_CONTENT_SIZE=1048576
DEFAULT_SEARCH_LIMIT=10                 # Default 10 for token efficiency
DEFAULT_SIMILARITY_THRESHOLD=0.7

# Optional - Async Processing (requires Redis)
ENABLE_ASYNC_PROCESSING=true    # Enable background job processing
BULL_CONCURRENCY=3              # Worker concurrency
ENABLE_REDIS_CACHE=true          # Enable Redis caching

Caching Architecture

The server implements a two-tier caching strategy:

  1. Redis Cache (if available) - Distributed, persistent caching
  2. In-Memory Cache (fallback) - Local NodeCache for when Redis is unavailable
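
The two-tier lookup can be sketched as: try Redis first, and on a miss or error fall back to a local map. This sketch is synchronous for brevity (real Redis clients are async), and the `CacheBackend` interface and `TieredCache` class are assumptions, not the server's actual types.

```typescript
// Sketch of a two-tier cache: Redis first, local in-memory fallback.
interface CacheBackend {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

class TieredCache {
  private local = new Map<string, string>();

  // redis is null when no Redis instance is configured or reachable
  constructor(private redis: CacheBackend | null) {}

  get(key: string): string | null {
    if (this.redis) {
      try {
        const hit = this.redis.get(key);
        if (hit !== null) return hit;
      } catch {
        // Redis error: fall through to the local in-memory cache
      }
    }
    return this.local.get(key) ?? null;
  }

  set(key: string, value: string): void {
    this.local.set(key, value); // always populate the local tier
    if (this.redis) {
      try {
        this.redis.set(key, value);
      } catch {
        // best effort: the local tier still holds the value
      }
    }
  }
}
```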

Async Job Processing

When Redis is available and ENABLE_ASYNC_PROCESSING=true, the server uses BullMQ for background job processing:

Features

  • Async Embedding Generation: Offloads CPU-intensive embedding generation to background workers
  • Batch Import: Processes large memory imports without blocking the main server
  • Memory Consolidation: Runs clustering and merging operations in the background
  • Automatic Retries: Failed jobs are retried with exponential backoff
  • Dead Letter Queue: Permanently failed jobs are tracked for manual intervention
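
The retry behavior above can be illustrated with the delay schedule exponential backoff produces: each attempt waits twice as long as the previous one, and after the final attempt the job would land in the dead letter queue. The function is a hypothetical sketch, similar in spirit to a BullMQ exponential backoff setting rather than its actual configuration.

```typescript
// Illustrative exponential backoff schedule (delays in milliseconds):
// attempt n waits baseMs * 2^(n-1).
function backoffDelays(baseMs: number, attempts: number): number[] {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}
```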

Running Workers

# Start all workers
bun run workers

# Or start individual workers
bun run worker:embedding   # Embedding generation worker
bun run worker:batch       # Batch import and consolidation worker

# Test async processing
bun run test:async

Queue Monitoring

The memory_stats tool includes queue statistics when async processing is enabled:

  • Active, waiting, completed, and failed job counts
  • Processing rates and performance metrics
  • Worker health status

Cache Invalidation

  • Memory updates and deletes automatically invalidate the affected cache entries

FAQ

What is the AI Memory MCP server?
AI Memory is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for AI Memory?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra· Oct 10, 2024

    AI Memory is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G· Sep 9, 2024

    We evaluated AI Memory against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil· Aug 8, 2024

    Useful MCP listing: AI Memory is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil· Jul 7, 2024

    AI Memory reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane· Jun 6, 2024

    I recommend AI Memory for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep· May 5, 2024

    Strong directory entry: AI Memory surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain· Apr 4, 2024

    AI Memory has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra· Mar 3, 2024

    According to our notes, AI Memory benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware· Feb 2, 2024

    We wired AI Memory into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker· Jan 1, 2024

    AI Memory is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.