ai-ml, developer-tools

In Memoria

by pi22by7

In Memoria delivers persistent code analysis via a hybrid Rust-TypeScript architecture, using Tree-sitter for abstract syntax tree (AST) parsing.

Provides persistent intelligence infrastructure for codebase analysis through a hybrid Rust-TypeScript architecture. It combines Tree-sitter AST parsing with semantic concept extraction, developer pattern recognition, and SQLite-based persistence to build contextual understanding of codebases over time, learning from developer behavior and architectural decisions.

github stars

156

  • Persistent memory across AI sessions
  • Hybrid Rust-TypeScript with Tree-sitter parsing
  • SQLite-based intelligence storage

best for

  • Developers tired of re-explaining architecture to AI assistants
  • Teams wanting consistent coding patterns across features
  • Large codebases where finding relevant files is challenging

capabilities

  • Analyze codebase structure and extract semantic patterns
  • Search code by text matching or regex patterns
  • Predict which files to modify for specific tasks
  • Generate coding recommendations based on existing patterns
  • Track developer conventions and architectural decisions
  • Store and retrieve discovered insights across sessions

what it does

Builds persistent intelligence about your codebase by analyzing code patterns, architecture, and developer conventions, then provides this context to AI assistants across sessions so they remember your project's specifics.

about

In Memoria is a community-built MCP server published by pi22by7 that provides AI assistants with tools and capabilities via the Model Context Protocol. In Memoria delivers persistent code analysis via a hybrid Rust-TypeScript architecture, using Tree-sitter for abstract syntax tree (AST) parsing. It is categorized under AI/ML and developer tools. This server exposes 13 tools that AI clients can invoke during conversations and coding sessions.

how to install

You can install In Memoria in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

MIT

In Memoria is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

In Memoria


Giving AI coding assistants a memory that actually persists.

Quick Demo


Watch In Memoria in action: learning a codebase, providing instant context, and routing features to files.


The Problem: Session Amnesia

You know the drill. You fire up Claude, Copilot, or Cursor to help with your codebase. You explain your architecture. You describe your patterns. You outline your conventions. The AI gets it, helps you out, and everything's great.

Then you close the window.

Next session? Complete amnesia. You're explaining the same architectural decisions again. The same naming conventions. The same "no, we don't use classes here, we use functional composition" for the fifteenth time.

Every AI coding session starts from scratch.

This isn't just annoying, it's inefficient. These tools re-analyze your codebase on every interaction, burning tokens and time. They give generic suggestions that don't match your style. They have no memory of what worked last time, what you rejected, or why.

The Solution: Persistent Intelligence

In Memoria is an MCP server that learns from your actual codebase and remembers across sessions. It builds persistent intelligence about your code (patterns, architecture, conventions, decisions) that AI assistants can query through the Model Context Protocol.

Think of it as giving your AI pair programmer a notepad that doesn't get wiped clean every time you restart the session.

Current version: 0.6.0 - See what's changed

What It Does

  • Learns your patterns - Analyzes your code to understand naming conventions, architectural choices, and structural preferences
  • Instant project context - Provides tech stack, entry points, and architecture in <200 tokens (no re-analysis needed)
  • Smart file routing - Routes vague requests like "add password reset" directly to relevant files
  • Semantic search - Finds code by meaning, not just keywords
  • Work memory - Tracks current tasks and architectural decisions across sessions
  • Pattern prediction - Suggests how you'd solve similar problems based on your history

Example Workflow

# First time: Learn your codebase
npx in-memoria learn ./my-project

# Start the MCP server
npx in-memoria server

# Now in Claude/Copilot:
You: "Add password reset functionality"
AI: *queries In Memoria*
    "Based on your auth patterns in src/auth/login.ts, I'll use your
     established JWT middleware pattern and follow your Result<T>
     error handling convention..."

# Next session (days later):
You: "Where did we put the password reset code?"
AI: *queries In Memoria*
    "In src/auth/password-reset.ts, following the pattern we
     established in our last session..."

No re-explaining. No generic suggestions. Just continuous, context-aware assistance.

Quick Start

Installation

# Install globally
npm install -g in-memoria

# Or use directly with npx
npx in-memoria --help

Connect to Your AI Tool

Claude Desktop - Add to your config (~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "in-memoria": {
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}

Claude Code CLI:

claude mcp add in-memoria -- npx in-memoria server

GitHub Copilot - See Copilot Integration section below
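
VS Code - MCP-enabled builds of VS Code read server definitions from a workspace `.vscode/mcp.json` file. A minimal sketch (the `servers`/`type` keys follow VS Code's MCP config format; verify the exact schema against your VS Code version's documentation):

```json
{
  "servers": {
    "in-memoria": {
      "type": "stdio",
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}
```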

Learn Your Codebase

# Analyze and learn from your project
npx in-memoria learn ./my-project

# Or let AI agents trigger learning automatically
# (Just start the server and let auto_learn_if_needed handle it)
npx in-memoria server

How It Works

In Memoria is built on Rust + TypeScript, using the Model Context Protocol to connect AI tools to persistent codebase intelligence.

Architecture

┌─────────────────────┐    MCP     ┌──────────────────────┐    napi-rs    ┌─────────────────────┐
│  AI Tool (Claude)   │◄──────────►│  TypeScript Server   │◄─────────────►│   Rust Core         │
└─────────────────────┘            └──────────┬───────────┘               │  • AST Parser       │
                                              │                           │  • Pattern Learner  │
                                              │                           │  • Semantic Engine  │
                                              ▼                           │  • Blueprint System │
                                   ┌──────────────────────┐               └─────────────────────┘
                                   │ SQLite (persistent)  │
                                   │ SurrealDB (in-mem)   │
                                   └──────────────────────┘

The Core Components

Rust Layer - Fast, native processing:

  • Tree-sitter AST parsing for 12 languages (TypeScript, JavaScript, Python, PHP, Rust, Go, Java, C/C++, C#, Svelte, SQL)
  • Blueprint analyzer (detects project structure, entry points, architecture patterns)
  • Pattern learner (statistical analysis of your coding style)
  • Semantic engine (understands code relationships and concepts)

TypeScript Layer - MCP server and orchestration:

  • 13 specialized tools for AI assistants (organized into 4 categories)
  • SQLite for structured data, SurrealDB with SurrealKV for persistent vector embeddings
  • File watching for incremental updates
  • Smart routing that maps features to files

Storage - Local-first:

  • Everything stays on your machine
  • SQLite for patterns and metadata
  • SurrealDB with SurrealKV backend for persistent vector embeddings
  • Local transformers.js for embeddings (Xenova/all-MiniLM-L6-v2)

What Makes It Different

This isn't just another RAG system or static rules engine:

  • Learns from actual code - Not manually-defined rules, but statistical patterns from your real codebase
  • Predicts your approach - Based on how you've solved similar problems before
  • Token efficient - Responses optimized to minimize LLM context usage (<200 tokens for project context)
  • Routes to files - "Add login" → automatically suggests src/auth/login.ts
  • Remembers context - Tracks work sessions, tasks, and architectural decisions
  • Multi-mode search - Semantic (meaning), text (keywords), or pattern-based

What's New in v0.5.x

We recently completed Phases 1-4 of the implementation roadmap:

🗺️ Project Blueprints (Phase 1)

Instant project context without full learning. Ask about a codebase and get tech stack, entry points, key directories, and architecture all in under 200 tokens.

💼 Work Context System (Phase 2)

AI agents can now track work sessions, maintain task lists, and record architectural decisions. Resume work exactly where you left off.

🧭 Smart File Routing (Phase 3)

Feature-to-file mapping across 10 categories (auth, API, database, UI, etc.). Vague requests like "add password reset" get routed to specific files automatically.

⚡ Smooth Progress Tracking (v0.5.3)

No more janky console spam. Progress bars update in-place with consistent 500ms refresh rates.

MCP Tools for AI Assistants

In Memoria provides 13 specialized tools that AI assistants can call via MCP. They're organized into 4 categories (down from 16 after Phase 4 consolidation merged redundant tools):

🎯 Core Analysis (2 tools)

  • analyze_codebase - Analyze files/directories with concepts, patterns, complexity (Phase 4: now handles both files and directories)
  • search_codebase - Multi-mode search (semantic/text/pattern)

🧠 Intelligence (7 tools)

  • learn_codebase_intelligence - Deep learning to extract patterns and architecture
  • get_project_blueprint - Instant project context with tech stack and entry points ⭐ (Phase 4: includes learning status)
  • get_semantic_insights - Query learned concepts and relationships
  • get_pattern_recommendations - Get patterns with related files for consistency
  • predict_coding_approach - Implementation guidance with file routing ⭐
  • get_developer_profile - Access coding style and work context
  • contribute_insights - Record architectural decisions

🤖 Automation (1 tool)

  • auto_learn_if_needed - Smart auto-learning with staleness detection ⭐ (Phase 4: includes quick setup functionality)

📊 Monitoring (3 tools)

  • get_system_status - Health check
  • get_intelligence_metrics - Analytics on learned patterns
  • get_performance_status - Performance diagnostics

Phase 4 Consolidation: Three tools were merged into existing tools for a better agent experience (AX):

  • get_file_content → merged into analyze_codebase
  • get_learning_status → merged into get_project_blueprint
  • quick_setup → merged into auto_learn_if_needed

For AI agents: See AGENT.md for complete tool reference with usage patterns and decision trees.
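
Over the wire, each of these tools is invoked with a standard MCP `tools/call` JSON-RPC request over stdio. A minimal sketch of such a request (the `path` argument shown for `get_project_blueprint` is hypothetical; consult AGENT.md for the actual parameter names):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_project_blueprint",
    "arguments": { "path": "./my-project" }
  }
}
```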

GitHub Copilot Integration

In Memoria works with GitHub Copilot through custom instructions and chat modes.

Setup

This repository includes:

  • .github/copilot-instructions.md - Automatic guidance for Copilot
  • .github/chatmodes/ - Three specialized chat modes:
    • 🔍 inmemoria-explorer - Intelligent codebase navigation
    • 🚀 inmemoria-feature - Feature implementation with patterns
    • 🔎 inmemoria-review - Code review with consistency checking

GitHub Copilot Integration (VS Code)

In Memoria integrates with **GitH