ai-ml, developer-tools

Think

by marcopesani

Think offers a dedicated thinking space for step-by-step analysis, memory retention, and encouraging feedback on complex reasoning tasks.

Provides a dedicated thinking space for complex reasoning with encouraging feedback, enabling step-by-step analysis and memory retention without external actions.

github stars

24

  • Based on Anthropic's research
  • No external actions or data changes
  • Improves consistency in decision making

best for

  • Complex reasoning tasks requiring multiple steps
  • Policy-heavy environments with detailed guidelines
  • Multi-step tool usage workflows
  • Problems requiring careful analysis of previous outputs

capabilities

  • Record intermediate thoughts during problem solving
  • Maintain reasoning history across conversation steps
  • Analyze tool outputs before proceeding
  • Cache important information for later reference
  • Support sequential decision-making processes

what it does

Provides Claude AI with a dedicated thinking space to perform step-by-step reasoning and retain memory for complex multi-step problems without taking external actions.

about

Think is a community-built MCP server published by marcopesani that provides AI assistants with tools and capabilities via the Model Context Protocol. Think offers a dedicated thinking space for step-by-step analysis, memory retention, and encouraging feedback on complex reasoning tasks. It is categorized under ai-ml and developer-tools.

how to install

You can install Think in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

MIT

Think is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

think-mcp-server

A minimal MCP server based on Anthropic's "think" tool research

Overview

This project implements a minimal Model Context Protocol (MCP) server that provides Claude AI models with the "think" tool capability. Based on Anthropic's research published on March 20, 2025, this implementation enables Claude to perform better on complex reasoning tasks involving multi-step tool usage.

What is the "think" tool?

The "think" tool gives Claude the ability to include an additional thinking step—with its own designated space—as part of reaching a final answer. Unlike extended thinking (which happens before response generation), the "think" tool allows Claude to pause during response generation to consider whether it has all necessary information to proceed.

Key benefits:

  • Improves complex problem-solving performance
  • Enhances policy adherence in tool usage
  • Increases consistency in decision making
  • Helps with multi-step problems requiring careful reasoning

Implementation

This server implements the "think" tool with the following specification:

{
  "name": "think",
  "description": "Use the tool to think about something. It will not obtain new information or change the database, but just append the thought to the log. Use it when complex reasoning or some cache memory is needed.",
  "input_schema": {
    "type": "object",
    "properties": {
      "thought": {
        "type": "string",
        "description": "A thought to think about."
      }
    },
    "required": ["thought"]
  }
}
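The schema above maps naturally onto a very small handler: validate the input, append the thought to a log, and return a confirmation without touching any external state. Here is a minimal sketch in TypeScript, assuming an in-memory log; the names `thoughtLog` and `handleThink` are illustrative and are not the server's actual identifiers:

```typescript
// Illustrative sketch of a "think" handler, assuming an in-memory log.
// The identifiers below are hypothetical, not think-mcp-server's own.
interface ThinkInput {
  thought: string;
}

interface ToolResult {
  content: { type: "text"; text: string }[];
}

const thoughtLog: string[] = [];

function handleThink(input: ThinkInput): ToolResult {
  // Append the thought; nothing external is read or modified.
  thoughtLog.push(input.thought);
  return {
    content: [{ type: "text", text: `Thought recorded (#${thoughtLog.length}).` }],
  };
}
```

The key property, per the spec's description, is that the handler is a pure append: it obtains no new information and changes no database, so the tool's only effect is giving the model a place to reason.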

When to Use the "think" Tool

Based on Anthropic's research, this tool is most beneficial for:

  1. Tool Output Analysis: When Claude needs to process previous tool call outputs before acting
  2. Policy-Heavy Environments: When Claude must follow detailed guidelines
  3. Sequential Decision Making: When each action builds on previous ones and mistakes are costly

Implementation Best Practices

Strategic Prompting

For best results, include clear instructions in your prompts on when and how to use the "think" tool. Consider providing domain-specific examples that show:

  • Expected detail level in reasoning
  • How to break down complex instructions into steps
  • Decision trees for common scenarios
  • Information verification processes

System Prompt Integration

Complex guidance works best when placed in the system prompt rather than the tool description itself.

How It Works

The server operates using the Model Context Protocol (MCP) to communicate with Claude and similar AI assistants. It:

  • Runs as a standalone process using stdio for communication
  • Registers the "think" tool for Claude to use during reasoning
  • Returns structured responses that can be processed by AI assistants
  • Logs thinking steps without affecting the external environment
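On the wire, MCP tool invocations are JSON-RPC messages over stdio. A hedged sketch of what a `tools/call` round trip for the "think" tool might look like (the `id` and the response text are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "think",
    "arguments": { "thought": "Check the fare rules before rebooking." }
  }
}
```

with a structured response along the lines of:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [{ "type": "text", "text": "Thought recorded." }]
  }
}
```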

Features

Tools

  • think - Enables Claude to think about a problem or analyze information
    • Required: thought (string containing Claude's thinking process)

Development

Install dependencies:

npm install

Build the server:

npm run build

For development with auto-rebuild:

npm run watch

Debugging

Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector:

npm run inspector

The Inspector will provide a URL to access debugging tools in your browser.

Installation

Installing via Smithery

To install think-mcp-server for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @marcopesani/think-mcp-server --client claude

Installing via npm

npm install -g think-mcp-server

Claude Desktop

Add the server config at:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "think": {
      "command": "npx",
      "args": ["-y", "think-mcp-server"]
    }
  }
}

Cline

  1. Open the Cline extension settings
  2. Open "MCP Servers" tab
  3. Click on "Configure MCP Servers"
  4. Add the server config:
{
  "mcpServers": {
    "github.com/marcopesani/think-mcp-server": {
      "command": "npx",
      "args": ["-y", "think-mcp-server"],
      "disabled": false,
      "autoApprove": ["think"]
    }
  }
}

Additional Cline configuration options:

  • disabled: Set to false to enable the server
  • autoApprove: List of tools that don't require explicit approval for each use

Cursor

  1. Open the Cursor settings
  2. Open "Features" settings
  3. In the "MCP Servers" section, click on "Add new MCP Server"
  4. Choose a name, and select "command" as "Type"
  5. In the "Command" field, enter the following:
npx -y think-mcp-server

Docker

You can also run the server using Docker. First, build the image:

docker build -t think-mcp-server .

Then run the container:

docker run -it think-mcp-server

For development, you might want to mount your source code as a volume:

docker run -v $(pwd):/app think-mcp-server

Getting Started

  1. Install the server using one of the methods above
  2. Configure your AI client to use the think-mcp-server
  3. In your prompts to Claude, include instructions for using the "think" tool
  4. For best results, add examples of effective thinking patterns in your system prompt

TypeScript Development Example

Here's an example prompt focused on TypeScript development to help Claude leverage the "think" tool effectively:

## Using the think tool for TypeScript Development

Before implementing any TypeScript code or responding to technical questions, use the think tool to:
- Analyze type definitions and interfaces that might be relevant
- Check for potential type errors or edge cases
- Reason through generics and type constraints
- Verify the correct usage of TypeScript features

Here's an example of TypeScript-specific reasoning with the think tool:

<think_tool_typescript_example>
User wants to implement a generic state management hook in React TypeScript
- Analysis of requirements:
  * Need to handle any data type (generic typing)
  * Must preserve type information throughout state lifecycle
  * Should provide typesafe update methods
  * Need to consider immutability patterns

- Type constraints to consider:
  * Should T extend object, or allow primitives?
  * Is Partial<T> appropriate for updates?
  * Should we use Record<string, unknown> as fallback?

- Implementation considerations:
  * useState<T> as foundation
  * Add typed setter methods that preserve generics
  * Consider callback pattern for atomic updates
  * Check for compatibility with React's rules of hooks

- Approach:
1. Define generic interface for the hook's return value
2. Implement useState with proper generic typing
3. Create typed update methods that maintain type safety
4. Add JSDoc comments for better developer experience
5. Return immutable state with properly typed methods
</think_tool_typescript_example>

When helping with TypeScript challenges:
1. First analyze the type system implications
2. Consider TypeScript-specific edge cases
3. Reason through type narrowing and widening
4. Check for compatibility with the TypeScript compiler version
5. Consider type inference and explicit type annotations

Performance Benefits

Anthropic's evaluations showed significant improvements when using the "think" tool:

  • 54% relative improvement on τ-Bench airline domain (pass^1 metric: 0.570 vs 0.370 baseline)
  • Improved performance on SWE-bench by 1.6% on average
  • Enhanced consistency across multiple trials

References

This implementation is based on Anthropic's research article "The 'think' tool: Enabling Claude to stop and think in complex tool use situations" published March 20, 2025.

FAQ

What is the Think MCP server?
Think is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for Think?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra· Oct 10, 2024

    Think is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G· Sep 9, 2024

    We evaluated Think against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil· Aug 8, 2024

    Useful MCP listing: Think is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil· Jul 7, 2024

    Think reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane· Jun 6, 2024

    I recommend Think for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep· May 5, 2024

    Strong directory entry: Think surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain· Apr 4, 2024

    Think has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra· Mar 3, 2024

    According to our notes, Think benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware· Feb 2, 2024

    We wired Think into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker· Jan 1, 2024

    Think is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.