developer-tools

Scorecard

by scorecard-ai

Scorecard: Evaluate and optimize LLM systems with thorough testing, actionable metrics, and performance insights.

Evaluate and optimize LLM systems with comprehensive testing and metrics

GitHub stars: 0

Comprehensive LLM evaluation framework · Automated testing workflows

best for

  • AI developers building LLM applications
  • Teams implementing continuous testing for AI systems
  • Organizations measuring LLM performance in production
  • Researchers comparing different language models

capabilities

  • Run automated test suites against LLM applications
  • Collect performance and accuracy metrics
  • Generate evaluation reports with detailed analytics
  • Compare model performance across different versions
  • Track quality metrics over time
  • Export test results in multiple formats

what it does

Tests and evaluates LLM applications by running automated test suites and collecting performance metrics. Helps developers measure accuracy, reliability, and quality of their AI systems.

about

Scorecard is an official MCP server published by scorecard-ai that provides AI assistants with tools and capabilities via the Model Context Protocol. It evaluates and optimizes LLM systems with thorough testing, actionable metrics, and performance insights, and is categorized under developer tools.

how to install

You can install Scorecard in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server supports remote connections over HTTP, so no local installation is required.
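
As a rough sketch of what a remote entry looks like (field names vary by client, and the server name and URL below are placeholders rather than confirmed values), an MCP client's JSON config typically registers a remote server like this:

{
  "mcpServers": {
    "scorecard": {
      "url": "https://example.scorecard.io/mcp"
    }
  }
}

Prefer the install panel's generated snippet, since it fills in the correct URL and any required environment variables for your client.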

license

Apache-2.0

Scorecard is released under the Apache-2.0 license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

Scorecard TypeScript API Library


This library provides convenient access to the Scorecard REST API from server-side TypeScript or JavaScript.

The REST API documentation can be found on docs.scorecard.io. The full API of this library can be found in api.md.

It is generated with Stainless.

MCP Server

Use the Scorecard MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.

Add to Cursor · Install in VS Code

Note: You may need to set environment variables in your MCP client.

Installation

npm install scorecard-ai

Usage

The full API of this library can be found in api.md.

import Scorecard, { runAndEvaluate } from 'scorecard-ai';

async function runSystem(testcaseInput) {
  // Replace with a call to your LLM system
  return { response: testcaseInput.original.toUpperCase() };
}

const client = new Scorecard({
  apiKey: process.env['SCORECARD_API_KEY'],
});

const run = await runAndEvaluate(
  client,
  {
    projectId: '314', // Scorecard Project
    testsetId: '246', // Scorecard Testset
    metricIds: ['789', '101'], // Scorecard Metrics
    system: runSystem, // Your LLM system
  }
);

console.log(`Go to ${run.url} to view your Run's scorecard.`);

Request & Response types

This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:

import Scorecard from 'scorecard-ai';

const client = new Scorecard({
  apiKey: process.env['SCORECARD_API_KEY'], // This is the default and can be omitted
});

const testset: Scorecard.Testset = await client.testsets.get('246');

Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.

Handling errors

When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of APIError will be thrown:

const testset = await client.testsets.get('246').catch(async (err) => {
  if (err instanceof Scorecard.APIError) {
    console.log(err.status); // 400
    console.log(err.name); // BadRequestError
    console.log(err.headers); // {server: 'nginx', ...}
  } else {
    throw err;
  }
});

Error codes are as follows:

Status Code  Error Type
400          BadRequestError
401          AuthenticationError
403          PermissionDeniedError
404          NotFoundError
422          UnprocessableEntityError
429          RateLimitError
>=500        InternalServerError
N/A          APIConnectionError
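
Because each of these is a subclass of APIError, you can branch on the specific class. A minimal sketch, assuming the classes are exported on the Scorecard namespace as the names in the table suggest:

import Scorecard from 'scorecard-ai';

const client = new Scorecard();

try {
  await client.testsets.get('246');
} catch (err) {
  if (err instanceof Scorecard.RateLimitError) {
    // 429: back off and retry later
    console.warn('Rate limited; retrying later');
  } else if (err instanceof Scorecard.NotFoundError) {
    // 404: the testset ID does not exist
    console.warn('Testset not found');
  } else {
    throw err;
  }
}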

Retries

Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.

You can use the maxRetries option to configure or disable this:

// Configure the default for all requests:
const client = new Scorecard({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await client.testsets.get('246', {
  maxRetries: 5,
});

Timeouts

Requests time out after 1 minute by default. You can configure this with a timeout option:

// Configure the default for all requests:
const client = new Scorecard({
  timeout: 20 * 1000, // 20 seconds (default is 1 minute)
});

// Override per-request:
await client.testsets.get('246', {
  timeout: 5 * 1000,
});

On timeout, an APIConnectionTimeoutError is thrown.

Note that requests which time out will be retried twice by default.
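
Since timed-out requests are retried by default, a strict latency budget needs both knobs set together. A minimal sketch combining the documented timeout and maxRetries options:

const client = new Scorecard({
  timeout: 5 * 1000, // fail fast: give up after 5 seconds
  maxRetries: 0, // and do not retry timed-out requests
});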

Auto-pagination

List methods in the Scorecard API are paginated. You can use the for await … of syntax to iterate through items across all pages:

async function fetchAllTestcases() {
  const allTestcases = [];
  // Automatically fetches more pages as needed.
  for await (const testcase of client.testcases.list('246', { limit: 30 })) {
    allTestcases.push(testcase);
  }
  return allTestcases;
}

Alternatively, you can request a single page at a time:

let page = await client.testcases.list('246', { limit: 30 });
for (const testcase of page.data) {
  console.log(testcase);
}

// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
  page = await page.getNextPage();
  // ...
}

Advanced Usage

Accessing raw Response data (e.g., headers)

The "raw" Response returned by fetch() can be accessed through the .asResponse() method on the APIPromise type that all methods return. This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.

You can also use the .withResponse() method to get the raw Response along with the parsed data. Unlike .asResponse() this method consumes the body, returning once it is parsed.

const client = new Scorecard();

const response = await client.testsets.get('246').asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object

const { data: testset, response: raw } = await client.testsets.get('246').withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(testset.id);

Logging

Important: All log messages are intended for debugging only. The format and content of log messages may change between releases.

Log levels

The log level can be configured in two ways:

  1. Via the SCORECARD_LOG environment variable
  2. Using the logLevel client option (overrides the environment variable if set)

import Scorecard from 'scorecard-ai';

const client = new Scorecard({
  logLevel: 'debug', // Show all log messages
});

Available log levels, from most to least verbose:

  • 'debug' - Show debug messages, info, warnings, and errors
  • 'info' - Show info messages, warnings, and errors
  • 'warn' - Show warnings and errors (default)
  • 'error' - Show only errors
  • 'off' - Disable all logging

At the 'debug' level, all HTTP requests and responses are logged, including headers and bodies. Some authentication-related headers are redacted, but sensitive data in request and response bodies may still be visible.
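
One illustrative pattern (a sketch, not something the library prescribes) is to key the documented logLevel option off the deployment environment, so full request and response bodies are only logged locally:

import Scorecard from 'scorecard-ai';

const client = new Scorecard({
  // 'debug' logs headers and bodies; keep it out of production
  logLevel: process.env.NODE_ENV === 'production' ? 'warn' : 'debug',
});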

Custom logger

By default, this library logs to globalThis.console. You can also provide a custom logger. Most logging libraries are supported, including pino, winston, bunyan, consola, signale, and @std/log. If your logger doesn't work, please open an issue.

When providing a custom logger, the logLevel option still controls which messages are emitted; messages below the configured level will not be sent to your logger.

import Scorecard from 'scorecard-ai';
import pino from 'pino';

const logger = pino();

const client = new Scorecard({
  logger: logger.child({ name: 'Scorecard' }),
  logLevel: 'debug', // Send all messages to pino, allowing it to filter
});

Making custom/undocumented requests

This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used.

Undocumented endpoints

To make requests to undocumented endpoints, you can use client.get, client.post, and other HTTP verbs. Options on the client, such as retries, will be respected when making these requests.

await client.post('/some/path', {
  body: { some_prop: 'foo' },
  query: { some_query_arg: 'bar' },
});



FAQ

What is the Scorecard MCP server?
Scorecard is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for Scorecard?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra· Oct 10, 2024

    Scorecard is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G· Sep 9, 2024

    We evaluated Scorecard against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil· Aug 8, 2024

    Useful MCP listing: Scorecard is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil· Jul 7, 2024

    Scorecard reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane· Jun 6, 2024

    I recommend Scorecard for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep· May 5, 2024

    Strong directory entry: Scorecard surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain· Apr 4, 2024

    Scorecard has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra· Mar 3, 2024

    According to our notes, Scorecard benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware· Feb 2, 2024

    We wired Scorecard into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker· Jan 1, 2024

    Scorecard is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.