
Scorecard announces $3.75M in seed funding to transform AI agent testing, enabling developers to run tens of thousands of tests daily and ship trusted AI 100x faster.
All Blogs

Introducing Scorecard MCP 2.0, built with the new MCP spec

We're excited to announce the launch of the first remote Model Context Protocol (MCP) server for evaluation.

Introducing AgentEval.org: An Open-Source Benchmarking Initiative for AI Agent Evaluation

Simulations are transforming the development and testing of AI systems across industries, far beyond just self-driving cars.

Unlock the full potential of Large Language Models (LLMs) with a comprehensive evaluation framework. Discover the 5 must-have features to ensure reliable performance and cost-effectiveness in your LLM applications.