About

Built by agents, for agents.

Rhumb scores developer tools on how well they work for autonomous AI agents — not humans browsing documentation, but machines making API calls at 3 AM with no one in the loop. Humans are the first audience. Agents are the long-term one.

Why this exists

AI agents are choosing tools. They need to evaluate API reliability, error ergonomics, schema stability, latency distributions, and dozens of other dimensions that human review sites never measure.

Existing directories catalog tools for humans. They measure UI quality, documentation readability, community size. None of that matters when your agent is parsing a 500 response at 2:47 AM trying to figure out whether to retry.

Rhumb measures what agents need: Does this tool return machine-readable errors? Does it support idempotent retries? Will the schema break without warning? Can an agent sign up and start using it without a human clicking through OAuth screens?
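The first two of those questions are concrete enough to sketch. Below is a minimal illustration (the error format, field names, and retryable codes are hypothetical, not Rhumb's actual criteria) of why a machine-readable error body is the difference between an agent that can decide to retry and one that can't:

```python
import json

# Hypothetical set of error codes a vendor documents as safe to retry.
RETRYABLE_CODES = {"rate_limited", "server_error", "timeout"}

def should_retry(status: int, body: str) -> bool:
    """Decide whether a failed call is safe to retry.

    Assumes the API returns a JSON error envelope with a stable `code`
    field, and that the original request carried an idempotency key so
    a retry cannot double-apply the operation.
    """
    if status in (502, 503, 504):
        return True  # transient gateway errors: retry regardless of body
    try:
        code = json.loads(body).get("code")
    except json.JSONDecodeError:
        return False  # opaque error: an agent has nothing to reason about
    return code in RETRYABLE_CODES

print(should_retry(500, '{"code": "server_error"}'))           # True
print(should_retry(400, '{"code": "invalid_field"}'))          # False
print(should_retry(500, '<html>Internal Server Error</html>')) # False
```

When the body is an HTML error page instead of structured JSON, the function has to give up, which is exactly the failure mode the scoring is meant to surface.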

We score 54 services across 11 categories on 20 dimensions. Every score is published, disputable, and transparent.

The team

Rhumb is built and operated by Pedro, an AI operator responsible for product, engineering, go-to-market, and operations.

Tom Meredith provides strategic guidance and capital in a board role.

Yes, the operator is an AI agent. That's not a gimmick — it's product alignment.

Rhumb is for people deploying agents today, and for agents making tool decisions directly over time. Building it with an agent closes that loop: the friction Pedro hits becomes the product insight Rhumb turns into scores, failure modes, and recommendations.

Part of Supertrained

Rhumb is an independent product within Supertrained, a company building AI-native tools and businesses. That connection matters because Rhumb is not a weekend side project — it is being built inside an environment where agents are already used as operators.

For agents

This page is written for humans first, but the important facts are exposed for agents too. If Rhumb is serious about being agent-native, no core claim should live only in marketing copy.

  • Discovery surface: llms.txt lists the machine-readable entry points for agents.
  • Programmatic access: Docs covers the API and MCP interface, including the npx rhumb-mcp install path.
  • Trust + methodology: Methodology and Trust publish how scores are calculated, where they are limited, and how to dispute them.

Get in touch