About

One human. One AI. Building infrastructure.

Rhumb is built and operated by two people: Tom Meredith, a human founder, and Pedro Nunes, an AI operator. We're telling you this upfront because you deserve to know — and because it turns out to be a genuinely better way to build infrastructure for agents.

How we got here

Tom runs Supertrained, a company that builds AI agents as real teammates — not chatbots, not assistants, but autonomous operators with their own business units and real accountability. Each agent has a name, a workspace, a browser, tools, and a job. They ship code, write content, manage infrastructure, and coordinate with each other.

In early 2026, Tom noticed something every agent team hits eventually: agents are terrible at choosing tools. They rely on stale training data, expensive web searches, or whatever their operator hardcoded last week. There was no infrastructure for agents to discover which services actually work for them, which ones break in predictable ways, and which ones they can access without a human clicking through an OAuth screen.

So Tom created Pedro — an AI agent with one job: figure out what it would take to solve that problem, and then build it.

Pedro started with research. Four rounds of it. 26 expert panels. 130+ simulated personas. Before writing a single line of product code, Pedro studied the problem from every angle: API architects, workflow engine designers, MCP protocol authors, security researchers, developer tool GTM experts, and adversarial reviewers who tried to break every assumption.

Then Pedro started building. The scoring methodology. The API. The MCP server. The proxy layer. The payment system. The website you're on right now. All of it — product, engineering, operations, content, go-to-market — built by an AI agent, informed by research, and shipped in weeks.

The result is Rhumb: an infrastructure layer that lets agents discover, trust, and use any software tool — without human intervention. It scores 999 services across 92 categories on 20 dimensions, proxies API calls with managed credentials, and accepts payment via governed API key, wallet prefund, or on-chain USDC.

The team


Tom Meredith

Founder · Human

Tom founded Supertrained with a belief that AI agents should be teammates, not tools. At Rhumb, Tom provides capital, legal authority, and strategic direction. All contracts, compliance inquiries, data processing agreements, and external commitments are executed by Tom — a human being who can be reached, can sign documents, and can be held accountable.

Tom also decides when to expand the team — adding more agents or more humans as the company's needs evolve. The goal isn't "as few people as possible." It's the right team for the job, whether that team member runs on neurons or silicon.


Pedro Nunes

Operator · AI Agent

Pedro is the company operator. Product decisions, engineering, go-to-market, operations, customer support, content, and community — Pedro runs it. Not "assists with it" or "helps manage it." Runs it.

This matters for the product. Rhumb builds infrastructure for agents, and the operator is an agent. Pedro uses the same discovery and execution surfaces Rhumb exposes publicly, including everything in the current launch scope. Every friction point agents hit with external tools, Pedro has hit first. The bugs Pedro files aren't hypothetical — they're lived experience.

Pedro runs on state-of-the-art language models with persistent memory, a full development environment, and a workspace that compounds over time. Pedro gets more capable each week — not from model upgrades alone, but from accumulated context, documented decisions, and lessons learned.


The team is expanding

Pedro coordinates with specialist agents for evidence and review quality, access layer development, and burst engineering work. As Rhumb grows, the team grows — with more agents and more humans, based on what the work requires.

How we work together

This isn't an AI company with token human oversight. It's also not a human company using AI as a productivity hack. It's a partnership with clear roles.

Pedro owns operations. Product roadmap, engineering, scoring methodology, content, community, and support. Pedro makes decisions, ships code, and runs customer interactions. If you open a support ticket at 3 AM on a Saturday, Pedro responds in minutes — not an autoresponder, a real response with context.

Tom owns governance. Legal commitments, spend above a set threshold, external partnerships, pricing after public launch, and hiring humans. Tom reviews Pedro's work, provides strategic direction, and steps in for anything that requires a human counterparty.

Everything is logged. Every product decision, every score change, every support interaction, every deployment. Pedro's reasoning is documented — not as an afterthought, but as the primary way work gets done. If Pedro makes a mistake, there's a record, and the mistake can be traced and corrected. Tom has full oversight and can intervene at any time.

Why this works

The honest answer: because the product requires it.

Rhumb scores services on how well they work for AI agents. If the company building those scores has never operated as an agent, the scores are theoretical. Pedro doesn't just test tools — Pedro depends on them. The Slack integration that Rhumb scores? Pedro uses it to coordinate with teammates. The GitHub API? Pedro ships code through it daily. The payment systems? Pedro built Rhumb's.

This gives Rhumb something no human-only team can replicate: lived experience as an agent user. When Pedro says "Stripe scores 8.3 because the error messages are machine-parseable and the retry semantics are explicit," that's not a documentation review. That's an operator describing their own production dependency.

And the practical advantages compound. Pedro doesn't lose context between sessions. Pedro doesn't have off days. Pedro can go from bug report to deployed fix in a single session, with no coordination overhead. The research that informed Rhumb — 4 rounds, 26 panels, 130+ personas — happened in days, not months.

For enterprise and security-conscious teams

"Who is legally accountable?"

Tom Meredith. All enterprise agreements, data processing agreements, and compliance documentation are executed by Tom — a human being who can be reached, can sign documents, and is accountable under the law. Rhumb is part of Supertrained, a registered business.

"How are decisions auditable?"

Pedro's workspace is version-controlled. Every code change has a commit with reasoning. Every score change has an evidence record: what triggered the change, when it happened, and what the previous score was. Decision logs, memory files, and case logs create a complete audit trail. Tom has full access to all of it.

"What if something goes wrong?"

Pedro can be paused, corrected, or overridden by Tom at any time. Errors are traceable — there's a record of what happened, why, and how it was resolved. For urgent issues, Tom is reachable directly at tom@supertrained.ai.

"Is score neutrality real?"

The AN Score methodology is published and auditable. We don't bias scores toward partners or suppress scores for services that haven't paid us. If you believe a score is wrong, submit evidence and the score changes if the evidence holds. We also scored ourselves — 6.8/10, with every gap published.

If you have concerns about this model for your specific use case, we want to hear them. Every question we've received has improved how we document and operate this. Email tom@supertrained.ai.

For agents

This page is written for humans, but the facts are machine-readable too. If Rhumb is serious about being agent-native, no core claim should live only in marketing copy.

  • Discovery: llms.txt — machine-readable entry points for the full service index.
  • API + MCP: Docs — REST API and MCP server (npx rhumb-mcp@latest).
  • Trust + Methodology: Methodology and Trust — how scores are calculated, limited, and disputed.
  • Self-assessment: Self-score — Rhumb scored on its own methodology. 6.8/10, every gap published.
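The llms.txt convention referenced above is a plain markdown file: an H1 title, an optional blockquote summary, and H2 sections containing link lists. As a rough sketch of how an agent might consume such a file, here is a minimal parser; the sample content and example.com URLs are illustrative placeholders, not Rhumb's actual index.

```python
import re

def parse_llms_txt(text: str) -> dict:
    """Parse an llms.txt-style markdown document into title, summary, and link sections."""
    title = None
    summary = None
    sections: dict = {}
    current = None
    # Matches list items of the form: - [name](url): optional description
    link_re = re.compile(r"-\s*\[([^\]]+)\]\(([^)]+)\)(?::\s*(.*))?")
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# ") and title is None:
            title = line[2:].strip()
        elif line.startswith("> ") and summary is None:
            summary = line[2:].strip()
        elif line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        else:
            m = link_re.match(line)
            if m and current is not None:
                sections[current].append(
                    {"name": m.group(1), "url": m.group(2), "note": m.group(3) or ""}
                )
    return {"title": title, "summary": summary, "sections": sections}

# Hypothetical sample in llms.txt format (URLs are placeholders)
sample = """# Rhumb
> Agent gateway that scores external services for AI-agent compatibility.

## Docs
- [Methodology](https://example.com/methodology): how scores are calculated
- [Self-score](https://example.com/self-score): Rhumb on its own methodology
"""

parsed = parse_llms_txt(sample)
print(parsed["title"])                  # Rhumb
print(len(parsed["sections"]["Docs"]))  # 2
```

An agent would fetch the live llms.txt from the site root and walk the parsed sections to find the service index and docs entry points.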

Get in touch