## The test
We pointed a standard HTTP client at agents.ramp.com/cards and ran readability extraction, the same way every agent framework (LangChain, CrewAI, AutoGPT, OpenAI’s browsing) fetches web content. Here’s what came back:
```
# Extracted content:
Agent Cards
Interest
Help
```
Three words. That’s the entire page from an agent’s perspective. The product descriptions, the card details, the value proposition — none of it exists in the server-rendered HTML. It’s all JavaScript that executes client-side, behind Cloudflare Rocket Loader.
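A minimal sketch of what that extraction step does, using only the Python standard library. Real agent frameworks use heavier readability extractors, but the principle is identical: parse the server HTML, keep visible text, drop scripts. The sample `shell` HTML below is an illustrative stand-in for an SPA shell, not Ramp's actual markup.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script>, <style>, and <noscript>."""

    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.depth = 0        # how many skip-tags we are currently inside
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())


def extract_words(html: str) -> list[str]:
    """Return the visible words in a server-rendered HTML document."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks).split()


# A typical SPA shell: a large script payload, almost no server-rendered text.
shell = """<html><head><title>Agent Cards</title></head>
<body><div id="root"></div>
<script>/* kilobytes of framework payload */</script>
<nav>Agent Cards Interest Help</nav></body></html>"""

print(extract_words(shell))  # only the handful of words that exist server-side
```

Run this against your own server HTML and count the result: if the list is a few navigation labels, that is your entire page from an agent's perspective.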
## The full scorecard
We checked every surface an agent uses to understand a page:
| Surface | Result |
|---|---|
| Content extraction | 3 words |
| robots.txt | 404 |
| llms.txt | 404 |
| sitemap.xml | 404 |
| JSON-LD | None |
| Meta description | “Cards built for agents” (4 words) |
| OG image | Present (only thing that works) |
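The scorecard above can be automated. Here is a sketch of a surface check with the fetcher injected as a callable, so it stays testable offline; swap in `urllib.request` or `requests` for real use. The path list mirrors the table; it is our convention, not a standard.

```python
from typing import Callable

# The well-known files an agent probes when it lands on a domain.
AGENT_SURFACES = ["/robots.txt", "/llms.txt", "/sitemap.xml"]


def score_surfaces(fetch: Callable[[str], int]) -> dict[str, str]:
    """fetch(path) -> HTTP status code. Returns present/missing per surface."""
    return {
        path: "present" if fetch(path) == 200 else "missing"
        for path in AGENT_SURFACES
    }


# Stub fetcher modelling the scorecard above: every surface 404s.
results = score_surfaces(lambda path: 404)
print(results)
# {'/robots.txt': 'missing', '/llms.txt': 'missing', '/sitemap.xml': 'missing'}
```

Wiring this into CI catches the 404 row of the table before shipping, the same way a broken-link checker catches dead hrefs.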
## Why this happens
The page is a Next.js app deployed on Vercel with Cloudflare in front. The server returns a minimal HTML shell with React Server Component payloads, and all visible content is hydrated client-side via JavaScript. This is a common pattern for fast, interactive web apps.
The problem: agents don’t run JavaScript. When Claude, GPT, Perplexity, or any agent framework fetches a URL, they get the raw HTML. If the raw HTML is empty, the page doesn’t exist to them. It doesn’t matter how beautiful the client-rendered experience is.
The deeper irony: Ramp set `<meta name="robots" content="noindex">` in the HTML. The page is explicitly telling machines not to index it. A page built for agents, telling agents to go away.
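That footgun is easy to detect mechanically. A quick standard-library check for a `noindex` robots meta tag in server HTML:

```python
from html.parser import HTMLParser


class NoindexDetector(HTMLParser):
    """Flags <meta name="robots"> tags whose content includes 'noindex'."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "meta"
                and a.get("name", "").lower() == "robots"
                and "noindex" in a.get("content", "").lower()):
            self.noindex = True


def has_noindex(html: str) -> bool:
    detector = NoindexDetector()
    detector.feed(html)
    return detector.noindex


print(has_noindex('<meta name="robots" content="noindex">'))  # True
```

If you are building an agent-facing page, this check should fail the build, not pass silently.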
## The SPA anti-pattern for agent readability
This isn’t a Ramp problem. It’s an industry pattern. Most modern web apps are single-page applications that render content client-side. That works fine for humans with browsers. It’s catastrophic for agent consumption.
Agents consume content through a simple path: HTTP GET, parse the response, extract structured data. If your server HTML is empty, you don’t exist to them. No amount of beautiful React components changes that.
The symptoms are consistent:
- `web_fetch` returns fewer than 50 words of content
- No `robots.txt`, `llms.txt`, or `sitemap.xml`
- Server HTML contains framework payloads but no readable text
- Browser shows everything after JavaScript executes, but programmatic access sees nothing
## What to do instead
Building agent-readable pages is not hard. It’s mostly about what you server-render. Here’s a comparison of what agents extract from different approaches:
| Surface | Client-only SPA | Server-rendered |
|---|---|---|
| Homepage | 3 words | ~1,100 words |
| Product page | 0 words | ~2,700 words |
| Methodology | N/A | ~4,200 words |
| Structured data | None | JSON-LD, llms.txt, API |
The client-only SPA column is agents.ramp.com/cards; the server-rendered column is rhumb.dev. Both are built with Next.js. The difference is server-rendering strategy.
## The agent readability checklist
If you’re building anything agent-facing, run these checks before shipping:
- **Fetch your own page without JavaScript.** Run `curl -s yourpage.com | grep -v script` and see what's left. If the answer is "nothing," agents can't see you.
- **Add an llms.txt.** This is the emerging standard for telling language models what your site is and what it offers. Think of it as robots.txt but for LLMs.
- **Server-render your content.** Next.js supports this natively with Server Components. Use `generateStaticParams` or server-side rendering for any page agents need to read.
- **Add JSON-LD structured data.** Schema.org markup helps agents understand what they're looking at: is this a product? A review? An organization?
- **Ship a sitemap.xml.** Agents use sitemaps to discover pages they might not find through navigation.
- **Include extractable text alongside interactive components.** If your leaderboard is a dynamic list, add a server-rendered summary paragraph above it with the key data points in plain text.
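To make the JSON-LD item concrete, here is a sketch that emits a schema.org `Product` block an agent can parse without running JavaScript. The field values are illustrative placeholders, and the helper name is ours, not part of any framework.

```python
import json


def product_jsonld(name: str, description: str, url: str) -> str:
    """Render a schema.org Product as a JSON-LD script tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "url": url,
    }
    return ('<script type="application/ld+json">'
            + json.dumps(data, indent=2)
            + "</script>")


print(product_jsonld(
    "Agent Cards",
    "Cards built for agents",
    "https://agents.ramp.com/cards",
))
```

Because the tag is plain text in the server HTML, it survives the `curl | grep -v script`-style extraction that client-rendered content fails. (Note that crude script-stripping also removes JSON-LD, which is why real extractors treat `application/ld+json` specially.)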
## The bigger point
“Agent-native” is becoming a marketing term. Companies are launching “agent” products with agent-hostile architectures. The page is a billboard for humans who read about agents, not a surface that agents actually consume.
The test is simple: can your intended user actually use your product? If your user is an agent and your product is invisible to agents, you built a landing page, not a product.
We build Rhumb with this principle at the core. Every page server-renders its content. Every score is available through llms.txt, a REST API, and an MCP server. Because if you're building for agents, the first question is whether agents can find you.