Sell to the Agent
A monetization playbook for the post-browser era
Last week I gave a talk to INMA’s product and tech leaders about monetization in an agent-driven world, alongside David Caswell and Robert Whitehead.
The question I tried to tackle: How do we actually get paid when AI agents are the ones consuming our content?
We already have a problem with human readers: 83% of Americans didn’t pay for any news in the past year. Only 8% believe they have a responsibility to. The human payment model is already failing. And now there’s a second Internet emerging: one where the primary consumers aren’t humans at all.
I’ve previously argued that with AI commoditizing the value of information, publishers need to focus on two ends of the spectrum: unique voice, strong signature, and premium experiences for humans on one end, and architecting at scale for AI on the other.
In this post, I’ll focus on the AI side of that equation.
The Click Is Dying
For two decades, the content business ran on a simple chain: Search → Click → Read → Act.
That chain is breaking. When I used ChatGPT for my Christmas shopping, I didn’t click through to product pages. I asked a question, got interesting follow-up prompts, received comparisons, and bought.
The click - the entire monetization event that publishers built their business on - never happened.
Now apply that to news. When someone asks Claude about EU AI regulation, it synthesizes an answer from multiple sources. The user gets what they need. The publishers whose reporting grounded that answer get nothing.
The effects extend beyond lost pageviews. In February, Anthropic’s Claude Cowork launch triggered what the Guardian called the “Claude crash”: several companies saw their stock prices crater as investors priced in AI disruption to information businesses.
The market is telling us something: the value of information isn’t declining. But the mechanism for capturing that value is breaking.
Meanwhile, Polymarket announced a partnership with Substack to embed prediction market data into newsletters. “Journalism is better when it’s backed by live markets,” they declared. (Critics called it gambling dressed as journalism.)
But there’s a more interesting angle here: Polymarket has actually built a working micro-transaction infrastructure: USDC wallets, frictionless payments, automated settlement, programmatic value exchange. The exact plumbing that content monetization needs. The payment technology already works. (Whether people will use it to gamble on prediction markets instead of paying the writers on Substack is another question 😬)
Inference > Training
Most of the publisher conversation around AI is still stuck on training - the licensing deals, the lawsuits, the opt-out battles. Don’t get me wrong, this is important. But training is a one-time event. You sell your archive, you get paid once. That chapter is being written.
Inference is the far more interesting opportunity. Real-time retrieval happens millions of times a day when AI agents answer queries. Every time ChatGPT grounds an answer in your reporting, that’s a monetizable event. It just isn’t being monetized yet.
The emerging infrastructure - RSL, x402, MCP, Microsoft’s Publisher Content Marketplace - is mostly designed for inference, not training. That’s where the action is shifting.
So how do you explore this space?
Three Layers, In Order
Here’s the framework I presented. There are three layers that must exist for AI monetization to work, and they have to be built in sequence. Skip one and the rest collapses.
Layer 1: Rights and permissions
What can be used, by whom, under what conditions, expressed in a way machines can read.
The Really Simple Licensing standard (RSL v1.0) went official in December 2025. Reddit, Yahoo, Medium, and Quora have adopted it. It’s the machine-readable equivalent of robots.txt, but for licensing: this content can be used for AI indexing, not for training, at this price.
The key idea is simple: if agents don’t know the rules, they can’t pay. Copyright without machine-readable licensing is a moral position, not a business strategy.
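To make this concrete, here’s a rough sketch of what a machine-readable license could look like, loosely modeled on RSL. The element names, values, and price below are illustrative, paraphrased rather than copied from the published schema, so treat this as the shape of the idea, not a spec-compliant example:

```xml
<!-- Hypothetical RSL-style license file, typically linked from robots.txt.
     Element names and values are illustrative, not the exact v1.0 schema. -->
<rsl xmlns="https://rslstandard.org/rsl">
  <content url="/articles/">
    <license>
      <!-- Allow retrieval and grounding (inference), forbid model training -->
      <permits type="usage">ai-summarize</permits>
      <prohibits type="usage">train-ai</prohibits>
      <payment type="purchase">
        <!-- Hypothetical price per retrieval -->
        <amount currency="USD">0.005</amount>
      </payment>
    </license>
  </content>
</rsl>
```

The point isn’t the exact syntax. It’s that an agent fetching this file can discover the rules and the price without a human negotiating anything.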
Layer 2: Access and enforcement
How agents are identified, allowed in, and constrained.
This is the shift from binary access (block or allow) to programmable access: allow, price, rate-limit. MCP-style structured endpoints. Agent identity and authentication.
Publishers understand paywalls. This is the evolution of paywalls for a world where the visitor isn’t a human with a browser. It’s a software agent with a budget.
Here’s what surprised people in the room: the basic infrastructure already works. At Mizal, we’ve implemented OAuth passthrough: you click inside Claude or ChatGPT, a browser window opens, you log in to your account, and you come back to the chatbot with it connected. A Guardian subscriber could activate their subscription inside ChatGPT today. The technology isn’t the bottleneck.
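What does “allow, price, rate-limit” look like in practice? Here’s a minimal Python sketch of the decision logic a publisher’s edge layer might run per request. The agent classes, prices, and limits are invented for illustration; the shape of the decision (403 vs. 429 vs. 402 vs. 200) is the point:

```python
from dataclasses import dataclass

# Hypothetical policy table: per-agent-class pricing and rate limits.
# The classes and numbers are illustrative, not real tariffs.
POLICIES = {
    "search-indexer": {"allow": True, "price_usd": 0.0, "rpm": 600},
    "ai-inference":   {"allow": True, "price_usd": 0.002, "rpm": 120},
    "ai-training":    {"allow": False},
}

@dataclass
class AccessDecision:
    status: int              # HTTP status to return
    price_usd: float = 0.0
    rate_limit_rpm: int = 0

def decide(agent_class: str, requests_this_minute: int) -> AccessDecision:
    """The shift from binary block/allow to allow, price, rate-limit."""
    policy = POLICIES.get(agent_class)
    if policy is None or not policy["allow"]:
        return AccessDecision(status=403)      # not licensed for this use
    if requests_this_minute >= policy["rpm"]:
        return AccessDecision(status=429)      # over the rate limit
    if policy["price_usd"] > 0:
        # 402: tell the agent what this request costs (x402-style)
        return AccessDecision(status=402,
                              price_usd=policy["price_usd"],
                              rate_limit_rpm=policy["rpm"])
    return AccessDecision(status=200, rate_limit_rpm=policy["rpm"])
```

An unpaid inference agent gets a 402 with a price attached rather than a wall, which is exactly the hook the payment layer below plugs into.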
Layer 3: Payment and value exchange
How value actually moves.
Remember HTTP status codes? 200 means OK, 404 means not found. There’s always been a 402 — “Payment Required” — but it was reserved for future use and never standardized. It sat dormant in the HTTP spec for decades, waiting for a use case.
Now it has one. The x402 protocol, developed by Coinbase and integrated with Stripe on February 11, revives that status code. An AI agent requests content or an API call. The server responds with HTTP 402 - payment required - along with the price, token type, and payment address. The agent pays in USDC, gets a transaction hash as proof, resubmits the request, and receives the content. One HTTP round-trip. No invoices, no subscription management, no human in the loop.
Cloudflare has joined the effort, co-founding the x402 Foundation with Coinbase and integrating the protocol into its Workers and Agents SDK. When one of the largest content delivery networks in the world builds native support for machine-to-machine payments, the infrastructure signal is hard to ignore.
The economics are striking: transactions can cost fractions of a cent. Stripe released an open-source CLI tool called purl so developers can test machine payments from their terminal today.
If access becomes granular, payment must become automatic. x402 makes it automatic.
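The round-trip described above can be sketched in a few lines. This is a toy in-memory simulation, not the real x402 wire format: the field names, the `X-Payment` header, and the fake wallet are assumptions made for illustration, and actual settlement would go through a blockchain facilitator rather than a local set:

```python
import hashlib

# Toy simulation of the x402 flow: 402 with terms, pay, resubmit with proof.
PRICE_USDC = "0.001"
PAY_TO = "0xPublisherWallet"   # hypothetical payment address
SETTLED = set()                # stands in for the facilitator's ledger

def settle_payment(amount: str, pay_to: str) -> str:
    """Stand-in for the agent's wallet paying in USDC; returns a tx hash."""
    tx = hashlib.sha256(f"{amount}:{pay_to}".encode()).hexdigest()
    SETTLED.add(tx)
    return tx

def server(headers: dict) -> tuple[int, object]:
    """Publisher endpoint: respond 402 until a verified payment proof arrives."""
    proof = headers.get("X-Payment")
    if proof in SETTLED:
        return 200, "…the licensed article body…"
    return 402, {"price": PRICE_USDC, "asset": "USDC", "payTo": PAY_TO}

def agent_fetch() -> tuple[int, object]:
    """One round-trip: request, pay on 402, resubmit with the proof."""
    status, body = server({})
    if status == 402:
        tx_hash = settle_payment(body["price"], body["payTo"])
        status, body = server({"X-Payment": tx_hash})
    return status, body
```

No invoice is raised and no human intervenes: the price travels in the 402 response, and the proof of payment travels in the retried request.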
Articles Won’t Save You
Beyond these layers, here’s the argument most publishers aren’t ready to hear: you can build the entire monetization stack for agents - rights, access, payment - and it still won’t matter much if you’re thinking in artifacts.
Articles. Books. Podcasts. Videos. These are formats designed for the human Internet. Finished objects meant to be consumed by a person in a browser, on a shelf, in an app. The entire publishing industry is organized around producing and distributing artifacts. (Shuwei Fang’s piece on the brutal unit economics of liquid content is essential reading on this.)
But an AI agent doesn’t want an artifact. It wants structured, queryable knowledge it can reason over and act on. The distinction matters enormously. An article about EU AI regulation is useful to a human reader. A structured dataset of regulatory requirements, mapped by jurisdiction, linked to source documents, updated in real time? That’s useful to an agent building a compliance workflow.
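The difference between an article and agent-ready knowledge is easiest to see in code. Here’s a minimal sketch of the EU-regulation example as a queryable record; the field names, the sample obligation, and the URL are all invented for illustration:

```python
from dataclasses import dataclass

# Illustrative only: field names and the sample requirement are invented
# to contrast a finished article with a structured, queryable record.

@dataclass
class RegulatoryRequirement:
    regulation: str
    article: str
    jurisdiction: str
    obligation: str
    applies_to: list        # system types the obligation covers
    source_url: str         # deep link back to the reporting
    last_verified: str      # ISO date, so agents can judge freshness

KNOWLEDGE_BASE = [
    RegulatoryRequirement(
        regulation="EU AI Act",
        article="Art. 50",
        jurisdiction="EU",
        obligation="Disclose AI-generated content to users",
        applies_to=["general-purpose AI", "chatbots"],
        source_url="https://example.org/eu-ai-act/article-50",
        last_verified="2026-02-01",
    ),
]

def query(jurisdiction: str, system_type: str) -> list:
    """What an agent building a compliance workflow actually wants:
    a filter over structured facts, not prose to re-parse."""
    return [r for r in KNOWLEDGE_BASE
            if r.jurisdiction == jurisdiction and system_type in r.applies_to]
```

A human reads the article once; an agent can run this query thousands of times, in someone else’s workflow, with attribution carried along in `source_url`.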
The growth engine for the agent era isn’t better articles; it’s machine-readable, architected knowledge.
And once you make that shift, something bigger could open up: you’re no longer limited to the people who would have read your article. You can reach entirely new customers in entirely new contexts.
The O’Reilly Proof of Concept
O’Reilly Media worked with Miso to build what might be the best working example of this. They’ve implemented all three layers.
Rights: Their O’Reilly Answers system provides forensic attribution data showing the contribution of every referenced author’s work in every AI-generated answer. Their tagline: “The R in RAG stands for Royalties.” Authors get paid when their knowledge is used — not as a lump-sum license, but per interaction, auditable.
Access: They launched an MCP server that plugs directly into Cursor, Claude Code, and VS Code. A developer debugging a Kubernetes issue at 2am doesn’t go to oreilly.com. They ask a question inside their IDE and get an answer grounded in O’Reilly content — with citations and deep links to source material.
Payment: Royalties flow to authors based on actual usage, tracked at the answer level.
Think about what this means. O’Reilly’s traditional customer was someone who bought a book or a platform subscription. Their new customer is a developer who never leaves their code editor. The content is the same. The customer surface is radically larger. The knowledge that used to be trapped in a book, behind a link, within a chapter, is now conversational, contextual, and available exactly where the developer is working.
O’Reilly didn’t just monetize for the agent era. They expanded into environments they could never have reached with articles and books alone.
This is the playbook. Not “how do I get paid for my articles by AI” but “how do I architect my knowledge so it can reach customers I’ve never had access to before.”
A news organization covering financial regulation could have their structured data available inside a compliance officer’s workflow. A health publisher could have their evidence base queryable in a clinician’s tools. A technical publisher already does — inside a developer’s IDE.
The publishers thinking about this as “how do I protect my articles” are playing defense. The ones thinking about it as “how do I reach ten times more people with my structured knowledge” are playing offense.
What Comes Next
A few scenarios for how this unfolds:
Near-term (before end of 2026):
Agent commerce protocols go mainstream as x402 + Stripe matures.
Hybrid subscription + micropayment models start to emerge.
First publishers begin experimenting with pay-per-crawl and per-interaction pricing.
Medium-term (12–24 months):
Content becomes API, not page.
Agent-managed wallets could become a default payment method.
Data products start outperforming articles as revenue units.
Publishers who’ve architected their knowledge reach customers they never had access to before.
Blocking AI is not a strategy. Pricing AI is. But pricing articles is thinking too small.
Every piece of knowledge your organization holds has value. The question is whether you’re charging for it, and whether you’ve made it available in the first place.