Nexus Data #005 - AI
Welcome to the fifth edition of Nexus Data Labs. Our aim is to highlight what matters most in the fast-developing world of onchain finance.
Thank you to J.W. and Diego for contributing to this issue.
Setting the Scene
The intersection of AI and blockchain is rapidly evolving from concept to infrastructure. Over the past year, autonomous agents, payment rails, and specialized data layers have begun operating directly within onchain systems, not as experiments, but as functional components of an emerging tech stack.
This shift raises an important question for the onchain economy: what happens when AI agents, rather than humans, become its primary users? For onchain analysts, the question is more immediate: how is AI changing the research toolkit?
This week's edition explores how that transition is beginning to take shape through a key development at the frontier: a new generation of Large Language Model (LLM), Model Context Protocol (MCP), and Command Line Interface (CLI) tools that allow AI systems to interface directly with structured data providers.
Across all the offerings, a common thread is emerging: the tools used to analyze the onchain economy are evolving alongside the economy itself.
The Agent Interface Layer
How data providers are opening their datasets to AI agents, through LLMs, MCPs and CLIs
Over the past few weeks, a growing number of data providers have started experimenting with interfaces designed specifically for AI agents. By exposing their datasets and analytics tools through LLMs, MCPs, and CLIs, these teams are shifting agents’ role from passive assistants to active research operators, capable of querying, analyzing, and synthesizing information directly from the source.
In the section below, a few of the teams at the frontier of onchain data share how they see analysis evolving in the age of AI agents.
Note: The tools featured here were chosen for relevance, not commercial relationships.
Allium
AI agents in crypto can already transact and make decisions without human intervention. What they cannot do reliably is verify the state of the blockchain they are operating on.
Two recent standards address parts of this problem. ERC-8004 gives agents a way to identify and establish trust with each other. The x402 payment protocol lets them transact autonomously per request. Taken together, they remove two significant constraints on autonomous agent activity in crypto.
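To make the x402 flow described above concrete, here is a minimal in-process sketch of the request-pay-retry loop: a resource server answers with a 402 status and payment requirements, the agent attaches a payment proof, and the request succeeds. The function names, header field, and payment fields are illustrative assumptions, not the protocol's exact wire format.

```python
# Conceptual sketch of an x402-style request/pay/retry loop, simulated
# in-process. Field names ("X-PAYMENT", "accepts", "payTo") are illustrative
# stand-ins, not the exact x402 wire format.

def resource_server(headers: dict) -> dict:
    """Return 402 with payment requirements until a payment proof is attached."""
    if "X-PAYMENT" not in headers:
        return {
            "status": 402,
            "accepts": [{"asset": "USDC", "amount": "0.01", "payTo": "0xMERCHANT"}],
        }
    # A real server would verify the payment onchain before serving the data.
    return {"status": 200, "body": {"data": "premium onchain dataset"}}

def make_payment(requirement: dict) -> str:
    """Stand-in for signing and settling a payment; returns a proof token."""
    return f"paid:{requirement['amount']}:{requirement['asset']}"

def agent_fetch(url: str) -> dict:
    """Fetch a paid resource with no human in the loop (url unused in this mock)."""
    response = resource_server({})
    if response["status"] == 402:
        proof = make_payment(response["accepts"][0])
        response = resource_server({"X-PAYMENT": proof})
    return response

result = agent_fetch("https://example.com/data")
print(result["status"])  # → 200
```

The point of the pattern is that payment is resolved inline, per request, with no account setup; ERC-8004 would supply the identity layer sitting above this exchange.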
But neither standard solves the data layer. Agents need to read protocol state, validate outcomes, and act on what is actually happening onchain. Raw blockchain data does not support this. It is fragmented across chains, encoded in low-level formats, and inconsistent without significant preprocessing. An agent operating on that layer does not fail immediately. It keeps executing on incorrect data, at machine speed, until enough errors accumulate to surface.
The bottleneck for agentic economies is not model capability or transaction infrastructure. It is whether agents have access to structured, accurate onchain data. Without it, AI applications, MCPs, and agent skills cannot operate reliably regardless of how sophisticated the models above them are.
Using the Allium MCP and Claude Cowork, we produced a working x402 data dashboard without writing a single line of code.
Dune
Historically, extracting value from onchain data required fluency in SQL, knowledge of Dune’s schemas, and an account. Dune’s recent AI developments and features remove that barrier.
Over the past two weeks, Dune shipped the access layer.
The Dune MCP Server gives GUI-based AI apps like Claude, ChatGPT, and Gemini structured access to the full Dune workflow: discover tables, write DuneSQL, execute queries, and generate visualizations in a single conversation.
The Dune CLI & Skills extend that same access to terminal-native agents such as Claude Code, Cursor, and OpenCode, via a Go-based CLI and a bundled Skills.md file that lets any agent understand Dune’s full toolset with no custom integration required.
MPP (Machine-Protocol-Payments built by Stripe & Tempo) completes the stack. Agents can now discover tables, write DuneSQL, execute queries, and pay for results with no account, no API key, and no human in the loop.
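The workflow those tools expose follows one pipeline: discover tables, write DuneSQL, execute, visualize. A rough sketch of that pipeline is below; the function names, catalog entries, and return shapes are invented stand-ins for the MCP tools, not Dune's actual API.

```python
# Illustrative sketch of the agent workflow described above:
# discover tables -> write DuneSQL -> execute query.
# Tool names, the catalog, and return shapes are assumptions for
# illustration, not the Dune MCP server's real interface.

def discover_tables(keyword: str) -> list[str]:
    """Stand-in for table discovery against a curated catalog."""
    catalog = {"dex": ["dex.trades"], "nft": ["nft.trades"]}
    return catalog.get(keyword, [])

def write_sql(table: str) -> str:
    """An agent would generate DuneSQL from the question; here we template it."""
    return f"SELECT block_time, amount_usd FROM {table} LIMIT 10"

def execute_query(sql: str) -> list[dict]:
    """Stand-in for execution against Dune's query engine."""
    return [{"block_time": "2025-01-01", "amount_usd": 123.45}]

tables = discover_tables("dex")
rows = execute_query(write_sql(tables[0]))
print(len(rows))  # → 1
```

Each step maps onto one tool call in the conversation, which is what lets a GUI chat app or a terminal agent run the whole workflow without an analyst in the middle.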
The pattern is simple: agents should query data the way analysts think about it. Dune is building the infrastructure layer to make that real.
We used Dune’s MCP and Claude to build a prediction markets dashboard that pulls data directly from community queries on the platform with no coding involved.
Surf
Although crypto is one of the most data-rich industries, analysis across sources remains a challenge even with AI.
In most industries, data is accessible enough that AI can work with it effectively. In crypto, data is fragmented, and connecting the dots requires context that cuts across sources.
The industry’s attempt to solve this led to over-applying AI vertically. AI was layered onto existing data products, making individual platforms easier to use. Understanding one source at a time became simpler, but synthesis did not.
Comprehensive analysis requires orchestrating multiple data sources, agents, and models around a single objective. Optimizing for one piece of the puzzle adds to the fragmentation rather than resolving it.
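The orchestration idea can be sketched in a few lines: fan out one objective to several sources, then merge the partial answers. The source names and values below are invented for illustration and do not represent Surf's implementation.

```python
# Toy sketch of cross-source orchestration: multiple data sources are
# queried around a single objective and their results merged into one
# answer. Sources and values are invented for illustration.

def onchain_source(asset: str) -> dict:
    return {"asset": asset, "tvl_usd": 1_000_000}

def market_source(asset: str) -> dict:
    return {"asset": asset, "price_usd": 2.5}

def orchestrate(objective_asset: str) -> dict:
    # Fan out to each source, then merge around the shared objective.
    partials = [src(objective_asset) for src in (onchain_source, market_source)]
    merged: dict = {}
    for partial in partials:
        merged.update(partial)
    return merged

print(orchestrate("ETH"))
# → {'asset': 'ETH', 'tvl_usd': 1000000, 'price_usd': 2.5}
```

The contrast with vertical AI is that the merge step, not any single source, is where the synthesis happens.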
As this trend of verticalization accelerates, so will demand for products that take a different approach. Surf is building AI designed for crypto and capable of reasoning across all of its data.
Using Surf Studio, a chatbot-style interface, we built a Hyperliquid dashboard highlighting HIP-3 and HIP-4 testnet data using only natural-language prompts.
Token Terminal
AI assistants make it possible to query onchain data by chatting, removing the need for manual workflows such as navigating dashboards, exporting data, or writing queries.
However, this introduces a constraint: human queries are ambiguous, while data systems require precise definitions. Without a mechanism to translate intent into machine-readable instructions, the model cannot reliably map questions to data, resulting in inconsistent outputs.
This limitation is compounded by context constraints in the underlying language models, which cannot reason over a complex data system’s full structure in a single pass.
Token Terminal MCP addresses this through an intermediate reasoning layer. User queries are first processed by an offline discovery system that maps intent to a structured taxonomy spanning market sectors, projects, metrics, and methodologies. This resolves ambiguity before any query is executed.
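A minimal sketch of that kind of intent-resolution layer is below: a free-form question is mapped to machine-readable identifiers before any data query runs, and ambiguous questions are rejected rather than guessed at. The taxonomy entries and column names are invented for illustration, not Token Terminal's actual taxonomy.

```python
# Minimal sketch of an intent-resolution layer like the one described:
# a user query is mapped to a structured taxonomy (project, metric)
# before any query executes. Taxonomy contents are invented for
# illustration, not Token Terminal's real taxonomy.

TAXONOMY = {
    "sectors": {"lending": ["aave", "morpho"], "exchanges": ["uniswap"]},
    "metrics": {"fees": "fees_usd", "revenue": "revenue_usd", "tvl": "tvl_usd"},
}

def resolve_intent(query: str) -> dict:
    """Map free-form text to identifiers; refuse ambiguous questions."""
    q = query.lower()
    projects = [p for ps in TAXONOMY["sectors"].values() for p in ps if p in q]
    metrics = [col for name, col in TAXONOMY["metrics"].items() if name in q]
    if len(projects) != 1 or len(metrics) != 1:
        raise ValueError("ambiguous query: need exactly one project and one metric")
    return {"project": projects[0], "metric": metrics[0]}

print(resolve_intent("What are Aave's fees this month?"))
# → {'project': 'aave', 'metric': 'fees_usd'}
```

Because ambiguity is resolved (or rejected) before execution, the downstream data query is deterministic: the same question always maps to the same identifiers.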
The result is a deterministic interface between natural language and structured data. The model operates within the constraints of a system of record, enabling reliable access to standardized onchain data.
We explored Token Terminal’s MCP and produced a comprehensive non-USD stablecoins dashboard. Although the project is still experimental, it is already one of the most extensive dashboards tracking local stablecoins available today.
Closing Thoughts
The infrastructure connecting AI agents to onchain systems is still early, but the direction is clear. Payment rails, data access layers, and agent-native protocols are moving from whitepaper concepts to functional deployments.
Unlike traditional tools built around human workflows, this new layer is optimized for machine consumption. Data providers across the ecosystem are actively experimenting with these integrations.
At the same time, the role of human analysts remains central. AI can accelerate research workflows, assist with data exploration, and automate parts of the analytical process. But effective research still depends on asking the right questions and interpreting results within the right context.
The opportunity ahead is not replacing analysts, but expanding what they can do. As these tools mature, the bottleneck shifts from data access to problem framing. Researchers who use that leverage well will move faster, cover more ground, and extract clearer signal from the onchain economy.