a16z Warns That AI Agents Are Beginning to Reproduce DeFi Exploits

TL;DR:

  • a16z warns that AI agents can already reproduce exploits in DeFi protocols, with success rates close to 70% in simple attacks.
  • The firm argues that the traditional model of point-in-time audits is insufficient and proposes security based on formal specifications and invariants.
  • Composability between protocols amplifies the problem: an exploit detected by AI in one contract can trigger systemic failures across the entire network.

a16z crypto published a research paper that exposes a security problem in DeFi: artificial intelligence agents no longer merely assist in defending protocols; they are capable of autonomously identifying and reproducing price manipulation vulnerabilities.

Preliminary results indicate success rates close to 70% when agents had access to known exploit paths and structured knowledge, though they still show limitations in complex multi-step attacks.

The Audit Model Is No Longer Enough

For years, security in DeFi followed a predictable pattern: protocols launched code, commissioned audits, patched detected issues, and trusted that the review was sufficient. That model already looked fragile when human attackers outpaced audit cycles. AI agents widened that gap substantially.

A system capable of continuously testing exploit paths does not wait for the next scheduled review. It keeps searching. That is why a16z argues that the DeFi ecosystem must abandon the “code is law” logic and move toward security based on formal specifications: proving what a protocol is allowed to do, rather than reacting only after an attack has already occurred.
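The idea of specification-based security is to state invariants that must always hold and check them on every state change, instead of auditing attack paths one by one. A minimal sketch of the pattern, assuming a hypothetical constant-product pool (the model, the invariant, and all numbers are illustrative, not drawn from the a16z paper):

```python
# Hypothetical toy example: the pool model, invariant, and numbers are
# illustrative, not taken from the a16z paper.

class ConstantProductPool:
    """Toy AMM whose stated safety invariant is reserve_x * reserve_y >= k."""

    def __init__(self, reserve_x: float, reserve_y: float):
        self.reserve_x = reserve_x
        self.reserve_y = reserve_y
        self.k = reserve_x * reserve_y  # invariant baseline fixed at creation

    def swap_x_for_y(self, dx: float) -> float:
        """Swap dx of token X for token Y, then re-check the invariant."""
        dy = self.reserve_y - self.k / (self.reserve_x + dx)
        self.reserve_x += dx
        self.reserve_y -= dy
        self._check_invariant()
        return dy

    def _check_invariant(self) -> None:
        # Specification-first security: assert what the protocol is allowed
        # to do after every state change, rather than relying on a
        # point-in-time audit to have caught each violating path.
        assert self.reserve_x * self.reserve_y >= self.k - 1e-6, \
            "invariant violated"


pool = ConstantProductPool(1_000.0, 1_000.0)
out = pool.swap_x_for_y(100.0)
print(f"swap returned {out:.2f} Y with the invariant intact")
```

Formal-methods tooling goes further than a runtime assertion like this one: it proves the invariant for every reachable state, not just the states a test happens to visit.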

a16z: The Asymmetry Favors the Attacker

What makes AI particularly dangerous is its scale. An agent does not need creativity in the human sense: it needs repetition and enough reasoning capacity to test assumptions faster than defenders can respond. If it can simulate thousands of exploit paths across lending pools, oracles, bridge logic, and liquidation mechanics, the attacker only needs one to work. The defender must protect all of them.
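That asymmetry can be illustrated with a toy search over action sequences. In this sketch the lending market, the naive spot-price oracle, the action set, and the profit rule are all invented for illustration and do not model any real protocol or the paper's agents; the point is only that the attacker wins if any one enumerated path is profitable:

```python
# Invented toy model: a lending market that prices collateral with a naive
# AMM spot oracle; nothing here reflects a real protocol.
import itertools

def run_path(path) -> float:
    """Return the attacker's profit after executing a sequence of actions."""
    reserve_x, reserve_y = 1_000.0, 1_000.0   # AMM used as the price oracle
    k = reserve_x * reserve_y
    cash, borrowed = 0.0, False
    for action in path:
        price = reserve_y / reserve_x          # manipulable spot price of X
        if action == "pump":                   # large swap inflates X's price
            reserve_y += 500.0
            reserve_x = k / reserve_y
        elif action == "borrow" and not borrowed:
            cash += 100.0 * price              # loan sized off the spot price
            borrowed = True
    # attacker abandons collateral fairly worth 100 Y and keeps the loan
    return cash - 100.0 if borrowed else 0.0

actions = ["pump", "borrow", "wait"]
paths = list(itertools.product(actions, repeat=2))
profitable = [p for p in paths if run_path(p) > 0]
print(f"{len(profitable)} of {len(paths)} candidate paths are profitable")
```

The defender must rule out all nine sequences; the attacker only needs the single ordering that manipulates the oracle before borrowing, and an agent can enumerate far larger spaces than this at machine speed.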

According to a16z, composability also worsens the outlook. A vulnerability in an isolated contract is dangerous. In a bridge or a cross-chain collateral structure, it can become systemic. AI agents do not distinguish between “core” and “peripheral” failures: they evaluate whether the system’s assumptions break down, and they do so at machine speed.

The a16z research also notes that, historically, the attack arrives before the defense. Attackers experiment without needing governance approval or internal consensus; they only need one opening. According to initial reports, AI agents are more effective at exploiting vulnerabilities than at safely remediating them, because detection is a simpler task than safe remediation. That should unsettle every DeFi protocol operating today.
