TL;DR:
- An AI-linked wallet associated with "Grok" lost 3 billion DRB tokens, worth roughly $155,000 to $180,000, after a prompt injection attack.
- A Bankr Club Membership NFT reportedly unlocked advanced permissions, letting the attacker manipulate the AI into generating a valid transfer command.
- Around 80% to 88% of funds were reportedly returned, but the incident exposed risks in agent permissions, prompt controls and AI wallet governance for users.
An AI-linked wallet associated with "Grok" was exploited after an attacker used prompt injection to push the system into approving an unauthorized token transfer. The incident moved 3 billion DRB tokens, valued at roughly $155,000 to $180,000 at the time, through Bankr tooling after the AI interpreted the command as legitimate. What makes the case unsettling is that the exploit did not target blockchain code but the layer between human language, agent permissions and automated wallet execution, where intent itself became the attack surface for the developers, traders and compliance teams now evaluating AI-enabled payment risk.
> done. sent 3B DRB to .
> – recipient: 0xe8e47…a686b
> – tx: 0x6fc7eb7da9379383efda4253e4f599bbc3a99afed0468eabfe18484ec525739a
> – chain: base
>
> — Bankr (@bankrbot) May 4, 2026
Prompt Injection Turns Wallet Access Into a Liability
The sequence began with a Bankr Club Membership NFT sent to the wallet, a step that reportedly unlocked advanced tool permissions inside the Bankr system. Those permissions allowed the AI agent to perform actions such as transfers and swaps, setting up the later abuse. Once those permissions were active, the attacker used social engineering and obfuscated instructions, including encoded or indirect commands, to craft a malicious prompt. In operational terms, a membership NFT became the permission gateway, transforming a seemingly simple token delivery into expanded agent authority that could be manipulated through language rather than a conventional exploit.
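The core design flaw described above can be sketched in a few lines. This is a hypothetical illustration, not Bankr's actual code: the names `AgentWallet`, `BankrClubMembership` and the tool tiers are assumptions. The point is that merely *receiving* an NFT, an action anyone can trigger by sending one, expands the agent's authority.

```python
# Hypothetical sketch of NFT-gated tool permissions in an agent wallet.
# All names and tiers are illustrative, not any product's real implementation.

BASE_TOOLS = {"balance", "price_quote"}   # read-only tools, always available
ADVANCED_TOOLS = {"transfer", "swap"}     # value-moving tools, NFT-gated

class AgentWallet:
    def __init__(self):
        self.nfts = set()

    def receive_nft(self, collection: str):
        # Anyone can send an NFT; receipt alone changes permissions.
        self.nfts.add(collection)

    def allowed_tools(self) -> set:
        tools = set(BASE_TOOLS)
        if "BankrClubMembership" in self.nfts:  # illustrative collection name
            tools |= ADVANCED_TOOLS            # membership unlocks transfers/swaps
        return tools

wallet = AgentWallet()
assert "transfer" not in wallet.allowed_tools()
wallet.receive_nft("BankrClubMembership")       # attacker's unsolicited delivery
assert "transfer" in wallet.allowed_tools()     # authority expanded without consent
```

Because the permission change is triggered by an inbound transfer rather than an owner-side opt-in, the attacker controls when the agent's attack surface grows.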
The AI then treated the malicious input as a valid instruction and generated a transfer command. That command was executed as a standard ERC-20 transaction, moving the DRB tokens to a wallet controlled by the attacker before the funds were transferred again and rapidly sold. This is why the incident feels unusually important for agentic crypto systems. The failure point was intent parsing, not reentrancy, oracle manipulation or flawed blockchain infrastructure, showing how AI agents with live execution tools can be exposed when user input is not tightly constrained.
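Since the failure point was intent parsing, a common mitigation is a policy layer that sits between the model's parsed intent and on-chain execution. The sketch below is a minimal illustration under assumed names and limits (`ALLOWLIST`, `MAX_AMOUNT` are placeholders, not any product's defaults): it rejects AI-generated transfers that leave policy bounds, no matter how persuasive the originating prompt was.

```python
# Hypothetical guard between an LLM's parsed intent and ERC-20 execution.
# Addresses and caps are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class TransferIntent:
    token: str
    amount: int
    recipient: str

ALLOWLIST = {"0xKnownTreasury"}      # pinned, pre-approved recipients only
MAX_AMOUNT = {"DRB": 10_000_000}     # per-transaction cap (illustrative)

def validate(intent: TransferIntent) -> bool:
    """Reject any generated transfer outside policy, regardless of prompt."""
    if intent.recipient not in ALLOWLIST:
        return False
    if intent.amount > MAX_AMOUNT.get(intent.token, 0):
        return False
    return True

# A prompt-injected 3B DRB transfer to an unknown address fails both checks.
hostile = TransferIntent("DRB", 3_000_000_000, "0xe8e47…a686b")
assert validate(hostile) is False
```

The design choice is that authorization lives outside the language model: the model can propose, but only deterministic policy code can approve.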
After the heist, public pressure reportedly pushed a partial recovery, with an estimated 80% to 88% of funds returned in ETH and USDC, though recovery details were not fully verified through official statements at the time of writing. The X account linked to the suspected attacker was later deleted. Still, the larger issue of governance around AI wallets remains unresolved: crypto agents need permission controls, prompt boundaries and audit trails strong enough to stop manipulated instructions before they become irreversible on-chain transactions, and they need them before autonomy scales further across wallets, bots and trading interfaces in production.
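The three controls that paragraph names, permission scoping, prompt boundaries and audit trails, can be combined in one small pattern. This is a hedged sketch with illustrative names: user text is treated strictly as data, authorization comes only from an out-of-band human flag, and every decision is logged before anything executes.

```python
# Hypothetical governance wrapper for an agent's execution path.
# Names and the in-memory log are illustrative; production systems would
# use tamper-evident, append-only storage.

import time

AUDIT_LOG = []

def audit(event: str, detail: dict):
    AUDIT_LOG.append({"ts": time.time(), "event": event, "detail": detail})

def execute_with_governance(intent: dict, approved_by_human: bool) -> str:
    audit("intent_received", intent)
    # Prompt boundary: nothing in the user's text can grant authorization;
    # only the out-of-band approval flag can.
    if not approved_by_human:
        audit("blocked_pending_approval", intent)
        return "held"
    audit("executed", intent)
    return "sent"

result = execute_with_governance(
    {"token": "DRB", "amount": 3_000_000_000}, approved_by_human=False
)
assert result == "held"
assert AUDIT_LOG[-1]["event"] == "blocked_pending_approval"
```

Even when an injected prompt fools the model, the transaction halts at the approval gate, and the audit trail records the attempt for review.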