Verifiable Intent (VI) – how AI agents will authorise payments without requiring your approval for every transaction

In this article you will learn

  • What is verifiable intent?
  • What are VI modes?
  • What are threats and opportunities?

Updated: 30th March 2026

What is Verifiable Intent (VI)?

Verifiable Intent is an open cryptographic standard (version 0.1-draft, published in February 2026 by the Verifiable Intent Working Group) that allows a user to define the scope of permissions for an AI agent once — and the agent then executes payment transactions independently within those boundaries, without requiring confirmation of each purchase. Every action the agent takes is cryptographically bound to the user’s original intent, verifiable by both the merchant and the payment network. The standard is being built for a world in which AI assistants buy goods, manage subscriptions, and settle bills on your behalf.

Verifiable Intent and agentic payments — in brief:

  1. VI operates in two modes: Immediate (the user approves each transaction) and Autonomous (the agent acts independently within defined limits).
  2. The entire trust chain rests on three cryptographically signed layers: L1 (card issuer), L2 (user), L3 (AI agent).
  3. The user does not sign every transaction — they sign the scope of authority once (e.g. “buy sporting goods up to £500 per month from approved merchants”).
  4. The merchant sees only what is being purchased. The payment network sees only the payment data. Neither party sees the full picture — this is known as selective disclosure.
  5. The v0.1 specification is built on SD-JWT (RFC 9901), ES256 (ECDSA P-256), and confirmation claims (RFC 7800).
  6. The standard is in draft — no major payment processor has deployed it in production (as of March 2026), though Mastercard is referenced as an example Issuer in the normative credential profile.
  7. The most significant compliance risks are: authorisation without evidence of human intent as required by PSD2 SCA, and new attack vectors targeting the agent’s private key.

How does Verifiable Intent work — three layers, one chain

VI builds a trust hierarchy composed of three signed JWTs, where each layer is cryptographically bound to the one before it.

Layer 1 (L1) — Credential Provider (e.g. a bank or Mastercard)

The bank or payment network issues a long-lived token (lifetime approximately one year) containing the user’s identity and their public key. This is the foundation of the entire chain — if this key is compromised, all linked transactions lose their integrity.

Layer 2 (L2) — User

The user creates and signs a purchase mandate with their private key. It contains either finalised transaction values (Immediate mode) or a set of constraints (Autonomous mode): permitted merchants, spending limit, allowed product categories, and validity period of the delegation. The L2 token is cryptographically bound to L1 via the sd_hash field.

In Autonomous mode, L2 also contains the AI agent’s public key — the formal act of delegating authority.
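As a rough illustration, the L1-to-L2 binding via sd_hash can be sketched in Python. The claim names (`sd_hash`, `payment.amount.max`, `mandate.checkout.allowed_merchant`, `payment.budget.cumulative`) follow those used in this article; everything else is an assumption, and HMAC-SHA256 stands in for the ES256 (ECDSA P-256) signature the spec actually mandates, purely so the sketch runs on the standard library alone:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url encoding without padding, as used in compact JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, key: bytes) -> str:
    """Produce a compact JWT. NOTE: HMAC-SHA256 is a stdlib stand-in for
    the ES256 signature required by the VI spec."""
    header = b64url(json.dumps({"alg": "ES256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

now = int(time.time())

# Hypothetical L1 token: issued by the credential provider, ~1-year lifetime.
l1_token = sign_jwt({"sub": "user-123", "iat": now, "exp": now + 365 * 86400},
                    b"issuer-key")

# L2 is bound to L1 via sd_hash: here, SHA-256 over the serialised L1 token.
sd_hash = hashlib.sha256(l1_token.encode()).hexdigest()

# Hypothetical L2 mandate (Autonomous mode): constraints plus the agent's key.
l2_payload = {
    "sd_hash": sd_hash,
    "payment.amount.max": 300,
    "payment.budget.cumulative": 2000,
    "mandate.checkout.allowed_merchant": ["tennis-warehouse", "sports-direct"],
    "cnf": {"agent_key": "(agent JWK would go here)"},
    "exp": now + 30 * 86400,  # ~30-day delegation window
}
l2_token = sign_jwt(l2_payload, b"user-key")
```

A verifier that holds L1 can recompute the hash and confirm the L2 mandate really hangs off that credential; any substitution of L1 breaks the chain.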

Layer 3 (L3) — AI Agent (Autonomous mode only)

The agent, operating within the boundaries set in L2, creates two separate tokens:

  • L3a — the payment mandate, sent to the payment network (amount, payee, payment instrument)
  • L3b — the checkout mandate, sent to the merchant (basket, SKU, unit price)

Both tokens have a lifetime of approximately five minutes and cross-validate each other via a shared checkout_hash. The agent cannot expand its authority beyond what the user recorded in L2.
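The cross-validation can be sketched as follows. Sorted-key JSON is assumed as the canonicalisation for the hash, and the token payloads are illustrative; the spec's actual serialisation rules and claim layout may differ:

```python
import hashlib
import json

def checkout_hash(checkout: dict) -> str:
    """SHA-256 over a canonical serialisation of the merchant's checkout data.
    Sorted-key JSON is assumed here; the spec's canonicalisation may differ."""
    canonical = json.dumps(checkout, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

checkout = {
    "merchant": "tennis-warehouse",
    "items": [{"sku": "BAB-PA-2026", "unit_price": 279.99}],
    "total": 279.99,
}
ch = checkout_hash(checkout)

# Hypothetical L3 payloads: each party receives only its slice of the
# transaction, cross-linked by the shared checkout_hash.
l3a = {"aud": "payment-network", "amount": 279.99, "checkout_hash": ch}
l3b = {"aud": "merchant", "basket": checkout["items"], "checkout_hash": ch}

# The two tokens cross-validate: tampering with either side changes the hash.
assert l3a["checkout_hash"] == l3b["checkout_hash"]
```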

In practice, this means: the user sets once — “my assistant may buy tennis equipment up to $300 at Tennis Warehouse or Sports Direct” — and the agent completes the order independently, providing the payment network and the merchant only with the data each party requires. No party sees the complete picture of the transaction.


What is the difference between Immediate mode and Autonomous mode?

This distinction is critical for compliance and risk management.

Aspect              | Immediate Mode                       | Autonomous Mode
JWT layers          | 2 (L1 + L2)                          | 3 (L1 + L2 + L3)
User presence       | Required for every transaction       | Not required — agent acts independently
L2 content          | Final values (amount, basket)        | Constraints and agent key
Agent role          | Data forwarding only                 | Product selection, basket creation, building L3
L2 lifetime         | ~15 minutes                          | 24 hours – 30 days
PSD2 SCA alignment  | Closer to traditional authorisation  | Open legal question
Business risk       | Low                                  | Medium–high (depending on limits)

In Autonomous mode, the user defines permissions and the agent operates within them for days or weeks. For a CEO or CFO, the critical question is: who is responsible when the agent acts outside the user’s intent (while technically remaining within the L2 limits)? VI resolves this technically — the agent physically cannot generate a valid L3 token outside the L2 boundaries — but the question of legal and regulatory liability remains open.


What business benefits does Verifiable Intent offer to companies in e-commerce and fintech?

VI addresses a real problem that will become critical for any company building AI agent-driven products within the next two to three years.

Benefit 1: Auditability of AI agent actions

Every transaction executed by an agent leaves a cryptographic trail: who authorised it (L2 with the user’s signature), what the agent selected (L3b), how much was paid (L3a), and whether it stayed within the defined limits. This answers the question “how do you know the AI bought what you intended?” — a question already being asked by regulators in the context of DORA and NIS2.

Benefit 2: Privacy by architecture (Privacy by Design)

The merchant sees the basket but not the card data. The payment network sees the amount and payee but not the specific items purchased. This architectural separation of data reduces the risk of breaching Article 25 of the UK GDPR (privacy by design) and limits the scope of the Cardholder Data Environment (CDE) under PCI DSS — the agent does not pass full card data through the merchant.

Benefit 3: Granular limits rather than a blanket mandate

Instead of giving an agent access to a credit card “on standby”, the user defines a precise scope: payment.amount.max = 500, mandate.checkout.allowed_merchant = ["merchant-001", "merchant-002"], payment.budget.cumulative = 2000. The agent cannot spend more, cannot buy from a different merchant, and cannot exceed the monthly budget.
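The check the payment network would apply against those constraints can be sketched like this. The claim names are taken from the example above; the function shape is a simplification for illustration, not the spec's verification algorithm:

```python
def within_mandate(l2: dict, spent_so_far: float,
                   amount: float, merchant: str) -> bool:
    """Check a proposed payment against L2 constraints. Claim names follow
    the article; the verification logic itself is a simplification."""
    return (
        amount <= l2["payment.amount.max"]
        and merchant in l2["mandate.checkout.allowed_merchant"]
        and spent_so_far + amount <= l2["payment.budget.cumulative"]
    )

l2 = {
    "payment.amount.max": 500,
    "mandate.checkout.allowed_merchant": ["merchant-001", "merchant-002"],
    "payment.budget.cumulative": 2000,
}

print(within_mandate(l2, 0, 250, "merchant-001"))     # True
print(within_mandate(l2, 1800, 250, "merchant-001"))  # False: budget exceeded
print(within_mandate(l2, 0, 250, "merchant-999"))     # False: merchant not allowed
```

Note that the cumulative budget requires the verifier to track prior spend per mandate, which is server-side state the constraints alone cannot supply.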

Benefit 4: Interoperability with existing standards

VI does not invent a new format — it builds on SD-JWT (RFC 9901), JWS (RFC 7515), and mechanisms already in use in OpenID4VP and FIDO2. Companies with existing JWT infrastructure can implement VI verification without building an entirely new technology stack from scratch.
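For instance, the payload-parsing half of VI verification is the same compact-JWT handling any existing stack already performs. The token below is hypothetical, and ES256 signature verification is omitted; it would use that stack's ECDSA P-256 support:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Parse the payload of a compact JWT, exactly as any existing JWT
    library does. Signature verification is intentionally omitted here."""
    _, body, _ = token.split(".")
    body += "=" * (-len(body) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(body))

# Build a hypothetical compact token carrying an sd_hash claim.
payload = base64.urlsafe_b64encode(json.dumps({"sd_hash": "abc"}).encode())
token = "eyJhbGciOiJFUzI1NiJ9." + payload.rstrip(b"=").decode() + ".sig"

print(decode_jwt_payload(token))  # {'sd_hash': 'abc'}
```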


What compliance risks and threats does implementing agentic payments with VI introduce?

This is the question every CEO, CTO, and CISO should ask before committing to an implementation.

Risk 1: The gap between VI and PSD2 SCA

The PSD2 directive requires Strong Customer Authentication when initiating payment transactions. VI in Autonomous mode eliminates user interaction at each purchase — the user “signs” only once, when creating L2. The VI specification explicitly acknowledges that its relationship with SCA is an open design question (see: design-rationale.md). Fintech firms and PSPs planning to implement VI should seek an interpretation from their national supervisory authority before going live in production.

Risk 2: Compromise of the agent’s private key

The agent’s private key, bound in L2 as the delegation key, is a new attack vector. If an attacker gains control of the agent key, they can execute transactions within the L2 limits without the user’s knowledge — and each of those transactions will appear to have been properly authorised. The specification recommends Secure Enclave or server-side HSM but does not mandate a specific mechanism. For firms processing customers’ payments, this is an obligation analogous to key protection in PCI DSS-scoped environments.

Risk 3: The standard is in draft

v0.1 from February 2026 is not a production standard. The specification contains explicitly deferred sections: credential revocation is not defined in v0.1 — if a user wishes to revoke the agent’s authority before L2 expires, there is no standardised mechanism to do so. Companies building on VI must either solve this problem themselves or wait for the next version.

Risk 4: Liability for agent actions

When an AI agent purchases something that diverges from the user’s intent (but technically falls within the L2 limits), who is responsible? The AI model provider? The platform operator? The L1 issuer? VI resolves the technical verification, but it does not resolve the question of legal liability. Under DORA (Digital Operational Resilience Act), financial firms must document the chain of accountability for every ICT system action — VI provides the technical audit trail, but accountability policies must be developed internally.

Risk 5: Replay and cross-merchant replay attacks

The specification identifies cross-merchant replay as a significant threat: an agent could attempt to use the same L2 mandate at multiple merchants simultaneously. Mitigation requires payment networks to track issued L3 tokens per L2 mandate — which introduces new requirements for payment systems that currently maintain no such state.
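A minimal sketch of that per-mandate state, assuming hypothetical mandate and token identifiers; a production system would additionally need durable storage, expiry, and coordination across network nodes:

```python
class ReplayGuard:
    """Minimal sketch of the state the payment network would need:
    remember which L3 tokens have been presented under each L2 mandate."""

    def __init__(self) -> None:
        self._seen: dict[str, set[str]] = {}

    def accept(self, l2_id: str, l3_id: str) -> bool:
        seen = self._seen.setdefault(l2_id, set())
        if l3_id in seen:
            return False  # replay: this L3 token was already presented
        seen.add(l3_id)
        return True

guard = ReplayGuard()
print(guard.accept("mandate-42", "l3-abc"))  # True: first presentation
print(guard.accept("mandate-42", "l3-abc"))  # False: replay rejected
print(guard.accept("mandate-77", "l3-abc"))  # True: separate mandate's scope
```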


How does Verifiable Intent align with PCI DSS, DORA, and NIS2?

For companies within the scope of PCI DSS, DORA, and NIS2, VI introduces new vectors for risk analysis.

PCI DSS v4.0

VI limits the flow of full card data through the agent — the agent operates on cryptographic tokens rather than the PAN. This potentially reduces the CDE scope. However, the agent’s private key must be protected in accordance with PCI DSS cryptographic key management requirements (Requirements 3.6 and 3.7). Companies implementing VI should assess whether the agent key infrastructure falls within the QSA’s scope. The Patronusec team delivers PCI DSS gap analyses and can evaluate the impact of new agentic architectures on your CDE scope.

DORA (Digital Operational Resilience Act)

DORA requires financial institutions to document and test the digital resilience of ICT systems. An AI agent executing autonomous payments is a new ICT component subject to the requirements of Article 8 (ICT risk management) and Article 28 (third-party risk management). If the AI model is provided by an external vendor (e.g. an OpenAI or Anthropic API), the firm must hold a documented risk assessment of that vendor. Further detail on DORA ICT requirements is available on our DORA compliance page.

NIS2

Operators of essential services and digital service providers within NIS2 scope must manage supply chain security risks. VI as an open-source standard is a technical dependency — organisations must monitor it for vulnerabilities and updates. Implementing a draft-version standard in production systems is a risk that requires formal assessment under Article 21 of NIS2. We discuss NIS2 implementation details on our NIS2 implementation page.


A practical example: how an AI agent buys a tennis racket using VI

The VI specification includes a concrete example that illustrates the standard’s operation in Autonomous mode particularly well.

A user holds a Mastercard (ending in 8842). They want their AI assistant to purchase tennis equipment from Tennis Warehouse or Sports Direct, spending no more than $300 per transaction and no more than $2,000 per month in total.

  1. Mastercard issues L1 — a token containing the user’s identity and their public key (valid for one year).
  2. The user creates L2 — they cryptographically sign the scope of authority: spending limits, the list of authorised merchants, and the AI agent’s public key. L2 is valid for, say, 30 days.
  3. The agent browses Tennis Warehouse — selects a Babolat Pure Aero ($279.99) and initiates checkout. Tennis Warehouse generates a signed checkout_jwt containing the basket contents.
  4. The agent creates L3a and L3b — two tokens, each valid for five minutes: L3a for the payment network (amount, payee), L3b for the merchant (basket). Both tokens contain the same checkout_hash.
  5. Verification — Tennis Warehouse verifies L3b: does the basket match what it issued? The payment network verifies L3a: does the amount fall within the L2 limit? Has the agent exceeded the monthly budget?
  6. Payment — if everything checks out, the transaction is authorised. The user has not clicked any “approve” button.

The entire chain is verifiable and auditable — each party holds cryptographic evidence that the transaction was consistent with the user’s original intent.


FAQ

Is Verifiable Intent already deployed by banks or payment processors?

As of March 2026, VI remains in draft (v0.1). The specification references Mastercard as an example Issuer in the normative credential profile, but there is no public information confirming production deployment by any bank or payment processor. The standard is published as open-source on GitHub by the Verifiable Intent Working Group.

How does VI differ from card tokenisation (e.g. Apple Pay, Google Pay)?

Card tokenisation replaces the PAN with a token scoped to a specific device or transaction — it protects card data from theft. VI does something different: it creates a cryptographically binding proof that a specific AI agent acted within the scope of the user’s intent. Both mechanisms can coexist — VI can operate on tokenised payment instruments.

How does VI relate to PSD2 Strong Customer Authentication (SCA) requirements?

This is an open question in the specification. In Autonomous mode, the user does not approve each transaction — which may conflict with SCA requirements for initiating payment orders. The VI authors note in design-rationale.md that the relationship with SCA requires further legal analysis. Fintech firms planning a deployment should seek a regulatory interpretation before going live.

What happens if the agent exceeds the limits defined in L2?

Technically, this is impossible — the agent cannot generate a valid L3 token with values that exceed the constraints in L2. The payment network verifies L3a against the disclosed L2 constraints and rejects the transaction if the values fall outside those boundaries. This is one of VI’s core security guarantees.

Is VI compliant with the GDPR?

VI’s selective disclosure architecture aligns with the data minimisation principle (Article 5 GDPR, known in Poland as RODO) — each party sees only the data necessary for their role. However, VI operates on identity and financial data, which are personal data under the regulation. A DPIA (Data Protection Impact Assessment) is required for any production deployment, particularly in the context of cross-border data transfers between the Credential Provider, the payment network, and the AI agent.

What are the cryptographic requirements of VI?

VI mandates ES256 (ECDSA with P-256 and SHA-256) as the required algorithm for all signatures. The hashing algorithm is SHA-256. The standard is built on RFC 9901 (SD-JWT), RFC 7515 (JWS), RFC 7517 (JWK), and RFC 7800 (confirmation claims). Future versions may add EdDSA with Ed25519.


Verifiable Intent and agentic payments — free consultation

Is your organisation planning to deploy AI agents in purchasing or payment processes? Patronusec can assess the impact of new agentic architectures on your PCI DSS scope, DORA obligations, and NIS2 requirements — before your auditors start asking the questions.
Book a short, no-obligation call with our team.

Contact Patronusec

Don't buy a pig in a poke —
request a free consultation and see how we can help you.

Free consultation
Contact form

Use the contact form or contact us directly.

Patronusec Sp. z o.o.

Head Office:
ul. Święty Marcin 29/8
61-806 Poznań, Poland

KRS: 0001039087
REGON: 525433988
NIP: 7831881739
D-U-N-S: 989454390
LEI: 259400NAR8ZOX1O66C64
