Your AI Made 50,000 Claims Last Month

Can You Trace Any of Them?

March 22, 2026

A commercial claim is any factual assertion a company makes about its products, pricing, capabilities, competitive positioning, compliance status, or customer evidence through any channel - including websites, sales conversations, proposals, and AI-generated interactions. In the age of AI-powered revenue operations, the average B2B company generates tens of thousands of commercial claims per month, nearly all of them untraceable.

Let me ask you a straightforward question.

Your AI chatbot answered 800 prospect questions last month. Your AI SDR sent 4,000 emails. Your proposal generator created 45 documents. Your internal copilot fielded 300 rep queries. Your content tool generated 50 social posts and blog drafts.

Across those interactions, your AI tools collectively made - conservatively - 50,000 individual commercial claims. Each email contained 3-5 factual assertions. Each chatbot conversation involved 4-8 product claims. Each proposal included 20-40 specific statements about capabilities, pricing, and customer evidence.

Now: for any one of those 50,000 claims, can you answer these three questions?

  1. What exactly did the AI say?
  2. What source did it rely on to say it?
  3. When was that source last verified as accurate?

If you’re like 81% of companies deploying AI in customer-facing roles, the answer to all three is: no (KPMG AI Governance Readiness Report, 2025).


The Invisible Output Problem

Here’s what I find remarkable about the current state of AI in revenue.

We measure everything around AI output. Open rates. Reply rates. Conversion rates. Engagement scores. Time-to-response. User satisfaction scores. We have dashboards for all of it.

We measure nothing about AI output quality. Not accuracy. Not consistency. Not source reliability. Not claim currency.

This is like measuring a factory’s output in widgets-per-hour without ever inspecting whether the widgets work. The volume metrics are excellent. The quality metrics don’t exist.

And the consequence isn’t theoretical. When your AI chatbot tells a prospect in Munich that you’re “GDPR compliant with all data processed in the EU” - and this claim was sourced from a security FAQ written before you added a US-based analytics sub-processor - that’s not just an inaccuracy. Under the EU AI Act (effective August 2, 2026), that’s a traceable compliance violation with penalties up to 7% of global annual turnover or €35 million (European Commission, 2024).

The regulation doesn’t care about your open rates. It cares about whether you can trace the claim, identify the source, and demonstrate that the source was verified at the time the claim was made.


What Traceability Actually Means

Let me be precise about what traceability requires, because most companies confuse “logging” with “tracing.”

Logging means recording what happened: the chatbot said X at time Y to user Z. Most AI tools provide some form of logging. It’s a start - but it’s not traceability.

Traceability means connecting the output to its source, and the source to its verification status. It requires three linked records:

Record 1: The claim. The specific factual assertion the AI system made. “We are SOC 2 Type II certified.” “Our platform integrates with 38 tools.” “Acme Corp achieved a 35% reduction in processing time.”

Record 2: The source. The specific knowledge base entry, document, or data point the AI relied on to generate this claim. Not “the AI accessed the content library.” Which document? Which section? Which version?

Record 3: The verification. When was this source last confirmed as accurate? Who confirmed it? What evidence supports it?

A traceable claim chain looks like: “On March 15, 2026, the chatbot told prospect X that we are SOC 2 Type II certified. This claim was sourced from Knowledge Base Entry #SOC2-001, last verified on February 28, 2026, by the Director of Security, based on our Type II audit report dated January 2026.”
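
Expressed as data, that chain is just three linked records. Here is a minimal sketch in Python - the record types, field names, and version number are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Verification:                  # Record 3
    verified_on: date                # when the source was last confirmed accurate
    verified_by: str                 # who confirmed it
    evidence: str                    # what supports the confirmation

@dataclass
class Source:                        # Record 2
    entry_id: str                    # the specific knowledge base entry
    version: int                     # which version of the entry the AI read
    verification: Verification

@dataclass
class TracedClaim:                   # Record 1
    stated_at: date                  # when the AI made the assertion
    recipient: str                   # who received it
    assertion: str                   # the exact factual statement
    source: Source

# The SOC 2 example above, expressed as a complete trace chain:
soc2_claim = TracedClaim(
    stated_at=date(2026, 3, 15),
    recipient="prospect X",
    assertion="We are SOC 2 Type II certified.",
    source=Source(
        entry_id="SOC2-001",
        version=4,  # hypothetical version number
        verification=Verification(
            verified_on=date(2026, 2, 28),
            verified_by="Director of Security",
            evidence="Type II audit report, January 2026",
        ),
    ),
)
```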

An untraceable claim chain - which is what most companies have - looks like: “The chatbot probably said something about SOC 2? Check the help center article. I think someone updated it.”

According to research from Forrester, only 12% of companies using AI in customer-facing roles can trace a specific AI output back to its underlying source document (Forrester AI Governance Survey, 2025). The other 88% have AI tools making thousands of claims on their behalf with no source attribution.


The Audit Scenario

Let me walk through what actually happens when someone asks to trace a claim.

Scenario: a prospect in Germany receives an AI-generated email from your company that states “our platform is GDPR compliant with data residency options in the EU and US.” The prospect later discovers that their data was processed through a Singapore-based CDN node as part of your infrastructure. They file a complaint with their local data protection authority.

The authority investigation requires you to demonstrate:

  1. What your AI system said (the specific claim)
  2. What information the claim was based on (the source)
  3. Whether that information was verified and current at the time (the verification status)

Step 1: Can you retrieve the exact email? Probably. Most AI SDR tools log sent emails.

Step 2: Can you identify the specific source the AI relied on to make the GDPR compliance claim? Here’s where it breaks down. The AI SDR was configured to pull from your website, a product FAQ, and a security one-pager. Which one informed this specific claim? Most RAG systems don’t provide granular source attribution at the claim level. They retrieve chunks and synthesize. The synthesis obscures the source.

Step 3: Even if you identify the source, can you show when it was last verified? This is the critical gap. The security one-pager might exist. But when was it last verified? By whom? Does it reflect the Singapore CDN node that was added after the document was written? Nobody knows, because documents don’t carry verification metadata.
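
Here is the check that Step 3 implies, as a short Python sketch - the function name and the 90-day freshness window are my assumptions, not anything a regulator prescribes:

```python
from datetime import date

def audit_source(claim_made_on: date,
                 source_verified_on: date | None,
                 max_age_days: int = 90) -> tuple[bool, str]:
    """Could this source have supported the claim at the time it was made?
    The 90-day freshness window is an illustrative assumption."""
    if source_verified_on is None:
        return False, "source carries no verification metadata"
    if source_verified_on > claim_made_on:
        return False, "source was only verified after the claim was made"
    age_days = (claim_made_on - source_verified_on).days
    if age_days > max_age_days:
        return False, f"source verification was {age_days} days stale"
    return True, "source was verified and current at claim time"

# The scenario above fails at the very first check: the security one-pager
# exists, but nobody recorded when - or whether - it was verified.
print(audit_source(date(2026, 3, 15), None))
# (False, 'source carries no verification metadata')
```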

The investigation reveals: your AI made a claim, sourced from a stale document, with no verification trail. Under the AI Act, that’s a governance failure with potential financial consequences.

Research from PwC shows that the average cost of a regulatory compliance investigation - before any penalties are assessed - is $2.4M in legal fees, internal resources, and operational disruption (PwC Regulatory Impact Analysis, 2024). The penalty for a substantive AI Act violation adds to this base cost.


The Scale of Untraced Claims

Fifty thousand untraceable claims per month might sound abstract. Let me make it concrete.

In a single month, your AI tools make commercial claims about:

  • Pricing: 8,000+ instances across emails, chatbot conversations, and proposals. How many reflect the current pricing? Unknown.
  • Product capabilities: 15,000+ instances across all channels. How many reflect features that have shipped versus features on the roadmap versus features that were deprioritized? Unknown.
  • Customer evidence: 3,000+ instances where a customer name, case study, or testimonial is cited. How many reference customers who are still active? Unknown.
  • Competitive positioning: 5,000+ instances where a competitor is mentioned or compared. How many reflect the competitor’s current product? Unknown.
  • Compliance claims: 2,000+ instances where security, privacy, or regulatory certifications are referenced. How many reflect the current certification status? Unknown.

The word “unknown” appears five times in that list. Not because the information doesn’t exist somewhere - it probably does, in some combination of documents, systems, and people’s heads. But because no system connects the AI’s output to a verified, current source for each specific claim.

According to Validity, poor data quality in CRM alone costs organizations an average of $12.9 million per year (Validity, 2024). The cost of poor knowledge quality in AI systems - where errors propagate to market-scale distribution - is likely significantly higher but has never been independently measured, because the measurement tools don’t exist.


Building the Trace Chain

The good news is that this is an engineering problem, not a philosophical one. The trace chain can be built.

It requires three architectural components that most companies are missing:

Claim-level knowledge management. Instead of documents, manage discrete claims with metadata: source, verification date, confidence score, downstream dependencies. When the AI makes a statement, it’s not “pulling from a document” - it’s referencing a specific, versioned claim.
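
A minimal sketch of what one such claim record might look like, again in Python - the schema and the sample values are assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeClaim:
    claim_id: str                  # e.g. "PRICING-001"
    text: str                      # the discrete factual assertion
    source_document: str           # where the fact originates
    verified_on: date              # when it was last confirmed accurate
    verified_by: str
    confidence: float              # 0.0-1.0, illustrative scale
    version: int = 1
    downstream: list[str] = field(default_factory=list)  # assets citing this claim

# A hypothetical pricing claim - the values are for illustration only:
pricing_claim = KnowledgeClaim(
    claim_id="PRICING-001",
    text="The Team plan is $49 per seat per month.",
    source_document="pricing-page-v12",
    verified_on=date(2026, 3, 28),
    verified_by="RevOps lead",
    confidence=0.98,
    downstream=["proposal-template-A", "chatbot-pricing-intent"],
)
```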

Source attribution in AI output. When the AI generates a response, every factual assertion is linked to the specific claim(s) in the knowledge base that informed it. Not “this response was generated from your content library,” but “this sentence was informed by Claim #PRICING-001 (last verified March 28, 2026) and Claim #INTEGRATION-003 (last verified March 15, 2026).”
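
In code, that per-sentence linkage might look like the following sketch - the response text is hypothetical, but the claim IDs are the ones from the example above:

```python
from dataclasses import dataclass

@dataclass
class AttributedSentence:
    text: str
    claim_ids: list[str]   # knowledge base claims that informed this sentence

# A hypothetical two-sentence response; the claim IDs match the example above:
response = [
    AttributedSentence("Our Team plan is $49 per seat per month.",
                       claim_ids=["PRICING-001"]),       # verified 2026-03-28
    AttributedSentence("The platform connects natively to 38 tools.",
                       claim_ids=["INTEGRATION-003"]),   # verified 2026-03-15
]

# Any assertion without a claim ID is unattributed output - exactly what
# an auditor would flag:
unattributed = [s.text for s in response if not s.claim_ids]
```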

Verification lifecycle management. Claims don’t exist indefinitely without review. Each claim has an expiration trigger - a time or event after which it must be re-verified. A pricing claim expires after a pricing change. A competitive claim expires after 90 days without re-verification. A compliance claim expires after any change to the relevant certification.
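
A sketch of that expiration logic - the categories, windows, and trigger semantics below are illustrative assumptions an organization would set for itself:

```python
from datetime import date, timedelta

# Illustrative policy: which claim categories expire on a timer and which
# expire on an event. These windows are assumptions, not prescriptions.
TIME_WINDOWS_DAYS = {
    "pricing": None,        # event-driven: expires on any pricing change
    "competitive": 90,      # time-driven: 90 days without re-verification
    "compliance": None,     # event-driven: expires on any certification change
}

def needs_reverification(category: str, verified_on: date,
                         triggering_event: bool, today: date) -> bool:
    if triggering_event:    # e.g. a pricing change or a certification update
        return True
    window = TIME_WINDOWS_DAYS.get(category)
    if window is None:
        return False        # this category only expires on events
    return today - verified_on > timedelta(days=window)

# A competitive claim verified 120 days ago is overdue:
print(needs_reverification("competitive", date(2025, 12, 1),
                           triggering_event=False, today=date(2026, 3, 31)))  # True
```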

Together, these three components create a closed-loop trace chain: from claim to source to verification, for every AI-generated interaction.


Frequently Asked Questions

How many commercial claims do AI tools generate per month?

The average B2B company with 5-8 AI tools generates approximately 50,000 individual commercial claims per month across AI SDR emails, chatbot conversations, proposals, and internal copilot responses. Each interaction contains multiple factual assertions about pricing, capabilities, competitive positioning, and customer evidence.

What does AI claim traceability mean?

AI claim traceability is the ability to connect a specific AI-generated statement to the underlying source it relied on and demonstrate that the source was verified and current at the time the statement was made. This requires three linked records: the claim (what was said), the source (what it was based on), and the verification status (when the source was last confirmed as accurate).

Can current AI tools trace their claims to sources?

Most cannot. Only 12% of companies using AI in customer-facing roles can trace AI outputs back to underlying source documents (Forrester, 2025). Standard RAG implementations retrieve document chunks and synthesize responses, obscuring which specific source informed which specific claim.

Why does the EU AI Act make traceability mandatory?

The EU AI Act holds deploying companies - not AI vendors - responsible for the accuracy of their AI systems’ outputs. If a regulator investigates an AI-generated commercial claim, the deploying company must demonstrate what the AI said, what source it relied on, and when that source was last verified. Companies that cannot provide this trace chain face penalties of up to 7% of global annual turnover or €35 million.