We Spent $2 Trillion on AI
We Spent $0 Making Sure It’s Accurate.
April 2026
The AI accuracy gap is the disconnect between investment in AI model capabilities (speed, fluency, scale, and reasoning) and investment in the accuracy of the information AI systems operate from (knowledge governance, source verification, and Commercial Truth management). Global investment in AI capabilities exceeded $150 billion in 2024. Ensuring AI outputs are factually correct received a fraction of that - less than $5 billion by one estimate (Stanford HAI AI Index Report, 2025).
Here is a trillion-dollar observation.
Between 2020 and 2025, the world invested roughly $2 trillion in artificial intelligence. Foundation models. Inference infrastructure. GPU manufacturing. AI SaaS products. AI agents. AI assistants. AI copilots. An extraordinary mobilization of capital toward making AI systems faster, smarter, more fluent, and more capable.
Somewhere in that $2 trillion, how much was spent on making sure AI says the right thing?
Not politically right. Not ethically right. Factually right. How much was invested in ensuring that when your AI SDR emails a prospect about your pricing, the pricing is correct? That when your chatbot tells a visitor about your product capabilities, the capabilities description is current? That when your proposal generator cites a customer reference, the customer is still a customer?
The honest answer is: close to zero, relative to the overall investment.
We built the most sophisticated communication technology in human history and forgot to make sure it has something accurate to communicate.
The Investment Paradox
This is genuinely paradoxical, and I want to spend a moment on why.
The entire value proposition of AI in revenue is predicated on accuracy. AI SDRs are valuable because they communicate the right things to the right people at the right time. AI chatbots are valuable because they answer prospect questions correctly. AI proposal generators are valuable because they produce accurate, relevant documents quickly.
Take away accuracy and what’s left? Speed. AI is fast. It can send 5,000 emails in an hour. It can handle 50 simultaneous chat conversations. It can generate a proposal in 20 minutes.
But speed without accuracy is not just worthless - it’s actively harmful. Fast wrong is worse than slow right. Because fast wrong means your mistakes reach market scale before anyone notices.
And yet the investment has overwhelmingly gone toward speed and scale, not toward accuracy. Because speed is measurable (“10x more outbound!”), scale is impressive (“50,000 touchpoints per month!”), and accuracy is boring (“we verified our knowledge base”).
Here’s the investment breakdown as I understand it:
| AI Investment Category | Estimated Annual Investment (2024) | Purpose |
|---|---|---|
| Foundation models | $80-100B | Make AI smarter |
| Infrastructure (GPUs, cloud) | $40-60B | Make AI faster |
| Application layer | $15-25B | Deploy AI in workflows |
| Safety & alignment | $3-5B | Prevent harmful outputs |
| Knowledge governance & truth | <$1B | Ensure factual accuracy |
According to CB Insights, AI-related venture funding exceeded $65 billion in 2024 alone (CB Insights AI Funding Report, 2024). Of that, less than 2% went to companies focused on AI data quality, knowledge governance, or output verification.
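The imbalance in the table above can be checked with back-of-envelope arithmetic. This is a rough sketch using the midpoints of the article's estimated ranges (the "<$1B" row is treated as $0-1B); the figures are the article's 2024 estimates, not independent data.

```python
# Midpoints of the estimated ranges from the table above,
# in billions of USD (article's 2024 estimates).
investments = {
    "Foundation models": (80, 100),
    "Infrastructure (GPUs, cloud)": (40, 60),
    "Application layer": (15, 25),
    "Safety & alignment": (3, 5),
    "Knowledge governance & truth": (0, 1),  # "<$1B" treated as $0-1B
}

# Total estimated spend across all categories, using range midpoints.
total = sum((lo + hi) / 2 for lo, hi in investments.values())

# Midpoint of the accuracy-layer row alone.
lo, hi = investments["Knowledge governance & truth"]
accuracy = (lo + hi) / 2

print(f"Total estimated AI investment: ${total:.1f}B")
print(f"Knowledge governance share: {accuracy / total:.1%}")
```

On these midpoints, the accuracy layer works out to roughly three-tenths of one percent of total AI investment.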
Why Accuracy Doesn’t Get Funded
I think there are three reasons the accuracy layer is so dramatically underfunded.
Reason 1: Accuracy isn’t sexy. In VC pitch meetings, “we make AI 10x faster” gets attention. “We make AI’s knowledge base accurate” gets polite nods. The former sounds like a technical breakthrough. The latter sounds like data hygiene. And data hygiene, despite being one of the highest-ROI investments a company can make, has never attracted the kind of capital that novel technology does.
Reason 2: Accuracy is a shared-responsibility problem. When an AI SDR sends wrong information, whose fault is it? The AI vendor? The deploying company? The person who set up the knowledge base nine months ago? The enablement team that was supposed to maintain it? Shared responsibility means nobody invests, because nobody feels fully accountable.
Reason 3: Accuracy failures are invisible. When an AI tool makes a mistake, the feedback loop is broken. The prospect who receives wrong pricing doesn’t reply to correct it. The buyer who encounters inconsistent information doesn’t call to complain. The deal that dies from accumulated truth erosion shows up as “no decision” in the CRM. The damage is real but unattributed, and unattributed damage doesn’t generate investment.
Research from Validity shows that poor data quality costs organizations an average of $12.9 million per year in direct impact (Validity Data Quality Report, 2024). Yet only 3% of data quality budgets are allocated proactively - the remaining 97% is spent reactively, cleaning up problems after they’ve caused damage.
The Missing Thesis
I think there’s an investment thesis hiding in this gap, and it’s straightforward:
The companies that win the AI era won’t be the ones with the best models. They’ll be the ones with the most accurate knowledge.
Here’s why.
Models are commoditizing. GPT-4, Claude, Gemini, Llama - they’re all very good, and getting better rapidly. The performance gap between the best model and the fifth-best model is narrowing every quarter. In two years, the model layer will be a commodity - excellent everywhere, differentiated nowhere.
What won’t commoditize is the knowledge layer. The accuracy of what your AI knows about your company - your pricing, your capabilities, your competitive positioning, your customer evidence - is proprietary, company-specific, and requires continuous governance. No foundation model provides this. No AI vendor maintains it. It’s your responsibility, and it’s irreducibly unique to you.
The model is the engine. The knowledge is the fuel. A Ferrari running on contaminated fuel performs worse than a Honda running on premium. And right now, the entire AI revenue industry is investing in faster engines while pouring whatever’s lying around into the tank.
According to a Bain & Company analysis on AI competitive advantage, by 2027, “proprietary data quality and governance” will be a more important differentiator than “AI model selection” for 60% of enterprise AI applications (Bain AI Strategy Report, 2025).
The Correction That’s Coming
I think this investment imbalance will correct - not because of enlightened foresight, but because of two forcing functions.
Forcing function 1: Regulation. The EU AI Act makes the deploying company responsible for the accuracy of its AI systems’ outputs, with penalties of up to €35 million or 7% of global annual turnover, whichever is higher (European Commission, 2024). When inaccuracy has a legal price tag, investment in accuracy follows. Regulation doesn’t care about your model architecture. It cares about whether your AI told the truth.
Forcing function 2: Economic pain. The companies that deployed AI fastest are now discovering the damage: AI SDRs with 70-80% churn rates because the knowledge was stale. Chatbots that contradict the sales team. Proposal generators that cite churned customers. At some point, the ROI on AI investment turns negative if accuracy isn’t addressed - and that inflection point is arriving now.
The correction will create enormous value for the companies that solve the accuracy layer. Because every company that has invested in faster, better AI - and that’s most of them - will eventually realize they also need accurate AI. The demand for knowledge governance will be proportional to the installed base of AI tools. And the installed base is very, very large.
The Opportunity
Here’s the scope of the opportunity, stated plainly.
Every company that uses AI in its revenue stack needs a governed source of Commercial Truth. Every company that uses a chatbot needs to ensure the chatbot’s knowledge base is current. Every company that uses an AI SDR needs to verify the information the SDR operates from. Every company that uses a proposal generator needs to guarantee that the proposals contain accurate claims.
The addressable market is proportional to the total deployment of AI in revenue - which is growing at 45% CAGR (Forrester, 2025).
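To make the 45% CAGR concrete, here is a minimal compound-growth sketch. The 45% rate is the Forrester figure cited above; the $10B starting market size is an illustrative assumption, not a number from the article.

```python
def project(base: float, cagr: float, years: int) -> float:
    """Market size after `years` of compound annual growth."""
    return base * (1 + cagr) ** years

base_2025 = 10.0  # hypothetical starting market size, in $B
cagr = 0.45       # Forrester's cited growth rate

# A market compounding at 45% roughly doubles every two years.
for year in range(6):
    size = project(base_2025, cagr, year)
    print(f"{2025 + year}: ${size:.1f}B")
```

At that rate, any starting market grows more than sixfold in five years, which is why the installed base of AI tools matters more than the current size of the governance market.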
The alternative - AI that is fast, fluent, and wrong - is becoming less and less acceptable to buyers, regulators, and the companies that deploy it. The market for making AI right is the fastest-growing invisible market in enterprise software.
The $2 trillion has already been spent on speed. The accuracy investment is just beginning. And the companies that make this investment - in truth infrastructure, knowledge governance, and source verification - won’t just have better AI. They’ll have the only AI that’s worth deploying.
Frequently Asked Questions
How much has been invested in AI capabilities versus AI accuracy?
Global investment in AI capabilities (models, infrastructure, applications) exceeded $150 billion in 2024. AI data quality, knowledge governance, and output verification together received less than $5 billion - a roughly 30:1 ratio of capability investment to accuracy investment (Stanford HAI AI Index, 2025).
Why is AI accuracy underfunded compared to AI capability?
Three primary reasons: (1) accuracy-focused solutions are perceived as “data hygiene” rather than breakthrough technology, making them less attractive to investors; (2) accuracy is a shared-responsibility problem with no clear single owner; and (3) accuracy failures are invisible - they manifest as “no decision” deal outcomes rather than traceable errors, creating weak feedback loops that don’t generate investment urgency.
Will AI model improvements solve the accuracy problem?
No. Better models reduce hallucination (AI inventing facts from nothing) but do not address source error (AI accurately retrieving outdated information). As models improve, source error becomes the dominant failure mode - because the AI retrieves your stale documents more precisely and presents them with greater confidence. The accuracy problem is in the knowledge layer, not the model layer.
What is the Commercial Truth infrastructure market opportunity?
Every B2B company using AI in its revenue stack (projected at 80%+ of companies above $10M ARR by 2027) needs governed Commercial Truth to ensure AI accuracy. The addressable market is proportional to total AI revenue deployment, growing at 45% CAGR. As regulation (EU AI Act) creates legal liability for AI inaccuracy, the market will transition from “nice to have” to “mandatory infrastructure.”