Listen now on YouTube | Spotify | Apple Podcasts
Over the past 20 years, companies spent $2 trillion on Google AdWords, the foundation of how B2B companies get found online. ChatGPT killed that model in 18 months. The same pattern is emerging with LinkedIn marketing and will likely repeat with AI-based search. Organic inbound traffic, the bedrock of modern B2B marketing, is quickly dying.
“Web traffic for the organic is way down,” says Eric Kavanagh, AI analyst and host of DM Radio. “Companies are freaking out. Google’s empire is teetering.”
About Eric Kavanagh
Eric Kavanagh is an AI analyst and host of DM Radio. Since 2005, he’s conducted over 2,000 podcasts, radio shows, and webinars, including 20+ years with The Data Warehousing Institute (TDWI). He’s watched every major technology wave: the dotcom boom, big data, cloud computing, and now AI. He’s not a futurist making predictions—he’s a pattern-spotter who’s seen this chaos before.
While Google scrambles, most companies aren’t even paying attention to that disruption. They’re too busy making worse mistakes with their own AI initiatives. Eighty to ninety-five percent of AI projects fail because nobody asked basic questions before starting. The technology works fine; the thinking behind how companies approach it doesn’t.
“We’re in this sort of ready, fire, aim mode,” Kavanagh says. “Many companies are pulling the trigger on programs, and that’s why you’re seeing 80% failure rates, 95% failure rates. People didn’t really think through what they were trying to do with this stuff.”
He’s identified three decisions that separate the 20% who succeed from the 80% who burn money and credibility. Get them right and you’ll deliver measurable value while your competitors chase demos. Get them wrong and you’ll join the companies explaining to the board why that $2M AI initiative produced nothing but expensive lessons.
Decision 1: Choose tedious, focused problems over impressive demos
Eric Kavanagh asked the AI agent Manus to book plane tickets for his wife. He watched it work, clicking through sites, checking prices, navigating booking flows. “I was wildly faster doing things the old-fashioned way than Manus was, and Manus came up with a price that was twice as much.” This is the pattern haunting many AI projects: they look impressive in demos but are useless in practice. Why does every agentic demo show off managing your calendar? Is your calendar really causing you that much grief?
The AI systems that work aren’t very glamorous for demos. They often handle boring tasks like anti-money laundering and regulatory compliance checks. “Doing things in the background or on the side,” Kavanagh says. Success looks like “our system processed 10,000 transactions with 99.5% accuracy,” and not “our business leaders transformed how we think about customer engagement.”
New AI models come out constantly, and new capabilities are announced on a daily basis. AI vendors promise the moon while legacy players AI-wash their capabilities. When everything changes this fast, companies reach for whatever looks impressive instead of figuring out what’s actually useful and strategic. “Take a deep breath,” Kavanagh says. “Figure out, what are we trying to accomplish with these things? What are the cost models?” Most companies skip that step. They want transformation when they need Tuesday’s invoices processed correctly.
The Manus test is simple: Watch your AI work and ask yourself if it’s genuinely better than doing it the old way. If the honest answer is no, you’ve built an expensive demo that costs twice as much and delivers half the value.
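To make the test concrete, here’s a minimal Python sketch. The two callables are hypothetical stand-ins for your agent and a person doing the task by hand, and the numbers simply echo Kavanagh’s flight-booking result.

```python
import time

def manus_test(task, run_agent, run_manually):
    """Is the agent actually faster AND at least as good (lower price)?"""
    start = time.monotonic()
    agent_price = run_agent(task)
    agent_secs = time.monotonic() - start

    start = time.monotonic()
    manual_price = run_manually(task)
    manual_secs = time.monotonic() - start

    return agent_secs < manual_secs and agent_price <= manual_price

# Stand-ins echoing Kavanagh's flight-booking result: the agent was
# slower and quoted roughly twice the fare found by hand.
print(manus_test(
    "book flight",
    run_agent=lambda task: 1200.00,    # fare the agent found
    run_manually=lambda task: 600.00,  # fare found the old-fashioned way
))  # False: an expensive demo, not a useful system
```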
Even when you choose focused, boring problems that pass the Manus test, you still face a second decision. How do you actually govern these systems when nobody — including their creators — understands how they work?
Decision 2: Audit logs over guardrails
“Guardrails in general are overrated.”
That’s Eric Kavanagh’s position on AI safety. While Anthropic calls itself “the safety company” and OpenAI reportedly trains models to “only blackmail as a last resort,” Kavanagh thinks most of it is theater. Not because safety doesn’t matter, but because guardrails don’t actually work.
The evidence shows up everywhere. Google’s Gemini refuses to answer anything remotely political, which may make it too careful to be useful. Grok gave one user detailed instructions for hacking a smartphone to improve battery life: too loose to provide real protection.
There’s no sweet spot because you can’t build perfect guardrails around systems that nobody fully understands, including the people who created them. “The big models, even the people who designed them don’t know exactly how they work,” Kavanagh points out. Imagine regulators trying to audit OpenAI’s systems. “Think about going into the offices at OpenAI and saying, All right, well, let me see the system you’re using, right? There’s no way, dude, you can’t survey all that.”
Someone tested this opacity recently. They published a webpage with a “noindex” tag and invented a new word. The next day, ChatGPT found it. Nobody knows how. The systems are doing things nobody can verify or explain. But that’s par for the course. Why would we expect anything different from these companies with over-inflated valuations?
This is the reality. The systems are opaque and their behavior is unknowable. With billions of parameters inside a neural network, a meaningful audit is impossible. And despite this, companies keep pouring resources into guardrails that either block legitimate work or fail to stop harmful use.
Kavanagh’s alternative strips away the complexity:
“Audit logs. Just audit logs, as long as you’ve got some log file that says what it did where. I mean, that’s what all the AI agent companies are talking about doing. It’ll log what it does, and that way you can go back and watch it. You have to be able to kill the pods basically, that launch whatever structure you have.”
— Eric Kavanagh, AI Analyst and host of DM Radio
The framework comes down to four questions. What did it do? Where did it do it? Can you stop it? Can you fix the damage if something goes wrong? If your AI agent starts sending emails to customers or updating financial records, you need to be able to shut it down immediately, not wait for a deployment cycle.
“That’s basic governance,” Kavanagh says. It’s not sophisticated and it’s not perfect. But you can actually build it with today’s tools, unlike the fantasy of perfect guardrails.
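Kavanagh doesn’t prescribe an implementation, but a minimal version really is buildable today. Here’s a sketch, assuming a hypothetical agent wrapper; the action names, log format, and kill switch are illustrative, not any real framework’s API.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical agent wrapper: every action is logged before it runs,
# and a kill switch stops the agent without a deployment cycle.
logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

class AuditedAgent:
    def __init__(self, name: str):
        self.name = name
        self.killed = False  # "you have to be able to kill the pods"

    def act(self, what: str, where: str, payload: dict):
        if self.killed:
            raise RuntimeError(f"{self.name} is killed; action refused")
        # Questions one and two, answered up front: what and where.
        logging.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.name,
            "what": what,      # e.g., "send_email", "update_record"
            "where": where,    # e.g., "smtp:billing", "crm.customers"
            "payload": payload,
        }))
        # ... perform the action itself here ...

    def kill(self):
        # Question three: can you stop it? Immediately.
        self.killed = True
        logging.info(json.dumps({"agent": self.name, "event": "killed"}))
```

Remediation, the fourth question, falls out of the log: replay the entries to see exactly what the agent touched, then reverse it.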
Guardrails try to prevent disasters that nobody can predict in systems nobody understands. Audit logs accept that reality. They document what happened and give you the ability to respond when things go wrong, even if the response sometimes comes late.
It’s unglamorous governance for systems nobody fully understands. But it’s honest about the trade-offs. And it’s what the 20% who succeed are quietly building while everyone else argues about prompt engineering and fine-tuning.
Audit logs tell you what happened. But there’s a third decision that determines whether you should have let it happen in the first place.
Decision 3: Never mix deterministic with probabilistic systems
Fifty percent error rate! That’s how often AI gets medical recommendations wrong, according to recent studies Kavanagh cites. Chain multiple AI calls together in workflows and the math gets worse. “There’s an error rate at every step of the way, and when you multiply that out, the error winds up being like 40% or something.”
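The arithmetic behind that figure is simple compounding: assuming each step’s errors are independent, per-step accuracy multiplies across the chain. A quick sketch, with an illustrative 90% per-step accuracy over five steps, lands almost exactly on Kavanagh’s number.

```python
# Compounded error across a chain of AI calls, assuming errors at each
# step are independent. The 90% per-step accuracy and five steps are
# illustrative choices, not figures from the episode.
per_step_accuracy = 0.90
steps = 5

chain_accuracy = per_step_accuracy ** steps  # 0.9**5 ≈ 0.59
chain_error = 1 - chain_accuracy             # ≈ 0.41

print(f"Whole chain correct:  {chain_accuracy:.0%}")  # ~59%
print(f"Something went wrong: {chain_error:.0%}")     # ~41%
```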
For systems recommending Netflix shows, that’s fine. For systems approving credit, processing transactions, or managing regulatory compliance, it’s a disaster waiting to happen.
“Databases aren’t going away. Transactional systems aren’t going away. They will be aided and abetted by these other systems, but you have to be careful not to mix those two... For most business decisions, you need to be very sure about what you’re doing.”
— Eric Kavanagh, AI Analyst and host of DM Radio
The danger isn’t AI suddenly replacing your deterministic systems. The danger is subtler: probabilistic systems slowly become deterministic as humans get tired of reviewing recommendations. The AI didn’t stage a coup; people just got tired of clicking “override.” That’s how you end up with 40% error rates making high-stakes calls.
Here’s how it happens. Your customer support team starts using AI to route tickets. It works well, say at about 90% accuracy. Then someone realizes the AI is pretty good at suggesting refund amounts too. Within six months, support reps stop reviewing the suggestions. Nobody decided to let AI make refund decisions. It just happened. The next thing you know, the AI has been approving fraudulent refund requests at a 35% error rate for three months. That’s the risk companies face.
Deterministic systems like databases and transaction processors must be certain. They make final decisions with legal, financial, or regulatory consequences. Probabilistic systems like AI are helpful but unreliable. They can be wrong, as long as they’re not making decisions that require certainty.
Here’s where to draw the line. Financial transactions, credit approvals, compliance reporting, and access control need to be deterministic. Support ticket routing, content recommendations, and marketing personalization can be probabilistic.
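One way to keep that line explicit in code: the probabilistic side may only suggest, and a deterministic gate makes the final call. This sketch uses hypothetical names and a toy refund policy, not any real system.

```python
REFUND_AUTO_LIMIT = 50.00  # deterministic policy, set by the business

def suggest_refund(ticket: dict) -> float:
    """Probabilistic side: stand-in for an LLM proposing an amount."""
    return ticket.get("suggested_amount", 0.0)  # placeholder for a model call

def approve_refund(ticket: dict, amount: float) -> bool:
    """Deterministic side: hard rules make the final call."""
    if amount <= REFUND_AUTO_LIMIT and amount <= ticket["order_total"]:
        return True   # rule-based, auditable, certain
    return False      # anything bigger escalates to a human, never auto-approves

ticket = {"order_total": 120.00, "suggested_amount": 35.00}
amount = suggest_refund(ticket)
print(approve_refund(ticket, amount))  # True: within the deterministic policy
```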
Kavanagh believes “small language models, or just old-fashioned, deterministic AI models, are going to be ruling the day, at least I hope so, because the big stuff is too big, and it’s unwieldy and you can’t trust it.” The complexity of large models creates unpredictability. Size doesn’t equal performance. It equals loss of control.
Watch for this pattern in your own systems. Are your probabilistic systems creeping into deterministic territory? The 40% error rate doesn’t announce itself. It hides in the gap between “AI recommends” and “nobody checks anymore.”
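A simple early-warning signal is the human review rate on AI recommendations: if it collapses, the probabilistic system is quietly making deterministic calls. This sketch uses hypothetical field names and illustrative numbers.

```python
def review_rate(decisions: list[dict]) -> float:
    """Share of AI recommendations a human actually reviewed."""
    if not decisions:
        return 0.0
    return sum(1 for d in decisions if d["human_reviewed"]) / len(decisions)

# Recommendations logged over two quarters (illustrative numbers):
q1 = [{"human_reviewed": True}] * 85 + [{"human_reviewed": False}] * 15
q3 = [{"human_reviewed": True}] * 20 + [{"human_reviewed": False}] * 80

for label, batch in (("Q1", q1), ("Q3", q3)):
    rate = review_rate(batch)
    warning = " <- drifting toward unreviewed, de facto deterministic" if rate < 0.5 else ""
    print(f"{label}: {rate:.0%} of AI recommendations reviewed{warning}")
```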
While big tech burns billions, you can win with boring
Google is watching $2 trillion in AdWords revenue evaporate, but you can bet they’ll figure this out. Most companies are so busy chasing their own AI strategies that they haven’t noticed the ground shifting beneath them.
Who ends up in the 20% who succeed? Not the companies with the biggest models or flashiest demos.
“We’re in the age of execution right now. The data is everywhere. The algorithms are everywhere. It’s a question of applying them to your particular business to get something done.”
— Eric Kavanagh, AI Analyst and host of DM Radio
McKinsey’s consulting knowledge? “Out in the wild,” Kavanagh says. Proprietary algorithms? Commoditized. Kavanagh remembers the dotcom boom, sitting in the Empire State Building asking which way money was flowing. Same chaos, different technology. The companies that survived weren’t the ones with the best ideas—they were the ones that executed.
“The big guys, they are hemorrhaging cash in the hopes of securing this battleground, this land, you know, like it’s Eastern Ukraine or something,” Kavanagh observes. “But by the time it’s all said and done, I think all the buildings are going to be blown up.”
OpenAI, Microsoft, Google, and Anthropic are burning billions on infrastructure wars. Your battleground is different: execution. Tedious problems. Audit logs. Deterministic decisions.
Unglamorous? Absolutely. Effective? That’s what 20 years of watching AI implementations taught Kavanagh.
Look at your current AI initiatives. Which ones pass the Manus test? Which ones confuse audit logs with bureaucracy? Which ones let probabilistic systems make deterministic decisions?
Here’s the uncomfortable truth. You’re already in one of the two groups. You’re building what works, or you’re building what demos well. You’re in the 20% who asked the hard questions before starting, or the 80% who are about to learn expensive lessons.
Eric Kavanagh has watched this movie before. The ending isn’t a mystery.
Listen to the full conversation with Eric Kavanagh on the Data Faces Podcast.
Connect with Eric: info@dmradio.biz | DM Radio on YouTube (500+ episodes)
Based on insights from Eric Kavanagh, AI analyst, syndicated radio host of DM Radio, featured on the Data Faces Podcast.
Podcast Highlights - Key Takeaways from the Conversation
[2:38] The ready-fire-aim problem killing AI projects “We’re in this sort of ready fire aim mode. Many companies are pulling the trigger on programs, and that’s why you’re seeing 80% failure rates, 95% failure rates. People didn’t really think through what they were trying to do with this stuff.”
[5:53] The Manus test: When impressive demos fail reality Kavanagh tested AI agent Manus by asking it to book plane tickets for his wife. “I was wildly faster myself doing things the old fashioned way than Manus was, and Manus came up with a price that was twice as much.” The pattern: AI looks impressive in demos but fails when you actually use it.
[5:53] What AI success actually looks like “It’s personal optimization of your time, of your productivity.” The implementations that work aren’t sexy—they’re doing tedious, focused tasks in the background. Anti-money laundering. Compliance automation. Things where success is obvious and repeatable.
[9:01] AI judges and juries are coming “In a few years, you’ll start to see some front runners do this, you’ll be able to choose AI judge and jury, or real judge and jury. And my recommendation is, if you’re guilty, go with the real judge and jury. If you’re innocent, go with the machines.”
[11:52] Why guardrails are overrated “Guardrails in general are overrated. I think that they’re very difficult to enforce.” The problem: you can’t build perfect guardrails around systems that nobody fully understands, including the people who designed them.
[11:52] The minimal viable governance framework “Audit logs. Just audit logs, as long as you’ve got some log file that says what it did where... You have to be able to kill the pods basically that launch whatever structure you have.” Four questions: What did it do? Where? Can you stop it? Can you remediate?
[16:10] The $2 trillion disruption nobody planned for “About $2 trillion have been paid to Google AdWords in the past 20 years... That’s out the window now.” ChatGPT gives answers directly—organic web traffic is plummeting. “Companies are freaking out. Google’s empire is teetering.”
[17:44] How error rates compound A company chaining multiple LLM calls together found “there’s an error rate at every step of the way, and when you multiply that out, the error winds up being like 40% or something.” Medical AI is getting recommendations wrong 50% of the time.
[19:39] Never mix deterministic with probabilistic systems “Databases aren’t going away. Transactional systems aren’t going away. They will be aided and abetted by these other systems, but you have to be careful not to mix those two.” For most business decisions, you need certainty.
[22:00] Small models will rule the day “I think small language models, or just old fashioned, deterministic AI models, are going to be ruling the day, at least I hope so, because the big stuff is too big, and it’s unwieldy and you can’t trust it.”
[24:27] IP is dead—we’re in the age of execution “IP is kind of dead. Intellectual property is dead. Copyright is dead... We’re in the age of execution right now. The data is everywhere. The algorithms are everywhere. It’s a question of applying them to your particular business to get something done.”
[38:00] Big tech is hemorrhaging money “Open AI, Microsoft, Google, Anthropic, they’re hemorrhaging money because they want engagement. They’re trying to win this battle... but they’re forgetting that we can leave.”
About David Sweenor
David Sweenor is an AI, generative AI, and product marketing expert. He brings this expertise to the forefront as the founder of TinyTechGuides and host of the Data Faces podcast. A recognized top 25 analytics thought leader and international speaker, David specializes in practical business applications of artificial intelligence and advanced analytics.
Books
Artificial Intelligence: An Executive Guide to Make AI Work for Your Business
Generative AI Business Applications: An Executive Guide with Real-Life Examples and Case Studies
The Generative AI Practitioner’s Guide: How to Apply LLM Patterns for Enterprise Applications
The CIO’s Guide to Adopting Generative AI: Five Keys to Success
Modern B2B Marketing: A Practitioner’s Guide to Marketing Excellence
The PMM’s Prompt Playbook: Mastering Generative AI for B2B Marketing Success
With over 25 years of hands-on experience implementing AI and analytics solutions, David has supported organizations including Alation, Alteryx, TIBCO, SAS, IBM, Dell, and Quest. His work spans marketing leadership, analytics implementation, and specialized expertise in AI, machine learning, data science, IoT, and business intelligence.
David holds several patents and consistently delivers insights that bridge technical capabilities with business value.
Follow David on Twitter @DavidSweenor and connect with him on LinkedIn.