Listen now on YouTube | Spotify | Apple Podcasts
I’ve often equated people with birds – they’re always chasing the next shiny object that comes along, in this case, agentic AI. Sadly, I’ve watched a scene like this play out at nearly every company I’ve worked with. A team builds an AI assistant, loads it with pristine data, asks it well-crafted questions, and the demo goes flawlessly. Leadership sees the result and greenlights production.
Six months later, the project has lost its luster, the team is frustrated, and the models underlying the assistant that looked brilliant in a conference room can’t survive contact with the real world. What happened? The model didn’t break. The environment changed, and the carefully curated conditions behind the shiny demo simply don’t exist at scale across a messy, fragmented enterprise.
In a recent episode of the Data Faces Podcast, I sat down with Asa Whillock to talk about what it actually takes to move AI from pilot to production. Asa has spent 35 years in software and has lived this challenge from every angle, inside large enterprises like Adobe and Alteryx, and now as the founder of his own AI company. His core argument is one that every data and AI leader needs to hear: the distance between a brilliant demo and a production system has very little to do with model capability. It’s a data context problem. And most organizations are only scratching the surface of what “context” really means.
“When you think about what makes AI production-ready, it is really not so much about the model. When you talk about demos and pilots, you’re almost always talking about a cultivated set of data and a cultivated set of questions so that the result is just outstanding. You’re like, this is amazing, why would we not deploy this everywhere?” — Asa Whillock, CEO and Founder, Euphonic AI
About Asa Whillock
Asa Whillock is the CEO and founder of Euphonic AI, a growth acceleration agent serving revenue operations and demand generation leaders who engineer organizational growth. His career spans 35 years in software across major enterprises, including Adobe, Alteryx, Intel, and AOL, with deep experience in analytics, product, and go-to-market strategy. He also spent two years performing stand-up comedy, which explains his talent for making complex enterprise AI concepts feel accessible through unexpected analogies. In our conversation on Episode 32 of the Data Faces Podcast, we discuss:
Why AI pilots deceive leadership about production readiness
The three categories of data context that most organizations are missing
Why chasing frontier models is a distraction from the real work
How to connect AI investment to the metrics that actually drive your business
The three buckets of context that make AI production-ready
So if the model isn’t the problem, what is? Asa frames it with an analogy that will feel familiar to anyone who has worked inside a large organization: enterprise data is “scattered like toys across a two-year-old’s bedroom.” From the outside, you’d assume that companies with thousands of employees and massive technology budgets have their data neatly organized and ready to fuel AI systems. Well, you know what they say about assumptions, and nothing could be further from the truth. It’s the reason so many promising AI initiatives fail to launch on their way to production.
Asa breaks the challenge into three categories of context that AI needs to function at production quality. Most organizations are actively working on the first category and are barely aware that the other two exist.
Machine-driven data context
This is the layer that gets the most attention. Machine-driven data is the row-level, tabular data living in your CRM, ERP, and HRM systems. Data teams have been working with this information for years, and the challenge here isn’t a lack of awareness. The challenge is that it’s spread across roughly 350 systems per enterprise, and each system has its own opinion on what data should look like. Connecting them is painful, expensive, and slow, but at least organizations recognize this work needs to happen.
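To make the fragmentation concrete, here is a minimal sketch of the reconciliation work involved. All system names, field names, and values below are hypothetical; the point is only that each system has its own opinion about schema, and something has to map them onto a shared one before AI can use them together:

```python
# Hypothetical: two systems describe the same customer with different schemas.
crm_record = {"AccountName": "Acme Corp", "ARR_USD": 120000, "OwnerEmail": "jo@acme.example"}
erp_record = {"customer": "ACME CORPORATION", "annual_revenue": 120000, "region": "NA"}

def normalize_name(name: str) -> str:
    """Crude canonicalization so records from different systems can be matched."""
    return name.lower().replace("corporation", "corp").strip()

def to_canonical(record: dict, mapping: dict) -> dict:
    """Map one system's field names onto a shared canonical schema."""
    return {canonical: record[local] for canonical, local in mapping.items() if local in record}

crm = to_canonical(crm_record, {"name": "AccountName", "arr": "ARR_USD"})
erp = to_canonical(erp_record, {"name": "customer", "arr": "annual_revenue"})

# Matching on the normalized name links the two views of the same account.
assert normalize_name(crm["name"]) == normalize_name(erp["name"])
print(crm, erp)
```

Multiply this mapping exercise by roughly 350 systems and their pairwise disagreements, and the cost of the machine-data layer becomes clear.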
Operational metadata context
This is the layer most teams overlook entirely. Operational metadata includes the configuration settings, workflow routing rules, and log files that determine how data actually moves through an organization. Asa describes this as “the train control that determines if this train goes to Seattle or Albuquerque.” A single configuration switch can change the entire path a customer record follows, and that switch often lives buried in a UI screen that no API can reach. When AI operates without awareness of these controls, it’s making recommendations based on data flows it doesn’t actually understand.
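Asa’s train-control analogy can be illustrated with a small sketch. The config keys, queue names, and routing rules here are invented, but they show how a single switch, often set in a UI no API exposes, changes where a record travels:

```python
# Hypothetical routing config: one switch changes where a lead ends up.
ROUTING_CONFIG = {
    "lead_routing": "round_robin",  # alternative: "territory_based"
    "enrichment_enabled": True,
}

def route_lead(lead: dict, config: dict) -> str:
    """The destination depends on a config switch, not on the lead data itself."""
    if config["lead_routing"] == "territory_based":
        return f"queue:{lead.get('region', 'unassigned')}"
    return "queue:round_robin"

lead = {"name": "Jane Doe", "region": "EMEA"}
print(route_lead(lead, ROUTING_CONFIG))
print(route_lead(lead, {**ROUTING_CONFIG, "lead_routing": "territory_based"}))
```

An AI that sees only the lead records, and not `ROUTING_CONFIG`, cannot explain or predict which queue a record lands in.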
Human decision data context
“Think about the institutional memory of your organization. You can almost always name that person. Imagine living your life without that person in every decision you ever made. You’d be this uninformed AI, guessing. When you have the machine data, the metadata, and the human decision context together, now your AI is ready for production.” — Asa Whillock, CEO and Founder, Euphonic AI
This is the hardest layer to capture and the most valuable once you have it. Human decision data is the institutional knowledge that lives in people’s heads, the reasoning behind why an organization made specific choices, abandoned certain approaches, or accepted particular architectural tradeoffs. Every organization has an institutional memory holder who knows where the skeletons are, the person everyone turns to when they need to understand why a decision was made three years ago. That person knows which approaches were tried and abandoned, which compromises became permanent, and which workarounds nobody ever documented. Asa puts the stakes plainly: imagine making every decision in your organization without that person’s knowledge. That’s the position your AI is operating from right now if you haven’t captured this layer.
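One lightweight way to start capturing this layer is to record each decision as a structured note an AI can retrieve. This is a sketch, loosely modeled on the architecture-decision-record pattern; the fields and the example decision are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """Captures why a choice was made, not just what the choice was."""
    decision: str
    rationale: str
    alternatives_rejected: list = field(default_factory=list)
    decided_by: str = ""
    date: str = ""

# Illustrative institutional memory, normally built up over years.
memory = [
    DecisionRecord(
        decision="Keep billing on the legacy ERP",
        rationale="Migration risk outweighed the savings during the re-platform",
        alternatives_rejected=["Full migration to the new ERP"],
        decided_by="J. Smith",
        date="2022-06",
    ),
]

def why(topic: str, records: list) -> list:
    """Retrieve the reasoning behind past decisions matching a topic."""
    return [r.rationale for r in records if topic.lower() in r.decision.lower()]

print(why("billing", memory))
```

The retrieval here is deliberately naive; the value is in writing the rationale down at all, so the AI is no longer guessing in place of your institutional memory holder.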
The context gap is a $130 billion problem
Each of these gaps is costly on its own, but the compound effect of missing two or three layers simultaneously is what stalls most AI initiatives. Kimberly at Andreessen Horowitz has pointed out that 9 out of 10 automations that could exist today simply don’t, because unlocking the data needed to build them is too difficult. By her firm’s estimate, that represents a $130 billion opportunity in context gaps that have nothing to do with model capability.
Stop chasing models and start hydrating with data context
The industry’s obsession with frontier models is understandable. The capabilities are genuinely impressive, and as Asa puts it, “models are so hot right now.” For data science leaders, the temptation to evaluate, compare, and chase the latest release feels like productive work because the improvements are measurable and exciting.
But Asa argues that this focus is a distraction from the work that will actually determine whether your AI investments succeed or fail. He frames the choice in terms that are hard to ignore. Given the option between a frontier model paired with a poor data context and a slightly less capable model that has been fully hydrated with all three layers of context, the hydrated model wins every time. A world-class model operating without awareness of your operational metadata and institutional decision history will produce outputs that sound polished but miss how your organization actually works. A less flashy model armed with deep context will deliver results that make sense to the people who have to act on them.
“If you’re going to choose a model that is just at the absolute frontier of capability with poor data, or take a model that is maybe half a step backwards in capability but give it all of the context it needs to make an amazing outcome, I’ll tell you which one I will choose every time. Invest in breaking down those barriers of permissions, access, data, those unsexy things.” — Asa Whillock, CEO and Founder, Euphonic AI
The highest-leverage work available right now isn’t evaluating the next model release. It’s the unglamorous, difficult work of breaking down permission barriers, building cross-system data access, and finding ways to capture the institutional knowledge that currently lives only in people’s heads. That’s where production-ready AI gets built, and it’s the work most organizations keep deferring because it doesn’t generate the same excitement as a new model announcement.
Aim your context at the metrics that actually drive your business
Once you’ve mapped your three layers of context, you need to aim them at specific targets. This is where Asa’s advice shifts from framework to action, and it starts with a question he hears constantly from leadership: if AI is so transformative, where is the ROI? Asa’s response is to turn the question back on the person asking it.
“I ask leadership, what are the five things that drive your ROI? Have you deployed an AI solution to tune for that? Have you looked at what actually impacts that metric? Almost every time the answer is, well, no, I haven’t really dug into that. If you ask yourself those questions and drive into those key metrics, you will find that transformative ROI.” — Asa Whillock, CEO and Founder, Euphonic AI
What are the five or six metrics that actually drive your business? Not the vanity metrics on a dashboard, but the numbers that directly influence customer acquisition cost, speed to lead, time to value, or whatever levers matter most for your organization. And once you’ve named them, have you dug into what drives those metrics? Most of the leaders Asa talks to haven’t done this work. They’ve deployed AI against surface-level outcomes and are puzzled when the results feel incremental rather than transformational.
One of the most formative experiences in Asa’s career was watching Adobe build what they called a data-driven operating model. Adobe didn’t just track top-line metrics like customer adoption and retention. They decomposed each metric three to five layers deep to understand what specifically influenced it, and then what influenced those influencing factors. That level of decomposition is what turned their AI investments from interesting experiments into systems that moved the business forward.
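The decomposition idea can be sketched as a simple metric tree. The metric names and structure below are illustrative, not Adobe’s actual model; they show what “three to five layers deep” looks like mechanically:

```python
# Hypothetical metric tree: each metric is driven by sub-metrics, layers deep.
METRIC_TREE = {
    "retention": ["product_adoption", "support_satisfaction"],
    "product_adoption": ["time_to_value", "feature_usage"],
    "time_to_value": ["onboarding_completion", "first_week_logins"],
}

def decompose(metric: str, tree: dict, depth: int = 0) -> list:
    """Walk the tree and list every driver of a top-line metric with its depth."""
    drivers = []
    for child in tree.get(metric, []):
        drivers.append((depth + 1, child))
        drivers.extend(decompose(child, tree, depth + 1))
    return drivers

for level, driver in decompose("retention", METRIC_TREE):
    print("  " * level + driver)
```

Each leaf of the tree is a candidate place to aim an AI investment; the top-line number alone tells you nothing about where to point it.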
Next steps
“You can’t be standing back, going, where’s the ROI? You have to have visibility. You have to have headlights on the metrics that really drive your business and how they’re really tuning up. Focus on what matters for you.” — Asa Whillock, CEO and Founder, Euphonic AI
The organizations that win with AI over the next few years won’t be the ones running the most capable models. They’ll be the ones who did the unglamorous work of stitching their data context together across systems and connecting it to the metrics that actually matter for their business.
If you’re a data or AI leader trying to figure out where to focus, Asa’s framework gives you a practical starting point.
Audit your three buckets of context. For your highest-priority AI initiative, map which layers you actually have coverage on today. Most teams will find they’re reasonably strong on machine-driven data and nearly empty on operational metadata and human decision context.
Decompose the metrics that matter for your business. Name the five or six numbers that actually drive your outcomes and dig three to five layers deep into what influences them. That decomposition is where you’ll find the specific places to aim your AI investments.
Capture institutional knowledge before it disappears. Identify the three to five people in your organization who hold the decision-making context that no system has ever recorded, and start documenting what they know.
Listen to the full conversation with Asa Whillock on the Data Faces Podcast.
Based on insights from Asa Whillock, CEO and Founder at Euphonic AI, featured on the Data Faces Podcast.
Podcast Highlights - Key Takeaways from the Conversation
[0:51] Asa introduces his 35-year career in software and the founding of Euphonic AI
[2:22] The Voltron analogy, stand-up comedy, and making enterprise AI concepts accessible
[4:16] Why large enterprises are “relentlessly vertical” and how that creates friction for AI
[7:28] The ebb and flow between horizontal and vertical software cycles
[8:39] Why AI pilots deceive you: cultivated data versus the reality of production
[10:00] The three buckets of data context: machine-driven data, operational metadata, and human decision data
[13:17] The role of unstructured data and why operational context matters more than document archives
[15:01] “What do you want to be great at?” and why companies shouldn’t pivot to becoming AI companies
[19:16] Vibe coding, competitive parity, and why adding the same capabilities as everyone else nets to nothing
[22:24] How to align AI investments with your business differentiation instead of chasing technology
[25:51] Why data context matters more than model capability and the case for the “half-step-back” model
[29:25] The bridge between systems of record and why nobody is incentivized to build it
[32:29] Asa’s one piece of advice: identify the five metrics that drive your business and dig three to five layers deep
About David Sweenor
David Sweenor is the founder and host of the Data Faces podcast, where he talks with the people who are making data, analytics, AI, and marketing work in the real world. He is also the founder of TinyTechGuides and a recognized top 25 analytics thought leader and international speaker who specializes in practical business applications of artificial intelligence and advanced analytics.
With over 25 years of hands-on experience implementing AI and analytics solutions, David has supported organizations including Alation, Alteryx, TIBCO, SAS, IBM, Dell, and Quest. His work spans marketing leadership, analytics implementation, and specialized expertise in AI, machine learning, data science, IoT, and business intelligence. David holds several patents and consistently delivers insights that bridge technical capabilities with business value.
Books
Artificial Intelligence: An Executive Guide to Make AI Work for Your Business
Generative AI Business Applications: An Executive Guide with Real-Life Examples and Case Studies
The Generative AI Practitioner’s Guide: How to Apply LLM Patterns for Enterprise Applications
The CIO’s Guide to Adopting Generative AI: Five Keys to Success
Modern B2B Marketing: A Practitioner’s Guide to Marketing Excellence
The PMM’s Prompt Playbook: Mastering Generative AI for B2B Marketing Success
Follow David on Twitter @DavidSweenor and connect with him on LinkedIn.