What if your best AI governance asset already exists?

Insights from Gartner, LSEG, and Solidatus on why data lineage is the foundation for AI trust

Listen now on YouTube

The Data Faces Podcast on location with Philip Dutton, Founder and CEO of Solidatus

I walked the expo floor at the Gartner Data & Analytics Summit in Orlando expecting every conversation to be about AI, agents, and context layers. They were, and plenty of vendors had agent-washed their messaging overnight. The vendors were talking about the future, but the practitioners were pointing to something they already had.

The opening keynote set the tone. Adam Ronthal and Georgia O’Callaghan reported that four out of five organizations are now deploying AI, but only one in five will achieve their stated ROI.[1] Governance, they argued, is a value accelerator and should be treated as one. With AI agents at every vendor booth and in nearly every session title, the question of how to govern autonomous systems had real urgency behind it.

I carried that framing into on-location interviews for the Data Faces Podcast with Philip Dutton, CEO and founder of Solidatus, Terrence Hedin, Data and Metadata Platform Director at the London Stock Exchange Group, and Caleb Watkins, Solutions Engineer at Solidatus. Three different roles, three different vantage points, and they all pointed to the same thing. The most valuable AI governance asset many organizations have is the data lineage and metadata infrastructure that their compliance teams built years ago. Solidatus, a data lineage and metadata management platform used by financial services and other regulated industries, served as the common thread across all three conversations.

From second-class citizen to strategic asset

I suggested to Terrence Hedin that before AI changed the conversation, lineage and metadata were treated as second-class citizens. He expanded on that.

“It has evolved. It is a first-class citizen,” Terrence said. “Every business requirement spec includes lineage at an element level. Every tech spec includes how you produce that lineage.”

At LSEG, lineage used to answer a narrow set of questions. Where did this data come from? Can we prove it to regulators? Those questions still matter, but Terrence described how LSEG now brings business metadata, technical metadata, and semantic layers together into a knowledge graph that serves the entire organization.

“We bring our business metadata, our technical metadata, our semantic layers, into a knowledge graph so we can build that true business context. That provides not only human benefit, but machine benefit as well.” Terrence Hedin, Data and Metadata Platform Director, LSEG

LSEG now treats metadata as a data product, published to both internal teams and external customers. Not a theoretical data mesh exercise, but a commercial product. The governance infrastructure they built for compliance became the foundation for a revenue-generating line of business.
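The knowledge-graph idea Terrence describes can be sketched in a few lines. This is an illustrative toy, not LSEG's or Solidatus's implementation: the node names, the `tag` glossary mechanism, and the traversal are all invented to show how technical lineage (data flows) and business metadata (glossary terms) can live in one queryable structure.

```python
# Hypothetical sketch: technical and business metadata combined into one
# knowledge graph. Table and term names are invented for illustration;
# a real platform would load these from its catalogs.
from collections import defaultdict

class MetadataGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # node -> directly downstream nodes
        self.labels = {}                # node -> business glossary terms

    def add_flow(self, source, target):
        """Technical metadata: data flows from source to target."""
        self.edges[source].add(target)

    def tag(self, node, term):
        """Business metadata: attach a glossary term to a node."""
        self.labels.setdefault(node, set()).add(term)

    def downstream(self, node):
        """Everything fed, directly or indirectly, by this node."""
        seen, stack = set(), [node]
        while stack:
            for nxt in self.edges[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

g = MetadataGraph()
g.add_flow("trades_raw", "trades_clean")
g.add_flow("trades_clean", "risk_report")
g.tag("trades_clean", "MiFID-reportable")

# A human (or an AI agent) can now ask: what does trades_raw feed?
print(g.downstream("trades_raw"))  # contains both downstream nodes
```

The same structure answers the compliance question ("where did this come from?") and the machine-context question ("what does this feed?"), which is the dual human-and-machine benefit Terrence points to.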

Gartner research supports this trajectory. In the session “Trust as the New Currency,” Guido De Simoni presented data showing that organizations with graduated trust models achieve 64% compliance success compared to 23% without them.[2] The trust frameworks that organizations like LSEG built for regulators directly support AI readiness.

Caleb Watkins, a Solutions Engineer at Solidatus, showed me a related capability. Because Solidatus centralizes all data and metadata in one place, organizations can load their regulations as reference models and let the AI assistant evaluate compliance across their data landscape.

“We can train Solidatus up on those regulations, and then we can ask the assistant to assess your models for compliance with these different regulations to make sure that you’re meeting all of your objectives.” Caleb Watkins, Solutions Engineer, Solidatus

Lineage is no longer just a record of where data came from. It’s becoming the system that evaluates whether your data meets the obligations attached to it.
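The regulation-as-reference-model idea Caleb describes might look like the following. This is a sketch under stated assumptions: the rule names, the simplified BCBS 239-style checks, and the element schema are all invented, since Solidatus's actual rule format isn't documented here.

```python
# Illustrative only: a regulation expressed as a reference model of required
# metadata properties, evaluated against data elements in a lineage model.
REGULATION = {  # invented, simplified BCBS 239-style rules
    "must_have_owner": lambda el: bool(el.get("owner")),
    "must_have_lineage": lambda el: bool(el.get("upstream")),
    "retention_max_years": lambda el: el.get("retention_years", 0) <= 7,
}

def assess(elements):
    """Return {element name: [failed rule names]} for non-compliant elements."""
    findings = {}
    for el in elements:
        failed = [rule for rule, check in REGULATION.items() if not check(el)]
        if failed:
            findings[el["name"]] = failed
    return findings

elements = [
    {"name": "trades_clean", "owner": "risk-team",
     "upstream": ["trades_raw"], "retention_years": 7},
    {"name": "orphan_table", "retention_years": 10},  # no owner, no lineage
]
print(assess(elements))
# {'orphan_table': ['must_have_owner', 'must_have_lineage', 'retention_max_years']}
```

Because every rule runs over the same centralized metadata, adding a new regulation means adding a new reference model, not a new inventory exercise.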

Your compliance operating model already works for AI

I said something to Philip Dutton that surprised us both. “Something that was built for compliance is now incredibly useful for AI.”

Philip didn’t hesitate. Whether it’s an AI consuming data, a BI dashboard pulling reports, or another system sharing information across business lines, the obligations are the same. Purpose limitations, storage rules, and sharing boundaries all travel with the data.

“You don’t have to change your operating model for AI governance. You can use the same operating model that you’ve been using, which the organization knows, and it takes them a long time to get to know it and to feel comfortable with it. So this really gives you a nice accelerator.” Philip Dutton, CEO and Founder, Solidatus

The program already exists, and your teams know how to run it. The organizational trust has been earned over years of practice. Rather than standing up a parallel AI governance function, extend the operating model you already have.
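Philip's point that obligations travel with the data, not with the consumer, can be made concrete with a minimal sketch. The obligation fields and dataset names below are hypothetical, not a real Solidatus schema; the point is that the same check serves a BI dashboard and an AI agent identically.

```python
# Sketch of the "same operating model" idea: one policy check applied
# identically to any consumer. Field names are invented for illustration.
DATASET_OBLIGATIONS = {
    "customer_txns": {
        "allowed_purposes": {"fraud-detection", "regulatory-reporting"},
        "sharing": "internal-only",
    },
}

def may_consume(dataset, consumer_type, purpose):
    """Purpose limitation travels with the data.

    `consumer_type` is deliberately unused: the model is consumer-agnostic,
    so an AI agent gets exactly the same answer as a BI dashboard.
    """
    obligations = DATASET_OBLIGATIONS.get(dataset)
    if obligations is None:
        return False  # unknown data: default deny
    return purpose in obligations["allowed_purposes"]

print(may_consume("customer_txns", "bi-dashboard", "fraud-detection"))  # True
print(may_consume("customer_txns", "ai-agent", "marketing"))            # False
```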

Gartner analyst Andrés García-Rodeja reinforced this point in the session “How to Build the Context Layer for Reliable AI Agents.” By 2028, he estimates, 60% of agentic analytics projects relying solely on the Model Context Protocol will fail due to the lack of a consistent semantic layer.[3] The metadata and lineage infrastructure that compliance teams maintain is exactly the kind of semantic foundation AI agents need to operate reliably. For AI teams building production pipelines and deploying agents, the implication is direct: the semantic layer your models need may already exist in your governance program.

And that operating model is getting faster. Caleb walked me through code scanning with the AI Lineage Assistant, which reduces what used to take several days of manual analysis to five to ten minutes, with 10x to 100x acceleration across broader governance workflows. In the session “Using Active Metadata to Support Data Agents,” Gartner analyst Mark Beyer presented research showing that metadata volume grows exponentially with agentic AI.[4] Manual approaches to lineage and governance won’t survive that scale. Organizations like LSEG that automated their metadata workflows early have a compounding advantage over those still relying on spreadsheets and tribal knowledge.

AI trust starts with what you can see

Every conversation I had at the summit circled back to trust. De Simoni found that more than 50% of vendors identify trust as the top barrier to agentic AI adoption.[5] Gartner expects unsupervised AI deployment to remain below 10% through 2028. The industry is building AI agents faster than it’s building the trust infrastructure to support them.

Philip put it simply: “If we can’t see it, if we can’t understand it, how do we trust it?” Solidatus renders data lineage as interactive visual maps rather than rows of metadata in a spreadsheet. Visualization isn’t a nice-to-have for governance. When people can see their data lineage mapped out and confirm it matches their understanding of the organization, they trust it. When they’re poring over raw metadata for hours, they generate questions, not confidence.

That principle extends to AI outputs as well. Solidatus built hallucination protection directly into the AI Lineage Assistant. If the LLM returns a response that isn’t grounded in metadata within the platform, the system rejects it and forces a new attempt. The response has to be anchored in real data before it reaches the user. In financial services and other regulated industries, where human-in-the-loop oversight is standard, that validation layer is non-negotiable.
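The grounding loop described above could be sketched as follows. This is a minimal illustration, not Solidatus's implementation: the `[[entity]]` citation markup, the entity set, and the retry limit are all assumptions made for the example, and `call_llm` is a placeholder for whatever model the customer connects.

```python
# Minimal sketch of grounding validation: reject any LLM answer that
# references entities absent from the platform's metadata, and retry.
import re

KNOWN_ENTITIES = {"trades_raw", "trades_clean", "risk_report"}

def grounded(answer):
    """Accept only answers whose referenced entities exist in the metadata."""
    referenced = set(re.findall(r"\[\[(.+?)\]\]", answer))  # assumed markup
    return bool(referenced) and referenced <= KNOWN_ENTITIES

def ask(question, call_llm, max_attempts=3):
    for _ in range(max_attempts):
        answer = call_llm(question)
        if grounded(answer):
            return answer
    raise RuntimeError("No grounded answer produced; refusing to guess.")

# Stub model: the first answer hallucinates a table, the second is grounded.
answers = iter(["See [[phantom_table]].", "[[trades_clean]] feeds [[risk_report]]."])
print(ask("What feeds the risk report?", lambda q: next(answers)))
# -> [[trades_clean]] feeds [[risk_report]].
```

The design choice worth noting is the failure mode: after exhausting retries, the system refuses rather than passing an ungrounded answer to the user, which is what human-in-the-loop oversight in regulated industries requires.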

Terrence described how trust and lineage connect at enterprise scale. LSEG’s data trust program is built on four elements of trust, with Solidatus providing the lineage foundation.

“If we don’t understand what that data is, it’s very difficult for us to understand how we can use it, how we should use it, what value it can provide,” Terrence said.

Trust becomes even more critical as AI agents grow more autonomous. Philip pointed out that much of what vendors call “AI agents” today are chatbots running on request-response. True agentic AI creates its own plan, executes across 20 to 50 steps, and self-corrects along the way. Without lineage and metadata infrastructure, organizations have no way to verify what an agent did or why.

What to do with the infrastructure you already have

The data lineage and metadata systems that compliance teams built over the past decade are becoming the critical infrastructure layer for AI trust, AI agents, and AI governance. LSEG proved that by turning their lineage program into a strategic asset and a commercial data product. Solidatus proved it by extending a governance platform into an AI-accelerated workflow engine.

If your organization has invested in data lineage for compliance, the next step isn’t building a separate AI governance program. Audit what you already have. Identify where it covers AI use cases. Close the gaps. If you lead an AI or data science team, ask your governance counterpart what lineage coverage already exists for your training data, production models, and agent workflows. The organizations that connect these functions now will govern AI with confidence. The ones that start from scratch will spend the next two years catching up.

Listen to the full conversations with Philip Dutton, Terrence Hedin, and Caleb Watkins on the Data Faces Podcast.


Based on insights from Philip Dutton, CEO and Founder at Solidatus, Terrence Hedin, Data and Metadata Platform Director at LSEG, and Caleb Watkins, Solutions Engineer at Solidatus, featured on the Data Faces Podcast.


Frequently asked questions

What is Solidatus?

Solidatus is a data lineage and metadata management platform that maps, visualizes, and governs data flows across the enterprise. It is used primarily by financial services and other regulated industries to track how data moves through systems, meet compliance obligations, and build organizational trust in data. The platform recently introduced an AI Lineage Assistant that adds natural language interaction, automated code scanning, and regulatory compliance assessment to its existing governance capabilities.

Why is data lineage important for AI governance?

Data lineage documents where data comes from, how it moves through systems, and what obligations are attached to it. Those obligations, including purpose limitations, storage rules, and sharing boundaries, apply to AI the same way they apply to BI dashboards or regulatory reports. Organizations with mature lineage programs can extend their existing governance operating model to cover AI use cases without building a separate framework. Gartner research presented at the 2026 D&A Summit showed that organizations with graduated trust models achieve 64% compliance success compared to 23% without them.

How does data lineage differ from AI governance?

Data lineage is a component of AI governance, not a separate discipline. Lineage tracks how data flows through an organization and what happens to it along the way. AI governance addresses the broader question of how to ensure AI systems use that data responsibly. The argument from practitioners at LSEG and Solidatus is that the lineage and metadata infrastructure built for regulatory compliance already provides the semantic foundation AI agents need. Rather than creating a parallel AI governance program, organizations can extend what they have.

What is a bring-your-own-LLM model for data governance?

A bring-your-own-LLM model allows organizations to connect their own large language model to a governance platform rather than sending data through a vendor’s AI infrastructure. Unlike vendor-hosted AI models that route customer data through external systems, the BYOLLM approach keeps all data processing within the customer’s own environment. Solidatus uses this approach for its AI Lineage Assistant, meaning no data flows through Solidatus or any third party. This design addresses the primary security concern enterprises have about AI in governance contexts, particularly in regulated industries like financial services.

How does Solidatus prevent AI hallucinations in governance workflows?

Solidatus built hallucination protection directly into the AI Lineage Assistant. When the LLM generates a response, the system validates it against metadata that exists within the platform. If the response isn’t grounded in real data, the system rejects it and forces a new attempt. The response has to be anchored in verified metadata before it reaches the user. This approach ensures that AI outputs in governance contexts are based on actual organizational data rather than fabricated information.

Where should organizations start with AI governance if they already have data lineage?

Start by auditing your existing lineage coverage to identify where it already applies to AI use cases. Philip Dutton, CEO of Solidatus, argues that organizations don’t need a new operating model for AI governance because the one they already use for compliance works. LSEG provides a proof point, having evolved their lineage program from a regulatory tool into a strategic asset and commercial data product. The key is closing gaps rather than starting from scratch.

Podcast highlights

Philip Dutton, CEO and Founder, Solidatus (~15 min)

[0:00] Introduction at the Gartner D&A Summit

[0:28] What is Solidatus and why data lineage matters

[0:54] Data lineage meets AI governance

[1:50] The AI Lineage Assistant and natural language interaction

[3:17] Trust in AI and trust through lineage

[4:45] Human in the loop for financial services

[5:14] Why visualization builds data trust

[7:05] You can’t automate what you don’t understand

[8:27] Data lineage as AI lineage, same operating model

[9:29] What’s on attendees’ minds at Gartner

[10:48] True agentic AI vs. chatbots

[12:00] The future of Solidatus and agentic orchestration

[14:18] LSEG session preview and closing

Terrence Hedin, Data and Metadata Platform Director, LSEG (~6 min)

[0:00] Introduction and upcoming LSEG session preview

[0:23] Overview of the LSEG talk with Philip Dutton

[2:26] Lineage as a first-class citizen in the age of AI

[3:32] From regulatory reporting to strategic asset

[4:47] How the Solidatus AI Lineage Assistant is changing workflows

[5:54] Session details and closing

Caleb Watkins, Solutions Engineer, Solidatus (~4 min)

[0:00] Introduction at the Gartner D&A Summit

[0:22] The AI Lineage Assistant and bring-your-own-LLM

[0:44] Trust and security in the AI agent

[1:01] Use case: AI-powered code scanning

[1:42] Days to minutes with automated lineage

[2:01] Use case: regulatory compliance (BCBS 239, AI Act)

[2:56] Customer feedback on the assistant

[3:29] Find Solidatus at Booth #929

About David Sweenor

David Sweenor is the founder and host of the Data Faces podcast, where he talks with the people who are making data, analytics, AI, and marketing work in the real world. He is also the founder of TinyTechGuides and a recognized top 25 analytics thought leader and international speaker who specializes in practical business applications of artificial intelligence and advanced analytics.

With over 25 years of hands-on experience implementing AI and analytics solutions, David has supported organizations including Alation, Alteryx, TIBCO, SAS, IBM, Dell, and Quest. His work spans marketing leadership, analytics implementation, and specialized expertise in AI, machine learning, data science, IoT, and business intelligence. David holds several patents and consistently delivers insights that bridge technical capabilities with business value.

Follow David on Twitter @DavidSweenor and connect with him on LinkedIn


[1] Ronthal, Adam, and Georgia O’Callaghan. “Opening Keynote: The State of Data and Analytics.” Gartner Data & Analytics Summit, March 9-11, 2026, Orlando, FL.

[2] De Simoni, Guido. “Trust as the New Currency.” Gartner Data & Analytics Summit, March 9-11, 2026, Orlando, FL.

[3] García-Rodeja, Andrés. “How to Build the Context Layer for Reliable AI Agents.” Gartner Data & Analytics Summit, March 9-11, 2026, Orlando, FL.

[4] Beyer, Mark. “Using Active Metadata to Support Data Agents.” Gartner Data & Analytics Summit, March 9-11, 2026, Orlando, FL.

[5] De Simoni, Guido. “Trust as the New Currency.” Gartner Data & Analytics Summit, March 9-11, 2026, Orlando, FL.
