Why Truth Beats Hope in Banking

Solidatus' Tina Chace reveals how column-level tracking and business context prevent a cascade of organizational failures

Listen now on YouTube | Spotify | Apple Podcasts

The Data Faces Podcast with Tina Chace, VP Product Management at Solidatus

When you see a dashboard or report in your BI system, do you question whether it’s actually correct? Most people don’t, and assume the data flowing through pipelines must be accurate. That assumption holds until the numbers fail to reconcile.

Tina Chace made that same assumption early in her career. Starting in middle office operations, booking trades, she learned that “the importance of making sure that your data is absolutely correct, especially when it comes to transacting and dealing with money, was drilled into me from the start.” But she didn’t fully appreciate what that meant until she spent six years deploying AI and machine learning models in highly regulated environments, specifically transaction monitoring and Know Your Customer systems for major banks.

The pattern she discovered changed everything.

“I didn’t really start to appreciate the value of data and accuracy and trust in your data until I spent six years working at an AI and machine learning company, specifically rolling out machine learning models in highly regulated spaces...I was continuously running into problems, and 90% of the time it ended up being a data issue.”

— Tina Chace, VP Product Management, Solidatus

About Tina Chace

Tina is Vice President of Product Management at Solidatus. She started in middle-office operations, moved to deploying AI models for major banks, and discovered the hard way that 90% of production problems can be traced to data quality issues. That journey from assuming data worked to proving it works shapes her approach to lineage. While tracking technical details is essential, organizations must never lose sight of the business consequences.

From Naive Trust to Earned Skepticism

Tina’s journey from naive trust to earned skepticism mirrors what most data leaders experience. Working in the middle office where financial transactions required absolute precision, she assumed someone, somewhere, was ensuring data quality. Information moved through pipelines, and transactions were being processed. Her assumption was simple. “This data is of high quality, and we know what’s happening to it.”

Deploying machine learning models shattered that assumption. Someone would make a change upstream and add a new field with sensitive data. Undetected, this data would flow into the model. Only after days of troubleshooting would she identify the culprit: an unexpected upstream change.

Why Knowing More Makes You Trust Less

Tina describes a paradox. “The more you know, the more you almost distrust, right? Like the more educated you are, the more you question your position.”

Five years ago, teams were largely unaware of how data moved through their organizations. Today, they can see the complexity across dozens of systems and hundreds or thousands of transformations. But visibility without control just amplifies anxiety.

“People were more unaware previously, five years ago, and now there’s more visibility, but also with more visibility, there can be a bit more anxiety, because then, you know, it’s not covered.”

— Tina Chace, VP Product Management, Solidatus

The stakes vary by context. Sending marketing coupons carries a different risk than processing financial transactions. But the same questions surface. Who verifies report accuracy? What happens when a CFO reviews two reports that should show identical revenue figures? One says $47.3M. The other says $47.5M. Finance blames data engineering. Engineering points to the source system. Three days later, nobody knows which number is correct.

Consider rounding errors, similar to those depicted in the movie Office Space. In small transactions, a rounding discrepancy to the wrong decimal place barely registers. “When you’re dealing with large amounts of money, that rounding error becomes a huge financial burden if it is incorrect.” Contracts between applications can specify different rounding rules without anyone documenting the difference. Left unchecked, the discrepancy silently compounds across every transaction.
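As a hypothetical sketch of how two undocumented rounding rules diverge (the applications, values, and volumes below are invented for illustration, not taken from the conversation):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Illustrative only: application A rounds half-up, application B uses
# banker's rounding (half-even). Neither rule is wrong, but the
# undocumented difference compounds across every transaction.
amount = Decimal("1000.005")  # a value the two rules treat differently

app_a = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)    # 1000.01
app_b = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)  # 1000.00

gap_per_txn = app_a - app_b              # one cent per affected transaction
gap_at_scale = gap_per_txn * 10_000_000  # compounded over ten million trades
```

A one-cent disagreement per trade is invisible in any single reconciliation, which is exactly why it surfaces only in aggregate reports.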

Regulatory requirements like BCBS 239 now mandate that organizations “report on certain metrics and ensure the data going into that report is accurate and timely and correctly calculated.” Assumptions no longer suffice, and auditors demand proof.

These trust gaps don’t emerge randomly. They cascade from organizational failures that compound with every system the data touches.

Three Failures That Compound

Consider what happens when a data transformation changes upstream. The data engineer implementing the change often lacks visibility into downstream reports and applications, failing to notify stakeholders. The analyst who owns the report doesn’t speak the engineer’s technical language, so they can’t diagnose the problem when discrepancies appear. And because the transformation was never documented, troubleshooting takes days instead of hours. Three organizational failures, each enabling the next.

Here’s how the cascade works.

First Breakdown: The Ownership Vacuum

Ask your organization this question. Who owns the quality of customer data from CRM entry through analytics warehouse to the executive dashboard? You’ll get different answers from every team.

Ownership is distributed across “various points of its life cycle,” which often means no one owns it. Data engineers build pipelines, analysts create reports, and business leaders consume outputs. Each assumes someone else ensures accuracy. Most organizations can’t point to a single person accountable for end-to-end data quality from the source system through the final report. The gaps between ownership zones are where problems hide.

Second Breakdown: The Language Barrier

Without clear ownership, teams can’t communicate effectively. Engineers discuss systems and ETL processes. Analysts talk about reports and metrics. Executives focus on outcomes and risk—three different vocabularies describing the same data, with no shared language to bridge them.

Technical teams understand data flows but struggle to communicate the implications to business stakeholders or reporting analysts. Engineers know WHAT is happening. Business teams need to understand WHY it matters. Without translation, that gap never closes.

“If you, from a technology standpoint, understand your data flows, but you cannot communicate the meaning of that or the implication of that to your business stakeholders or the reporting analysts looking at the end report, then you’ve got a communication gap.”

— Tina Chace, VP Product Management, Solidatus

Third Breakdown: The Documentation Death Spiral

When nobody owns the whole picture and teams don’t share a language, documentation becomes obsolete the moment it’s written. What gets documented? By whom? For whose benefit?

Tina’s troubleshooting experience shows the cost. Finding the root cause takes days, but why? Because transformations aren’t tracked and calculations aren’t recorded. Quality checks run in some systems but not others. And upstream changes propagate downstream invisibly, often only being discovered when something breaks.

Point solutions that map individual systems fail to solve organizational breakdowns. That requires a shared foundation where engineers, analysts, and executives work off the same set of blueprints.

Two Dimensions, One Foundation

Solidatus operates on the premise that technical tracking alone is insufficient. Organizations need both technical and business lineage.

  • Technical lineage tracks “where a column, such as trade date, is flowing through various applications before it gets booked in a report.” It maps the mechanical flow: specific systems, transformations, and connections.

  • Business lineage captures “the quality of your data as it’s flowing through systems, what kind of controls or checks are happening in various systems, and who owns the data at various points of its life cycle.” It answers why this flow matters and who’s accountable at each stage.

Without both, organizations either track data but can’t communicate implications, or discuss business concerns but can’t trace them to technical root causes.

“We at Solidatus think it’s important to have both technical and business lineage...not just tracking where it flows or any kind of calculations, but understanding context, such as the quality of your data as it’s flowing through systems, what kind of controls or checks are happening, and who owns the data at various points of its life cycle.”

— Tina Chace, VP Product Management, Solidatus

Every Transformation, Every System, Every Step

Column-level lineage captures every transformation that data undergoes. At each step and with each application, teams track calculations, rounding rules, and aggregations. By the time data reaches a report, teams are aware of its complete history: every change, every system, every decision point.

This granularity solves the discrepancy problem Tina encountered repeatedly. When two reports should match but don’t, teams can investigate the discrepancy to determine the cause. “We rounded to a specific decimal between applications A and B, but changed the rounding rule between B and C.”

What previously took days of detective work can now be traced in hours.
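One way to picture a column-level lineage record that carries both the technical hop and the business context is a simple list of steps. The field names, systems, and owners below are hypothetical sketches, not Solidatus’ actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical column-level lineage for a single report field.
@dataclass
class LineageStep:
    system: str          # application the column passes through (technical)
    column: str          # name of the column at this hop (technical)
    transformation: str  # calculation, rounding, or aggregation applied
    owner: str           # who is accountable at this stage (business)
    checks: list = field(default_factory=list)  # controls that run here

trade_amount = [
    LineageStep("booking_system", "trade_amt", "source of record",
                "middle office", ["four-eyes check"]),
    LineageStep("risk_engine", "trade_amount", "rounded half-up to 2 dp",
                "risk engineering", ["null check"]),
    LineageStep("reporting_db", "amount_usd", "aggregated by desk, rounded half-even",
                "reporting team", ["reconciliation vs. risk_engine"]),
]

def trace_back(lineage):
    """Walk from the report back to the source, listing what happened where."""
    return [f"{s.system}.{s.column}: {s.transformation} (owner: {s.owner})"
            for s in reversed(lineage)]
```

Walking `trace_back(trade_amount)` immediately surfaces that the rounding rule changes between `risk_engine` and `reporting_db`, the kind of detail that otherwise takes days of detective work to find.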

Proactive vs. Reactive: Shifting Quality Checks Left

Beyond tracking flows, lineage enables quality checks across every single system. Instead of discovering a $200K discrepancy during the board meeting, the quality check flags it at 3 am when upstream rounding logic changes. The data engineer fixes it before anyone downstream notices.

If there’s a failure, teams know immediately which systems are downstream and which specific report fields might be inaccurate. Reactive firefighting becomes proactive prevention.
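A minimal sketch of that impact analysis, assuming the lineage graph is available as a simple adjacency map (the system names are invented for illustration):

```python
# Illustrative downstream map derived from lineage: each system points to
# its immediate consumers.
downstream = {
    "booking_system": ["risk_engine"],
    "risk_engine": ["reporting_db"],
    "reporting_db": ["board_dashboard", "regulatory_report"],
}

def affected_consumers(failed_system, graph):
    """Return every system reachable downstream of a failed quality check."""
    seen, stack = set(), [failed_system]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return sorted(seen)
```

When a check fails in `risk_engine` overnight, `affected_consumers("risk_engine", downstream)` names the dashboard and the regulatory report before anyone opens them.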

Governance teams document which checks and policies apply at each stage of the process. Sensitive information either gets obscured or never enters the pipeline. Access controls are enforced at every step, not just endpoints. This is where compliance shifts from aspirational to demonstrable.

Same Blueprint, Different Lenses

Different stakeholders view the data flows through their own lens, all based on the same underlying information. Data engineers see technical flows and transformation logic while analysts understand which quality checks protect their reports. Governance teams track the location of sensitive data and identify the applicable controls. Executives see confidence metrics and risk exposure—all perspectives drawing from the same lineage capture.

This shared foundation solves the language barrier. Engineers and business leaders are no longer translating between different systems. They’re viewing different aspects of the same reality.

This foundation solved problems Tina encountered when deploying models years ago. Now that AI is proliferating across organizations, those problems are multiplying. And the stakes are higher.

Why AI Makes Incomplete Lineage Unacceptable

“With the proliferation and popularity of using AI within companies, I’m even more concerned about understanding the data that flows into it,” Tina explains. Her six years deploying models taught her that incomplete lineage “led to a lot of real-world problems.”

Are You Automating or Just Shifting Where You Monitor?

Demos showcase impressive AI results, but production environments are different. Organizations deploy AI to achieve better metrics, higher productivity, and automated decision-making. But at runtime, teams need confidence in what’s feeding the model.

Without that confidence, the promise of automation loses its appeal. “If I have to monitor it all the time, it didn’t save me any productivity at all. I’m just monitoring instead of doing it manually.”

The decisions AI makes aren’t abstract. They impact mortgage approvals, payment processing accuracy, and the detection of fraudulent transactions. In Tina’s work with Know Your Customer systems at major banks, errors have immediate financial and regulatory consequences.

Privacy Requires Proof

“One of the big concerns about the use of AI is privacy, like, where is my information being used?” Lineage provides attestation. Teams can prove models are “obscuring certain private information that shouldn’t be in it, or it’s not even entering the models at all, unless absolutely necessary.”

Compliance audits shift from defending processes to demonstrating controls. Instead of explaining what should happen, teams show what does happen.

Partial lineage creates blind spots where problems hide. Teams can’t skip systems or steps. Incomplete coverage leaves gaps where transformations remain invisible, precisely where issues emerge during troubleshooting.

Don’t Boil the Ocean

Faced with incomplete or nonexistent lineage, where do organizations begin?

“Be very specific and deliberate on what you’re choosing to address first,” Tina advises. “Then you will have an immediate and tangible win by covering a specific scenario, and you can continue to expand out from there in terms of importance.”

The alternative is “trying to document every single system that you have.” That becomes “this big, nebulous project” where “you won’t have an output until five years from now.” Starting with critical use cases delivers incremental ROI, rather than waiting years for enterprise-wide perfection.

Your First Four Targets

Prioritize data flows that intersect risk and visibility.

  1. Regulatory Reporting: Systems that feed BCBS 239 capital reporting or mandated metrics, where auditors require proof of accuracy.

  2. AI models in production. Transaction monitoring, KYC, fraud detection, or other automated decision systems where bad inputs have immediate consequences.

  3. Areas with known quality issues. Recent troubleshooting incidents, recurring discrepancies, or pipelines that match that 90% pattern.

  4. High-stakes decision systems. Mortgage approvals, payment processing, or other flows where errors have a direct financial or customer impact.

Start where one of these applies. Demonstrate value. Expand based on business priority, not technical completeness.

From Anxiety to Action

The visibility paradox Chace describes, where knowing more creates more anxiety, can only be resolved through deliberate action. Seeing problems without the ability to address them compounds stress rather than relieving it.

“Regulations and recommendations are improving the metrics and trust we have in the data we’re using.” Organizations that implement lineage deliberately, starting with critical use cases and building incrementally, establish the foundation for trusted AI and confident decision-making.

The alternative is waiting until the next crisis forces action: when a model makes a costly error, when an auditor asks questions nobody can answer, when that $200K discrepancy appears in the board deck.

Next time you see a dashboard, question it. Then ask whether you could trace that number back to its source. How long would it take: hours, days, weeks? Visibility without traceability isn’t insight; it’s anxiety with data attached. The organizations that can answer “yes” aren’t lucky; they’re prepared. They started with one critical pipeline, proved the value, and expanded from there.

The question isn’t whether you need lineage. It’s whether you’ll implement it before or after your next crisis.

Listen to the full conversation with Tina Chace on the Data Faces Podcast.


Based on insights from Tina Chace, VP Product Management at Solidatus, featured on the Data Faces Podcast.



Podcast Highlights - Key Takeaways from the Conversation

[0:54] Tina’s Career Journey: From Trust to Skepticism

“I actually started my career in the middle office and booking of trades and trade data. So the importance of making sure that your data is absolutely correct, especially when it comes to transacting and dealing with money, was drilled into me from the start.”

Key Takeaway: Early career assumption was that data quality was guaranteed. Reality proved different.

[2:28] The 90% Data Problem Discovery

“I was continuously running into problems where someone would make a change upstream, maybe they added a new field that had sensitive data, and it would flow into the model. We wouldn’t know that, and we’d have to go back and troubleshoot, and 90% of the time it ended up being a data issue.”

Key Takeaway: Six years of deploying AI/ML models in highly regulated spaces (transaction monitoring, KYC) revealed that 90% of production problems trace back to data issues.

[3:24] What is Data Lineage?

“Data lineage allows you to track how your data is flowing through various systems...not just tracking where it flows or any kind of calculations, but understanding context, such as the quality of your data as it’s flowing through systems, what kind of controls or checks are happening in various systems and who owns the data at various points of its life cycle.”

Key Takeaway: Data lineage requires both technical tracking (where data flows) AND business context (quality, controls, ownership).

[6:47] The Rounding Error Problem

“When you’re dealing with large amounts of money, that rounding error becomes a huge financial burden if it is incorrect...between application A and B, we rounded to this decimal; between application B and C, we actually changed the rounding, and that actually ended up in our capital reporting.”

Key Takeaway: Small rounding errors compound at scale. Contracts between applications can specify different rounding rules without documentation.

[7:59] Column-Level Granularity

“By documenting data at the most granular level, which we call column level, in the data lineage world, you can actually document for every step and application that that data flows through. Is there a calculation? Is there a rounding? Are you adding things together? So that by the time it ends up in a report, you actually understand exactly what happened for it to be in that report.”

Key Takeaway: Column-level documentation captures every transformation, enabling precise troubleshooting when discrepancies appear.

[8:26] Bridging the Language Gap

“If you, from a technology standpoint, understand your data flows, but you cannot communicate the meaning of that or the implication of that, to say, your business stakeholders, or the reporting analysts who are looking at the end report, then you’ve got a communication gap.”

Key Takeaway: Technical teams know WHAT is happening; business teams need WHY it matters. Lineage bridges this gap.

[9:46] The Shared Blueprints Approach

“It really brings together all the different stakeholders, and they can view their own lens of the data flows, but it’s based on the same underlying information, so you’re working off of like, the same set of blueprints.”

Key Takeaway: Different stakeholders (engineers, analysts, governance, executives) view different aspects of the same lineage data.

[30:57] Why AI Makes Lineage More Critical

“With the proliferation and popularity of using AI within companies, I’m even more concerned about understanding the data that flows into it...I recognize that not having this data lineage and understanding the data flows led to a lot of real world problems.”

Key Takeaway: The deployment of AI amplifies the need for complete data lineage; incomplete visibility creates unacceptable risk.

[32:14] The Productivity Paradox

“At runtime, I want to be sure that everything that’s going into it is helping it make the best decision, because if I have to monitor it all the time, it actually didn’t save me any productivity at all. I’m just monitoring instead of doing it manually.”

Key Takeaway: AI that requires constant monitoring hasn’t actually automated anything—you’ve just shifted where you spend time.

[32:32] Privacy and AI

“One of the big concerns about the use of AI is privacy, like, where is my information being used? With data lineage, you can attest to the fact that your models are obscuring certain private information that shouldn’t be in them, or it’s not even entering the models at all, unless absolutely necessary.”

Key Takeaway: Lineage enables privacy attestation—proving what data enters AI models, not just promising.

[33:26] The Trust Paradox

“The more you know, the more you almost distrust, right? There’s that paradox, like the more educated you are, the more you question your position...People were less aware five years ago, and now there’s more visibility, but also with more visibility, there can be a bit more anxiety, because then, you know, it’s not covered.”

Key Takeaway: Increased visibility into data complexity creates anxiety alongside awareness—seeing problems without solutions amplifies stress.

[34:27] Don’t Boil the Ocean

“Be very specific and deliberate on what you’re choosing to access first, because then you will have an immediate and tangible win by covering a specific scenario, and you can continue to expand out from there in terms of importance...You can get ROI and value out of capturing your most critical use cases first...rather than trying to do a boil the ocean exercise, and you won’t have an output until five years from now.”

Key Takeaway: Start with critical use cases, prove value, expand incrementally. Don’t try to document everything at once.


About David Sweenor

David Sweenor is an expert in AI, generative AI, and product marketing. He brings this expertise to the forefront as the founder of TinyTechGuides and host of the Data Faces podcast. A recognized top 25 analytics thought leader and international speaker, David specializes in practical business applications of artificial intelligence and advanced analytics.

Books

With over 25 years of hands-on experience implementing AI and analytics solutions, David has supported organizations including Alation, Alteryx, TIBCO, SAS, IBM, Dell, and Quest. His work spans marketing leadership, analytics implementation, and specialized expertise in AI, machine learning, data science, IoT, and business intelligence.

David holds several patents and consistently delivers insights that bridge technical capabilities with business value.

Follow David on Twitter @DavidSweenor and connect with him on LinkedIn.
