DataCenterNews Canada - Specialist news for cloud & data center decision-makers

Exclusive: Denodo's Dominic Sartorio on perfect data vs the right data

Wed, 29th Apr 2026

Businesses accelerating artificial intelligence deployments are encountering a widening "trust gap", driven primarily by shortcomings in data readiness, according to new research from Denodo.

The findings of Denodo's AI Trust Gap Report point to a structural mismatch between how enterprises have historically managed data and what embedded AI systems demand in order to produce quality outputs.

While organizations have spent years centralising data into cloud platforms for analytics, newer AI systems (particularly agentic models embedded in workflows) require immediate, context-aware access to live operational data. This disconnect is emerging as a primary cause of failed deployments and declining confidence in AI outputs.

According to Denodo, at the centre of the issue is a persistent misconception that modern data estates are already AI-ready. Many organizations equate cloud migration or large-scale data consolidation with preparedness, overlooking the specific requirements of AI systems operating in production environments.

"Businesses want to move so fast with AI, and they don't know that their data is not ready for it," said Dominic Sartorio, Vice President of Product Marketing at Denodo.

In practice, this gap often only becomes visible late in the development lifecycle. Systems may perform adequately in early testing phases but begin to produce inconsistent or unreliable outputs when exposed to real-world conditions, particularly where timing and context are critical.

"They may think it's ready because they just did a mass migration to a cloud-native data architecture, and the vendor says they do AI. That assumption is there - investing a lot in data, and so you think it should be ready, but it's not," said Sartorio.

He added that this delayed realisation creates operational risk. By the time issues surface, such as hallucinations or incorrect responses, organizations may already have committed resources, deployed systems, or integrated AI into customer-facing processes. The resulting loss of trust can stall further adoption.

A key technical driver of this trust gap is the growing requirement for real-time data access. Unlike traditional analytics workloads, which tolerate delays and rely on historical snapshots, AI agents are increasingly embedded directly into business operations, where decisions must reflect current conditions.

In the report, Denodo asked 850 enterprise leaders how current their enterprise data needs to be for trustworthy AI responses. While 20% said up to date to the hour and 19% said to the minute, the largest portion (47%) said the data needs to be in absolute real time.

"You need direct access to your apps, your operational systems," said Sartorio. "The moment you copy [data] from your original apps to some central location, that's where it loses its liveness."

He added that the prevailing enterprise architecture model, which can include copying data into central repositories such as data warehouses, introduces latency that can degrade performance in these contexts. Even short delays can result in outdated or irrelevant responses when systems are expected to react instantly.

"What we've seen is basically the embarrassment factor. The agent's job is to talk to a customer. The customer may have just tried to do something online or prompted it with something, and the agent seems unaware of it. It's reacting and responding as if it's just unaware of what the customer just did," said Sartorio. "If an agent is acting on a factory floor, then lack of live data can be really dangerous."

Beyond timeliness, the research identifies a second layer of complexity: determining which data is appropriate in a given context. Enterprise environments typically contain multiple, overlapping data sources, each reflecting different aspects of a business entity such as a customer, product, or transaction.

While these datasets may be accurate and well-governed individually, selecting the correct source at the right moment remains a challenge for AI systems.

"Your data can be 100% accurate and clean, but it doesn't mean it's the right data," said Sartorio.

An AI agent responding to a customer query must determine which of these sources is most relevant based on the immediate context - a capability that static data catalogues and traditional governance models do not fully support.

Sartorio said this introduces the need for dynamic, context-aware data selection mechanisms. Rather than relying solely on predefined metadata, systems must interpret real-time signals and align them with the most relevant data sources to produce accurate responses.

The third factor shaping trust is the ability to enforce consistent guardrails across distributed and often fragmented data environments. As AI systems take on more autonomous roles, organizations must ensure that outputs comply with regulatory, ethical, and operational requirements. The report shows that 42% of the surveyed enterprises have agents that access over 400 systems.

This scale introduces significant guardrail challenges. Traditional models, which centralize control within a single data platform, are less effective when data remains distributed across hundreds of operational systems. Instead, governance must be extended or "projected" back to those original sources while maintaining consistency.

"Unless all those boxes are checked, the business is not really at a point where it can trust the AI to run in production and make decisions on behalf of its human counterparts," said Sartorio.