DIGITAL TWINS
Another promising technology gaining traction this year is digital twins, or virtual replicas of systems, facilities, or even people. “Digital twins, fed by real-time data, are used by industries ranging from gaming to healthcare and supply chains to produce a simulation or digital model of a real product or workflow,” said Glynn Newby, marketing manager for manufacturing, telecommunications, games, and simulation at SAS. “By integrating advanced ML [machine learning] and AI techniques, digital twins can simulate various scenarios with greater precision, analyze complex datasets in real time, and provide insights that drive efficiency and resilience in processes.”
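To make the pattern Newby describes more concrete, the sketch below models a single asset as a twin object that mirrors streamed sensor readings and runs a simple what-if scenario. The PumpTwin class, its field names, and the linear load model are illustrative assumptions rather than any vendor's implementation; a production twin would swap in physics-based or trained ML models.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PumpTwin:
    """Hypothetical digital twin of a pump, updated from streamed sensor readings."""
    temperature_c: float = 20.0
    vibration_mm_s: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        # Mirror the physical asset's latest state in the virtual replica.
        self.temperature_c = reading["temperature_c"]
        self.vibration_mm_s = reading["vibration_mm_s"]
        self.history.append(reading)

    def simulate_load_increase(self, pct: float) -> float:
        # Toy "what-if" scenario: project temperature under higher load
        # with a naive linear extrapolation over recent readings.
        baseline = mean(r["temperature_c"] for r in self.history[-10:])
        return baseline * (1 + 0.4 * pct / 100)

twin = PumpTwin()
twin.ingest({"temperature_c": 68.5, "vibration_mm_s": 2.1})
print(twin.simulate_load_increase(pct=20))  # projected temperature under +20% load
```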
Digital-twin technology “allows enterprises to build a virtual testing ground for real-world applications, producing more accurate business predictions,” Newby added. “Additionally, digital twins can boost the performance of human workers for enterprises grappling with the integration of AI and human workers.”
One vexing challenge holding back digital-twin initiatives, Newby noted, is data quality: shortfalls here “undermine twin reliability, as fragmented sources and inconsistent formats create inaccurate models. Integration complexity compounds this when connecting twins with legacy systems and diverse data environments.”
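A common first step against these quality shortfalls is to validate and normalize readings from each source before they ever reach the twin. The snippet below is a minimal sketch of that idea; the required fields, the legacy_plc source name, the unit conversion, and the plausibility bounds are hypothetical.

```python
REQUIRED_FIELDS = {"asset_id", "timestamp", "temperature"}

def normalize_reading(raw: dict, source: str) -> dict | None:
    """Reject incomplete records and coerce inconsistent formats to one schema."""
    if not REQUIRED_FIELDS.issubset(raw):
        return None  # fragmented source: drop rather than corrupt the model
    temp = float(raw["temperature"])
    if source == "legacy_plc":          # hypothetical legacy system reporting Fahrenheit
        temp = (temp - 32) * 5 / 9
    if not (-40.0 <= temp <= 200.0):    # implausible values often signal sensor faults
        return None
    return {
        "asset_id": str(raw["asset_id"]),
        "timestamp": raw["timestamp"],
        "temperature_c": round(temp, 2),
    }
```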
Data managers also need to be wary of “the computational demands of high-fidelity simulations often [straining] existing infrastructure, while virtual replicas expand security vulnerabilities by creating additional attack surfaces for sensitive data,” Newby cautioned. And, as with many other technology initiatives, implementing and refreshing digital twins require specialized talent, “combining simulation modeling with data science and systems engineering,” he added.
AI-NATIVE DATA ARCHITECTURE
Any forward-looking data architecture now needs to be configured to support AI applications and workflows, and AI itself can assist in developing such an architecture. That’s why AI-native data architecture is this year’s hottest initiative, according to Jiaxi Zhu, head of analytics and insights for small-to-medium businesses at Google. “As enterprises accelerate their use of AI, they are moving from monolithic pipelines to domain-owned, event-streaming architectures that treat data as a product,” he explained. “Most existing data architectures were built for reporting or batch processing, which is not sufficient to support the scale, flexibility, and governance required for applied AI.”
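One way to make “treating data as a product” tangible is to give every domain-owned stream an explicit contract: an owning team, a versioned schema, a published topic, and a freshness SLA. The sketch below shows one possible shape for such a contract; the DataProduct fields and the orders example are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProduct:
    """Illustrative contract for a domain-owned, event-streamed data product."""
    domain: str           # owning business domain, not a central data team
    topic: str            # event stream the product is published on
    schema_version: str   # consumers pin to a version; evolution is governed
    owner: str            # team accountable for quality and SLAs
    freshness_sla_s: int  # maximum acceptable end-to-end latency in seconds

orders_events = DataProduct(
    domain="orders",
    topic="orders.order-placed.v2",
    schema_version="2.1.0",
    owner="orders-platform-team",
    freshness_sla_s=5,
)
```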
An AI-native architecture needs to be designed to “support real-time AI inference, such as personalization, recommendations, churn risk, or anomaly detection across microservices,” Zhu said. “It increases speed-to-insight by scaling across lines of business without having to move through a central data team, which could become a bottleneck. It aligns with privacy-by-design by enabling governance and lineage at the domain level.”
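As a toy stand-in for the real-time inference Zhu describes, the snippet below flags anomalies in a stream of event values with a rolling z-score. In practice each domain would serve a trained model behind its event stream; the window size, threshold, and sample latencies here are arbitrary assumptions.

```python
from collections import deque
from statistics import mean, stdev

class StreamingAnomalyDetector:
    """Rolling z-score detector over an event stream (toy stand-in for a real model)."""
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def score(self, value: float) -> bool:
        is_anomaly = False
        if len(self.values) >= 3:  # wait for a minimal baseline before scoring
            mu, sigma = mean(self.values), stdev(self.values)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.values.append(value)
        return is_anomaly

detector = StreamingAnomalyDetector()
for event_value in (12.0, 11.8, 12.1, 55.0):  # e.g., per-event checkout latency
    if detector.score(event_value):
        print("anomaly:", event_value)
```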
The challenge with developing such an architecture is that it “requires a cultural shift in how organizations think about data ownership, observability, and SLA enforcement across business lines,” Zhu advised. In addition, “It requires robust metadata, schema evolution policies, and data product interfaces.” There is also significant upfront investment required, “especially in setting up stream processing platforms, such as Kafka or Pub/Sub, and in real-time orchestration.”
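On the stream-processing side, a bare-bones starting point is publishing domain events with an explicit schema version so consumers can cope with evolution. The example below uses the open source kafka-python client; the broker address, topic name, and event fields are placeholders, and a real deployment would add authentication, retries, and a schema registry.

```python
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical local broker; production setups would configure auth and delivery guarantees.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "schema_version": "2.1.0",   # consumers use this to handle schema evolution
    "event_type": "order_placed",
    "order_id": "o-12345",
    "amount": 42.50,
    "occurred_at": datetime.now(timezone.utc).isoformat(),
}

producer.send("orders.order-placed.v2", value=event)
producer.flush()  # block until the broker acknowledges the event
```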