It seems every enterprise is primed for AI and all the good things it promises to deliver. However, in many cases, organizations are finding that their data infrastructures aren’t ready. More than one-third of data managers responding to a new survey believe their data infrastructure will fail under the weight of AI in the coming year.
In addition, 83% of the 1,125 technology professionals and executives surveyed by Cockroach Labs agree that AI demand will exceed the capacity of most organizations’ data infrastructure in the next 12–18 months.
This is a problem enterprise leaders recognize, and they are ramping up financial resources to meet the challenge. All respondents are prioritizing investments to improve AI scalability and database performance in 2026, the survey confirms.
At least 30% identified the database as the likely first point of failure for AI workloads, second only to cloud infrastructure.
“The problem is not running on the cloud, the problem is how design decisions are made with regard to how data is ingested, processed, stored, and moved,” the survey’s authors opined.
In addition, survey respondents point to their corporate leadership as a source of added risk in the data infrastructure behind AI. Nearly two-thirds (63%) say their leadership teams underestimate how quickly AI demands will outpace existing data infrastructure. “This suggests that while companies have been investing in AI, the investments have been too reactive, and may not truly prevent disaster,” according to the survey’s authors.
Financial resources going to AI efforts are increasing. Most executives and professionals (85%) report that their companies are spending 10% or more of their total IT budget on supporting AI initiatives that place significant demands on data infrastructure, such as compute scaling, model training environments, real-time data processing, and database optimizations for AI workloads.
“The increase in IT spend isn’t just about the scale of workloads, it’s the uniqueness of AI that forces new requirements on underlying systems,” the survey’s authors stated. “Unlike traditional application workloads, AI workloads, especially those involving real-time inference and agentic automation, place sustained, unpredictable pressure on databases. The database is at the heart of AI workloads because it must ingest more data than ever before and process more transactions.”
When asked about new database capabilities most critical to supporting future AI workloads, responses highlighted the need for higher-throughput ingestion and a way to manage rising costs and unpredictable loads due to the sheer scale of AI.
When asked about their challenges with supporting AI workloads, respondents pointed to balancing cost, storage, and performance at AI scale. Data quality emerged as the second-ranked concern as AI proliferates across enterprises.
Given the number of unknowns regarding the future of AI, companies and leaders are taking a multipronged approach to improving AI scalability and database performance, the survey also found.
“Unprecedented agentic behavior will only stress systems further in ways that traditional architectures have never been tested before,” the survey’s authors also predicted. This calls for a “new kind of architecture, one that is built for resilience at scale.”
Such an architecture needs to include continuous load support, concurrency under stress, coordination at scale for agentic AI chains, and the ability to act across systems and services.