ADOPTION TRENDS
There is a steep rise underway in the use of graph databases and knowledge graphs, “owing to the ability to intricately model relationships and fuel AI applications,” said Singh. These applications span real-time analytics, fraud analytics, retail, and logistics. Additional use cases include financial fraud detection, where graphs help “identify suspicious patterns and anomalies, such as money laundering and unusual connections between entities, which can be difficult to detect using relational databases,” said Nadkarni.
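As a simplified illustration of the kind of multi-hop pattern Nadkarni describes, the sketch below uses the open source Neo4j Python driver to look for short cycles of transfers that route funds back to their origin. The account-and-transfer schema, connection details, and threshold are assumptions made for the example, not a reference implementation.

```python
# A minimal sketch of a multi-hop fraud query, assuming a graph of (:Account)
# nodes linked by [:TRANSFER] relationships with an `amount` property.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Find cycles of 2-4 transfers that return to the originating account --
# a single path pattern here, but a chain of self-joins in SQL.
CYCLE_QUERY = """
MATCH path = (a:Account)-[:TRANSFER*2..4]->(a)
WHERE all(t IN relationships(path) WHERE t.amount > $threshold)
RETURN [n IN nodes(path) | n.id] AS ring,
       reduce(total = 0, t IN relationships(path) | total + t.amount) AS volume
ORDER BY volume DESC
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(CYCLE_QUERY, threshold=10_000):
        print(record["ring"], record["volume"])

driver.close()
```

Expressing the same search in SQL would require a chain of self-joins of unknown depth, which is why such patterns are, as Nadkarni notes, difficult to detect in relational databases.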
In addition, these data environments can be employed to “track customer relationships, preferences, and purchase histories and enable personalized recommendations, targeted marketing campaigns, and improved customer service experiences.”
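A small, self-contained way to see how such a purchase graph can drive recommendations is sketched below using the NetworkX library; the customers, products, and scoring rule are purely illustrative, and a production system would run an equivalent traversal inside a graph database.

```python
# Illustrative co-purchase recommendation over a customer-product graph.
import networkx as nx

G = nx.Graph()
# Bipartite purchase graph: customer -- product edges (made-up data)
purchases = [
    ("alice", "headphones"), ("alice", "laptop"),
    ("bob", "laptop"), ("bob", "monitor"),
    ("carol", "laptop"), ("carol", "keyboard"),
]
G.add_edges_from(purchases)

def recommend(customer: str) -> list[str]:
    """Suggest products bought by customers who share a purchase with `customer`."""
    owned = set(G.neighbors(customer))
    scores = {}
    for product in owned:
        for other in G.neighbors(product):          # customers who also bought it
            for candidate in G.neighbors(other):    # what else they bought
                if candidate not in owned:
                    scores[candidate] = scores.get(candidate, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # e.g. ['monitor', 'keyboard'] via the shared 'laptop' purchase
```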
The adoption of graph databases and knowledge graphs is now “evolving from siloed experimentation toward deeper integration into enterprise data strategies,” said Gnau. “The industry is shifting toward more unified approaches, particularly multi-model data platforms and data fabric architectures that natively integrate graph, semantic, document, and relational capabilities.”
In addition, the technology is maturing, “with better tools, standards, and best practices emerging,” said Nadkarni. “This makes adoption less risky and more straightforward.”
ISSUES AND CHALLENGES
While graph technologies provide powerful platforms for emerging AI-enhanced operations, they require expertise and planning to put in place, experts caution.
“Standalone graph or semantic solutions often require specialized expertise and can introduce integration complexity, especially when layered onto legacy architectures,” said Gnau. “These challenges can slow adoption and limit impact.”
“One of the biggest hurdles for organizations is often perceived to be the lack of in-house expertise; most data professionals are still more comfortable with traditional relational databases,” said Miller. “The tooling around graph technology is still maturing.”
Effectively implementing graph technologies requires “the right combination of talent and technology, and both can be hard to find,” according to Biswas. “Many popular graph databases today still run mostly in-memory, making them tricky to scale for big enterprise applications. Plus, there’s a relatively small but growing group of engineers and data specialists who know how to model graph data and comfortably use graph query languages.”
Graph technologies can be difficult to implement, as “bringing an organization’s data together” also requires enterprises “to bring the teams together that own the data that will be in the graph,” said Erickson. “This can be a political quagmire.” More practically, he added, “It means that no one team has all the subject matter expertise to effectively define and extract the information necessary to build the graph.” He continued, “[Plus], much of the data within an organization is locked in unstructured documents, resulting in a need to extract and standardize the information found within them. Large language models can help perform this extraction, but they don’t understand organization-specific guidelines and acronyms.”
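A hedged sketch of the extraction step Erickson describes might look like the following, which uses the OpenAI Python client to pull entity-relationship triples from a document and passes an organization-specific glossary in the prompt, since the model will not know internal acronyms on its own. The model name, glossary, and output format are assumptions for illustration.

```python
# Illustrative LLM-based extraction of (subject, relation, object) triples,
# with a company glossary supplied in the prompt. Model name and glossary
# are placeholders, not recommendations.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

GLOSSARY = {"PX-9": "internal code for the payments gateway"}  # hypothetical term

def extract_triples(document_text: str) -> list[dict]:
    """Ask the model to return relationship triples as a JSON list."""
    prompt = (
        "Extract entities and relationships from the document as a JSON list of "
        "objects with 'subject', 'relation', and 'object' keys.\n"
        f"Company-specific terms: {json.dumps(GLOSSARY)}\n\n"
        f"Document:\n{document_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # In practice, validate or repair the model's output before loading it into the graph.
    return json.loads(response.choices[0].message.content)
```

In practice, the returned triples would still need to be checked against the organization’s own schema and terminology before being merged into the graph.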
Another issue that may inhibit graph technology implementations centers on data quality and consistency, said Singh. “Incorporating data from different sources is still subject to inconsistency or out-of-date information. Validation processes for updating the graph or keeping it current must be strict.”
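What a “strict” validation step might look like is sketched below in plain Python: incoming records are rejected if required fields are missing and flagged if they fall outside a freshness window. The field names and the 30-day window are assumptions for the example, not a prescription.

```python
# Illustrative pre-merge validation for records arriving from different sources.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=30)                      # assumed freshness window
REQUIRED_FIELDS = {"id", "source", "updated_at"}  # assumed minimum schema

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means safe to merge."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "updated_at" in record:
        age = datetime.now(timezone.utc) - datetime.fromisoformat(record["updated_at"])
        if age > MAX_AGE:
            errors.append(f"stale record: last updated {age.days} days ago")
    return errors

print(validate_record({"id": "acct-42", "source": "crm",
                       "updated_at": "2024-01-01T00:00:00+00:00"}))
```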
Scalability also rears its head as data managers attempt to incorporate graph technologies into their business processes, Singh added. “The performance of the query is a challenge when the datasets increase in size and complexity. More often than not, distributed systems or sophisticated indexing techniques are necessitated, which make system design more complicated.”
Data managers need to assess whether more traditional database tools can do the job just as well. “In scenarios where simple transactions, fixed schemas, or high-speed performance at scale is required without the need for deep relationship modeling, relational or document-based systems may be more appropriate,” said Gnau.
Performance trade-offs are also often necessary—another area where sticking with traditional data environments may be called for. “In such situations which require high transaction and low complexity where the relationship is very basic, hybrid models with traditional relational databases are more advantageous than basing solutions on graphs,” Singh advised.
Graph technology only makes sense “if relationships between datapoints are central to your problem—think identity resolution, fraud detection, or recommendations,” said Biswas. “If relationships aren’t key, you’re often better off sticking with simpler databases like key-value, document, or traditional relational systems.”
In evaluating graph technologies, it’s important to remember this point: “There are definitely situations where other databases are a better fit,” Miller agreed. “For example, if you’re dealing with highly structured data and need strong consistency for financial transactions, a relational database is still the gold standard. And for simple similarity searches—like powering a basic chatbot—a vector database might be all you need.”
For example, relational databases are still better suited when structured, tabular data with well-defined columns and rows is involved, said Tom Hodgson, innovation lead at Redgate Software. Relational databases also provide more robust mechanisms for ensuring data integrity and consistency, and they remain preferable for applications that require ACID (atomicity, consistency, isolation, and durability) guarantees for transactions.
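The transactional guarantees Hodgson points to are easy to demonstrate with Python’s built-in sqlite3 module: either both legs of the transfer below commit, or neither does. The schema and amounts are illustrative only.

```python
# Minimal demonstration of transactional atomicity using the standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 'bob'")
except sqlite3.Error:
    pass  # on failure, neither update is applied

print(dict(conn.execute("SELECT id, balance FROM accounts")))
# {'alice': 70, 'bob': 80}
```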
Ultimately, in an era when AI is everywhere, graph technologies should be considered among the environments that can help ease the delivery of data-driven insights.
“Advanced analytics, semantic searches, and AI-powered insights are made easier with knowledge graphs and graph databases which, in turn, can transform an organization’s data strategies,” said Singh. “Even though the problems of scalability, integration complexity, and lack of skilled personnel hinder full potential exploitation, the requirements of large language models and other AI systems are accelerating adoption. These technologies do not replace traditional databases. Instead, they provide unique value for specific cases such as fraud detection or reasoning within AI applications.”