The world has become increasingly real-time; across industries, interactions, and experiences, people demand the instantaneous results and services we have collectively grown used to. Data is no stranger to this phenomenon.
Downtime is no longer tolerated within the sphere of modern applications, which are simultaneously more data-intensive than ever. How, then, can applications with complex needs and architectures be designed to withstand scaling while meeting increasing demands for performance, resiliency, consistency, and locality?
Lee Atchison, cloud strategist and author of Architecting for Scale, and Andrew Marshall, VP of product marketing and developer relations at Cockroach Labs, gathered for DBTA’s webinar, “How to Architect Highly Available Apps for Scale in the Cloud,” to explore the ways in which modern applications are challenged by real-time needs, as well as how to adapt to these complicated requirements.
Cloud and microservice applications have wildly transformed application architecture, according to Atchison. The days of monolithic applications are fading into the past, as enterprises rely entirely on the cloud to construct and power modern applications.
The scale challenges of 2023 vary widely, ranging from complexity and cost efficiency to economic conditions, internal organization, data segmentation, and more; it is abundantly clear that ample obstacles stand in the way of scalable applications.
Migrating from monolithic architectures is certainly advantageous, but comes at a cost.
“Innovation increases dramatically as you move away from monolithic to microservice architectures,” said Atchison. “But the flipside of innovation is complexity, because as you give flexibility and the ability to innovate to developers, guess what: they innovate. They try new things, they try new ideas; sometimes they work, sometimes they don’t.”
Scalable solutions also intersect with cost efficiency, Marshall pointed out. How can enterprises balance the drive to innovate with the obstacle of financial drain?
The push for innovation is born from business needs, Atchison explained. It is as technical as it is economical; though innovation does not necessarily equate to economic downturn, the drive toward it can, for some enterprises, become a financial issue. The key strategy for balancing these technical and economic obstacles takes shape as stable, incremental scaling.
Internal organization between departments—especially for architects—has a role to play in cultivating modern applications. Consistency, both in plan and in operations, is critical for maintaining a constant course that guides teams in unison toward success. This established stability further aids architects: a sturdy foundation to launch from makes their workflows that much simpler.
“From a strictly philosophical standpoint, I think it’s clear that the more an organization has a stable plan, a stable track record, and a stable growth plan for how it wants to grow and expand, the easier it is to get everyone pointed in the same direction,” explained Atchison.
Microservices are often associated with data complexity from the perspective of those who work with the data. The code, responsibilities, and services of the architecture tend to receive more focus than the data itself. The answer lies in formatting and owning data in the same fashion as the microservices themselves.
“I see so many microservice-architected diagrams that show a series of services all working together, talking to a single, large data source at the bottom,” said Atchison. “It is critical that your data be architected and divided in the same way that your services are.”
Single-source data unattached to individual services becomes messy and conflicting; if data is not segregated, teams must depend on data formatted one particular way until someone else needs it shaped differently, causing a multitude of problems for everyone involved.
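The database-per-service idea the speakers describe can be sketched in a few lines: each service keeps a private data store and exposes it only through an API, so no other service depends on its internal schema. This is a minimal illustrative sketch; the class names, methods, and in-memory stores here are hypothetical, not from the webinar.

```python
class UserService:
    """Owns the user data store; other services must go through its API."""

    def __init__(self):
        self._users = {}  # private store: its schema can change freely

    def create_user(self, user_id, email):
        self._users[user_id] = {"email": email}

    def get_email(self, user_id):
        # A public contract, not a shared table another team can query.
        return self._users[user_id]["email"]


class OrderService:
    """Owns order data; reads user info only via UserService's API."""

    def __init__(self, user_service):
        self._orders = []  # private store, divided like the service itself
        self._users = user_service

    def place_order(self, user_id, item):
        # Instead of joining against a shared users table, ask the
        # owning service for exactly the field we need.
        email = self._users.get_email(user_id)
        self._orders.append({"user": user_id, "item": item, "notify": email})
        return email


users = UserService()
users.create_user("u1", "a@example.com")
orders = OrderService(users)
print(orders.place_order("u1", "widget"))  # -> a@example.com
```

Because `OrderService` never touches `UserService`'s internal dictionary, the user store's format can be iterated on without breaking anyone else, which is the segregation the quote above argues for.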
“The problem with scaling with data has to do with how the data is used, who uses it, and how it is managed,” explained Atchison. “It’s not about the size of your database, it’s not about the horizontal scalability of your database, it’s about how the data is used. That’s what impacts your ability to scale your application.”
It is inevitable that the needs of data, especially in terms of scaling and the cloud, will evolve just as quickly as the data itself. This future calls for an emphasis on ML and AI within applications, and on how we think of data in that regard, according to Atchison.
People think of data as information used to solve a problem, segmenting and dividing it to find the “right” data for tackling issues. AI and ML work in the opposite way: all data is good data, and the more data fed into the system, the better it can deduce a solution. Atchison sees this conflict as an impediment to the way people and AI interact; we either need to change how people think about data or how AI strategies work with data, to increase alignment.
Yet another issue arises: generally, as the quantity of data increases, its quality decreases. Though an AI may be making decisions, they can be poor decisions owing to a lack of quality data. Maintaining quality-controlled data regardless of its quantity is another problem to be solved.
To learn more about building scalable, modern applications, you can view an archived version of the webinar here.