Leveraging APIs to Drive Data Accessibility and Connectivity


Democratizing data has become a foremost concern for enterprises that want accessible, streamlined data connectivity, regardless of their consumers’ skill levels. As data volume has dramatically increased—and continues to do so by the minute—new techniques have surfaced for driving data connectivity across an organization.

DBTA recently held a webinar, “Six Steps to Data Democratization Using APIs,” featuring Todd Wright, senior product marketing manager at Progress, and David Loshin, principal consultant at Knowledge Integrity, to explore how APIs can be leveraged to increase data democratization and continuous accessibility while enabling self-service reporting and maintaining data protection and compliance.

The discussion centered on Loshin’s white paper of the same title, which identifies a growing challenge for companies modernizing their data landscapes as they migrate to the cloud: distributing data assets across multiple cloud-based environments inherently complicates data connectivity and accessibility.

Providing seamless, continuous accessibility to existing applications, enabling simple ways to access data, ensuring data is available across diverse storage systems and file formats, and maintaining data governance policies that mitigate unauthorized use all pose significant difficulties for enterprises seeking to modernize.

APIs stand as a solution to this data democratization conundrum, acting as a standardized mode of connectivity between two or more systems. Today, APIs have become an extremely powerful tool for connecting different end users with the information they need, enabling enterprises to tackle the aforementioned challenges of data connectivity and accessibility, according to Loshin.

Step 1: Identify and Categorize the Data Consumption Use Cases

Loshin explained that supporting operational data democratization is not feasible unless you know how people want to use data. Understanding data consumers’ discrete usage scenarios by gathering intelligence on data consumption use cases is the first step toward leveraging APIs to provide data connectivity and availability.

Survey the data consumer groups and note what individuals are doing, what types of data they are accessing, and how they currently access that data. Describe prototypical use cases and map the types of users to those use cases. Then, document which specific logical and physical data assets are being employed. This will aid in determining which classes of APIs should be built and flatten the learning curve for deployment and adoption.
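
As an illustration of the kind of survey output this step produces, the minimal sketch below records one prototypical use case as structured data; the roles, data assets, and access method shown are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DataUseCase:
    """One prototypical data consumption scenario captured during the survey."""
    name: str
    consumer_roles: list[str]   # who uses the data (e.g., analysts, applications)
    logical_assets: list[str]   # logical data domains of interest
    physical_assets: list[str]  # tables, files, or endpoints actually touched
    current_access: str         # how the data is reached today

# Hypothetical example of one documented use case.
use_cases = [
    DataUseCase(
        name="quarterly sales reporting",
        consumer_roles=["business analyst"],
        logical_assets=["sales", "customer"],
        physical_assets=["warehouse.sales_fact", "crm.customers"],
        current_access="hand-written SQL over ODBC",
    ),
]
```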

Step 2: Catalog the Data Assets to be Accessed

To enable data democratization, organizations must identify and catalog data assets that will be accessed through APIs. The previous step helps in highlighting which logical data domains are of significant interest; the ability to compose integrated data views of commonly accessed domains is a requirement for organizations attempting to modernize their data landscape.

By classifying and identifying these assets, organizations can register logical data assets and their metadata in a visible, accessible data catalog that underpins organizational data democratization and promotes data awareness.
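
A minimal sketch of such a registration step appears below; the CatalogEntry fields and the in-memory store are illustrative assumptions, not any specific catalog product’s API.

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    asset_name: str   # logical data asset (e.g., "customer")
    domain: str       # logical data domain it belongs to
    location: str     # physical location (cloud bucket, table, etc.)
    owner: str        # accountable data steward
    sensitivity: str  # e.g., "public", "internal", "restricted"

# In-memory stand-in for a visible, accessible data catalog.
catalog: dict[str, CatalogEntry] = {}

def register(entry: CatalogEntry) -> None:
    """Record the asset and its metadata so data consumers can discover it."""
    catalog[entry.asset_name] = entry

register(CatalogEntry("customer", "CRM", "s3://lake/crm/customers/",
                      "data-governance@corp.example", "restricted"))
```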

Step 3: Prototype and Test Using an API Gateway

Loshin emphasized that it is critical to find platforms that support prototyping and testing so that methods of access can be developed and deployed quickly.

Loshin defined an API gateway as a layer that mediates between the data consumers making requests and the backend data services that must be accessed to satisfy those requests. The gateway provides a framework for prototyping and testing different versions of an API before releasing them. It furthermore enables portability, allowing organizations to test migrated datasets as an enterprise transitions its data assets from on-premises servers to the cloud.
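
The sketch below illustrates that mediating role in its simplest form: a gateway that routes each consumer request to a backend resolver. The routes and resolvers are hypothetical, and a real deployment would use a dedicated gateway product rather than hand-rolled routing.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical backend data services the gateway mediates access to.
BACKENDS = {
    "/v1/customers": lambda: [{"id": 1, "name": "Acme"}],
    "/v1/sales":     lambda: [{"order": 42, "total": 99.5}],
}

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        resolver = BACKENDS.get(self.path)
        if resolver is None:
            self.send_error(404, "no backend service for this route")
            return
        body = json.dumps(resolver()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Consumers call the gateway; the gateway satisfies requests from backends.
    HTTPServer(("localhost", 8080), Gateway).serve_forever()
```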

Step 4: Leverage Reusability

Throughout the process of reviewing usage scenarios for data access, request patterns may surface. These patterns, if used frequently, can present opportunities for API code reuse to accelerate development and deployment.

Loshin pointed to a few methods that may help avoid IT development bottlenecks (see the sketch after this list):

  • API development tools that use low-code/no-code methods
  • Parameterization, which tailors an API’s behavior according to a set of conditions
  • API code templates for cases in which data accessibility limits a tool’s ability to support a low-code/no-code approach
  • Versioning an API so that developers can reuse the existing API with incremental modifications
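
As a sketch of the parameterization and versioning ideas, the handler below serves multiple variants of one API from a single reusable code path; the dataset name and the version rule are hypothetical.

```python
# Hypothetical data sources standing in for existing backend connectivity.
QUERYABLE_SOURCES = {
    "customers": lambda: [{"id": 1, "name": "Acme", "region": "EMEA"}],
}

def fetch(dataset, version="v1", columns=None):
    """Serve several variants of one API from a single reusable code path."""
    rows = QUERYABLE_SOURCES[dataset]()  # the shared, reused access routine
    if columns:                          # parameterized per-request projection
        rows = [{c: r[c] for c in columns if c in r} for r in rows]
    if version == "v2":                  # incremental modification, not a rewrite
        rows = [dict(r, schema="v2") for r in rows]
    return rows

print(fetch("customers", version="v2", columns=["id", "name"]))
# -> [{'id': 1, 'name': 'Acme', 'schema': 'v2'}]
```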

Step 5: Integrate Controls into the APIs

This step emphasizes building data protection directly into an API to eliminate vulnerabilities that could expose sensitive information to unauthorized access.

This can be accomplished by standardizing the specification of data protection policies, taking advantage of cloud service provider (CSP) identity management, and integrating limitations within the API interface. Ultimately, there are several ways to directly integrate data controls, thereby protecting sensitive data within the API framework, according to Loshin.
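
One way such a limitation might look inside the API interface is sketched below: the handler masks sensitive fields unless the caller’s role is authorized. The roles, fields, and masking rule are illustrative assumptions, not a prescribed policy.

```python
SENSITIVE_FIELDS = {"ssn", "salary"}   # fields the policy treats as sensitive
AUTHORIZED_ROLES = {"hr-admin"}        # roles allowed to see them in the clear

def get_employee(record: dict, caller_role: str) -> dict:
    """Return the record, redacting sensitive fields for unauthorized callers."""
    if caller_role in AUTHORIZED_ROLES:
        return record
    return {k: ("***" if k in SENSITIVE_FIELDS else v) for k, v in record.items()}

employee = {"id": 7, "name": "J. Doe", "ssn": "000-00-0000", "salary": 90000}
print(get_employee(employee, caller_role="analyst"))  # sensitive fields masked
```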

Step 6: Layer Data Connectivity within Microservices

Deploying portable, lightweight microservices—which can be established in multiple platform environments—can help reduce costs of cloud resources while supporting data consumers’ performance expectations.

This mitigates the cost of launching computing instances: with serverless computing, instances are dynamically enabled when they are needed and shut down once the code they execute completes.
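
A serverless data-access function often takes the shape sketched below, using the common (event, context) handler convention; the query routine and dataset name are hypothetical stand-ins for real backend connectivity.

```python
import json

def handler(event, context):
    """Runs only while serving a request; the instance shuts down afterward."""
    dataset = event.get("dataset", "customers")  # requested data domain
    rows = run_query(dataset)                    # hypothetical backend access
    return {"statusCode": 200, "body": json.dumps(rows)}

def run_query(dataset):
    # Stand-in for connectivity to an actual backend data service.
    return [{"dataset": dataset, "rows": 0}]

print(handler({"dataset": "sales"}, None))
```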

Another method of reducing costs is through containers, which combine only the compiled code, libraries, and configurations necessary to provide a service into a lightweight, portable, virtualized environment, explained Loshin.

Both containers and microservices also allow organizations to keep taking advantage of data connectivity tools already existing within an environment. Acting as a “buffer” layer between the request and the access and delivery of the requested data, containers and microservices encapsulate functionality that is already in production, ultimately streamlining data modernization.

For an in-depth discussion of Loshin’s white paper, you can view an archived version of the webinar here.

