Google Cloud Bigtable Introduces Autoscaling


Google Cloud is introducing autoscaling for Bigtable, enabling users to automatically add or remove capacity in response to the changing demand of applications.

In a blog post, Anton Gething, Bigtable product manager, and Ashish Chopra, cloud-native databases product marketing lead, announced that with autoscaling, users pay only for what they need and can spend more time on their business instead of managing infrastructure.

Autoscaling for Bigtable automatically scales the number of nodes in a cluster up or down according to changing usage demands. It significantly lowers the risk of over-provisioning, which incurs unnecessary costs, and of under-provisioning, which can lead to missed business opportunities. Bigtable now natively supports autoscaling, with direct access to the Bigtable servers providing a highly responsive autoscaling solution.
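
As a rough illustration, the sketch below shows how an autoscaling configuration might be applied programmatically. It assumes the google-cloud-bigtable Python client (version 2.5 or later) exposes autoscaling settings as attributes on the Cluster object; the project, instance, and cluster IDs, as well as the node limits and CPU target, are placeholder values, not part of the announcement.

    # Sketch: enabling Bigtable autoscaling on an existing cluster.
    # Assumes the google-cloud-bigtable Python client (v2.5+) exposes
    # autoscaling settings as Cluster attributes; all IDs and thresholds
    # below are placeholders.
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance("my-instance")
    cluster = instance.cluster("my-cluster")

    cluster.reload()  # fetch the current cluster state

    # Let Bigtable scale between 1 and 5 nodes, targeting ~60% CPU utilization.
    cluster.min_serve_nodes = 1
    cluster.max_serve_nodes = 5
    cluster.cpu_utilization_percent = 60

    cluster.update()  # apply the autoscaling configuration

With limits and a CPU target in place, Bigtable adjusts the node count on its own rather than requiring manual resizes.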

In addition to autoscaling, Google recently launched new capabilities for Bigtable that reduce cost and management overhead, including:

  • A 2X storage limit that lets you store more data for less, which is particularly valuable for storage-optimized workloads.
  • Cluster groups that provide flexibility in how you route application traffic to ensure a great experience for your customers.
  • More granular utilization metrics that improve observability and enable faster troubleshooting and workload management.

Bigtable nodes now support 5TB per node (up from 2.5TB) for SSD and 16TB per node (up from 8TB) for HDD. This is especially cost-effective for batch workloads that operate on large amounts of data.

Businesses today need to serve users across regions and continents and ensure they provide the best experience to every user no matter the location.

Bigtable now supports deploying an instance in up to 8 regions so that data can be placed as close to end users as possible.

A greater number of regions helps ensure applications are performant and deliver a consistent customer experience, wherever customers are located. Previously, an instance was limited to four regions.
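
For illustration, a multi-region deployment might be created along the lines of the sketch below, which uses the google-cloud-bigtable Python client; the project, instance, and cluster IDs, the zones, and the node counts are placeholders chosen for the example.

    # Sketch: creating a Bigtable instance with clusters in two regions so
    # data is served close to users on both continents. Assumes the
    # google-cloud-bigtable Python client; all IDs and zones are placeholders.
    from google.cloud import bigtable
    from google.cloud.bigtable import enums

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance(
        "my-instance",
        display_name="Multi-region instance",
        instance_type=enums.Instance.Type.PRODUCTION,
    )

    # One cluster per region; Bigtable replicates data between them.
    cluster_us = instance.cluster(
        "cluster-us", location_id="us-east1-b",
        serve_nodes=3, default_storage_type=enums.StorageType.SSD,
    )
    cluster_eu = instance.cluster(
        "cluster-eu", location_id="europe-west1-b",
        serve_nodes=3, default_storage_type=enums.StorageType.SSD,
    )

    operation = instance.create(clusters=[cluster_us, cluster_eu])
    operation.result(timeout=600)  # wait for the long-running create to finish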

Detailed insight into how Bigtable resources are being utilized to support the business is crucial for troubleshooting and for optimizing resource allocation.

The recently launched CPU utilization by app profile metric includes method and table dimensions. These additional dimensions provide more granular observability into the Bigtable cluster's CPU usage and how Bigtable instance resources are being used.

These observability metrics tell users which applications are accessing which tables with which API methods, making it much easier to troubleshoot and resolve issues quickly.
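
The sketch below shows one way such a metric could be read with the Cloud Monitoring Python client. The exact metric type and label keys (app_profile, method, table) are assumptions based on the announcement and should be confirmed against the Bigtable metrics reference; the project ID is a placeholder.

    # Sketch: reading Bigtable's per-app-profile CPU metric via Cloud Monitoring.
    # The metric type and label keys below are assumptions; confirm them in the
    # Bigtable metrics reference. The project ID is a placeholder.
    import time
    from google.cloud import monitoring_v3

    project_name = "projects/my-project"
    client = monitoring_v3.MetricServiceClient()

    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
    )

    series_list = client.list_time_series(
        request={
            "name": project_name,
            "filter": (
                'metric.type = '
                '"bigtable.googleapis.com/cluster/cpu_load_by_app_profile_method_table"'
            ),
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )

    # Each time series is keyed by app profile, API method, and table,
    # showing which application is driving CPU on which table.
    for series in series_list:
        labels = series.metric.labels
        latest = series.points[0].value.double_value
        print(labels.get("app_profile"), labels.get("method"),
              labels.get("table"), f"{latest:.3f}")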

Cloud Bigtable is a fully managed, scalable NoSQL database service for large operational and analytical workloads used by leading businesses across industries, such as The Home Depot, Equifax, and Twitter.

For more information about this news, visit https://cloud.google.com/.

