Confluent, Inc., the platform to set data in motion, announced the Confluent Q1 ‘22 Launch, including new additions to fully managed data streaming connectors, new controls for cost-effectively scaling massive-throughput Apache Kafka clusters, and a new feature to help maintain trusted data quality across global environments.
These innovations help enable simple, scalable, and reliable data streaming across the business, so any organization can deliver the real-time operations and customer experiences needed to succeed in a digital-first world, according to the company.
“The real-time operations and experiences that set organizations apart in today’s economy require pervasive data in motion,” said Ganesh Srinivasan, chief product officer, Confluent. “In an effort to help any organization set their data in motion, we’ve built the easiest way to connect data streams across critical business applications and systems, ensure they can scale quickly to meet immediate business needs, and maintain trust in their data quality on a global scale.”
Confluent’s newest connectors include Azure Synapse Analytics, Amazon DynamoDB, Databricks Delta Lake, Google Bigtable, and Redis for increased coverage of popular data sources and destinations.
Available only on Confluent Cloud, Confluent’s portfolio of over 50 fully managed connectors helps organizations build powerful streaming applications and improve data portability.
These connectors, designed with Confluent’s deep Kafka expertise, provide organizations an easy path to modernizing data warehouses, databases, and data lakes with real-time data pipelines:
- Data warehouse connectors: Snowflake, Google BigQuery, Azure Synapse Analytics, Amazon Redshift
- Database connectors: MongoDB Atlas, PostgreSQL, MySQL, Microsoft SQL Server, Azure Cosmos DB, Amazon DynamoDB, Oracle Database, Redis, Google Bigtable
- Data lake connectors: Amazon S3, Google Cloud Storage, Azure Blob Storage, Azure Data Lake Storage Gen 2, Databricks Delta Lake
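A fully managed sink connector on Confluent Cloud is typically declared through a small JSON configuration rather than self-hosted Connect workers. The sketch below assembles such a config in Python; the property names (`connector.class`, `s3.bucket.name`, and the `S3_SINK` class) follow the general shape of Confluent's managed connector configs but are assumptions here, and the exact keys should be checked against each connector's documentation.

```python
import json

def s3_sink_config(name, topics, bucket, api_key, api_secret):
    """Assemble a config dict for a hypothetical fully managed S3 sink.

    Property names are illustrative; each managed connector documents
    its own required keys.
    """
    return {
        "name": name,
        "connector.class": "S3_SINK",   # managed connector type (assumed)
        "topics": ",".join(topics),     # source topics to drain into the lake
        "s3.bucket.name": bucket,       # destination bucket (assumed key)
        "kafka.api.key": api_key,
        "kafka.api.secret": api_secret,
        "tasks.max": "1",
    }

config = s3_sink_config("orders-to-s3", ["orders"], "my-bucket", "KEY", "SECRET")
print(json.dumps(config, indent=2))
```

In practice this JSON would be submitted to Confluent Cloud (via the UI, CLI, or API), which then runs and scales the connector on the user's behalf.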
To simplify real-time visibility into the health of applications and systems, Confluent announced first-class integrations with Datadog and Prometheus.
With a few clicks, operators have deeper, end-to-end visibility into Confluent Cloud within the monitoring tools they already use.
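Prometheus-style integrations like these generally work by scraping metrics in the standard text exposition format. As a minimal sketch of what a monitoring tool does with such a scrape, the parser below converts a few sample lines into a name-to-value map; the Confluent-like metric names in the sample are illustrative, not guaranteed to match Confluent Cloud's actual metric names.

```python
def parse_prometheus(text):
    """Parse simple Prometheus text-exposition lines into {name: value}.

    Handles the `name{labels} value` form and ignores comment lines;
    a real scraper also handles timestamps, histograms, etc.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name_part, value = line.rsplit(" ", 1)
        name = name_part.split("{", 1)[0]  # strip any {label="..."} block
        metrics[name] = float(value)
    return metrics

sample = """
# HELP confluent_kafka_server_received_bytes Example metric (name assumed)
confluent_kafka_server_received_bytes{topic="orders"} 1048576
confluent_kafka_server_active_connection_count 42
"""
print(parse_prometheus(sample))
```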
This update also introduces new controls for expanding and shrinking GBps+ cluster capacity, enhancing elasticity for dynamic, real-time business demands.
Paired with Confluent’s new Load Metric API, which provides a real-time view into cluster utilization, organizations can make informed decisions about when to expand and when to shrink capacity. With this new level of elastic scalability, businesses can run their highest-throughput workloads with high availability, operational simplicity, and cost efficiency.
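The expand-or-shrink decision described above can be sketched as a simple threshold policy over a cluster-load reading. This is an assumption-laden illustration: the 0.0–1.0 load value stands in for whatever utilization figure the Load Metric API reports, and the thresholds are arbitrary examples, not Confluent recommendations.

```python
def scaling_decision(load, expand_at=0.7, shrink_at=0.3):
    """Map a cluster load reading (0.0-1.0) to a capacity action.

    Thresholds are illustrative; a production policy would also
    consider trends, headroom, and cooldown periods.
    """
    if load >= expand_at:
        return "expand"   # sustained high utilization: add capacity
    if load <= shrink_at:
        return "shrink"   # underutilized: shed capacity to cut cost
    return "hold"

# e.g. acting on a reading from the load metric
print(scaling_decision(0.85))  # -> expand
```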
Global data quality controls are critical for maintaining a highly compatible Kafka deployment fit for long-term, standardized use across the organization.
With the addition of Schema Linking, businesses now have a simple way to maintain trusted data streams across cloud and hybrid environments with shared schemas that sync in real time.
Paired with Cluster Linking, schemas are shared everywhere they’re needed, providing an easy means of maintaining high data integrity while deploying use cases including global data sharing, cluster migrations, and preparations for real-time failover in the event of disaster recovery.
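Conceptually, keeping schemas in sync across environments means detecting which subjects exist in one registry but not the other, or have drifted. The sketch below illustrates that comparison over plain dicts; it is a hedged stand-in for what Schema Linking automates, and a real implementation would query the Schema Registry API rather than local maps.

```python
def subjects_to_sync(source, dest):
    """Report which schema subjects differ between two registries.

    `source`/`dest` map subject name -> latest schema string (mocked
    here); Schema Linking keeps such registries in sync automatically.
    """
    missing = [s for s in source if s not in dest]             # absent downstream
    stale = [s for s in source if s in dest and source[s] != dest[s]]  # drifted
    return {"missing": sorted(missing), "stale": sorted(stale)}

src = {"orders-value": '{"type":"record"}', "users-value": '{"type":"string"}'}
dst = {"orders-value": '{"type":"record"}'}
print(subjects_to_sync(src, dst))  # -> {'missing': ['users-value'], 'stale': []}
```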
For more information about these updates, visit www.confluent.io.