Datadog Experiments Embeds Experimentation into Observability to Empower Teams to Innovate Safely


Datadog, Inc., a leading AI-powered observability and security platform, is releasing Datadog Experiments to customers everywhere—enabling teams to design, launch, and measure product experiments and A/B tests directly within the Datadog platform.

According to the company, this gives teams the data and insights they need to understand how every change affects user behavior, application performance, and business outcomes.

“The faster teams ship, the more expensive it becomes to not know what’s working. When signals are scattered across disconnected tools, teams make decisions with incomplete information—missing what’s actually driving revenue and killing the bold bets that will move the business forward,” said Yanbing Li, chief product officer at Datadog.

Datadog solves this problem with the first experimentation platform that combines business metrics from a customer’s data warehouse with product analytics events and application observability, the company said.

Powered by Datadog’s acquisition of Eppo, Datadog Experiments pairs statistical methods with real-time observability guardrails so companies can test what matters, move quickly, and ship with confidence. The product empowers every product manager, designer, and engineer at a company to take a measured approach to change—a must-have in the age of AI, the company said.

Datadog Experiments enables teams to:

  • Accelerate decisions without the overhead: Experimentation is self-serve and standardized, so teams can move from insight to decision without extra coordination.
  • Run safer, higher-quality experiments: Built-in guardrails, real-time feedback, and shared standards help teams catch issues early, protect users, and keep experiments valid.
  • Make decisions leaders trust: Results are credible, reproducible, and comparable because impact is measured directly against source-of-truth business metrics in the customer’s data warehouse, using consistent methodologies that teams can audit.

“AI has increased the pace and complexity of software releases exponentially. Too often, though, teams are flying blind when it comes to measuring the efficacy of new code. That’s because they don’t have a uniform way to validate changes and monitor their impact,” said Li. “With Datadog Experiments, teams have the guardrails needed to safely validate AI-driven changes. By tying experiments to Real User Monitoring (RUM), Product Analytics, APM, and logs, organizations can measure both business impact and performance implications to reduce risk without slowing innovation.”

Datadog Experiments is now generally available.

For more information, visit www.datadoghq.com.
