
Adopting a Time Series Database with InfluxData


In today's data landscape, organizations often default to complex, expensive data stacks when simpler, more specialized solutions could deliver better results at a fraction of the cost. 

InfluxData senior developer advocate Cole Bowden joined DBTA’s webinar, Choosing the Right Data Tools: When to Use a Time Series Database, to cut through the noise and help attendees make smarter database decisions.

When it comes to considering a time series database, Bowden recommended starting simple. Ask yourself the following questions:

  • Can your data fit in memory?
  • Can it fit on a single hard drive?
  • Could you just use Postgres (or MongoDB) plus a BI tool?

Scale is what makes infrastructure choices difficult, he said. Specialize as you scale: Too many data sources? Add ETL tooling. Messy transformations? Add dbt. Data quality concerns? Add a cataloging tool. Need a lakehouse? Build one. Performance limitations? Time to rethink storage. “But don’t solve these problems until they’re problems,” he said.

AI tools can make building easier. Most AI tools are very good at writing simple scripts and SQL queries. “You can get away with not knowing how to write things, as long as you know the right questions,” he said. “Do test and check things, though.”
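Bowden’s point about simple scripts is easy to illustrate. The snippet below is a hypothetical example, not from the webinar: the kind of short Python script an AI tool can reliably generate, averaging time-stamped readings per hour.

```python
from collections import defaultdict
from datetime import datetime

def hourly_averages(rows):
    """Average a list of (iso_timestamp, value) pairs per hour."""
    buckets = defaultdict(list)
    for ts, value in rows:
        # Truncate each timestamp to the top of its hour.
        hour = datetime.fromisoformat(ts).replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}

readings = [
    ("2024-05-01T10:05:00", 20.0),
    ("2024-05-01T10:35:00", 22.0),
    ("2024-05-01T11:10:00", 21.0),
]
print(hourly_averages(readings))
```

As he cautioned, even output this simple should be tested and checked before it touches real data.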

You know you’ve hit the limits when:

  • Your data exceeds what can be stored on a single server
  • Queries fail with out-of-memory (OOM) errors
  • High query volume causes bottlenecks and degradation
  • High write volume outpaces table maintenance
  • The “noisy neighbor” problem causes failures or crashes
  • Complexity becomes difficult to keep up with

The characteristics of a time series database include native handling of time series data, best-in-class write throughput, efficient queries over time ranges, and extensive scalability and performance.

Time series data is time-stamped, generated at regular (metric) or irregular (event) intervals, often arrives in large volumes, and is real-time and time-sensitive.
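The regular-versus-irregular distinction can be made concrete. The minimal sketch below (function and variable names are my own, not from the talk) classifies a series as metric-style when consecutive timestamps are evenly spaced, and event-style otherwise:

```python
from datetime import datetime, timedelta

def is_metric_style(timestamps, tolerance=timedelta(seconds=1)):
    """True if consecutive timestamps are evenly spaced (regular
    'metric' data); False suggests irregular 'event' data."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return all(abs(d - deltas[0]) <= tolerance for d in deltas)

metrics = [datetime(2024, 5, 1, 10, 0, s) for s in (0, 10, 20, 30)]  # every 10s
events = [datetime(2024, 5, 1, 10, 0, s) for s in (0, 3, 17, 44)]    # irregular

print(is_metric_style(metrics))  # True
print(is_metric_style(events))   # False
```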

Bowden recommended InfluxDB 3 Core as his time series database of choice. Core is built to collect and process data in real time while persisting it to local disk or object storage. It is optimized for queries against recent data, which operate entirely in RAM. It is focused on being an excellent edge data collector with a bias for speed, simplicity, and effectiveness.
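InfluxDB ingests points in its line protocol, a plain-text format of the shape `measurement,tags fields timestamp`. The sketch below is a simplified illustration (it omits character escaping and integer-type suffixes, and the measurement and tag names are invented) of how one point is serialized:

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one data point as a simplified InfluxDB line protocol string:
    measurement,tag=val field=val timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    # Timestamps are nanoseconds since the Unix epoch by default.
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol("cpu", {"host": "edge-01"}, {"usage": 64.2},
                        ts_ns=1700000000000000000)
print(line)  # cpu,host=edge-01 usage=64.2 1700000000000000000
```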

For the full webinar, featuring a more in-depth discussion, Q&A, and more, you can view the archived version here.
