Machine Learning and Autonomous Databases


The CTO of a major database vendor recently stated at a conference that, essentially, DBAs would soon be out of a job. The gist of the keynote was that autopilot flies better than humans, and autonomous cars will be safer, so why not do the same for our databases and remove the human error? What is the validity of this line of thought, and how might it be implemented? What does an autonomous database look like?

Since this is a large topic, I’m going to do a series of at least three posts on it. In this first post, I’ll explore the difference between autonomous and cloud database infrastructure. (Note: An autonomous database does not yet exist as a viable product.)

There have been many articles about the cloud killing the DBA. Most observers have settled into the opinion that it will not, in fact, result in the demise of the DBA position, but that the cloud will change the focus and tasks traditionally associated with the role. I touched upon this metamorphosis of the DBA in my June column. If we can agree that the DBA position will change in the cloud, then what is different about this new DBA killer, the autonomous database?

Enter Machine Learning

Here’s where machine learning (ML) comes into play. When I first heard about ML, I thought to myself that this was just another hype word for artificial intelligence (AI), which has yet to really prove out and provide tangible benefits. Looking under the hood a bit, however, I discovered that ML is really a dusting-off of statistical modeling and probability.

OK, this could have some legs. How do these mathematical topics play into high tech and autonomous databases? Statistical modeling involves grouping data points into patterns so that abnormal behavior is easy to detect.

A couple of great examples would be using standard deviation or percentiles to determine when things vary from the norm. Knowing when to use which model to surface anomalies is something that can be taught.

When the data points tend to cluster around a mean, standard deviation is likely the method to use. When the standard deviation is too large, it becomes less valuable, and another method, such as percentiles, might be sought out. Best practice can be “taught” so that machines can more easily and quickly determine the most appropriate method.
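
To make this concrete, here is a minimal sketch in Python of the idea, not drawn from any vendor's product: pick a detection method based on how the data points behave, then flag values that fall outside the norm. The "tight cluster" cutoff, the 3-sigma test, and the 95th-percentile rule are my own illustrative assumptions.

import statistics

def is_anomalous(samples, new_value):
    """Flag new_value as abnormal using standard deviation or percentiles."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)

    # If the points cluster tightly around the mean, standard deviation works well.
    if mean > 0 and stdev / mean < 0.5:            # assumed "tight cluster" rule
        return abs(new_value - mean) > 3 * stdev   # classic 3-sigma test

    # Otherwise the spread is too large; fall back to a percentile cutoff.
    ordered = sorted(samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return new_value > p95

# Example: recent response times (in seconds) for a recurring query.
history = [1.1, 0.9, 1.0, 1.2, 1.1, 0.95, 1.05]
print(is_anomalous(history, 4.0))   # True: well outside the norm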

Once the norm has been established and we can detect anomalous behavior, we can start to record it historically. When we have built enough history (intentionally being a bit ambiguous here), we can start to bring in probabilities, or predictive behavior. For example, every second business day after the month-end close, a huge query runs in the accounting database that takes 4 hours and crushes performance for everyone else.

It’s found that CPU and active memory are pegged, which contributes to the 4 hours. It’s also found that much of the data fetching is caused by the lack of an index on a key driving table. During normal operations, that index is not ideal, as it adds overhead to DML operations, and CPU and active memory are not under pressure during normal business cycles.
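
Here is a hedged sketch of how that recorded history might feed a prediction. The log entries, field layout, and the 75% confidence threshold are assumptions for illustration only; a real engine would learn these from its own workload repository.

# Each record: (month, business day after close, anomaly observed?)
history = [
    ("2017-01", 2, True), ("2017-02", 2, True), ("2017-03", 2, True),
    ("2017-04", 2, False), ("2017-05", 2, True), ("2017-06", 2, True),
]

def recurrence_probability(records, business_day):
    """Fraction of recorded months in which the anomaly hit on this business day."""
    relevant = [r for r in records if r[1] == business_day]
    hits = sum(1 for r in relevant if r[2])
    return hits / len(relevant) if relevant else 0.0

p = recurrence_probability(history, business_day=2)
print(f"P(anomaly on business day 2 after close) = {p:.2f}")   # 0.83

if p > 0.75:   # assumed confidence threshold for acting ahead of time
    print("Schedule remediation before the month-end report runs.")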

A New Database World Order?

Can you not envision a world in the not-too-distant future where the system knows all of this and can take prescriptive action before the performance meltdown? Allocate additional resources just for the monthly close report. Add the index during maintenance prior to the report run, then drop it after the report completes. This is the idea behind autonomous databases and ML. The machine recognizes the anomalies. The machine predicts when they will occur. The machine takes the action most likely, on a probability basis, to produce the best result.
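
A purely illustrative sketch of that prescriptive loop follows. The function names, table, and column are hypothetical stand-ins, not any real vendor's API; an actual autonomous engine would invoke its own provisioning and DDL interfaces at the times its models predict.

def scale_up(extra_cpu, extra_memory_gb):
    print(f"Allocating {extra_cpu} vCPUs and {extra_memory_gb} GB for the close window")

def scale_down():
    print("Returning to the normal resource footprint")

def add_index(table, column):
    print(f"CREATE INDEX ix_{table}_{column} ON {table}({column})")

def drop_index(table, column):
    print(f"DROP INDEX ix_{table}_{column}")

def prepare_for_month_end_close():
    # Run during the maintenance window before the predicted report.
    scale_up(extra_cpu=4, extra_memory_gb=32)
    add_index("gl_transactions", "period_id")

def clean_up_after_close():
    # Run once the report finishes, so the index adds no DML overhead afterward.
    drop_index("gl_transactions", "period_id")
    scale_down()

prepare_for_month_end_close()
clean_up_after_close()

In practice, the hard part is not running these steps but knowing, with enough confidence, when to run them.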

Unlike cloud-based database infrastructure, where humans somewhere are still required to manage that infrastructure, autonomous databases and ML probe a more fundamental question: Are machines better equipped to do what DBAs do?

