
Database Rules Matter: The Top 5 Rules to Enforce for Database Change



The only constant is change. So, get ready for it. Maybe that's easier said than done. Too often, we, as DBAs, are flooded with change when it comes to database schemas, and keeping up with updates isn't as simple as just being ready. Developers don't get that, though. Updating the application is pretty easy: simply copy over the existing application binary with the new one, and if there's something wrong, roll back by getting the old version from the build server.

The database, however, is different. Since databases have state, we can't simply create a new version. We have to evolve the database using discrete, serial change, and that change must not compromise the database from a performance, regulatory compliance, or security perspective. As such, database professionals must always stand guard and enforce these rules and corporate standards.

Below are the top five rules to enforce sane and reasonable change to your database.

1-Always include a comment and tie the change back to a ticket.

This is just plain good software development housekeeping. Code check-ins would never be allowed without a comment or without being tied back to a JIRA ticket. Yet too often, database changes arrive with neither, on the assumption that the person executing the change knows as much about the need for it as the application developer does. That is not the case.

Consider this: The person developing the application works on only a handful of applications and knows them intimately. The person pushing the database change, especially as you get closer to production, is responsible for more and more applications, and thus knows less about any one of them.

It is imperative that every proposed change to the database can be described discretely and understood atomically. Otherwise, you run the risk of the database change being rejected because the DBA will not understand the intended purpose of the change.
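As a concrete illustration, the comment and ticket reference can live directly in the change script. The sketch below uses Liquibase-style formatted SQL; the table, author, and ticket number (PROJ-1234) are hypothetical.

```sql
--liquibase formatted sql

--changeset jsmith:PROJ-1234
--comment: PROJ-1234 - add loyalty_tier so marketing can segment customers
ALTER TABLE customer ADD loyalty_tier VARCHAR(20);
--rollback ALTER TABLE customer DROP COLUMN loyalty_tier;
```

With the ticket number embedded, the DBA reviewing the change can look up its business justification without tracking down the original developer.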

2-Never add a column with a default value.

Though newer databases are more forgiving with this issue, in the past, table locks were a painful reality when adding a new column with a default value set. This was caused by the database updating the new column, row by row. For very large tables, this caused a huge wait time that would often extend the maintenance window with no notice. Service-level agreements were being violated due to this bad behavior, and that's bad for business.

Ideally, new columns should be added without the default value set. Then, an update can populate the new column. Finally, the default value attribute can be set. This three-step approach is a far more efficient way to add the column.
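The three steps might look like the following in PostgreSQL-style SQL; the orders table, column name, and default value are hypothetical, and on very large tables the backfill in step 2 would typically be batched.

```sql
-- Step 1: add the column with no default (a quick, metadata-only change)
ALTER TABLE orders ADD COLUMN priority VARCHAR(10);

-- Step 2: backfill existing rows explicitly
UPDATE orders SET priority = 'STANDARD' WHERE priority IS NULL;

-- Step 3: set the default attribute, which applies only to future inserts
ALTER TABLE orders ALTER COLUMN priority SET DEFAULT 'STANDARD';
```

Newer databases are indeed more forgiving here; PostgreSQL 11+, for example, can add a column with a constant default without rewriting the table. The three-step pattern remains the safe habit where that guarantee is absent.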

Beyond performance, relying on default values encourages lazy programming. In general, being explicit is far superior to being implicit. After all, we should be able to eliminate variables in debugging, and embedding business logic in table metadata can make debugging lengthy and difficult.

3-Limit the number of indexes on any given table.

Indexes on a table are great tools for performance tuning. However, too many indexes can actually have a negative impact on database performance and slow all other create-read-update-delete (CRUD) operations. The limit on indexes is a fluid number that can change from company to company and even schema to schema. When setting rules for their databases, I encourage DBAs to choose an index limit that triggers a warning when reached rather than a hard failure, since failing would stop all further update processing. Though it will not block the proposed change, an automated warning alerts the DBAs to be watchful of future performance on that table.
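Checking for violations can be as simple as a scheduled catalog query. This PostgreSQL-specific sketch flags tables whose index count exceeds a threshold of five; the threshold is only an example, to be tuned per schema.

```sql
-- Warn on tables carrying more than five indexes (threshold is an example)
SELECT schemaname, tablename, COUNT(*) AS index_count
FROM pg_indexes
GROUP BY schemaname, tablename
HAVING COUNT(*) > 5
ORDER BY index_count DESC;
```

Wired into a nightly job that emails its results, a query like this gives DBAs the warn-don't-fail behavior described above without any manual review.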

4-Never add an index with more than three columns.

Like the previous rule, sometimes you can have too much of a good thing. In this case, the use of too many columns in an index can actually slow database performance on other CRUD operations. Be mindful of this issue and be ready to justify the index. As before, DBAs should create an automated alert warning of violations instead of failing, which would prevent further processing. Alerting DBAs to a possible future issue is far superior to completely ignoring it, and it doesn't require stopping further change execution.
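A similar catalog query can surface wide indexes. This PostgreSQL-specific sketch lists indexes built on more than three columns, using pg_index.indnatts (the index's column count, which includes INCLUDE columns on newer versions).

```sql
-- Flag indexes built on more than three columns
SELECT t.relname AS table_name,
       c.relname AS index_name,
       i.indnatts AS column_count
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_class t ON t.oid = i.indrelid
WHERE i.indnatts > 3
  AND t.relkind = 'r';
```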

5-Never allow words such as “DROP,” “TRUNCATE,” or “GRANT SYSDBA” in your stored procedures.

One of my “favorite” bad behaviors in database development is the use of temporary storage tables in stored procedures that are cleaned at the end using TRUNCATE. This creates a race condition when multiple calls to the same stored procedure run at the same time: an earlier call can TRUNCATE away data that a subsequent, still-running call depends on. Good luck debugging that outside of production!

Sometimes, there is a good reason to use temporary storage. However, data should be deleted using a unique identifier, not truncated. Truncation is simply the lazy use of a large-bore weapon.
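One way to make scratch-table cleanup safe under concurrency is to tag each invocation's rows with its own identifier and delete only those. The report_scratch table and the :run_id bind variable below are hypothetical.

```sql
-- Each invocation writes rows tagged with its own run identifier
INSERT INTO report_scratch (run_id, order_id, amount)
SELECT :run_id, order_id, amount
FROM orders
WHERE order_date = CURRENT_DATE;

-- Clean up only this invocation's rows; concurrent calls are untouched
DELETE FROM report_scratch WHERE run_id = :run_id;
```

Because concurrent calls each carry a distinct :run_id, no call can wipe out another's data the way a TRUNCATE would.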

Moreover, there is absolutely no reason to DROP objects or GRANT SYSDBA in a stored procedure. This is beyond lazy; it opens the business to all sorts of nasty data integrity vulnerabilities and edges toward negligence.

Setting Rules and Enforcing Them

Obviously, setting rules for database changes is one thing, but enforcing them among those “fail fast” developers is another entirely. Recent reports indicate that many companies “doing” DevOps in an effort to speed up development cycles aren't really achieving DevOps success at all, likely because they are not including the database. Or, perhaps, DBAs are conditioned to protect the database from change and wind up manually checking each change sent their way from a development pipeline geared for continuous delivery. This isn't sustainable. DBAs should look for ways to get out from under suffocating workloads by taking a page out of the DevOps playbook: Automate repeatable processes.

Most database changes are simple and straightforward, and, by automating enforcement of the rules we've discussed here, DBAs can free up significant time to focus on more strategic projects. Sane database changes are a must for companies looking to meet the demand for continuously delivered updates.
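Some of that enforcement can even live in the database itself. As one PostgreSQL-specific sketch (one approach among many, not the only one), an event trigger can reject ad hoc DROP commands outright; the syntax below requires PostgreSQL 11+ for EXECUTE FUNCTION.

```sql
-- Reject any DROP that reaches the database outside the approved pipeline
CREATE OR REPLACE FUNCTION block_drops() RETURNS event_trigger AS $$
BEGIN
  RAISE EXCEPTION 'DROP is not allowed here; submit the change through a ticket';
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER no_ad_hoc_drops
  ON sql_drop
  EXECUTE FUNCTION block_drops();
```

A superuser can still disable the trigger for a sanctioned migration run, which is exactly the point: destructive change happens only through a deliberate, reviewed path.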

All it takes to stop the floodwaters for us DBAs is a little change to the way we approach our work. Without it, we won’t keep our heads above water for long. So again, get ready for it.

Robert Reeves is CTO and co-founder of Datical, a provider of agile database automation solutions.

