An ever-increasing number of organizations are implementing DevOps using a continuous delivery process. They are fueled by reports of the benefits, which include quicker time to market, reduced costs, and higher-quality products.
DevOps is mostly about organizational culture, while continuous delivery and continuous integration are mainly about automation and testing, both of which require a trustworthy source control repository. Here's a look at how native code development, continuous integration, and continuous delivery processes interact with source control.
Every organization has its own process for compiling native code. It can be manual or automated using Jenkins, Bamboo, or similar tools, and the build system can be Maven, Make, or another. But all builds have one thing in common on which the build process relies:
Each build starts with an empty folder and then gets the relevant source code files from the file-based version control repository (SVN, Git, Perforce, Microsoft TFS, IBM RTC etc.). It then compiles the source code files and, if the compilation succeeds, the process continues to the next step: deploying to an automated test environment. Some organizations also save the output of the compilation phase (binary artifacts) in a binary repository (SVN, Git, Perforce, TFS, RTC etc.) so the deployment process can retrieve the relevant artifacts from the file-based (binaries) source control repository.
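The clean-workspace build step described above can be sketched in a few lines. This is a simplified illustration, not a real toolchain: the `checkout` callback stands in for a Git/SVN export, the "compile" step is a stand-in, and all file names are hypothetical.

```python
# Minimal sketch of one build: start from an empty folder, fetch sources,
# "compile" them, and publish the result to a binary repository.
# The checkout callback and *.src / app.bin names are illustrative stand-ins.
import shutil
import tempfile
from pathlib import Path

def build(checkout, artifact_repo: Path) -> Path:
    workspace = Path(tempfile.mkdtemp(prefix="build-"))  # always starts empty
    checkout(workspace)                                  # e.g. a VCS export into the workspace
    sources = sorted(workspace.glob("*.src"))
    if not sources:
        # A failed "compilation" stops the pipeline before deployment.
        raise RuntimeError("compilation failed: no sources found")
    artifact = workspace / "app.bin"                     # stand-in for a DLL/jar
    artifact.write_text("".join(p.read_text() for p in sources))
    published = artifact_repo / artifact.name            # save output for the deploy phase
    shutil.copy(artifact, published)
    return published
```

Because every run begins with a fresh empty directory, nothing left over from a previous build (or copied in by hand) can leak into the artifact.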
Dev -> Build -> Deploy Process
Native Code (C#, Java, PHP etc.) Development -> Build -> Deploy
Deployment to an automated test environment is done differently in different organizations. One company will do it manually, a second will run scripts, and a third will use an application release automation tool, such as IBM UrbanCode Deploy or CA LISA Release Automation. The common factor in all deployments is copying the artifact (binary) to the relevant server while overriding what was deployed before (as seen in the image above).
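The copy-and-override behavior common to all these deployment styles can be sketched as follows. This is a hedged illustration under assumed names (`app.bin`, a `.prev` rollback copy), not any particular tool's implementation.

```python
# Sketch of the native-code deploy step: copy the built artifact onto the
# target server's directory, overwriting whatever was deployed before.
# The previous artifact is kept aside for a quick rollback (names are
# illustrative assumptions, not a real tool's convention).
import shutil
from pathlib import Path

def deploy(artifact: Path, target_dir: Path) -> None:
    target_dir.mkdir(parents=True, exist_ok=True)
    current = target_dir / artifact.name
    if current.exists():
        # Keep the last deployed version for rollback before overriding it.
        shutil.copy(current, current.with_name(current.name + ".prev"))
    shutil.copy(artifact, current)  # override the previous deployment
```

Note that whatever was on the server before, including a binary someone copied there by hand, is simply replaced.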
Every change a developer makes must be documented in the source control repository. If it doesn’t exist in the source control repository, it is not included in the build process. Furthermore, if a developer copies an artifact generated locally to a test environment, the next deployment will override this out-of-process change.
Occasionally, defects can only be reproduced in test environments, and due to infrastructure limitations (storage, costs, complex architecture), the developer is required to work in the test environment rather than in the development environment. In those cases, the developer may need to copy the locally generated artifact directly to the test environment. Once the developer checks in the code changes, the next deploy to the test environment will replace the locally generated artifact with an artifact from the build server. This built-in safety net prevents out-of-process, locally generated artifacts from entering the production environment.
Database code has similar characteristics to native code but also has some inherent differences, which require adjustments to the development, build, and deploy processes and to the source control infrastructure that supports them.
For simplicity, consider the continuous delivery process as consisting of several phases: Develop, Build, and Deploy. Let's review the process in reverse (Deploy -> Build -> Develop), because the input required by each later phase defines the output the phase before it must produce.
A database code deployment is done by running SQL scripts (DDL, DCL and DML) that change the current structure and data to the desired version. When comparing database deployment to native code deployment, the differences are crystal clear: native code deployment is done by copying the new binaries (DLLs, jars etc.) to the target environment. The previous version of the artifacts is no longer valid but may be saved for a quick rollback. This is the safety net that prevents out-of-process artifacts from reaching the production servers.
This is not the case with database code deployment. The script changes version A to version Z through many DDL, DCL and DML commands, each changing the version a little. If the current version of the database is not A, there are two possible outcomes:
- The script will ignore the current version and override the structure with whatever the script defines.
- The script will fail, for example when trying to add a column that already exists but with a different data type.
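The second outcome can be demonstrated with a small script. This is a hedged sketch that uses SQLite purely as a convenient stand-in database; the `orders` table and `status` column are hypothetical names.

```python
# Illustration of a change script failing against an unexpected schema
# version. SQLite and the table/column names are stand-ins only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")

# An out-of-process emergency fix already added this column in "production":
conn.execute("ALTER TABLE orders ADD COLUMN status TEXT")

# The A -> Z change script, generated against version A, tries to add the
# same column (here with a different type). It fails instead of silently
# reverting the emergency fix:
try:
    conn.execute("ALTER TABLE orders ADD COLUMN status INTEGER")
except sqlite3.OperationalError as exc:
    print("deployment stopped:", exc)
```

Here the failure is the safety net: the pipeline stops and a person can reconcile the two versions instead of losing the fix.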
An error in a deployment process is not usually a desirable outcome. In this case, however, getting the error is better than having the script run successfully and, without warning, revert changes made to production as an emergency fix, or changes made by a different team or a different branch in the trunk/stash.
The build phase generates the scripts that are executed in the deployment phase. The deployment phase is responsible for executing the scripts, not generating them.
Dev -> Build -> Deploy Process
Script DB Code Development -> Build -> Deploy
In order to generate the correct database change scripts, the build phase must have information on both the current database structure and the structure held in source control. However, just having these two structures (as standard compare & sync tools do) is not enough.
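A toy structure-only compare makes the limitation concrete. This is a deliberately naive sketch (schemas modeled as plain dictionaries, hypothetical table and column names): the diff can mechanically generate ALTER statements, but it cannot tell whether a column that exists only in the target database is an emergency fix to preserve or drift to drop.

```python
# Naive compare & sync over two schema snapshots, each modeled as a
# dict of table -> {column: type}. Structure alone cannot distinguish
# an out-of-process emergency fix from unwanted drift.
def diff_schema(source: dict, target: dict) -> list[str]:
    stmts = []
    # Columns in source control but missing from the target database:
    for table, cols in source.items():
        for col, typ in cols.items():
            if col not in target.get(table, {}):
                stmts.append(f"ALTER TABLE {table} ADD COLUMN {col} {typ}")
    # Columns in the target database but not in source control:
    for table, cols in target.items():
        for col in cols:
            if col not in source.get(table, {}):
                # Emergency fix to keep, or drift to drop? The structure
                # comparison alone cannot say, so flag it for a human.
                stmts.append(f"-- WARNING: {table}.{col} exists only in target")
    return stmts
```

Resolving those warnings requires more context than the two structures, which is exactly the gap described above.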
Uri Margalit is the director of product management at DBmaestro.