Three Things to Think About on Your Next Deployment Automation Project


    With ever-increasing pressure on agility and shorter development cycles, nobody wants to spend too much time deploying applications. Deployment automation not only takes that burden off our shoulders but also removes the human factor from the deployment process, making it effectively subject to the same test regime the application code goes through.

     

    Deployment Automation is a relatively new discipline. It is commonly recognized as difficult, affected by the human factor and by politics, like everything else that spans the Development and Operations sides of an organization. There is a lot of truth in that, but selecting the right process and supplementing it with the right technology can help ease the strain. What follows are some thoughts on three factors that determine the success or failure of a Deployment Automation project.

    DevOps Gap

     

    People tasked with developing new application functionality and people paid to keep existing technology running inevitably look in different directions. This leads to different processes, different vocabularies and plenty of communication problems. None of these things can be solved by tools alone. The right place to start is designing a common Release Process that gives both groups, as well as the Business that is paying for all of this, enough visibility to make educated decisions. If we get the process right, there is something in it for everyone:

     

    • The Business sees all of its requests in their different stages of design, implementation and testing, and can understand the predicted go-live date and the risks involved.
    • Development gets a tool to track what is happening with their code and why.
    • Operations gets a predictable and documented test cycle, which is priceless, especially for the Change Advisory Board.

     

    Experience shows that even the best and most mature process is ineffective if it is implemented via e-mails, spreadsheets and paper. To achieve the level of trust and transparency described above, all stakeholders should be presented with a real-time view. Getting this process defined and implemented before automating anything is necessary for a couple of reasons:

     

    1. Implementing (automating) the process helps to identify the stages that have caused the most problems in the past. These are the preferred automation candidates.
    2. Automating the actions of a flawed process will just produce the same errors faster.

     

    To achieve the best possible release quality we need consistency. This is ideally achieved by triggering automated deployment actions at specific stages of the automated process, which also eliminates “swivel chair automation” from the workflow. Many parameters of Build and Deployment are already known to the Release Process, for example:

    • Which changes are in scope for the release?
    • Which artifacts should be part of the build?
    • What type of release is it, and which corresponding test environment path should be taken?

     

    The integration between the Release Process tool and Deployment Automation can be built and maintained in-house. Alternatively, you can select tools from a vendor that serves both areas and maintains these interfaces for you.
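
    As a purely illustrative sketch, the snippet below shows in Python how release metadata could drive an automated deployment. The ReleaseRecord fields, the environment_path rule and the trigger_deployment call are all assumptions for the example, not the API of any particular release or deployment tool.

        # Illustrative only: a release record from the Release Process tool
        # supplies the parameters of an automated deployment. All names here
        # are assumptions for the example, not a real product API.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class ReleaseRecord:
            release_id: str
            release_type: str            # e.g. "major", "minor", "emergency"
            changes_in_scope: List[str]  # change/ticket identifiers
            artifacts: List[str]         # build artifacts to deploy

        def environment_path(release_type: str) -> List[str]:
            """Pick the test environment path based on the release type."""
            if release_type == "emergency":
                return ["QA", "Production"]        # shortened path
            return ["QA", "UAT", "Production"]     # full path

        def trigger_deployment(release: ReleaseRecord) -> None:
            """Hand the release parameters over to the deployment automation tool."""
            for env in environment_path(release.release_type):
                print(f"[{release.release_id}] deploying {release.artifacts} "
                      f"({len(release.changes_in_scope)} changes) to {env}")
                # a real integration would call the deployment tool's API here

        if __name__ == "__main__":
            trigger_deployment(ReleaseRecord(
                release_id="REL-42",
                release_type="emergency",
                changes_in_scope=["CHG-101"],
                artifacts=["billing-service-1.4.2.war"],
            ))

    Whether this glue is written in-house or comes from a vendor, the point is the same: the release process, not a person, supplies the deployment parameters.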

    Weakest Link

     

    Automating a complex workflow that involves more than one person typically shows the biggest time gain not in the atomic operations but in the communication between the parties involved. People need to sleep and eat, and we are not the best at multitasking either. For this reason, if deployment automation is meant to improve time to market, it is best done end to end, including all the legacy technologies present in the house.

     

    No automation technology in the world is capable of directly managing every environment, but some are more open to integration than others. The most industry-specific deployment in your organization is probably either already automated or covered by good scripts. Leverage those and wrap them into your new automation platform.
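
    To make this concrete, here is a minimal sketch of what wrapping existing scripts as named, reusable actions could look like. The script paths, the action registry and the run_action entry point are hypothetical; a real platform would provide its own plug-in mechanism.

        # Illustrative only: existing, battle-tested scripts wrapped as atomic,
        # named actions in a hypothetical deployment automation platform.
        import subprocess
        from typing import Callable, Dict, List

        ACTIONS: Dict[str, Callable[[List[str]], None]] = {}

        def action(name: str):
            """Register a function as a named, reusable deployment action."""
            def register(fn: Callable[[List[str]], None]):
                ACTIONS[name] = fn
                return fn
            return register

        @action("db.load_schema")
        def load_schema(args: List[str]) -> None:
            # Re-use the proven script instead of rewriting its logic.
            subprocess.run(["./scripts/load_schema.sh", *args], check=True)

        @action("db.stop")
        def stop_database(args: List[str]) -> None:
            subprocess.run(["./scripts/stop_db.sh", *args], check=True)

        def run_action(name: str, args: List[str]) -> None:
            """Entry point the automation platform would call by action name."""
            ACTIONS[name](args)

        if __name__ == "__main__":
            run_action("db.stop", ["--instance", "QA1"])
            run_action("db.load_schema", ["--instance", "QA1"])

    With such a registry the platform, not the scripts, owns the orchestration, while the proven script logic is preserved.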

     

    Ideally, those re-used parts should be as atomic and re-usable as possible. For example, if the integrated component is a database, the actions could be: create users, load schema, load data, start/stop the database. These atomic actions form an easy-to-maintain action library, used and re-used by the deployment process. It is important to make use of existing platform knowledge, but it is equally important to keep offloading deployment process management to the automation platform. There is a big difference between maintaining scripts and maintaining action libraries, which brings us to the third item on the list:

    Content-Driven Deployment Process

     

    The best tools are the ones that require little maintenance. Changes to the application, the environments or the underlying middleware versions can all affect the deployment instructions. This can quickly lead to separate deployment workflows for QA, UAT and Production, and then separate ones again for emergency, minor and major releases, and so on. Very quickly the number of possible combinations runs into the hundreds, each one requiring maintenance.

     

    What if this trend could be reversed? Ideally, the deployment workflow is built at deployment time from a library of atomic actions, each serving a specific technology, version, OS and flavour. For example, an emergency release package may contain only one change, related to a SQL database. Deploying it requires just a subset of the actions normally taken to deploy the full application. Still, that database may be a single instance in the QA environment and a RAC cluster in Production. A content-driven deployment process starts with the package to be deployed and builds the specific steps for the target environment. This results in much less maintenance, which not only saves time but also shrinks the area subject to human error.
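
    A minimal sketch of the idea, with hypothetical package and environment attributes: the workflow below is derived from what the package actually contains, while the target environment decides which concrete variant of each action is used.

        # Illustrative only: the deployment workflow is assembled at deployment
        # time from the package contents; the environment picks the variant.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class PackageItem:
            technology: str   # e.g. "sql", "war"
            name: str

        @dataclass
        class Environment:
            name: str
            db_topology: str  # e.g. "single" or "rac"

        def build_workflow(items: List[PackageItem], env: Environment) -> List[str]:
            """Derive the deployment steps from what is actually in the package."""
            steps: List[str] = []
            for item in items:
                if item.technology == "sql":
                    variant = "db.apply_sql.rac" if env.db_topology == "rac" else "db.apply_sql.single"
                    steps.append(f"{variant}({item.name})")
                elif item.technology == "war":
                    steps.append(f"appserver.deploy({item.name})")
            return steps

        if __name__ == "__main__":
            package = [PackageItem("sql", "fix_invoice_index.sql")]  # emergency release
            print(build_workflow(package, Environment("QA", "single")))
            print(build_workflow(package, Environment("Production", "rac")))

    The same package thus produces a short, single-instance workflow in QA and a cluster-aware one in Production, without anyone maintaining two separate workflow definitions.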

     

    All these thoughts share one common theme. Once you decide on your Deployment Automation platform, you want deployment to fade into the background, leaving you more time to focus on your core business: developing applications.