This document provides a basic understanding of the Continuous Integration and Continuous Deployment process and its benefits, and explains why Jenkins fits well within this process. It also discusses the benefits of instantaneous deployments with no downtime or manual intervention, using DevOps automation tools such as Jenkins and Puppet.
The target audience of this specification includes developers who are working on this setup now or will be in the future. The details in this document may also be helpful for technically oriented end users.
What does CI (Continuous Integration) really mean?
On every code commit (ideally made frequently by the developer), the system is:
● Well Integrated (not only can the existing code base change, but new code, new libraries, and other resources can be added, creating dependencies and potential conflicts)
● Successfully Built (the new feature code is compiled into an executable or package)
● Fully Tested (a suite of automated tests runs after the build)
● Archived (the latest build is versioned and stored so that it can be distributed as-is if desired)
● Successfully Deployed (the final artifact is deployed into a system where developers can interact with it)
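The stages above can be sketched as a simple sequential pipeline, where a failure at any stage stops the run. This is a minimal illustration, not tied to any specific CI server; the stage names and stub functions are hypothetical.

```python
# Minimal sketch of a CI run: each stage must succeed before the next runs.
# The lambdas are stand-ins for real integrate/build/test/archive/deploy work.

def run_pipeline(stages):
    """Run each stage in order; stop and report on the first failure."""
    for name, step in stages:
        if not step():
            return f"FAILED at {name}"
    return "SUCCESS"

stages = [
    ("integrate", lambda: True),
    ("build", lambda: True),
    ("test", lambda: True),
    ("archive", lambda: True),
    ("deploy", lambda: True),
]

print(run_pipeline(stages))  # SUCCESS
```

A real CI server implements the same gating logic, with each stage running as a build step on the server.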
Figure 1 illustrates the workflow of Continuous Integration:
Figure 1: Continuous Integration Workflow
Immediate detection of bugs in the software: One of the key benefits of CI is detecting problems in the software as early as possible.
Software that is built and integrated only just before testing is more prone to failure, because it is not built frequently enough. The continuous addition of new features and bug fixes keeps the code base in a constant state of flux. A Continuous Integration server automates the process of building the software and tracks its health at each stage. In this way, CI prevents a small change or bug fix from destabilizing the whole software build.
CI continually alerts the developers of any issues earlier in the development cycle, where they are easier (and cheaper) to fix.
Building software and deploying software are two separate processes: Previously, the developers who built the software were also responsible for deploying it; with CI servers this is no longer the case. The two processes are completely isolated. We can configure the CI server so that every commit made by a developer triggers an integration, while deployment of the build is triggered manually by the testing team after approval.
Continuous Deployment using Continuous Integration: Continuous Deployment is closely related to Continuous Integration, and Continuous Integration is the essential first part of a Continuous Delivery workflow.
It enables software to be released into a live (production) environment, running any unit tests associated with the build process.
Continuous Deployment is a good practice for releasing builds successfully (into either a UAT or a production environment). By adopting Continuous Deployment along with Continuous Integration, we can minimize deployment-related risks: bugs and build problems are detected immediately upon a commit to the source code, so the team steadily moves toward working software.
Automated testing with no manual intervention: CI is considered the best way to automate your testing. All CI servers can run unit tests. Running the unit tests against each successful build catches breakages in the code, particularly those in unchanged areas.
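The kind of unit test a CI server runs on every build can be illustrated with a tiny example; the function under test here is a hypothetical stand-in for real application code.

```python
import unittest

# A minimal unit test of the kind a CI server runs against every build.
# apply_discount is an illustrative stand-in for real application code.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(55.5, 0), 55.5)

# The CI server would run this suite and mark the build failed on any error.
result = unittest.TextTestRunner().run(
    unittest.TestLoader().loadTestsFromTestCase(DiscountTest))
print(result.wasSuccessful())  # True
```

If any assertion fails, the build is marked broken and the team is notified immediately.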
Increased confidence for individual teams in the software development ecosystem:
Continuous software builds and immediate bug-detection notifications to the responsible teams give everyone in the software development ecosystem confidence, and reduce the effort needed to build high-quality software that can be pushed to higher environments with confidence.
Continuous delivery is an extension of continuous integration. It focuses on automating the software delivery process so that teams can easily and confidently deploy their code to production at any time. By ensuring that the codebase is always in a deployable state, software release becomes an unremarkable event without complicated ritual. Teams can be confident about releasing software whenever they need to without complex coordination or late-stage testing. As with continuous integration, continuous delivery is a practice that requires a mixture of technical and organizational improvements to be effective.
On the technology side, continuous delivery leans heavily on deployment pipelines to automate the testing and deployment processes. A deployment pipeline is an automated system that runs increasingly rigorous test suites against a build as a series of sequential stages. Continuous delivery picks up where continuous integration leaves off, so a reliable continuous integration setup is a prerequisite for implementing continuous delivery.
At each stage, the build either fails the tests, which alerts the team, or passes the tests, which results in automatic promotion to the next stage. As the build moves through the pipeline, later stages deploy it to environments that mirror the production environment as closely as possible. This way the build, the deployment process, and the environment are tested in tandem. The pipeline ends with a build artifact that can be deployed to production at any time in a single step.
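The promotion logic described above can be sketched as follows; the stage names and pass/fail checks are illustrative, not a real pipeline definition.

```python
# Sketch of pipeline promotion: a build moves through increasingly
# production-like stages; a failure at any stage alerts the team and
# stops promotion. Stage names and checks are hypothetical.

def promote(build, stages):
    """Return the promotion history for a build moving through the pipeline."""
    history = []
    for env, passes in stages:
        if passes(build):
            history.append(f"{build}: promoted past {env}")
        else:
            history.append(f"{build}: ALERT, failed in {env}")
            break
    return history

stages = [
    ("unit tests", lambda b: True),
    ("integration tests", lambda b: True),
    ("staging (production-like)", lambda b: True),
]
print(promote("build-42", stages))
```

A build whose history ends without an alert is, by construction, deployable to production in one step.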
The organizational aspects of continuous delivery encourage prioritization of "deployability" as a principal concern. This has an impact on the way that new features and enhancements are built and hooked into the rest of the codebase. Thought must be put into the design of the code so that features can be safely deployed to production at any time, even when incomplete. A number of techniques have emerged to assist in this area.
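One of the most common techniques for keeping incomplete features deployable is a feature flag: the code ships to production but stays dormant until the flag is enabled. A minimal sketch, with hypothetical flag and function names:

```python
# Feature-flag sketch: incomplete code can be deployed safely because it is
# only reachable when its flag is switched on. Flag names are hypothetical.

FLAGS = {"new_checkout": False, "dark_mode": True}

def is_enabled(flag):
    """Unknown flags default to off, so new code stays dormant by default."""
    return FLAGS.get(flag, False)

def checkout(cart):
    if is_enabled("new_checkout"):
        return "new checkout flow"
    return "legacy checkout flow"

print(checkout([]))  # legacy checkout flow
```

In a real system the flag store would live in configuration or a database so that features can be toggled without a redeploy.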
Continuous delivery is attractive because it automates the steps between checking code into the repository and deciding whether to release well-tested, functional builds to your production infrastructure. The steps that assert the quality and correctness of the code are automated, but the final decision about what to release is left in the hands of the organization for maximum flexibility.
Continuous deployment is an extension of continuous delivery in which each build that passes the full test cycle is deployed automatically, with no human deciding what to deploy into production or when. A continuous deployment system deploys everything that has successfully traversed the pipeline. Keep in mind that while new code is deployed automatically, techniques exist to activate new features later or for only a subset of users. Automatic deployment pushes features and fixes to customers quickly, encourages smaller changes with limited scope, and helps avoid confusion over what is currently deployed to production.
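Activating an automatically deployed feature for only a subset of users is often done with a percentage rollout: each user is hashed into a stable bucket, and the feature is live for buckets below the rollout percentage. A minimal sketch, with hypothetical names:

```python
import hashlib

# Percentage-rollout sketch: a deterministic hash assigns each user a bucket
# from 0 to 99, so the same user always gets the same decision and raising
# the percentage gradually widens the audience. Names are illustrative.

def in_rollout(user_id, percent):
    """Return True if this user falls inside the rollout percentage."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# A 0% rollout reaches no one; a 100% rollout reaches everyone.
print(in_rollout("alice", 0), in_rollout("alice", 100))  # False True
```

Because the bucket is derived from the user ID rather than chosen at random, a user's experience stays consistent across requests as the rollout expands.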
This end-to-end automated deployment cycle can be a source of anxiety for organizations worried about relinquishing control of what gets released to their automation system. The trade-off offered by automated deployments is sometimes judged too dangerous for the payoff it provides.
Automatic release, then, is a method of ensuring that best practices are always followed, extending the testing process into a limited production environment. Without a final manual gate before a piece of code or a feature is deployed, developers must take responsibility for ensuring that their code is well designed and that the test suites are up to date. This collapses the decision of what and when to commit to the main repository and what and when to release to production into a single point, which rests firmly in the hands of the development team.
Continuous deployment also allows organizations to benefit from consistent early feedback. New features can immediately be made available to users, and defects or unhelpful implementations can be caught early, before the team devotes extensive effort to an unproductive direction. Getting fast feedback that a feature isn't helpful lets the team shift focus rather than sinking more energy into an area with minimal impact.
Figure 2: Continuous Integration Workflow with Jenkins
● Jenkins is itself a highly configurable system
● Plugins developed by the community provide even more flexibility, and documentation is available across all plugins
● The possibilities become limitless when Jenkins is combined with Ant, Gradle, or other build automation tools
● Jenkins is released under the MIT License
● There is a large support community and thorough documentation
● It’s easy to write plugins of your own, and more than 500 plugins are already available
● If something goes wrong with Jenkins, you can fix it!
● Generate test reports
● Integrate with many different Version Control Systems
● Push to various artifact repositories
● Deploy directly to production or test environments
● Notify stakeholders of build status
...and much more
When setting up a project in Jenkins, out of the box you have the following general options:
● Associating with a version control server
● Triggering builds (polling, periodic, or based on other projects)
● Executing shell scripts, bash scripts, Ant targets, and Maven targets
● Archiving artifacts
● Publishing JUnit test results and Javadocs
● Email notifications
● Plugins expand the functionality beyond these basics
● Once one project completes successfully in Jenkins, all future builds are automatic
Building
● Builds execute in an executor
● By default, Jenkins provides one executor per core on the build server
● Jenkins also has the concept of slave build servers
● Useful for building on different architectures
● Distribution of load
● Jenkins comes with basic reporting features
● Keeping track of build status
● Last success and failure
● “Weather” – Build trend
These can be greatly enhanced with the use of
● Pre-built plugins
● Unit test coverage
● Test result trending
● FindBugs, Checkstyle, PMD
As system administrators acquire more and more systems to manage, automating mundane tasks becomes increasingly important. Rather than developing in-house scripts, it is desirable to share a system that everyone can use, and to invest in tools that can be used regardless of one's employer. Certainly, doing things manually does not scale. Puppet was developed to help system administrators build and share efficient tools, removing the duplicated effort of everyone solving the same problems independently. It does so in two ways:
● It provides a powerful framework to simplify most of the technical tasks that sysadmins need to perform
● The system administrator's work is written as code ("infrastructure as code") in Puppet's custom language, which can be shared just like any other code
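As a small illustration of infrastructure as code, a Puppet manifest might declare the desired state of a web server like this (package and service names are hypothetical; on Debian-family systems the package would be `apache2` rather than `httpd`):

```puppet
# Hypothetical manifest: ensure Apache is installed, running, and
# enabled at boot. Puppet converges the node to this declared state.
package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}
```

Because the manifest describes state rather than steps, it can be applied repeatedly and shared across teams like any other code.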
Deployment to the cloud (AWS in this case study) is an evolving area; while many tools are available that deploy applications to nodes (machines) in the cloud, zero-downtime deployment is rare or non-existent.
In this section, we will look at this problem and propose a solution. The focus is on web applications, specifically the server-side applications that run on a port (or a shared resource).
In traditional software deployment environments, when we switch a node in the cloud from the current version to a new version, there is a window of time when the node cannot serve traffic. During that window, the node is taken out of traffic and, after the switch, brought back into traffic.
In a production environment, this downtime is not trivial. Capacity planning typically accommodates the loss of nodes by provisioning a few extra. The problem is amplified, however, when principles like continuous delivery and deployment are adopted, because deployments happen far more often.
An effective and non-disruptive deployment process should possess these two characteristics:
In addition, since we have already adopted DevOps best practices, there should be no manual intervention.
Suppose there are three nodes running Version1 and we are deploying Version2 to those nodes. This is how the lifecycle would look:
Every machine in the pool undergoes this lifecycle. The machine stops serving traffic right after the first step and cannot resume serving traffic until the very last step. During this time, the node is effectively offline.
For an organization of any size, many days of availability can be lost if every node must go offline during deployment.
So the more we minimize off-traffic time, the closer we get to instant, zero-downtime deployment. Yet many organizations still perform these version deployments manually, even though the process can be automated with a few tools to avoid manual intervention.
Now let us look into a few options for achieving this goal.
16.1 Manual Process
In this approach, suppose we have three nodes attached to an ELB. We need to deploy the new version to those nodes and switch the traffic to them. The load balancer fronts the application and is responsible for this switch upon request.
We manually detach a node from the ELB, configure it with the new software version, and attach it back to the ELB. The same process is followed for the other two nodes, after which the application is up and running on the new version.
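The detach/upgrade/reattach cycle described above can be simulated to show why the pool never loses more than one node of capacity; the ELB interaction here is stubbed out with log entries, not real AWS calls.

```python
# Simulation of a rolling deployment: each node is detached from the load
# balancer, upgraded, and reattached before the next node is touched, so at
# most one node is out of service at any time. Node names are illustrative.

def rolling_deploy(nodes, new_version):
    """Return the ordered log of deployment actions across the pool."""
    log = []
    for node in nodes:
        log.append(f"detach {node}")              # take node out of traffic
        log.append(f"install {new_version} on {node}")  # upgrade the node
        log.append(f"attach {node}")              # return node to traffic
    return log

log = rolling_deploy(["node1", "node2", "node3"], "v2")
for entry in log:
    print(entry)
```

Because each detach is immediately followed by an upgrade and reattach, two nodes always remain in traffic while the third is being updated.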
The main disadvantage of this approach is the time lost to manual intervention, which slows the rollout and degrades application response.
Avoiding the manual process in favor of a faster, automated switchover is the better approach.
To automate the complete process, we need a Puppet server set up with the Apache module, the module kept under SCM (Git in this case study), and a Jenkins server set up with the Git plugin.
Now create a Jenkins job configured with the Git repository URL; the job keeps pulling new changes (commits) from Git and builds by running Puppet on the instances behind the given ELB in rolling fashion.
Below is a screenshot of the Jenkins job that has been configured to automate the above scenario: