Project Background
As a delivery organization, our traditional approach to the installation, deployment and testing of our products has been largely influenced by the skill sets and experience of the regional delivery teams. Although best practices are in place, they are not always enforced, often leading to inconsistency in delivery from region to region and to code/configuration with little unit test coverage.
The aim of a consolidated CICD proposal for regional delivery is to provide the structure and tooling to define and automate the build, packaging, testing and deployment of software using a DevOps methodology, tailored to the Powercurve product suite.
The Powercurve product suite encompasses many different modules and applications (API microservices, SPA UX, batch modules, decisioning, studio thick clients, AM service providers, etc.), so installation and deployment can be quite complex.
What was my role
My role on this initiative was DevOps engineer & technical architect: detailing the as-is process for traditional delivery and defining the CICD process to target the challenges and weaknesses of the traditional approach, thereby speeding up delivery and improving the quality of deployed solutions.
What I learnt on this project
Designing a framework to address traditional delivery challenges
Evaluating DevOps tools for source control, test automation, infrastructure and deployment
Designing and developing Jenkins pipelines
Adapting a Cucumber/Selenium framework to the Powercurve platform
Developing Ansible tooling into a reusable global building block
Traditional delivery challenges
Traditional delivery often relied on manual deployments and manual testing, along the lines of the following process:
Developers perform unit testing in the configuration studio. However, this testing is limited because many parts of the system can't be tested in the studio, e.g. runtime components such as screen interaction, web services, etc.
Once all the developers have committed their changes, the different deployment artifacts need to be generated from the repository.
When the artifacts are generated, the technical team normally has to take them and manually perform the deployment into the development environment. As it is done manually, this is time-consuming and error-prone.
Once the solution is deployed in development, the delivery team performs the tests (unit, integration, user) manually.
After all tests are done, the solution can be promoted to other environments (UAT, PRE, PROD). Again, this process is traditionally performed manually by regional teams.
The areas described in the following sections were targeted for inclusion in the CICD framework to tackle these issues.
CICD Overview
Continuous integration is a coding philosophy and set of practices that drive development teams to implement small changes and check code into version control repositories frequently. Because most modern applications require developing code across different platforms and tools, the team needs a mechanism to integrate and validate its changes. The technical goal of CI is to establish a consistent, automated way to build, package and test applications. With a consistent integration process in place, teams are more likely to commit code changes frequently, which leads to better collaboration and software quality.

Continuous delivery picks up where continuous integration ends. CD automates the delivery of applications to selected infrastructure environments. Most teams work with multiple environments other than production, such as development and testing environments, and CD ensures there is an automated way to push code changes to them. CD automation then performs any necessary service calls to web servers, databases and other services that may need to be restarted or require other procedures when applications are deployed.
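To make the CI half of that flow concrete, a minimal declarative Jenkins pipeline could look like the sketch below; the build commands and artifact paths are illustrative placeholders rather than the project's actual configuration.

```groovy
// Minimal declarative Jenkinsfile sketch of the CI flow described above.
// Build commands and artifact paths are illustrative placeholders.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm                      // pull the latest committed changes
            }
        }
        stage('Build & unit test') {
            steps {
                sh 'mvn -B clean verify'          // compile and run the unit tests
            }
        }
        stage('Package') {
            steps {
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
    }
}
```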
Repository server
In evaluating a repository server, some key requirements were identified:
Standard repository functionality: versioning, branching, etc.
A command line interface, in order to script the automated upload of configuration changes to the repository
A webhook from the repository to trigger the orchestration pipelines.
Several repository servers were evaluated against these requirements: GIT servers such as Bonobo and Bitbucket, the Artifactory repository manager, and Amazon S3 storage.
For delivery projects deploying to AWS, S3 was the logical choice; Bonobo was a useful alternative where licensing implications were a concern. The command line interface requirement can then be met by scripting against the chosen repository, as sketched below.
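For the S3-backed option, a minimal upload script might look like the following; the bucket name, local paths and versioning convention are assumptions for illustration only.

```bash
#!/usr/bin/env bash
# Illustrative upload script: bucket name, paths and version argument are assumptions.
set -euo pipefail

VERSION="${1:?usage: upload_artifacts.sh <version>}"
BUCKET="s3://example-powercurve-artifacts"   # hypothetical per-project bucket

# Push the generated deployment artifacts and configuration for this version;
# the repository-side notification then triggers the orchestration pipeline.
aws s3 sync ./dist "${BUCKET}/deployment-artifacts/${VERSION}/"
aws s3 cp ./config/playbooks.tar.gz "${BUCKET}/configuration/${VERSION}/"
```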
At least 3 repositories (with a recommended 3 branches: DEV, Release and Master) are required per project:
Deployment artifacts
Configuration (playbooks, jenkinsfile etc)
Automated tests
Deployment & install
This area covers the main orchestration and the install/deployment across environments. Putting this in place required adopting orchestration software and investigating the automated creation of the underlying infrastructure for environments.
The key components used here are:
Jenkins to orchestrate the pipelines
Ansible to deploy the solution artifacts (integrating with the GIT server to retrieve the latest committed artifacts)
Terraform to create the environment infrastructure when the solution is deployed on Amazon
The pipeline is triggered by a commit of solution artifacts to the development branch on the GIT server.
The solution artifacts are not deployed to DEV unless the automated tests, executed in a Terraform-created environment, complete without error, as in the pipeline sketch below.
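A simplified sketch of such a pipeline follows; the stage layout, tool invocations, file paths and teardown step are assumptions for illustration, not the project's actual Jenkinsfile.

```groovy
// Illustrative Jenkinsfile sketch: started by the GIT webhook on a commit to the
// development branch, it provisions a Terraform test environment, runs the automated
// tests, and only deploys to DEV if they pass. Paths and playbook names are assumed.
pipeline {
    agent any
    stages {
        stage('Provision test environment') {
            steps {
                dir('terraform') {
                    sh 'terraform init -input=false'
                    sh 'terraform apply -auto-approve -input=false'
                }
            }
        }
        stage('Run automated tests') {
            steps {
                sh 'mvn -B test'   // Cucumber/Selenium suite from the test repository
            }
        }
        stage('Deploy to DEV') {
            steps {
                // only reached when all previous stages succeeded
                sh 'ansible-playbook -i inventories/dev deploy.yml'
            }
        }
    }
    post {
        always {
            dir('terraform') {
                sh 'terraform destroy -auto-approve'   // tear down the test environment
            }
        }
    }
}
```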
Automated testing
There are several frameworks which can be used to automate testing for the Powercurve platform. Simple API testing can use SOAPUI projects, dynamically setting parameters for new environments; however, for flexibility and to automate screen testing we opted for a Cucumber/Selenium framework with BDD feature files written in Gherkin.
This provided full end-to-end coverage of:
Creation of data sets via the product APIs
Service provider validation via SP APIs
User login to screens via Selenium to test web modules with varying scenarios
Ease of creating additional functional tests tailored to the solution via Gherkin features; an additional test took only minutes to create and commit to the test repository.
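As an indication of how such a feature reads, the sketch below uses invented feature, scenario and step wording rather than the project's actual test pack.

```gherkin
# Illustrative feature file only; the scenario and step wording are invented.
Feature: New application decisioning

  Scenario: Low-risk application is accepted
    Given an application created via the product API with risk grade "A"
    When the application is submitted through the web screens
    Then the returned decision is "ACCEPT"
    And the service provider call is validated via the SP API without errors
```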
The test suite is held in the project repository and built via Maven, orchestrated through Jenkins. The Jenkins pipeline continues as long as no critical errors are found and only deploys to the project environment after the key automated tests have passed.
Target environment deploy
Many different target environments need to be supported for Powercurve, so deployment needs to handle these combinations:
DB Servers: Oracle, SQL Server, PostgreSQL
OS: RHEL and Windows
App server: Tomcat, WebSphere and WebLogic
Java: Oracle JRE, Zulu
As part of the CICD design, we created standardized Ansible roles that can deploy to several target hosts/tiers using Ansible inventory files and flexible Jinja templates. Via Jenkins, we designed the pipelines to be able to deploy across DEV, SIT and UAT once the automated tests completed successfully, depending on the phase the project was in (build, test, UAT).
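A simplified sketch of the inventory-driven approach is shown below; the group names, host names and variable names are illustrative assumptions rather than the actual global building blocks.

```yaml
# inventories/dev/hosts.yml - illustrative multi-tier inventory (names are assumptions)
all:
  children:
    app_servers:
      hosts:
        dev-app-01:
      vars:
        app_server: tomcat        # or websphere / weblogic
        java_vendor: zulu         # or oracle_jre
    db_servers:
      hosts:
        dev-db-01:
      vars:
        db_engine: postgresql     # or oracle / sqlserver
```

The standardized roles are then applied per group from a playbook (for example ansible-playbook -i inventories/dev deploy.yml), with the Jinja templates resolving the per-target values at deploy time.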