Base22 has been around for more than 10 years, and we have team members who have been chopping code for a lot longer than that. As a technology service provider, we have been able to develop a set of proven practices and toolsets around DevOps (software development/operations). The emergence of cloud-based tools and infrastructure gives us a great deal of flexibility in how we develop and deploy applications.
Our clients are now asking us for help with their emerging DevOps discipline, and it really becomes a two-way street: we share lessons learned and collectively grow our DevOps know-how. An additional catalyst for our DevOps approach is the development of our Carbon LDP application platform. Base22’s product development, coupled with our client application development work, has accelerated our DevOps adoption.
Here is a snapshot of our current DevOps point of view. Our story will continue to evolve, but for now, we wanted to share where we are with the hope that this information might be helpful to you.
DevOps Components
At Base22, we group our tools and processes into the following components.
- Collaboration and Project Planning
- Source Control
- Build Automation
- Automated Testing
- Automated Deployment
- Environment Health Management
Let’s drill into each of these components.
Collaboration and Project Planning
There are a variety of toolsets to facilitate collaboration and project planning. Here are some of our favorites and how we use them.
| Toolset | How we use it |
| --- | --- |
| JIRA (from Atlassian) | Web-based tool for issue/task tracking on ALL projects<br>Searchable knowledge base across all engagements<br>We can update JIRA tasks by mentioning their IDs in source code commit messages |
| Confluence (from Atlassian) | Web-based wiki repository for ALL projects<br>Single source of truth for project-related documentation |
| Slack | Instant messaging and collaboration between internal and client team members<br>Easy sharing of code snippets<br>Great search capability |
| GitHub | Hosts both public and private Git repositories<br>Issue tracking<br>Pull requests and code review; GitHub’s UI excels at this<br>Project documentation, using GitHub Pages for the JS SDK; we find that keeping the documentation inside each repository (alongside the code) works best because it provides better context and change tracking<br>GitHub Projects, with boards we configure to meet our needs |
| WaffleIO | An automated project management tool powered by GitHub issues and pull requests<br>We track GitHub issues across multiple projects and the tool automates their status (when a developer creates a branch with the issue number, Waffle sets the issue to “In Progress”) |
Source Control
There are a number of options that address source control requirements. Here are our current tools of choice.
| Toolset | How we use it |
| --- | --- |
| Bitbucket (from Atlassian) | Our primary source code repository, which we share with all our clients<br>Integration with other Atlassian products and other tools (e.g., updating JIRA tasks from source code operations) is a real plus |
| GitHub | Hosts both public and private Git repositories<br>Pull requests and code review; GitHub’s UI excels at this |
Build Automation
| Toolset | How we use it |
| --- | --- |
| Docker Cloud | Docker Cloud helps us automate the build process, starting with source code and ending with Docker images<br>We use it to automatically build new images each time we tag a commit on a tracked Git repository<br>Example: we tag a commit on the platform as v1.0.1; Docker Cloud is notified of the change and triggers its automated build process; once it completes, an image is published to Docker Hub (the public Docker image registry) with the same tag |
| IBM Bluemix | We have used Bluemix’s pipelines to automate building and testing; Bluemix offered a lot of power but had some shortcomings, so we will revisit it in the future (for example, Bluemix didn’t support triggering different pipelines from different Git branches) |
| Jenkins | We use Jenkins to automatically pull the latest source code from a specific branch, build software artifacts, and deploy to integration and staging environments, depending on conditions the code has to meet (e.g., flags that indicate whether a build is for debugging/testing or is ready for staging) |
Automated Testing
Historically, testing has been incredibly time-consuming, which leads organizations to take shortcuts or skip testing steps entirely to meet a schedule, and that produces less-than-optimal outcomes when a solution is deployed to production. There are many options for automating testing. Here are some of our favorites.
| Toolset | How we use it |
| --- | --- |
| Selenium | |
| SortSite – Section 508 compliance | We have used this testing suite to check for Section 508 compliance as well as other website issues |
| JUnit | An option for unit-testing our Java code (see the sketch after this table) |
| TestNG | Another option we have used to test our Java code |
| Travis CI | We use this cloud service to test public repositories; each time a pull request is created, an automated testing phase is triggered, and when it finishes, Travis CI flags the PR’s commit as “Failing” or “Passing”<br>Maintainers can then trust (or not) that the pushed code is working, or at least passing the tests |
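To make this more concrete, here is a minimal sketch of what one of these automated tests can look like when JUnit drives the test lifecycle and Selenium WebDriver drives the browser. The class name, URL, and assertions are hypothetical placeholders, and the sketch assumes JUnit 4 plus Selenium WebDriver with a ChromeDriver binary available on the test machine; it is not taken from one of our client projects.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * Minimal browser smoke test sketch: JUnit manages setup/teardown,
 * Selenium WebDriver automates the browser. Names and URL are placeholders.
 */
public class HomePageSmokeTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        // Assumes a ChromeDriver binary is on the PATH of the build agent.
        driver = new ChromeDriver();
    }

    @Test
    public void homePageLoadsAndHasATitle() {
        driver.get("https://example.com/"); // placeholder URL
        String title = driver.getTitle();
        assertTrue("Page title should not be empty",
                title != null && !title.isEmpty());
        // Throws (and fails the test) if the expected element is missing.
        driver.findElement(By.tagName("body"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```

A TestNG version would look almost identical, swapping JUnit’s annotations and assertions for TestNG’s. In a pipeline such as the Travis CI or Jenkins setups described above, a test like this typically runs as part of the build, so a failing assertion is what marks a commit or pull request as “Failing.”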
Automated Deployment
| Toolset | How we use it |
| --- | --- |
| Docker Cloud | We configure Docker Cloud to deploy containers, based on the images it builds, to our environments; aside from just deploying the containers, it also orchestrates the multi-node services we have deployed and manages the container swapping |
| IBM Bluemix Pipelines | We have used Bluemix pipelines to automate deployment of our code to our public instances<br>After building Docker images, Bluemix Pipelines performed a red-black deployment to our servers |
For More Information
We would love to hear from you. If you have DevOps lessons that you would like to share, or perhaps you would like to hear more about our experiences, please contact us.