
Tracking a Moving Target - Visualising Weekly Releases

This article was published on May 26, 2020

Here at NewVoiceMedia we deploy to live every week. This is hard, but not impossible.

This post provides an overview of our release process and focuses on a particular problem we’ve solved recently: how to visualise what’s in your release when you have multiple teams continuously committing changes to a single code base.

Some context

Our product is a multi-tenant, cloud contact centre. It's complex and has evolved significantly over ten years, so it contains the inevitable legacy parts and some technical debt. We practice good habits such as Continuous Integration (CI), Behaviour Driven Development (BDD), Test Driven Development (TDD) and pairing. We don't use code branches (all code changes are made on the main line), even though a story's development may span one or more releases. Development work is spread across multiple pizza-sized feature teams, who each gang up on one or more stories. This means it's not uncommon to have 5-15 stories in progress at any one time.

The release train

Twice daily, a release candidate "train" is made ready for departure by our CI system. Each train will already have stopped at some automated testing “stations” (unit, integration and UI). We choose the latest and greatest train on Thursday and roll it onward to the Staging environment station. From Thursday through to Monday, feature teams then complete the verification of any stories marked as In Test, whilst the testers perform a half-day manual regression test to check the more critical parts that aren't covered by automated tests.

If the candidate is still "green" by Monday, its next stop is pre-production, where we eat our own dog-food (or as some people say, drink our own champagne). For us, this means switching NewVoiceMedia's office onto the release version for two days, so all calls in and out of the business flow through it on a special dog-food platform. It's also given a good thrashing in a dedicated performance-test environment.

Finally, come Tuesday afternoon, the release team (drawn from a cross-section of Dev-Ops - we’ve recently combined Development and Operations into a single organisation, namely Dev-Ops) review how the candidate has performed throughout its testing (all our environments are monitored using tools such as New Relic and Papertrail). If the team are happy with the candidate's performance, it's then rolled out across our live environment that evening. We continue to closely monitor its health, so we can quickly roll back to the previous version should any problems occur.

The challenges

We've had to overcome many challenges to get this far, including:

  • How to check in changes early and often without breaking the behaviour on live?

  • What to test, when and how?

  • How to know the health of a candidate?

This article focuses on a challenge we've recently addressed: how to visualise what changes are in a release, or put more simply: what is the release delta? It's vital that the release team and other stakeholders in the business can easily see all the work items (stories and defect fixes) contained in the release, and know what their state is (In Progress, In Test, or Ready for Live). Having multiple teams with multiple stories on the go has made this visualisation quite a challenge to create, particularly when relying on any kind of manual process to collate it.

Early solutions

Our initial answer to the challenge of visualising the release delta was a semi-automated one. Developers would add the details of each story to a source-controlled release note (a text file). Each time the build process produced a new release candidate, it would also update the file with a line containing the release version number. This worked for about a year, but was error-prone (to err is human). The next approach was even more manual: each time a work-item reached Ready for Live, the Tester would transfer the Story Card to a release wall. It sounded like a good idea, but also failed, because it was a very manual, easy-to-forget step.

A lasting solution

To create a lasting solution, we first considered what it was that defined the contents of the release train. As the diagram below shows, the behaviour of an application is normally defined in two places: an Agile Project Management Tool (APMT), which holds the application’s features, stories and defects; and a source control system, which holds the application’s source and configuration files (and often, when using tools such as Chef or Puppet, the definition of the host environments).

Relying solely on the APMT gives an incomplete picture for two reasons: firstly, it’s relying on manual intervention to correctly define what work items are within a release; and secondly, it’s hard to correctly map and visualise all the product areas (e.g. reporting, user settings) and cross-cuts (e.g. security, logging) affected by the work items.

Our solution was therefore to combine both definitions, but use the source control system as the starting point, because changes to a software product are ultimately defined by the revision history of its source code. The result is an automated release note that presents the release candidate's source-code changes, summarised by work-item and product area. It also highlights source-code changes that aren't tied to a work-item (of which you'd hope there would be none, of course ;-) !)

The process was thus:

  1. Identify source changes

  2. Identify work-items (stories and defects - each source change should have a comment that identifies the relevant work-items)

  3. Identify product areas (by analysing the paths of the modified source files and matching them to an area)

  4. Group by work-item

  5. For each work-item, summarise product areas and authors

  6. Group by:

    1. What’s ready for live

    2. What’s in progress

    3. Untracked work
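The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the actual NewVoiceMedia implementation: the commit records, work-item ID format (`VNM-123`-style, as used by tools like Jira) and path-to-area rules are all assumptions made for the example. Real input would come from the source control history between two release versions (e.g. `git log`).

```python
import re
from collections import defaultdict

# Hypothetical commit records: (message, author, changed file paths).
# In practice these would be read from the revision history between
# the previous release version and the candidate.
COMMITS = [
    ("VNM-101 add call recording toggle", "alice", ["src/reporting/calls.py"]),
    ("VNM-101 tests for recording toggle", "bob", ["tests/reporting/test_calls.py"]),
    ("VNM-202 harden login flow", "carol", ["src/security/auth.py"]),
    ("tidy whitespace", "dave", ["src/security/auth.py"]),  # no work-item!
]

# Hypothetical mapping from source-path prefixes to product areas.
AREA_RULES = [
    ("src/reporting", "Reporting"),
    ("tests/reporting", "Reporting"),
    ("src/security", "Security"),
]

# Assumed work-item ID format, e.g. "VNM-101".
WORK_ITEM = re.compile(r"\b[A-Z]+-\d+\b")


def area_for(path):
    """Step 3: match a modified file's path to a product area."""
    for prefix, area in AREA_RULES:
        if path.startswith(prefix):
            return area
    return "Unknown"


def release_delta(commits):
    """Steps 1-5: group source changes by work-item and, per
    work-item, summarise the product areas and authors involved."""
    delta = defaultdict(lambda: {"areas": set(), "authors": set()})
    for message, author, paths in commits:
        # Step 2: extract work-item IDs from the commit comment;
        # changes with no ID surface as untracked work (step 6.3).
        items = WORK_ITEM.findall(message) or ["UNTRACKED"]
        for item in items:
            delta[item]["authors"].add(author)
            delta[item]["areas"].update(area_for(p) for p in paths)
    return dict(delta)
```

The final grouping into Ready for Live / In Progress / Untracked (step 6) would then come from looking each work-item ID up in the APMT; the untracked bucket is already visible here as the `"UNTRACKED"` key.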

Creating this solution has been a big help to our release team and other stakeholders, as they can now easily see what changes exist between any two versions of our product. It's also removed some frustrating manual steps from the process, which the Developers and Testers are very happy about! This kind of visualisation is just a small part of what we'd like to see, but that's for another post...

Lyndsay Prewer
