Quality, Speed and Jenkins



By Igor Dorovskikh, Test Automation Architect

One of the tech world’s greatest challenges is the ongoing war between speed and quality. With things changing at the speed of light, how does any company maintain the quality of its product? Here’s how our team attacked it.

Six months ago, we built out our testing infrastructure to help the iOS and Android engineering teams speed up release cycles without sacrificing quality. Before, we were producing builds manually without exact knowledge of which build/branch/changelist/environment we were testing. As a result, quality assurance (QA) was signing off on the wrong builds and releasing features that weren’t quite ready.

But that’s all changed! We’ve started exploring automated mobile QA testing, a widely debated topic. While both sides acknowledge that automation is quicker, skeptics are still unsure whether it’s as effective. Here’s how our deep-dive went:

Introducing Jenkins

One new addition that’s been particularly helpful is our “butler”, Jenkins, a new continuous integration (CI) system for the Android and iOS teams!

Before diving into the nitty-gritty, I want to thank Jeff Brys, Kyle Cutrer, Arthur Yu, Adam Lee, Jeff Strauss, Brian Haney, and Tom Lee for all of their hard work and dedication.

What is Jenkins doing for the Android Team?


1. After a developer creates a pull request (PR), we run the following jobs in parallel to check the quality of the newly created PR.

  • Android PR Builder: 900 unit tests

  • Android PR Acceptance: 60 functional UI tests running against stub data, in parallel on six Android phones.


2. If both checks pass, developers start the manual review process after which they merge the PR into the release branch.
3. After a successful merge, we trigger two Android-Distribute-Automated jobs to automatically deploy to Crashlytics from the develop branch and to the Google Beta store from the release branch. There is also an Android-Distribute-Manual job, whereby testers can generate new builds and specify the branch and environment they’d like to test against.

Besides the jobs included in the CI pipeline, we have screenshot jobs set up for both Android and iOS for the localization team, Android stress tests, and a job that deploys our tests to the AWS Device Farm for compatibility testing across more devices than we have access to in-house.
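
As a rough illustration, here's what kicking off such a Device Farm run could look like with the AWS SDK for JavaScript; every ARN below is a placeholder rather than our real configuration:

```typescript
import { DeviceFarmClient, ScheduleRunCommand } from "@aws-sdk/client-device-farm";

// Hypothetical sketch: schedule a compatibility run on AWS Device Farm.
// All ARNs are placeholders; in a real pipeline they would come from
// Jenkins job parameters and earlier upload steps.
async function scheduleCompatibilityRun(): Promise<void> {
  const client = new DeviceFarmClient({ region: "us-west-2" }); // Device Farm lives in us-west-2

  const { run } = await client.send(
    new ScheduleRunCommand({
      projectArn: "arn:aws:devicefarm:us-west-2:123456789012:project:example",
      appArn: "arn:aws:devicefarm:us-west-2:123456789012:upload:app-example", // the APK built by Jenkins
      devicePoolArn: "arn:aws:devicefarm:us-west-2:123456789012:devicepool:example", // devices we lack in-house
      name: "android-compatibility",
      test: {
        type: "INSTRUMENTATION", // Android UI test package
        testPackageArn: "arn:aws:devicefarm:us-west-2:123456789012:upload:tests-example",
      },
    })
  );

  console.log(`Scheduled Device Farm run: ${run?.arn}`);
}

scheduleCompatibilityRun().catch(console.error);
```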

What is Jenkins doing for the iOS Team?

  1. After a developer creates the PR, we run two jobs in parallel to check the quality of the newly created PR.

    a. iOS-PR-Unit -- running 300 unit tests

    b. iOS-Acceptance -- running 11 functional tests against stub data

  2. If both checks pass, developers start the manual review process, after which they merge the PR into the release branch.

  3. If the merge is successful, we run the following jobs:

    a. iOS-Regression -- runs 80 additional UI tests not included in iOS-Acceptance

    b. iOS-Create-Archive -- generates a distributable iOS application binary (IPA) file based on the “Tinder Enterprise” scheme. If the archive is successful, we trigger one additional job.

    c. iOS-Distribute-Archive -- if all tests pass, the build is greenlit for distribution. This job sends the new build straight to Crashlytics, with a changelog generated by pulling Git changes from iOS-Acceptance (a sketch of that changelog step follows this list).
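
To give a feel for the changelog step, here's a minimal sketch of generating one from Git history; the tagging convention is an assumption, not necessarily what our job does:

```typescript
import { execSync } from "node:child_process";

// Minimal sketch: build a changelog from the commits landed since the last
// distributed build. Assumes the job runs inside the repo checkout and that
// previous distributions are tagged (the tag convention here is hypothetical).
const lastTag = execSync("git describe --tags --abbrev=0").toString().trim();

// One line per commit: subject and author.
const changelog = execSync(
  `git log ${lastTag}..HEAD --pretty=format:"- %s (%an)"`
).toString();

console.log(`Changes since ${lastTag}:\n${changelog}`);
```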

How are we Ensuring Test Speed and Stability?

The answer is Stub 2.0 -- a Node.js server running hapi. It eliminates variables and dependencies, like latency and databases. It also allows for testing concurrency by keeping a separate session for each client under test. Both the Android and iOS tests have a server manager that sets a fixture before and/or during the test to tell the Stub server what response to send back, based on the scenario’s needs.
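
To make that concrete, here's a minimal sketch of a Stub-2.0-style server; the fixture endpoint, session header, and payload shape are illustrative assumptions rather than our actual implementation:

```typescript
import * as Hapi from "@hapi/hapi";

// What a test can tell the stub to return for a given path.
interface Fixture {
  status?: number;
  body: unknown;
}

// One fixture map per client session, so concurrent test runs stay isolated.
const sessions = new Map<string, Map<string, Fixture>>();

const sessionId = (req: Hapi.Request): string =>
  String(req.headers["x-session-id"] ?? "default");

const init = async () => {
  const server = Hapi.server({ port: 4000 });

  // Tests call this first to tell the stub what to return for a given path.
  server.route({
    method: "POST",
    path: "/fixtures/{path*}",
    handler: (req, h) => {
      const fixtures = sessions.get(sessionId(req)) ?? new Map<string, Fixture>();
      fixtures.set(req.params.path, req.payload as Fixture);
      sessions.set(sessionId(req), fixtures);
      return h.response().code(204);
    },
  });

  // Every other request is answered from the session's fixtures.
  server.route({
    method: "*",
    path: "/{path*}",
    handler: (req, h) => {
      const fixture = sessions.get(sessionId(req))?.get(req.params.path);
      if (!fixture) return h.response({ error: "no fixture set" }).code(404);
      return h.response(fixture.body as object).code(fixture.status ?? 200);
    },
  });

  await server.start();
  console.log(`Stub server running at ${server.info.uri}`);
};

init();
```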

Whenever we add a new feature or refactor an existing one, Stub 2.0 allows us to quickly model new and existing endpoints. We can also configure these mocks to return different data, scenarios, and even error states. Meeting together to build these stubs allows our team to get ahead of any API changes or discrepancies in the sample requests/responses of our documentation. We couldn’t imagine new Auth testing projects without Stub 2.0.
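
On the test side, the server manager pattern boils down to something like this sketch (the URL, session header, and endpoint name are hypothetical):

```typescript
// Minimal sketch of a test-side server manager: point the stub at the
// response we want before the test runs.
const STUB_URL = "http://localhost:4000";

async function setFixture(
  session: string,
  path: string,
  fixture: { status?: number; body: unknown }
): Promise<void> {
  // Node 18+ ships a global fetch; no extra dependency needed.
  await fetch(`${STUB_URL}/fixtures/${path}`, {
    method: "POST",
    headers: { "content-type": "application/json", "x-session-id": session },
    body: JSON.stringify(fixture),
  });
}

// Example: force an error state so a UI test can exercise the retry flow.
async function example(): Promise<void> {
  await setFixture("ui-test-42", "v2/profile", {
    status: 500,
    body: { error: "server_error" },
  });
}

example().catch(console.error);
```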

Better Tests, Better Results

Automated mobile QA testing is still being debated. Of course, only time will tell which side is right. We’ll continue to learn and experiment, but for now, we’re confident it has given us both speed and quality -- the best of both worlds.