Keeping Front End / Back End Test Parity

One of the best aspects of being a Rails developer is the community’s attitude toward testing (always test, duh!). An extensive test suite keeps a Rails app relatively low on bugs and ensures the best experience for engineers and users. This attitude translates easily to a front end framework like React or AngularJS, which is the focus of this blog post. What is not easy, however, is ensuring proper integration between the front end and back end while avoiding costly integration tests. I would like to propose an approach to this integration point that I haven’t seen before: fixtures for the front end’s HTTP integration points, generated by the back end test suite.

The Problem

Writing integration tests that exercise the entire application (consuming real HTTP endpoints and data) is tricky business. Keeping the entire codebase mounted and running in every environment can lead to slow test suites that sap developer time between runs. There has to be an alternative that is “just as good” as full integration tests at preventing bugs and regressions.

One alternative that seems obvious, and is well supported by resources online, is mocking HTTP interaction with fixture files. For instance, when the test suite requests /users.json, return a pre-determined JSON blob that contains the list of users. Ideally, this JSON reflects the current state of the back end API 100% of the time. What happens if the JSON no longer reflects the state of the back end? Bugs get introduced and users are affected. What happens when a developer wants to regenerate a fixture that was originally created with data they don’t have locally? They will make concessions, and potential bugs creep in.
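As a concrete (and purely hypothetical) example, a front end test might stub the request for /users.json with a checked-in fixture file shaped like this:

[
  { "id": 1, "name": "Ada Lovelace", "email": "ada@example.com" },
  { "id": 2, "name": "Grace Hopper", "email": "grace@example.com" }
]

The fixture works beautifully right up until the back end changes the shape of that response and nobody remembers to update the file.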

I started thinking about how to solve this problem of keeping the back end and front end in sync. The solution should have these properties:

  • Generate fixtures based on API endpoints.
  • Be up to date 100% of the time, without the developer having to think about it.
  • Alert a back end developer when they change something that could affect the front end.
  • Be reproducible on all developer machines: if two developers run it independently, they get the same result.

Solution

My solution involves capturing the HTTP responses produced by test cases and writing them out to disk as files that can be consumed by a front end test suite. The test cases themselves generate the fixtures, so when a developer makes a change they only have to run the back end suite before the front end suite to ensure everything is up to date. One great property of test suites is that their data setup is consistent across all developer machines, although randomly generated data must still be accounted for.
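Conceptually, the capturing can live in a shared RSpec hook that runs after each tagged example. The following is only a rough sketch of the idea, not the gem’s actual implementation; it assumes Rails controller specs (where response is available) and a made-up spec/fixtures/generated output directory:

require "fileutils"

RSpec.configure do |config|
  config.after(:each) do |example|
    # Only examples tagged with fixture metadata write a file.
    fixture = example.metadata[:fixture]
    next unless fixture && respond_to?(:response) && response

    # Write the captured response body where the front end suite can read it.
    path = Rails.root.join("spec", "fixtures", "generated", fixture)
    FileUtils.mkdir_p(File.dirname(path))
    File.write(path, response.body)
  end
end

The gem described below goes further: it compares the freshly captured output against the existing file, so an unexpected change fails the test instead of silently rewriting the fixture.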

In RSpec/Rails, a developer can write the following spec, which will generate the proper fixture:

describe "GET index" do
  it "is successful", fixture: "widgets/index.json" do
    get :index
    expect(response).to be_success
  end
end

The developer does not need to think about the fixture itself, only about the particular test case they are looking to mock out. If the generated fixture changes between runs, their test will fail and they will know that they might be introducing a bug. Deleting the file regenerates it on the next run.

One caveat is that data can easily change between invocations of the same test. For example, *_id, created_at, and updated_at are all fields that can change on every run. As a solution, I propose allowing certain keys to be ignored (this is implemented out of the box in the rspec-rcv gem), as sketched below.
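For illustration, a global configuration along these lines could tell the comparison to skip volatile keys. The option name here is an assumption on my part; consult the gem’s README for the exact API:

# Hypothetical configuration sketch; the actual rspec-rcv option names may differ.
RSpecRcv.configure do |config|
  config.ignore_keys = %w[id created_at updated_at]
end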

Gem

I’ve put this together into a gem called rspec-rcv (VCR reversed). If you have seen this testing paradigm before, let me know! I couldn’t find anything in my research or in discussions with experienced software engineers. Who knows, maybe it will help inspire a better name?

The gem lives at https://github.com/SalesLoft/rspec-rcv and is available for download on rubygems.org.
