2014-08-01

What every developer should know about testing - part 3

For the pyVmomi folks in a hurry, this video covers the big shift in the project, and the code from the video is here. This week I'll finally dig into some detail on fixture-based testing.

Summary

If you don't bother reading more from part 1 and part 2 in this series, the one thing to take away is: testing must be stand-alone, automated, and deterministic no matter what you are writing.
No matter what you are doing, the bulk of your effort should go toward finding ways to write as little code and as few tests as possible while still covering the domain. This is a much harder philosophy to follow than 'cover all the lines' but it is much more robust and meaningful.

Good code coverage should be the outcome of good testing, not the goal. Because testing is hard and getting it wrong is bad, you should write tests as sparingly as possible without sacrificing quality. Finally, unit boundaries are APIs, and you should write tests that reflect the unit boundary, which is an art in and of itself.

In part 1 I covered why your testing is bad. In part 2 I covered what stubs, mocks, and fixtures are. In part 3 we'll get specific and cover how to build a fixture and what it represents as a programming technique.

What your code is determines what your test is.

Virtually all software built today is going to fall into this pattern:
core system -> [your code] -> user's code
Virtually everyone who writes software writes it in a space sandwiched between the software they use and the software that uses them. Interesting things happen when you move up the stack far enough that 'user's code' becomes actual human interaction, or you move down the stack far enough that 'core system' means physics.

In most projects, however, code that performs actual human interaction means code written for a web app or some other kind of GUI. In these special cases you need a tool like Selenium, or some other 'bot like Sikuli, to drive your tests. In that case the far right of my diagram, "user's code", is simulated by the test framework. It's ultimately code at the end of the day, and the code you write for test is something you create anticipating how that middle category, "your code", is going to be used.
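For a concrete taste, here's what driving a browser as the simulated user looks like with Selenium's Python bindings. The page, field names, and expected text below are invented for illustration, not from any real app:

    from selenium import webdriver

    # The test plays the part of the human user: open a page, fill in
    # a form, click, and assert on what comes back.
    driver = webdriver.Firefox()
    driver.get('http://example.com/login')
    driver.find_element_by_name('username').send_keys('my_user')
    driver.find_element_by_name('password').send_keys('my_password')
    driver.find_element_by_name('login').click()
    assert 'Welcome' in driver.page_source
    driver.quit()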

The more familiar case, where you sit between an end-developer (sometimes yourself, later in the project's lifecycle) and a core system of libraries that constitute the framework, runtime, or operating system your code is built on, is what most unit-testing philosophies are built around. This is the world that is most comfortable for mock- and stub-based testing. It's the world of TDD and other methodologies. But what happens when you get close to the metal, when the things we need to mock are on the network or some other physical or transient infrastructure?

This is the case where I advocate the use of fixtures. Depending on what's on the other side of your code, the right tool to craft a fixture will differ. I've worked in environments where fixtures had to be physical because we were testing microcode. Most of the time these days I need a network-based fixture.

I am a big fan of vcr, vcrpy, and betamax. These are tools for recording HTTP transactions to create a fixture. In generic terms, a testing fixture helps you fast-forward a system into the state you need for testing. For our specific purposes, the fixtures replace the need for a network and the related network servers.
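In its simplest form, vcrpy wraps any HTTP call in a context manager. The sketch below (the URL and cassette name are placeholders, not from this project) records the real response on the first run and replays it on every run after that:

    import requests
    import vcr

    # First run: performs the GET and records it into the cassette file.
    # Later runs: replays the recorded response; no network required.
    with vcr.use_cassette('fixtures/example.yaml'):
        response = requests.get('http://example.com/api/status')
        print(response.status_code)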

Recording transactions

BTW: The source code for this test is available on GitHub now.

Here's the sample interaction we want to support. It's a simple set of interactions... we log in, get a list of objects, loop through them, and put things away.

    # Assumes: import atexit; from pprint import pprint;
    # from pyVim import connect; from pyVmomi import vim
    def test_basic_container_view(self):
        # Log in and register cleanup so the session is closed even if
        # the test dies part way through.
        si = connect.SmartConnect(host='vcsa',
                                  user='my_user',
                                  pwd='my_password')
        atexit.register(connect.Disconnect, si)

        content = si.RetrieveContent()

        # Get a container view that recursively lists every Datacenter
        # under the root folder.
        datacenter_object_view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datacenter], True)

        for datacenter in datacenter_object_view.view:
            datastores = datacenter.datastore
            pprint(datastores)

        # Views are server-side objects; put things away when done.
        datacenter_object_view.Destroy()

As written, this test goes over the network nine times. Without a tool like vcrpy, running this test in any automated way would require a whole pile of cloud infrastructure, or at least a smartly built simulator. That means special setup work to handle edge cases like faults or large inventories. It requires the construction of an entire mockup of the production scenario we want to test. That mockup could itself be automated; but if the tool that performs such automations is literally the thing we're writing, how do we develop and test those automations? That's an extremely time-consuming and wasteful yak-shaving exercise.

Fortunately, in our project a simple one-liner can remove the need to always have a service or simulator running somewhere for every test.

    @vcr.use_cassette('basic_container_view.yaml',
                      cassette_library_dir=fixtures_path, record_mode='once')

This decorator allows us to record the observed transactions into a YAML file for later consumption. We can't completely remove the need for a simulator or service, but we can remove the need for such beasts during test.
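Put together, the decorated test looks something like the sketch below. The module layout and the fixtures_path value are assumptions; the decorator and record_mode come straight from vcrpy:

    import atexit
    import os
    import unittest
    from pprint import pprint

    import vcr
    from pyVim import connect
    from pyVmomi import vim

    # Assumed layout: cassettes live in a 'fixtures' dir beside the tests.
    fixtures_path = os.path.join(os.path.dirname(__file__), 'fixtures')

    class ContainerViewTests(unittest.TestCase):
        @vcr.use_cassette('basic_container_view.yaml',
                          cassette_library_dir=fixtures_path,
                          record_mode='once')
        def test_basic_container_view(self):
            # ... same body as shown above; with record_mode 'once' the
            # first run records the transactions, and every later run
            # replays them from the YAML cassette instead of the network.
            ...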

Modifying Recordings

Once you have the capacity to record network events into a fixture, you can tamper with those recordings to produce new and unique scenarios that are otherwise hard to reproduce. That means you can rapidly iterate through development cycles on situations that are really hard to get the core-system code on that remote server into.
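Because the cassette is plain YAML, a few lines of script can clone a recording into a brand-new fixture that simulates, say, a server fault you could never trigger on demand. A minimal sketch, assuming the interactions/response/status layout vcrpy uses for its cassettes (verify against your own recordings):

    import yaml

    # Load a recorded cassette...
    with open('fixtures/basic_container_view.yaml') as f:
        cassette = yaml.safe_load(f)

    # ...rewrite the last recorded response into a 500 fault...
    cassette['interactions'][-1]['response']['status'] = {
        'code': 500, 'message': 'Internal Server Error'}

    # ...and save it as a new fixture for a test of the error path.
    with open('fixtures/container_view_fault.yaml', 'w') as f:
        yaml.safe_dump(cassette, f)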

You "shave the yak" once to get a VCSIM into the state that represents the scenario you want. Then you script your interactions, anticipating a use case your end-developer would want to exercise. Finally, using vcrpy decorators, you record the HTTP interactions and preserve them so that you can reproduce the use case in an automated environment.
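One wrinkle worth knowing: record_mode='once' will still record if the cassette file is missing. For automated runs where touching the network should be an error, vcrpy offers a replay-only mode; a sketch, reusing the fixtures_path assumed above:

    import vcr

    # Replay-only VCR for CI: a missing cassette or an unmatched request
    # fails the test instead of silently going over the network.
    ci_vcr = vcr.VCR(cassette_library_dir=fixtures_path,
                     record_mode='none')

    @ci_vcr.use_cassette('basic_container_view.yaml')
    def test_basic_container_view_replay_only():
        ...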

Once you have that fixture you can develop, regression test, and refactor fearlessly. You can synthesize new fixtures approximating use cases you might never be able to reliably achieve. And that's how you take something very chaotic and turn it into something deterministic.

This is, of course, predicated on the idea that you have a VCSIM set up for testing; more on that next time...