Life, Teams, and Software Engineering

Category: mocking

Revisited: Reducing Code Duplication in RhinoMocks Tests

I’ve been wanting to come back to this for some time now but, quite frankly, I’m a bit ashamed to even look at it again. I’m sure that by now I’m far enough away from it to be able to observe my work as if it were someone else’s, and that’s exactly how I’m going to structure this post.

About a year and a half ago I wrote a post titled Reducing Code Duplication in RhinoMocks Tests that now makes me want to scratch my eyeballs out. Take a look, it will make you feel better about yourself :). I suppose there would be no progress without mistakes, so I’m revisiting it to go over what I (and I’m sure most of you) see as being wrong with the approach described in that post, and to talk about what I’ve learned since then about mocking and unit testing.

The approach described in that post has several glaring issues, smells if you will:

  1. Branching unit tests, which is a general no-no.
  2. The tests are over-specified. The tight coupling between the test code and the code under test greatly reduces the tests’ value and makes them very fragile.
  3. The mocks are over-relied upon and have become a Maslow’s Hammer. I was trying so hard to isolate the presenter class that I missed the point: I was shooting for metrics while ignoring the business logic the MVP implementation is supposed to realize. I should have been focusing on the interactions between the classes, and should have isolated only to directly test specific, complex logic. Only if the presentation logic were sufficiently complex should the model have been mocked away (and in that case the logic may not even belong in the presenter :)).
  4. The tests don’t exercise the classes in a way that is as close as possible to how they’ll be integrated in production, and as a result they’re very likely to miss things.

So what have I been doing about it? I’ve been away from .NET development for the better part of a year, working in C++, which I think has been really good for me. C++ offers only the bare-essential constructs needed for object-oriented design, and as a result some of the more modern frameworks are less available. I’ve only been able to find a single mocking framework for C++, googlemock, but I haven’t used it yet. Not because I don’t want to, but because it hasn’t been necessary.

Since reading Steve Freeman and Nat Pryce’s Growing Object-Oriented Software, Guided by Tests I’ve been approaching my unit testing differently. I’ve started treating my unit tests as an executable specification that is defined before the “green” code is ever written, and now I can’t do it any other way. Yes, it’s just TDD, but Freeman and Pryce’s explanation felt more natural to me. Their approach calls for acceptance-level tests to be written first, followed by smaller, more specific unit tests that verify the details of your specification.

So how does this relate to mocking? Simple: following this approach lets my design evolve naturally into something where mocking becomes the final step, not the first, reached only when a test fixture needs information that is available solely at run time. Just mock out the necessary classes, provide the stimulus, and we’re good. Further, it lets me keep my classes linked together the way they’ll really be used in production and send my stimuli all the way through the system to verify the final effects and output. You’ll still want tests in the deeper regions of your classes, but I’ve gotten more value from my tests by writing them this way.
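As a rough sketch of what this looks like in practice (the StubView, Model, and ModelState names here are hypothetical, not from any real project): the presenter is wired to a real model, only the view boundary is stubbed with a hand-rolled fake, and the assertion targets the final observable outcome rather than the sequence of intermediate calls.

```csharp
// Hypothetical example: exercise Presenter.Foo() end to end,
// stubbing only the view boundary and asserting on the model's
// final state instead of on every intermediate interaction.
[Test]
public void FooAppliesF3WhenVal1SetAndVal2Clear()
{
    var view = new StubView { val1 = true, val2 = false }; // hand-rolled stub
    var model = new Model();                               // the real collaborator
    var presenter = new Presenter(view, model);

    presenter.Foo();

    // Verify the observable outcome, not the call sequence.
    Assert.That(model.State, Is.EqualTo(ModelState.F3Applied));
}
```

Because nothing here records expectations against the model, a refactoring that reorders or merges the internal F1/F3 calls won’t break the test as long as the end state is still correct.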

Reducing Code Duplication in RhinoMocks Tests

UPDATE: This post was later revisited, but it’s still worth reading for an example of what mocking shouldn’t be :).

I was recently placed on my first project at my new job (I’ll post about that later) and they’re having me integrate a bunch of great things into their process. These include automated unit testing, continuous integration and refactoring to patterns. Anyway, I’ve been writing some of my unit tests using isolation with RhinoMocks and came up with a way to achieve full path coverage without duplicating my Expects between test cases.

Below is a fairly standard MVP example. A presenter method is invoked and makes decisions based on information obtained from the view. Say you are testing the following method in isolation, and assume the view and model dependencies are injected via the presenter’s constructor:


public class Presenter
{
    IView view;
    IModel model;

    //...constructors, properties

    public void Foo()
    {
        if (view.val1)
        {
            //...
            model.F1();
            //...
            if (view.val2)
            {
                //...
                model.F2();
                //...
            }
            else
            {
                //...
                model.F3();
                //...
            }
        }
        else
        {
            //...
            model.F4();
            //...
        }
    }
}

Obviously, there are three possible paths through this method:

val1 = true & val2 = true
val1 = true & val2 = false
val1 = false & val2 = don’t care

My approach to this sort of problem in the past had been to duplicate a lot of test code, changing only a single return value per case:

[Test]
public void CanFooVal1TrueVal2True()
{
    Expect.Call(view.val1).Return(true);
    //...some expects
    model.F1();
    LastCall.IgnoreArguments();
    //...some more expects
    Expect.Call(view.val2).Return(true);
    //...some expects
    model.F2();
    LastCall.IgnoreArguments();
    //...some more expects

    mockery.ReplayAll();

    presenter.Foo();
    //Assertions
}

[Test]
public void CanFooVal1TrueVal2False()
{
    Expect.Call(view.val1).Return(true);
    //...some expects
    model.F1();
    LastCall.IgnoreArguments();
    //...some expects
    Expect.Call(view.val2).Return(false);
    //...some expects
    model.F3();
    LastCall.IgnoreArguments();

    mockery.ReplayAll();

    presenter.Foo();
    //Assertions
}

[Test]
public void CanFooVal1False()
{
    Expect.Call(view.val1).Return(false);
    //...some expects
    model.F4();
    LastCall.IgnoreArguments();

    mockery.ReplayAll();

    presenter.Foo();
    //Assertions
}

The approach I’ve come up with to solve this problem is to have each test case call a single helper method whose parameter list selects a path through the method under test (granted, this is one of the simplest mocking cases):


[Test]
public void CanFooVal1TrueVal2True()
{
    FooPaths(true, true);

    mockery.ReplayAll();

    presenter.Foo();
    //Assertions
}

[Test]
public void CanFooVal1TrueVal2False()
{
    FooPaths(true, false);

    mockery.ReplayAll();

    presenter.Foo();
    //Assertions
}

[Test]
public void CanFooVal1False()
{
    FooPaths(false, true); //keep in mind val2 doesn't matter in this case

    mockery.ReplayAll();

    presenter.Foo();
    //Assertions
}

private void FooPaths(bool val1, bool val2)
{
    Expect.Call(view.val1).Return(val1);
    if (val1)
    {
        //...
        model.F1();
        LastCall.IgnoreArguments();
        Expect.Call(view.val2).Return(val2);
        if (val2)
        {
            //...
            model.F2();
            LastCall.IgnoreArguments();
        }
        else
        {
            //...
            model.F3();
            LastCall.IgnoreArguments();
        }
    }
    else
    {
        //...
        model.F4();
        LastCall.IgnoreArguments();
    }
}

This way, when the code under test changes, you only need to update the Expects and related setup in one place (FooPaths).
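One related option, sketched here under the assumption that your NUnit version supports parameterized tests via the [TestCase] attribute, is to collapse the three test cases into a single test that enumerates the same truth table and calls the FooPaths helper directly:

```csharp
// Sketch: the three path cases as one parameterized NUnit test.
// FooPaths is the helper from above; each TestCase attribute
// corresponds to one of the three paths through Foo().
[TestCase(true, true)]
[TestCase(true, false)]
[TestCase(false, true)] // val2 is a don't-care when val1 is false
public void CanFoo(bool val1, bool val2)
{
    FooPaths(val1, val2);

    mockery.ReplayAll();

    presenter.Foo();
    //Assertions
}
```

This trades three named test methods for one, which helps when the per-path assertions are identical; if each path needs different assertions, the separate methods above are clearer.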

Does anyone else out there have any other solutions to this problem?

Copyright © 2017 Life, Teams, and Software Engineering
