Life, Teams, and Software Engineering

Category: quality

Quality is Not an Option

While I’m hardly the first to talk about the “Tradable Quality Hypothesis,” hopefully I can reinforce in some of my readers (all 10 of you :) ) that quality is not an option.  You cannot choose to lower quality (or forego quality-producing and quality-assurance practices) in an effort to get more features “done” or to deliver work more quickly.  At least not for any realistic amount of time.  If you do, you might as well plan the rewrite into the schedule now.

So What if I Do?

Immediately, you will probably feel little consequence from omitting a few test environments or writing up those features without any unit tests.  But I promise it will catch up with you.  Failing to test (at any level), continuing to use outdated tools (without a transition plan), and knowingly adding functionality of little to no explicit value all contribute to decaying codebases, where bug counts and the cost of change increase while overall velocity (i.e. the rate of value addition) decreases.  If you skip testing it now, guess what?  You’ll end up testing it a lot more when those bug reports start rolling in.

Yes, you might get to be a hero; and everyone loves a hero, right?  Maybe.  But when heroics (e.g. long nights and weekends) become necessary just to hit normal commitments, teams should not celebrate.  That hero has just set the precedent for what your clients and customers will expect of the team from this point forward.  Resist the urge to be a hero and be a member of the team.  Help each other as best you can to pull the work in at the end, but be careful not to introduce large variances that can invalidate your velocity for that iteration.

If your team is delivering less and less because you’re trying to catch up with bug reports, you may never be able to make up that time.  This can harm not only your organization’s reputation, but your personal and professional reputation as well.  Considering that, is it really worth the risk?

How to avoid the hole and dig out

There are many ways to avoid getting into the position of deciding to trade off quality for short-term schedule benefits.

One word: discipline.  Be vigilant that everything you contribute is tested in multiple ways (unit, integration, UI, etc.), have as many people as possible review your work and provide input, and make sure those things keep happening.  Yes, it’s hard, and it can look to the uninitiated as if you’re moving more slowly, but when that thing you just wrote inevitably changes next week you’ll appreciate having put forth the extra effort.

Next is education.  Everyone should understand (though not necessarily intimately) what goes into delivering functionality.  Establish your definition of done and make it well known.  Hang it on the walls.  Recite it at the beginning of each daily standup.  I don’t care how people remember it, just that they do.  Whenever something is untested, it’s not really done yet.

If you’ve found yourself in this unenviable position, the first step is to admit that you have a problem.  Seriously.  We’re all proud of our solutions, but sometimes they need to be cut down and replanted.  There’s no shame in it, I promise.  There’s only shame in voluntary insanity (doing the same thing over and over again and expecting different results).

Next, come up with a plan to tackle what is most commonly the real problem: technical debt.  Technical debt is any sub-par component of your entire solution space.  This could be anything from an inflexible test harness, to untestable or scary-to-change code, to reliance on outdated or unsupported software packages.  If it causes you pain or disappointment on the development side, it likely falls in here.  Environmental issues, or issues external to the team, should be raised to the team’s manager (e.g. Scrum Master) to deal with outside the team.  You need to identify and manage this technical debt before there is any hope of a sustainable pace of quality and value.

Don’t try to make it perfect; just make it better.  Don’t feel bad about it: what worked for a team of 5 and 100,000 lines of code may just not scale to a larger team or codebase.  Our products grow, and our development support infrastructure must grow around them.

Remember, we’re the professionals

It may give the higher-ups warm fuzzies to hear that it can all fit neatly into a little box, but no one will like the feeling later.  We do them no favors by making commitments we know we can never keep within any reasonable standard of long-term success.  Clients can and should set constraints on delivery and tell us what they want delivered, but it’s our job to define how best to get there.  I don’t tell other professionals how to do their jobs when I pay for their services, and I would hope they would advise me of the consequences of taking shortcuts while avoiding those shortcuts entirely.  That is, after all, why I’m paying them: they know better than I do.

If there are doubts about making commitments, or about quality, don’t be afraid to tell your clients (or your teammates) the truth, or at least what you observe.  That’s what you’re paid for; speak up.  Ultimately, even if the clients don’t say so, I’m sure that if you asked whether you should test this feature you just gave them, they’d be scared you even asked the question.  Quality is always a requirement, even if our clients don’t list it as a deliverable.

AAA vs BDD Structuring in Unit Tests

UPDATE: I changed the BDD example’s test function name.  I’m starting to dislike having the called interface in the test name; it’s inflexible, unnecessary, and ultimately doesn’t help the reader all that much.

It’s always good when there are people on your team whom you can both learn from and teach things to. Such is the case with my current team. A couple of team members have never done unit testing as it’s known in the industry today: mostly just “pound at the interface until I’m comfortable and throw it over to testing to deal with” unit testing.

During our first code review there were a lot of issues with unit tests. I actually prefer to let it pan out this way: people get in there and try it out, then later see what worked, what didn’t, and (hopefully) pick up ideas from the rest of the team about how to improve. One recurring theme was a lot of redundant code throughout the test suites; setup and teardown were outright missing! In a way that’s good, because now they’ve seen the problem firsthand, and part of the solution is to use those hooks. Another theme was how the test cases themselves were structured. I’ve come across two widely accepted ways of structuring unit tests:

Arrange – Act – Assert
Given – When – Then

I’ve personally used both in the same codebase when it makes sense, but I’m wondering if there’s more to it than just semantics and readability. With AAA you are more likely to interact with the class under test directly inside your test function:

  void interface_context_somethingHappens()
  {
      //arrange
      mock1->setCallShouldSucceed(false);
      mock2->addFakeValue("Value");

      //act
      out->interface();

      //assert
      CPPUNIT_ASSERT(somethingHappened);
  }

However, I’ve started to notice that if you read your tests, I mean really read them, then using the Behavior-Driven Development (BDD) Given-When-Then structuring will actually nudge you towards factoring the real test preparation and calls to the class under test out of your test case:

  void SomethingShouldHappenInSomeContext()
  {
      givenSomeContext();

      whenActionPerformed();
      thenSomethingShouldHappen();
  }

  void givenSomeContext()
  {
      //configure context
      mock1->setCallShouldSucceed(false);
      mock2->addFakeValue("Value");
  }
  void whenActionPerformed()
  {
      //execute action
      out->interface();
  }
  void thenSomethingShouldHappen()
  {
      //check that what should happen happened
      CPPUNIT_ASSERT(didSomething);
  }

Yes, it’s more code, and yes, it may just be semantics, but I see something more. The naming alone has suggested that maybe I should remove the details from the test. Not only does this produce well-factored code, but it pulls communication with the class under test to the boundary of my test suite and away from my test cases. Now, if the usage of this class changes for some reason, I only have to update it in one or two places in my test suite, rather than in every single test case. Obviously this approach may not be suited for every single situation, but like I said, I use both where it feels right. Granted, this is considered a good practice when using AAA as well, but you’ve got to name those newly extracted functions something, don’t you?

I’d love to get some feedback on this. What convention does your team follow? Are my observations valid or can it be chalked up to something else?

Autotest for Compiled Languages (C#, C++) using Watchr

When I was learning Rails I set up Autotest on Ubuntu with Growl notifications, which I thought was a pretty slick idea. On Ruby this whole technique is super easy and efficient because Ruby is an interpreted language; there’s no compile time to slow you down, and no other files to pollute your directory tree. Compiled languages don’t have that advantage, but I think we deserve some continuous feedback too. Here I’ll describe how to configure Watchr, a generic Autotest utility, to run compiled tests whenever a source file in the path is updated. This tutorial will use a C# example, but it’s trivial to have it trigger on different file types.

Getting Started

First, we’ll need to install Ruby and Watchr.  Because I’m using Windows I just downloaded RubyInstaller.  Make sure you put the Ruby/bin directory in your PATH.

Next, download Watchr from GitHub, extract the archive, and navigate to that directory.  Alternatively, you can just download the gem directly, but some people might want to run the tests locally first. The following command will install the gem from the local directory:

C:\mynyml-watchr-17fa9bf\>gem install Watchr

Configuring Watchr

Now that we have all the dependencies installed, we need to configure Watchr. This process is easiest if you already have a single point of entry for your continuous build process; if you don’t, it’s not that bad to set one up, and you’ll probably want one anyway. At the same level as the directory(ies) containing your source code, create a text file. I usually call this autotest.watchr, but you could call it autotest.unit or autotest.integration if you’re into that sort of thing. For now, just put the following line in:

  watch('./.*/(.*)\.cs$') { system "cd build && buildAndRunTests.bat && cd ..\\" }

Yes, it’s that easy. This tells Watchr to monitor any files matching the regular expression inside the watch() call (in this case, a recursive directory search for .cs files), and then execute the command block on the right whenever one changes. I also have it configured to return to the same directory when it’s finished, but I don’t know if that’s actually necessary. The watch() pattern is what you would modify for different environments. For example, you could use watch('./.*/(.*)\.(h|cpp|hpp|c)$') for a mixed C/C++ system, or watch('./.*/(.*)\.(cs|vb|cpp|h)$') for a .NET project with components built in different languages. An important thing to note is the $ at the end of the regex. Because a lot of intermediary files will be generated during the build process, we don’t want a generated file that happens to match this pattern to trigger an infinite loop of build & test (as happened to me). The heavy lifting is done here, but the stuff specific to your project happens in build/buildAndRunTests.bat. Let’s take a look at that:

  pushd ..\
  echo Building tests
  "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
  popd
  pushd ..\Tests.Unit\bin\Release
  echo Running tests through nunit-console
  nunit-console.exe Tests.Unit.dll /run=Tests.Unit
  popd

You’ll obviously want to customize this to the specifics of your project, but right now it’s hard-coded to call Visual Studio 2008’s devenv (at its default install location on a 64-bit OS) and build a project called Tests.Unit. For brevity it also assumes that nunit-console.exe is available on the PATH. Not terribly interesting, but that’s the rest of the work.

Now to have all the magic happen. Run the following command in a new console window from your project directory:

C:\Projects\MyProject>watchr autotest.watchr

That’s it! Watchr is now monitoring for changes to files that match your pattern. Simply modify any matching file and watch the whole process kick off. Once it finishes, you can see the results, and Watchr will wait for the next change.

Now there’s one less thing you have to do during your heavy refactoring sessions, or just with day-to-day development.

Copyright © 2017 Life, Teams, and Software Engineering
