Life, Teams, and Software Engineering


Safe to Fail

Failure

Many people have what I believe to be a misguided fear of failure.  Fear of public humiliation?  Losing your job?  Harming your reputation?  Perhaps you’re a perfectionist who thinks you can’t live with failure.  Fear can paralyze and hold back our most talented people, and when they aren’t putting forth their most daring work because they fear failure or its consequences, something must change.

In the software development space, failure is usually defined quantitatively as being over budget, under-featured, or behind schedule, or qualitatively as having poor quality, or in some cases as just not feeling or working quite right.  The period of time over which these failures occur tends to be on the order of months or years, usually to the detriment of everyone involved.  But could this have been avoided?  Were the signs there all along?  Hopefully teams and organizations that encounter these types of long-running failures have the courage to ask these questions, answer them, and act on the answers, but I suspect far too many do not.

The fact is that failure at some, and often many, levels is unavoidable.  Once you realize this, you can accept it and embrace it.  That is what we all must do: stop fearing failure and embrace it.  However, as with anything, there are responsible ways of going about this, and not-so-responsible ways.  Should we seek to fail from the beginning?  No, that’s not the point.  The point is to recognize failure, however small (e.g. this code didn’t work, the schedule slipped, Bob just quit), and learn from it.  An unrecognized or ignored failure means that not only have you failed, you have also missed a growth opportunity and truly wasted your time.

But how can we give our people the freedom and flexibility to bring daring ideas to life without burning everything to the ground and descending into bankruptcy and code-cowboy anarchy?  If my thesis is that failure is worse when realized over a long time period, the logical thing would be to decrease the time window in which the failure occurs.  There are many things you can do to shorten this window without being overbearing or invasive in day-to-day work.

With Projects

Make sure you are capturing real-time stats of where your group is on your roadmap relative to where you think you should be.  This will give you a constant pulse on the general direction of your work.  An example would be a burndown chart at the project level or a similar tracking method at the organizational level.  For projects, I’m a fan of the GreenHopper burndown chart format.

In such a format, if you find yourself continually over the ‘guideline’, you will want to find out why.  Maybe the team is overly optimistic with their estimates?  Maybe they’re painting a rosy picture because they feel the real estimates wouldn’t be accepted?  Maybe estimates are being advertised as commitments when they were never meant to be?  Maybe the team isn’t the one making the estimates?

Asking these questions will allow you to course-correct over shorter time periods while keeping everyone happy and well informed.  This frequent analysis and adjustment can help prevent long running failures.

Day-To-Day

Wouldn’t it be great if we could compress realized failures into a single day instead of over several months?  For many things we do as developers this can be done; we have the technology.

Be sure to set up productive development environments with super low friction using tools like Autotest or Watchr.  Tools like these let you use simple triggers (like saving a source file) to automatically do things you would otherwise have to break your train of thought to do.  For example, I have my Watchr scripts set up to build the project, run the unit tests, and then, for an embedded project, deploy the same unit test harness to a real device and run it there.  This type of low-friction environment allows you to focus on what you’re really working on and, most importantly, gets you that feedback more quickly.  It might only be 10-15 seconds faster, but it adds up.
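To make that concrete, here is a rough sketch of what such a Watchr script might look like for the embedded workflow described above.  The file pattern and the two .bat scripts are placeholders for your own build and deploy steps, not my actual project files:

  # Watch C/C++ sources and headers; whenever one is saved, rebuild and
  # run the unit tests on the host, then run the same harness on a real
  # device.  Both .bat files here are hypothetical placeholders.
  watch('./.*/(.*)\.(c|cpp|h|hpp)$') do
    system "build_and_test.bat && deploy_and_run_on_target.bat"
  end

The Watchr post further down this page walks through the actual setup in more detail.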

At a less technical level, if you don’t know the answer to something, ask; don’t guess.  Turn to the person sitting next to you and ask.  If they don’t know, keep going until you find someone who does, even if that means contacting the customer.  Ideally the person sitting next to you is your customer, but that’s a different post.  Your customer is invested in this, and you owe it to them to avoid speculation where possible.  The longer you wait to get an answer, the longer that assumption lives in your system, which can start the timer on a long-running failure.

With People

Make sure that you instill a sense of purpose and pride into your teams from the beginning.  Make sure that they understand why your organization is in business and the significance of what they are working on.  They must feel like they belong and that they are contributing to something of value.  Empower them to really own their work and they will exceed your wildest expectations.  An environment like this is also likely to have lower turnover than the alternatives.

Above all, challenge them, but don’t break them.  Make sure your management knows their people, their abilities, and their limits.  Better yet, make sure your people know their own abilities and limits!  Then place them on a team a little above their current abilities but not beyond their limits.  Make sure they have team members who can and will act as teachers and mentors; that, too, helps reduce the risk of long-running failure.

With New Ideas

Make sure that ideas are received and evaluated on merit.  A good idea shouldn’t be overlooked simply because it came from the new guy, the old guy, or the guy with a new idea every day (enthusiasm!).  Most importantly, make sure that your people know that this is how these things will be handled.  Be active about it; prove that you’re willing to put your money where your mouth is.  Don’t just put out a suggestion box; you’re all in this together, and your people want to be as big a part of it as they can.  Let them.

Encourage, but constrain, exploration.  If a new idea has merit, charge the person who proposed it with running it down.  Give them a couple of weeks to flesh out the idea.  If they show progress, give them a couple more weeks.  If they don’t, maybe it still warrants more exploration, or maybe it gets killed.  Either way, two weeks lost is better than six months or years, you’ve all learned something, and hopefully you’ve grown a little in the process.  An idea that shows promise can grow into a great new feature, new product, or totally new venture.  Left unexplored and unevaluated, it’s nothing more than a missed opportunity.

You could also constrain a new effort by budget.  I get the impression from Fred Wilson at Union Square Ventures that this is how they do it: start teams with minimal funding on a truly ‘lean’ basis and fund them further when concept, production, and execution show positive signs.  I’m sure this is an oversimplification, but hopefully my point is made.  A welcome side-effect of this type of constrained environment is that the team (or company, as it would be) will know how to operate within these constraints should the need ever arise again (money isn’t infinite, after all).

Have Courage!

In conclusion: have courage, put structure in place to recognize impending failure early, empower your people, and above all, learn from your mistakes.  Failure doesn’t have to mean that it’s over, just that you can do things better, and we can always do things better.

Quality is Not an Option

While I’m hardly the first to talk about the “Tradable Quality Hypothesis”, hopefully I can reinforce in some of my readers (all 10 of you :) ) that Quality is not an option.  You cannot choose to lower quality (or forgo quality-producing and quality-assurance practices) in an effort to get more features “done” or to deliver work more quickly.  At least not for any realistic amount of time.  If you do this, you might as well plan the rewrite into the schedule now.

So What if I Do?

Immediately, you will probably feel little consequence from omitting a few test environments or writing up those features without any unit tests.  But I promise it will catch up with you.  Failing to test (at any level), continuing to use outdated tools (without a transition plan), and knowingly adding functionality of little to no explicit value all contribute to decaying codebases where bug counts and the cost of change increase while overall velocity (i.e. the rate of value addition) decreases.  If you skip testing it now, guess what?  You’ll end up testing it a lot more when those bug reports start rolling in.

Yes, you might get to be a hero; and everyone loves a hero, right?  Maybe.  But when a situation arises where heroics (e.g. long nights and weekends) are necessary just to hit normal commitments, the team should not celebrate.  That hero has just set the precedent for what your clients and customers will expect of the team from this point forward.  Resist the urge to be a hero and be a member of the team.  Pull each other up as best you can to bring the work in at the end, but be careful not to introduce large variances that can invalidate your velocity for that iteration.

If your team is delivering less and less because you’re trying to catch up with bug reports, you may never be able to make up that time.  This can harm not only your organization’s reputation, but your personal and professional reputation as well.  Considering that, is it really worth the risk?

How to avoid the hole and dig out

There are many ways to avoid getting into the position of deciding to trade off quality for short-term schedule benefits.

One word: discipline.  Be vigilant that everything you contribute is tested in multiple ways (unit, integration, UI, etc.), have as many people as possible review your work and provide input, and make sure those things keep happening.  Yes, it’s hard, and it can look to the uninitiated as if you’re moving more slowly, but when that thing you just wrote inevitably changes next week you’ll appreciate having put forth the extra effort.

Next is education.  Everyone should understand (though not necessarily intimately) what goes into delivering functionality.  Establish your definition of done and make it well known.  Hang it on the walls.  Recite it at the beginning of each daily standup.  I don’t care how people remember it, just that they do.  Whenever something is untested, it’s not really done yet.

If you’ve found yourself in this unenviable position, the first step is to admit that you have a problem.  Seriously.  We’re all proud of our solutions, but sometimes they need to be cut down and replanted.  There’s no shame in it, I promise.  There’s only shame in voluntary insanity (doing the same thing over and over again and expecting different results).

Next, come up with a plan for how to tackle what is most commonly the real problem: technical debt.  Technical debt is any sub-par component of your entire solution space.  This could be anything from an inflexible test harness, to untestable or scary-to-change code, to relying on outdated or unsupported software packages.  If it causes you pain or disappointment on the development side, it likely falls in here.  Environmental issues, or issues external to the team, should be raised to the team’s manager (e.g. Scrum Master) to deal with outside the team.  You need to identify and manage this technical debt before there is any hope of a sustainable pace of quality and value.

Don’t try to make it perfect, just make it better.  And don’t feel bad about it: what worked for a team of 5 and 100,000 lines of code may simply not scale to a larger team or codebase.  Our products grow, and our development support infrastructure must grow around them.

Remember, we’re the professionals

It may give the higher-ups warm fuzzies to hear that it can all fit neatly into a little box, but no one will like the feeling later.  We do them no favors by making commitments we know we can never keep within any reasonable standard of long-term success.  Clients can and should set constraints on delivery and tell us what they want delivered, but it’s our job to define how best to get there.  I don’t tell other professionals how to do their jobs when I pay for their services; I would hope they would advise me of the consequences of taking shortcuts, and avoid those shortcuts entirely.  That is, after all, why I’m paying them.  They know better than I do.

If there are doubts about making commitments, or about quality, don’t be afraid to tell your clients (or your teammates) the truth, or at least what you observe.  That’s what you’re paid for; speak up.  Ultimately, even if the clients don’t say so, I’m sure that if you asked them whether you should test the feature you just delivered, they’d be scared you even asked the question.  Quality is always a requirement, even if our clients don’t list it as a deliverable.

Why "User Stories" Failed Us

NOTE: Please read this entire post; I don’t aspire to become the Winston Royce of User Stories. :)

About 7 or 8 months ago our teams decided to try using User Stories as our primary mechanism for capturing requirements.  Since then, we have taken our lessons learned and moved away from them.  The reason for their “failure” is laughable, really: people couldn’t get past the name.  There were team members and customers who just couldn’t get beyond the fact that they’re called “User Stories”.  People would make cracks about Epics and ask whether one was more like the Odyssey or Gilgamesh.  In the beginning this was funny, and we had a laugh, but some people never got over it, and we have since decided to phase out the use of the term.

The levels of success with User Stories in our group vary greatly, but as with any two projects you can’t really compare them based on a single part of their process.  The team I am supporting has delivered value consistently since the project started and we have done so not because we used User Stories (we did, and to great effect I might add) but because we were (and are) disciplined.  Other teams call functionality “done” without having high level tests against it that can be easily repeated.  I blame myself for those failings if for no other reason than I should have noticed the signs.  Someone has to, right?

On my current team, when one of us starts to get lazy the other half of the pair straightens them out and makes sure that everything they thought to test has been tested, and that it’s in the appropriate place for our automation infrastructure to get at it.  We commit a change and within an hour we’ve validated (yes, that’s validated, not just verified) the latest version against all our user stories to date, against all supported configurations, including protecting ourselves from regression.  We are able to answer the question “when will version X be done?” using real-time data because we keep Jira up to date and because we understand what “done” means.  We don’t fool ourselves into thinking we can do more work in 3 weeks than we’ve historically proven we can do, and we don’t let management pressure us into committing as such.  We’ve learned how to adjust our sprint caps for personnel fluctuation as team members are temporarily stripped away to work on other things, which happens quite often.

Like I said, it’s not the tool’s fault.  It’s unfortunate that something so trivial could kill its use in our process.  Now we work on “Features”…that contain short blurbs about something succinct that the system should do…and are estimated using points…  Sound familiar?  User Stories are no more or less valuable as an artifact than “shall” statements in more traditional requirements management methodologies, but they’re tight, to the point, and no one fools themselves into thinking they can craft the perfect sequence of words to build the perfect statement.  No one even tries.  Instead, the focus becomes the user’s intent, not the words in the requirements document.  And intent is best captured with higher-bandwidth forms of communication, like getting everyone in the same room and talking it out.  That, I’ve learned, is the true power of User Stories, even if they’re called Features.

AAA vs BDD Structuring in Unit Tests

UPDATE: I have revised the BDD example’s test function name.  I’m starting to dislike having the called interface in the test name; it’s inflexible and unnecessary, and ultimately doesn’t help the reader all that much.

It’s always good when there are people on your team whom you can both learn from and teach things to. Such is the case with my current team. A couple of team members have never done unit testing as it’s known in the industry today; mostly just “pound at the interface until I’m comfortable and throw it over to testing to deal with” unit testing.

During our first code review there were a lot of issues with unit tests. I actually prefer to let it pan out this way: people get in there and try it out, later see what worked and what didn’t, and (hopefully) get some ideas from the rest of the team about how to improve. One recurring theme was a lot of redundant code throughout the test suites. Setup and teardown were outright missing! That’s good, because now they’ve seen the problem, and part of the solution is to use those. Another thing was how the test cases themselves were structured. I’ve come across two widely accepted ways of structuring unit tests:

Arrange – Act – Assert
Given – When – Then

I’ve personally used both in the same codebase when it makes sense, but I’m wondering if there’s more to it than just semantics and readability. With AAA you are more likely to interact with the class under test directly inside your test function:

  void interface_context_somethingHappens()
  {
      //arrange
      mock1->setCallShouldSucceed(false);
      mock2->addFakeValue("Value");

      //act
      out->interface();

      //assert
      CPPUNIT_ASSERT(somethingHappened);
  }

However, I’ve started to notice that if you read your tests, I mean really read them, then using the Behavior-Driven Development (BDD) Given-When-Then structuring will actually nudge you towards factoring the real test preparation and calls to the class under test out of your test case:

  void SomethingShouldHappenInSomeContext()
  {
      givenSomeContext();

      whenActionPerformed();
      thenSomethingShouldHappen();
  }

  void givenSomeContext()
  {
      //configure context
      mock1->setCallShouldSucceed(false);
      mock2->addFakeValue("Value");
  }
  void whenActionPerformed()
  {
      //execute action
      out->interface();
  }
  void thenSomethingShouldHappen()
  {
      //check that what should happen happened
      CPPUNIT_ASSERT(didSomething);
  }

Yes, it’s more code, and yes, it may just be semantics, but I see something more. The naming alone has suggested that maybe I should remove the details from the test. Not only does this produce well-factored code, but it pulls communication with the class under test to the boundary of my test suite and away from my test cases. Now, if the usage of this class changes for some reason, I only have to update it in one or two places in my test suite, rather than in every single test case. Obviously this approach may not be suited for every single situation, but like I said, I use both where it feels right. Granted, this is considered a good practice when using AAA as well, but you’ve got to name those newly extracted functions something, don’t you?
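As a footnote to the earlier observation about redundant test code and missing setup and teardown, here is a rough sketch of how these pieces might fit together in a single CppUnit fixture: shared collaborators created in setUp(), cleaned up in tearDown(), and the given/when/then helpers living right next to them. The MockService and Widget classes are tiny stand-ins I made up so the example compiles on its own; they aren’t from any real codebase.

  #include <cppunit/extensions/HelperMacros.h>

  // Stand-in collaborator and class under test, just so the sketch is self-contained.
  struct MockService
  {
      MockService() : callShouldSucceed(true) {}
      void setCallShouldSucceed(bool value) { callShouldSucceed = value; }
      bool callShouldSucceed;
  };

  struct Widget
  {
      explicit Widget(MockService* s) : service(s), acted(false) {}
      void interface() { acted = !service->callShouldSucceed; } // pretend behavior
      bool didSomething() const { return acted; }
      MockService* service;
      bool acted;
  };

  class WidgetTests : public CppUnit::TestFixture
  {
      CPPUNIT_TEST_SUITE(WidgetTests);
      CPPUNIT_TEST(SomethingShouldHappenInSomeContext);
      CPPUNIT_TEST_SUITE_END();

  public:
      void setUp()     // fresh mock and object under test before every case
      {
          mock1 = new MockService();
          out = new Widget(mock1);
      }

      void tearDown()  // no state leaks between cases
      {
          delete out;
          delete mock1;
      }

      void SomethingShouldHappenInSomeContext()
      {
          givenSomeContext();

          whenActionPerformed();
          thenSomethingShouldHappen();
      }

  private:
      // The given/when/then helpers sit at the boundary of the suite,
      // next to the mocks they configure.
      void givenSomeContext()          { mock1->setCallShouldSucceed(false); }
      void whenActionPerformed()       { out->interface(); }
      void thenSomethingShouldHappen() { CPPUNIT_ASSERT(out->didSomething()); }

      MockService* mock1;
      Widget* out;
  };

  CPPUNIT_TEST_SUITE_REGISTRATION(WidgetTests);

With the collaborators owned by the fixture, adding another test case is just another three-line given/when/then method plus one CPPUNIT_TEST line in the suite.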

I’d love to get some feedback on this. What convention does your team follow? Are my observations valid or can it be chalked up to something else?

Autotest for Compiled Languages (C#, C++) using Watchr

When I was learning Rails I set up Autotest on Ubuntu with Growl notifications, which I thought was a pretty slick idea. On Ruby this whole technique is super easy and efficient because Ruby is an interpreted language; there’s no compile time to slow you down, and no other files to pollute your directory tree. Compiled languages don’t have that advantage, but I think we deserve some continuous feedback too. Here I’ll describe how to configure Watchr, a generic Autotest utility, to run compiled tests whenever a source file in the path is updated. This tutorial will use a C# example, but it’s trivial to have it trigger on different file types.

Getting Started

First, we’ll need to install Ruby and Watchr.  Because I’m using Windows I just downloaded RubyInstaller.  Make sure you put the Ruby/bin directory in your PATH.

Next, download Watchr from Github, extract the archive and navigate to that directory.  Or you can just download the gem directly, but some people might want to run the tests locally first. The following command will install the gem from the local directory:

C:\mynyml-watchr-17fa9bf\>gem install Watchr

Configuring Watchr

Now that we have all the dependencies installed, we need to configure Watchr. This process is easiest if you already have a single point of entry for your continuous build process, but if you don’t, it’s not that bad and you’ll probably want one anyway. Now, at the same level as the directory (or directories) containing your source code, create a text file. I usually call this autotest.watchr, but you could call it autotest.unit or autotest.integration if you’re into that sort of thing. For now, just put the following line in it:

  watch('./.*/(.*)\.cs$') {system "cd build && buildAndRunTests.bat && cd ..\\"}


Yes, it’s that easy. This tells Watchr to monitor any files that match the regular expression inside the watch() call (in this case a recursive directory search for .cs files), and then execute the command on the right. I also have it configured to return to the same directory when it’s finished, but I don’t know if that’s actually necessary. The watch() pattern is what you would modify for different environments. For example, you could use watch('./.*/(.*)\.(h|cpp|hpp|c)$') for a mixed C/C++ system, or watch('./.*/(.*)\.(cs|vb|cpp|h)$') for a .NET project with components built in different languages. An important thing to note is the $ at the end of the regex. Because it’s likely that there will be a lot of intermediary files generated during the build process, we don’t want a file that happens to match this pattern and is generated at build time to trigger an infinite loop of build & test (as happened to me). The heavy lifting is done here, but the stuff specific to your project happens in build/buildAndRunTests.bat. Let’s take a look at that:

  pushd ..\
  echo Building tests
  "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
  popd
  pushd ..\Tests.Unit\bin\Release
  echo Running tests through nunit-console
  nunit-console.exe Tests.Unit.dll /run=Tests.Unit
  popd


You’ll obviously want to customize this to the specifics of your project, but right now it’s hard-coded to call Visual Studio 2008’s devenv.com (on a 64-bit OS) and build a project called Tests.Unit. For brevity it also assumes that nunit-console.exe is available on the PATH. Not terribly interesting, but that’s the rest of the work.

Now to have all the magic happen. Run the following command in a new console window from your project directory:

C:\Projects\MyProject>Watchr autotest.watchr

That’s it! Watchr is now monitoring for changes to files that match your pattern. Simply modify any file matching the pattern and watch the whole process set off. Once it finishes, you can hopefully see the results and it will wait for the next change.

Now there’s one less thing you have to do during your heavy refactoring sessions, or just with day-to-day development.
