Life, Teams, and Software Engineering


Why "User Stories" Failed Us

NOTE: Please read this entire post; I don’t aspire to become the Winston Royce of User Stories. :)

About 7 or 8 months ago our teams decided to try using User Stories as our primary mechanism for capturing requirements.  Since then, we have taken our lessons learned and have moved away from them.  The reason for their “failure” is laughable really: people couldn’t get past the name.  There were team members and customers that just couldn’t get beyond the fact that they’re called “User Stories”.  People would make cracks about Epics and ask whether it’s more like Odyssey or Gilgamesh.  In the beginning this was funny, and we had a laugh, but some people just couldn’t get past it, and we have since decided to phase out the use of that term.

The levels of success with User Stories in our group vary greatly, but as with any two projects you can’t really compare them based on a single part of their process.  The team I am supporting has delivered value consistently since the project started and we have done so not because we used User Stories (we did, and to great effect I might add) but because we were (and are) disciplined.  Other teams call functionality “done” without having high level tests against it that can be easily repeated.  I blame myself for those failings if for no other reason than I should have noticed the signs.  Someone has to, right?

On my current team, when one of us starts to get lazy the other half of the pair straightens them out and makes sure that everything they thought to test has been tested, and that it’s in the appropriate place for our automation infrastructure to get at it. We commit a change and within an hour we’ve validated (yes, that’s validated, not just verified) the latest version against all our user stories to date, against all supported configurations, including protecting ourselves from regression. We are able to answer the question “when will version X be done?” using real-time data because we keep Jira up to date and because we understand what “done” means. We don’t fool ourselves into thinking we can do more work in 3 weeks than we’ve historically proven we can do, and we don’t let management pressure us into committing as such. We’ve learned how to adjust our sprint caps for personnel fluctuation as team members are temporarily stripped away to work on other things, which happens quite often.

Like I said, it’s not the tool’s fault. It’s unfortunate that something so trivial could kill its use in our process. Now we work on “Features”… which contain short blurbs about something succinct the system should do… and are estimated using points… Sound familiar? User Stories are no more or less valuable as an artifact than “shall” statements in more traditional requirements management methodologies, but they’re tight, to the point, and no one fools themselves into thinking they can craft the perfect sequence of words to build the perfect statement. No one even tries. Instead, the focus becomes the user’s intent, not the words in the requirements document. And intent is best captured with higher-bandwidth forms of communication, like getting everyone in the same room and talking it out. That, I’ve learned, is the true power of User Stories, even if they’re called Features.

Autotest for Compiled Languages (C#, C++) using Watchr

When I was learning Rails I set up Autotest on Ubuntu with Growl notifications, which I thought was a pretty slick idea. On Ruby this whole technique is super easy and efficient because Ruby is an interpreted language; there’s no compile time to slow you down, and no other files to pollute your directory tree. Compiled languages don’t have that advantage, but I think we deserve some continuous feedback too. Here I’ll describe how to configure Watchr, a generic Autotest utility, to run compiled tests whenever a source file in the path is updated. This tutorial will use a C# example, but it’s trivial to have it trigger on different file types.

Getting Started

First, we’ll need to install Ruby and Watchr.  Because I’m using Windows I just downloaded RubyInstaller.  Make sure you put the Ruby/bin directory in your PATH.

Next, download Watchr from GitHub, extract the archive, and navigate to that directory. Or you can just download the gem directly, but some people might want to run the tests locally first. The following command will install the gem from the local directory:

C:\mynyml-watchr-17fa9bf\>gem install Watchr

Configuring Watchr

Now that we have all the dependencies installed, we need to configure Watchr. This process is easiest if you already have a single point of entry for your continuous build process, but if you don’t it’s not that bad, and you’ll probably want one anyway. Now, at the same level as the directory(ies) containing your source code, create a text file. I usually call this autotest.watchr, but you could call it autotest.unit or autotest.integration if you’re into that sort of thing. For now, just put the following line in:

  watch('./.*/(.*)\.cs$') { system "cd build && buildAndRunTests.bat && cd ..\\" }


Yes, it’s that easy. This tells Watchr to monitor any files matching the regular expression inside the watch() call (in this case a recursive directory search for .cs files), and to execute the command on the right whenever one changes. I also have it configured to return to the same directory when it’s finished, but I don’t know if that’s actually necessary. The watch() pattern is what you would modify for different environments. For example, you could use watch('./.*/(.*)\.(h|hpp|c|cpp)$') for a mixed C/C++ system, or watch('./.*/(.*)\.(cs|vb|cpp|h)$') for a .NET project with components built in different languages. An important thing to note is the $ at the end of the regex. Because it’s likely that a lot of intermediary files will be generated during the build process, we don’t want a file generated at build time that happens to match this pattern to trigger an infinite loop of build & test (as happened to me). The heavy lifting is done here, but the stuff specific to your project happens in build/buildAndRunTests.bat. Let’s take a look at that:

  pushd ..\
  echo Building tests
  "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
  popd
  pushd ..\Tests.Unit\bin\Release
  echo Running tests through nunit-console
  nunit-console.exe Tests.Unit.dll /run=Tests.Unit
  popd


You’ll obviously want to customize this to the specifics of your project, but right now it’s hard-coded to call Visual Studio 2008’s devenv.com (on a 64-bit OS) and build a project called Tests.Unit. For brevity it also assumes that nunit-console.exe is available on the PATH. Not terribly interesting, but that’s the rest of the work.
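Since the watch() action is just a Ruby block, adapting this to another compiled language is mostly a matter of swapping the pattern and the build script. Here’s a minimal sketch of what a mixed C/C++ autotest.watchr might look like; it’s an illustration only, and it assumes the same build/buildAndRunTests.bat entry point described above (substitute a make or scons invocation as appropriate):

  # autotest.watchr -- hypothetical mixed C/C++ variant.
  # Watchr passes the pattern's MatchData to the block, so we can
  # report which file triggered the run before kicking off the build.
  watch('./.*/(.*)\.(h|hpp|c|cpp)$') do |match|
    puts "Change detected: #{match[0]}"
    system "cd build && buildAndRunTests.bat && cd ..\\"
  end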

Now to have all the magic happen. Run the following command in a new console window from your project directory:

C:\Projects\MyProject>Watchr autotest.watchr

That’s it! Watchr is now monitoring for changes to files that match your pattern. Simply modify any file matching the pattern and watch the whole process set off. Once it finishes, you can hopefully see the results and it will wait for the next change.
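One more guard against the infinite-loop trap mentioned earlier: if your build generates files that could still match your pattern, you can anchor the watch to the directories that actually hold source, so build output elsewhere in the tree can never retrigger a run. This is just a sketch; the Src directory name is hypothetical, while Tests.Unit matches the project above:

  # Only files under Src/ or Tests.Unit/ trigger a run; generated
  # files anywhere else in the tree are ignored.
  watch('^(Src|Tests\.Unit)/.*\.cs$') { system "cd build && buildAndRunTests.bat && cd ..\\" }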

Now there’s one less thing you have to do during your heavy refactoring sessions, or just with day-to-day development.

Processes That Work For You

All processes have their pluses and minuses.  Many teams on each side of the ‘agile boundary’ think they’ve got it right, and they may, but it’s not which side of the boundary they’re on that matters.  What matters is that they’ve managed to find processes that work in context.

Maybe you’re on one of these teams. Maybe they’re mythical to you. Either way, what a stripped-down process framework like Scrum gets you is an opportunity to find what works for your team, not just because your processes and practices haven’t failed you yet, but because you’ve felt the pain their absence causes. I’ll say that again because I think it’s worth repeating. We want to be able to say our processes work not because they haven’t yet failed us, but because we know exactly what will happen if we remove them; because we’ve experienced it first hand. A major advantage of the shortened feedback loops of Scrum (or any iterative process) is that they let you frequently look at your problems, find their root causes, and come up with ways to fix them. I’ve seen time and again that existing “traditional” practices are quite often the remedy. I’m not the first to suggest “home-grown” processes, but I do believe it’s a good approach, perhaps the best one if you can afford the effort and really dedicate yourself to it. Just like we wait to plan and estimate until we have the best information we’re going to have (i.e. “the last responsible moment”), why not apply the same principle to process adoption?

Now I’m not suggesting we start from ground zero; that would be foolish. Obviously we use version control, some way of eliciting and tracking requirements, some form of configuration management, etc., but those are all just tools. The processes and practices that you choose to employ (i.e. Process Areas) should come from need. Choose a responsible starting point and grow your processes as you find need for them from that point on. If it’s required that you meet CMMI Level 3, so be it, but you should still revisit your processes frequently (and hope that whatever governing body levied that requirement is willing to accept the added cost and possible, or inevitable, waste).

It’s also perfectly acceptable to discard processes that cost more than the value they add, or to replace them with more appropriate ones. That’s the beauty of the retrospective. After all, a failure across two weeks or even a month is better than a failure after a year of churning; that is, if you can course-correct as necessary.

No matter what side of the agile boundary your team works on, you should regularly evaluate your practices and processes.  Ask these questions with your team:

  1. Do we really need to be doing this?  What happens if we omit this?  Would it be responsible to omit this?
  2. Can we be doing this better?  If so, is there anything specific we can implement immediately?  Anything we need to investigate further?
  3. Does it feel like we’re missing something?  Is there a practice or process we can introduce to fill the void?
  4. Is something consistently going wrong?  Ask “why?” five times (the 5 Whys) until you reach the root cause.

I’ve found that keeping a consistent eye on these things allows us to make changes quickly and effectively without sacrificing overall quality, or any number of other benefits various processes can provide. Not all processes are right for all situations. Build what works from experience, from pain, from failure, not just because some book or group tells you so.

Is tooling only for the youth?

Disclaimer: This post may have no basis in reality outside my team; it’s just a question, drawn from a case study of one.

I moved to the wide world of ANSI C++ after working in C# with the love of ReSharper for the better part of 2 years. I managed, but I definitely felt the absence of a great refactoring tool that takes the things that should be easy and trivial and makes them so. C++ is not that way, especially with old IDEs. So I started searching this past weekend for a C++ refactoring tool that actually supported my IDE. I found one that claimed to, Visual Assist X, so I installed it and started playing around with it. It works pretty well for what I need it to do, but I haven’t fully explored it yet.

Either way, this post isn’t about Visual Assist X. Today, after I set up my keyboard shortcuts (to match ReSharper, no less), I called my team’s senior developer over and showed him what it could do.

Him: Another tool?!? *shakes head* I don’t trust tools to do much of anything. They’re just more systems with their own bugs.

Me: Yes, but what they do, they do well. Why not take it for what it is and accept that nothing is perfect?

Notice these are not direct quotes. I remember what he said better than my own response (mostly because I was surprised by his reaction), but that was the general nature of the conversation. He then proceeded to call me “Mr. Tool” and was quick to dismiss it. I was a bit confused by this, but didn’t think much of it at the time. I had work to do, so I went on my way.

Now, sitting in my living room catching up on back episodes of Legend of the Seeker, something creeps up from the back of my mind. Do I really rely on that many tools? Let’s list them out:

Development:

  1. Visual Studio
  2. CppUnit (is that really a “tool”?)
  3. Rational PureCoverage for capturing code coverage
  4. Hudson for Continuous Integration
  5. CppDepend, CCCC, CppCheck, SourceMonitor for various static analysis (some do things better/more simply than others)
  6. And now Visual Assist X for refactoring support.

Process:

  1. Jira + GreenHopper
  2. Confluence
  3. Crucible
  4. Fisheye
I don’t think this list is unreasonable at all. OK, so maybe the static analysis tools are a bit excessive, but I like data, especially when it costs me next to nothing to get it through Continuous Integration.

Let’s look back to September 2008, when I joined the team. They were using exactly ONE of these tools: Visual Studio. No unit testing, no continuous integration, no process support tools, certainly no automated testing, and worst of all no feedback mechanisms of any kind until a project ended and you handed it over to the customer to say “I hope this is what you wanted”.

Flash forward to today. We have 20+ configurations in Hudson, our latest project has 90%+ unit test coverage at all times, our system testing is as automated as it can be so our test team isn’t overwhelmed, all our documentation is maintained in Confluence, and all our issues and tasks are tracked in Jira.

I feel like each of the tools listed above plays an integral part in my day-to-day work as a developer. Obviously as developers we spend less time in the management systems and use very limited features of them, but does that make them any less important? No. If management can just jump out to Jira to check our status or out to Confluence to answer their question, that’s one less thing they have to bother me or my team about. It makes me happy and I don’t even know it, and I’m sure they appreciate it too.

Now I finally get to the question. Is the reason for his reaction a generational thing, or am I completely off base? Are the youth more likely to find a tool-based solution to a pain point, while the more seasoned have just learned to deal with it?

Then there’s the other possibility: am I over-reliant on tools? Maybe I could simplify, but do I understand how they work and what their purpose is? Absolutely. I know what the scope of their functionality is, how they do what they do, what they’re NOT meant to do, and how to bend them to my will within the constraints of the tool. Each one of them adds value, and there are only 3 of them that any developer on the team really has to know or use: Visual Studio, CppUnit, and PureCoverage… I take that back, PureCoverage isn’t necessary for them to understand; they just need to have it installed, since every time they run a build it runs the unit tests with coverage. They could completely ignore the results and I wouldn’t know the difference.

What do you think? I’m sure there are exceptions, as there are with any rule, but are the youth more likely to blaze new trails?

From Waterfall to Agile: How we got there

Upon Arrival

When I arrived on my contract in September/October 2008, things were much different than they are today.

• Projects took on average 3-6 months to complete, with few checkpoints in between. Stakeholders couldn’t get any value out of the systems until the end of the project even if they wanted to.
• Projects were developed by a single person with very little, if any, collaboration.
• The test group was far removed from the development team and was not involved with the project until it was thrown over the fence for testing. The testers also had little to no technical background. This isn’t necessarily a bad thing, but for our particular group the need to understand the domain and the systems is important.
• Requirements (and the rest of the documentation) were written by the developers, usually after the system was implemented. The only reason anything was documented at all was that the client required it.
• While we were co-located with our customer, there was little involvement for validation purposes; contact with users was difficult, and I didn’t see the team press for it.
• Developer verification wasn’t being done on a regular basis, if at all. There was no unit testing, at least not on my team. Another group was using it minimally, but I don’t know if it was a core part of their practice.

Immediate Feedback

The first thing I do when I start with a new team is see how they operate. If there are things that aren’t being done well, or not at all, I make it a point to find the reasons. This helps point me in the best direction for fixing existing processes and implementing new things while creating the least friction. The reasons can vary: anything from management pressure to general uncertainty to, god forbid, complacency.

Most of the practices were ones the team had no familiarity with: developer unit testing, continuous integration, testing throughout the lifecycle, etc. But other things, specifically the documentation problems, were chalked up to a “sad truth” that we just had to accept as a cost of doing business. We’ll see about that.

The simplest practices to implement were those that I could stand up and use myself and introduce my team to later: Continuous Integration and Developer Unit Testing. I stood up a CruiseControl instance on a spare server and integrated CUnit/CppUnit with our current projects and any new projects moving forward. This took some work, especially stripping the core functionality into its own library so CppUnit could exercise it, but we got there. There was some cost involved in educating the team on how to write unit tests and to check CI often, but it was minimal.

To be clear, Continuous Integration came before I introduced unit testing to the team. I did this because I feel automated unit testing adds the most value when the developers can’t just forget about it (intentionally or not). This is particularly important when everyone is still learning. CI acts as a failsafe to make sure the unit tests pass on an independent machine. The CI server doesn’t care about your feelings. If something is broken, it will tell you with some emails and a nice red block on the build status page (and Emotional Hudson, in Hudson’s case).

Testing Throughout the Lifecycle

Having just received my CTFL from the ASTQB, the idea of testing throughout the lifecycle was fresh in my mind. It wasn’t the first time I’d heard of the concept (my Software Engineering Principles course in school touched on it briefly), but for the first time I had something actionable that I could readily apply. We were using the V-Model, so the transitions between project states were a perfect place to implement it. Using the states on the V-Model diagram (from Wikipedia), we implemented static analysis on documents after the Requirements and Architecture activities (SRS) and the Detailed Design activities (HLD), while building Test Plans, Procedures, and Cases concurrently with development efforts. The effect of this was twofold:
1. It immediately increased the quality of our systems and their documentation.
2. It got the test team closer to the development team, and got both of those groups closer to the customer.

Growing Pains
All of this change didn’t come without its problems. After a couple months of leveraging CruiseControl, some of the other groups started to take interest. They wanted in on the party! However, CruiseControl was becoming a pain to maintain. Adding new projects required modifying the XML configuration by hand and then restarting the server. And that wasn’t the worst of it: I was the only person who knew how to do this, making me the guru. Max Pool at CodeSqueeze has written about gurus before, and they’re not ideal, especially when the guru is you. Anyway, after inquiring with the development community at my firm I was introduced to Hudson, and my life has never been the same. Hudson made it dead simple for teams to add their own projects and do whatever they wanted with them, all without having to come to me for help.

Project Opacity

After introducing better engineering practices, I turned my focus to project visibility. The management team had a hard time getting a grasp on where our projects stood; that is, our project visibility was very low. We had weekly status meetings that were always to the tune of “what have you done for me this week?”. The meetings were often unhelpful to everyone and have only recently been done away with (within the last month or so), probably as a result of the increase in project visibility. At least I’d like to think so :). This led me to start rolling out Atlassian’s Jira. It allows management to break down work and see exactly what is being worked on in real-time, all without having to bother the delivery team. This didn’t come without purchasing hurdles (it cost more than $0), but we’ve been using it in production for the last two months.

Documentation Management

Our team has some pretty lofty documentation requirements levied upon us by our stakeholders, at least relative to other agile teams. They take our systems and documentation and perform further third-party evaluations to make sure we meet their standards. We are required to provide them with our ConOps, SRSs, HLDs (sometimes), test plans, test procedures, test reports, and user’s guides.

Anyone who has attempted to maintain Office documents in Subversion knows how much of a pain it can be. This got me started on the long path of trying to find a better solution. At first I tried building documentation inside Trac, our wiki at the time, but I quickly learned that it didn’t support exporting all that well. Wiki2PDF kind of worked, but broke anytime someone sneezed. That clearly wasn’t going to work, so I moved on. Then I attended a talk given by another organization on the Atlassian tool suite, where I was introduced to Confluence. I was hesitant to consider changing our wiki system at first, since Trac was an integral part of our day-to-day work, but Confluence’s built-in support for Word, PDF, and HTML export changed my mind. Those features, combined with the Universal Wiki Converter (and some select SQL update statements), made the transition a breeze, allowing me to export everything from Trac and import it into Confluence. Everyone left on Friday using Trac and began using Confluence on Monday.

Since then we have imported all our old Word documentation into Confluence and deleted it from our Subversion repositories (a happy day indeed!). We now build and maintain all our documentation for all our projects inside Confluence, where we can get it out for distribution at any time. I like to think of the knowledge inside Confluence as the Configuration Items, and the consequent document exports as “compiling” that knowledge.

Release Management

Our release management process used to be defined in a 70+ page CM Plan document that was far heavier than it needed to be. Now it’s documented on a single Confluence page that describes how to:
1. Tag a build using Hudson,
2. Export the documentation from Confluence as PDFs, and
3. Store the Hudson binaries and PDFs in version control as a release tag.

Why does it need to be any more complicated than that?

Implementing Agile

While all paths led here, it wasn’t absolutely necessary to implement agile. The reason for choosing an agile methodology, Scrum specifically, was the first bullet at the beginning of this post. Ultimately we’re responsible for delivering software systems. In the old process, 5 months into a 6 month project we had nothing to speak of, yet we would have thrown 5 months’ worth of funding at it. More specifically, our clients would have. If they have a need to use some features 5 months into the project, why shouldn’t they be able to? We may have implemented those features in month 1, but we couldn’t assure they worked, since testing wouldn’t have started for another week or two. The risk we had was that they would want to do just this. If they came to us in a situation like that, our value would likely be questioned, and in this economy who can afford that?

Enter Scrum. Now we maintain a Product Backlog with high level user stories that are prioritized at the beginning of each Sprint. We commit to a small subset of the requirements at the beginning of each sprint and (ideally) take all the stories to “done” within our sprints (typically 2-3 weeks; we’re still finding our sweet spot), performing our release processes at the end of each sprint. For us, done means:
1. Documentation has been updated (ConOps, SRS, HLD, User’s Guides, Test Plan/Procedures).
2. Code has been implemented with high (90%+) unit test coverage (where reasonable) and has been checked in for Hudson to build.
3. Test procedures that are candidates for automation have been automated in one way or another.
4. Code and documentation have been reviewed by the team and stakeholders, as appropriate. We use Crucible for code reviews.

See, documentation and Agile can coexist! You just need to include it in your definition of done and be sure your team includes that work in its estimates.

Conclusion

So to wrap things up, here are my suggestions for anyone wanting to implement agile, especially Scrum, since it doesn’t prescribe engineering practices. Many of these probably hold true for implementing change in general.
1. Implement engineering practices first. You’ll see the fastest impact here, and you’ll need these come time to implement Scrum.
2. Lead by example. Start using the new tools/practices on your own and your team will hopefully catch on.
3. Align project teams (development, QA) and stakeholders prior to implementing Scrum, or you’ll have problems.
4. If you can, ease your team into it. It’s all the better if they can arrive on their own at the same conclusions as you about why the change is needed and how your fix will make things better.
5. Don’t force change if it can be avoided. Provide a good reason for why it’s being done. My predecessor meant well, but they forced many heavyweight processes on teams that had no say or warning, leading to processes that went unused. If teams discover a process’s value on their own, or are at least kept in the loop, they’re more likely to accept it and continue using it.

Do you have any additional suggestions on rolling out change on your teams? Please comment below!