Life, Teams, and Software Engineering


CasperJS Tests Hang Against .NET Application

If your CasperJS tests hang against your .NET application after a page navigation, you probably need to turn off Visual Studio’s “Browser Link” feature. This is the helpful tool that automagically updates your page after you change your CSS, but unfortunately it causes Casper to hang. You can turn it off from the Visual Studio toolbar (usually next to the “Start in Browser” button).

Disable Browser Link

With this disabled your CasperJS tests will be able to finish.
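If you’d rather not rely on the toolbar toggle (or want Browser Link off for a single application only), it can also be disabled in that application’s Web.config. A minimal sketch of the appSettings switch is below; verify it against your Visual Studio version before depending on it:

```xml
<configuration>
  <appSettings>
    <!-- Turns off Visual Studio's Browser Link for this application -->
    <add key="vs:EnableBrowserLink" value="false" />
  </appSettings>
</configuration>
```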

Quality is Not an Option

While I’m hardly the first to talk about the “Tradable Quality Hypothesis,” hopefully I can reinforce in some of my readers (all 10 of you :) ) that quality is not an option.  You cannot choose to lower quality (or forgo quality-producing and quality-assurance practices) in an effort to get more features “done” or to deliver work more quickly.  At least not for any realistic amount of time.  If you do, you might as well plan the rewrite into the schedule now.

So What if I Do?

Immediately, you will probably feel little consequence from omitting a few test environments or writing those features without any unit tests.  But I promise it will catch up with you.  Failing to test (at any level), continuing to use outdated tools (without a transition plan), and knowingly adding functionality of little to no explicit value all contribute to a decaying codebase where bug counts and the cost of change increase while overall velocity (i.e. the rate of value addition) decreases.  If you skip testing it now, guess what?  You’ll end up testing it a lot more when the bug reports start rolling in.

Yes, you might get to be a hero; and everyone loves a hero, right?  Maybe.  But when a situation arises where heroics (e.g. long nights and weekends) are necessary just to hit normal commitments, teams should not celebrate.  That hero has just set the precedent for what your clients and customers will expect of the team from this point forward.  Resist the urge to be a hero and be a member of the team.  Pull each other up as best you can to bring the work in at the end, but be careful not to introduce large variances that can invalidate your velocity for that iteration.

If your team is delivering less and less because you’re trying to catch up with bug reports you may never be able to make up that time.  This can not only harm your organization’s reputation, but your personal or professional reputation as well.  Considering that, is it really worth the risk?

How to avoid the hole and dig out

There are many ways to avoid getting into the position of deciding to trade off quality for short-term schedule benefits.

One word: discipline.  Be vigilant that everything you contribute is tested in multiple ways (unit, integration, UI, etc.), have as many people as possible review your work and provide input, and make sure those things keep happening.  Yes, it’s hard, and it can look to the uninitiated as if you’re moving more slowly, but when that thing you just wrote inevitably changes next week you’ll appreciate having put forth the extra effort.

Next is education.  Everyone should understand (though not necessarily intimately) what goes into delivering functionality.  Establish your definition of done and make it well known.  Hang it on the walls.  Recite it at the beginning of each daily standup.  I don’t care how people remember it, just that they do.  If something is untested, it’s not really done yet.

If you’ve found yourself in this unenviable position, the first step is to admit that you have a problem.  Seriously.  We’re all proud of our solutions, but sometimes they need to be cut down and replanted.  There’s no shame in it, I promise.  There’s only shame in voluntary insanity (doing the same thing over and over again and expecting different results).

Next, come up with a plan for tackling what is most commonly the real problem: technical debt.  Technical debt is any sub-par component of your entire solution space.  This could be anything from an inflexible test harness, to untestable or scary-to-change code, to reliance on outdated or unsupported software packages.  If it causes you pain or disappointment on the development side, it likely falls in here.  Environmental issues, or issues external to the team, should be raised to the team’s manager (e.g. Scrum Master) to deal with outside the team.  You need to identify and manage this technical debt before there is any hope of a sustainable pace of quality and value.

Don’t try to make it perfect, just make it better.  Don’t feel bad about it: what worked for a team of 5 and 100,000 lines of code may simply not scale to a larger team or codebase.  Our products grow, and so our development support infrastructure must grow around them.

Remember, we’re the professionals

It may give the higher-ups warm fuzzies to hear that it can all fit neatly into a little box, but no one will like the feeling later.  We do them no favors by making commitments we know we can never keep within any reasonable standard of long-term success.  Clients can and should set constraints on delivery and tell us what they want delivered, but it’s our job to define how best to get there.  I don’t tell other professionals how to do their jobs when I pay for their services, and I would hope they would advise me on the consequences of taking shortcuts rather than quietly taking them.  That is, after all, why I’m paying them.  They know better than I do.

If there are doubts about making commitments, or about quality, don’t be afraid to tell your clients (or your teammates) the truth, or at least what you observe.  That’s what you’re paid for; speak up.  Ultimately, even if the clients don’t say so, I’m sure that if you asked whether you should test the feature you just gave them, they’d be scared you even asked the question.  Quality is always a requirement, even if our clients don’t list it as a deliverable.

Autotest for Compiled Languages (C#, C++) using Watchr

When I was learning Rails I set up Autotest on Ubuntu with Growl notifications, which I thought was a pretty slick idea. On Ruby this whole technique is super easy and efficient because Ruby is an interpreted language; there’s no compile time to slow you down, and no other files to pollute your directory tree. Compiled languages don’t have that advantage, but I think we deserve some continuous feedback too. Here I’ll describe how to configure Watchr, a generic Autotest utility, to run compiled tests whenever a source file in the path is updated. This tutorial will use a C# example, but it’s trivial to have it trigger on different file types.

Getting Started

First, we’ll need to install Ruby and Watchr.  Because I’m using Windows I just downloaded RubyInstaller.  Make sure you put the Ruby/bin directory in your PATH.

Next, download Watchr from Github, extract the archive and navigate to that directory.  Or you can just download the gem directly, but some people might want to run the tests locally first. The following command will install the gem from the local directory:

C:\mynyml-watchr-17fa9bf\>gem install watchr

Configuring Watchr

Now that we have all the dependencies installed, we need to configure Watchr. This process is easiest if you already have a single point of entry for your continuous build process, but if you don’t, it’s not that bad, and you’ll probably want one anyway. At the same level as the directory (or directories) containing your source code, create a text file. I usually call this autotest.watchr, but you could call it autotest.unit or autotest.integration if you’re into that sort of thing. For now, just put the following line in it:

  watch('./.*/(.*)\.cs$') { system "cd build && buildAndRunTests.bat && cd ..\\" }


Yes, it’s that easy. This tells Watchr to monitor any files that match the regular expression inside the watch() call (in this case a recursive directory search for .cs files) and then execute the command on the right. I also have it configured to return to the same directory when it’s finished, but I don’t know if that’s actually necessary. The watch() pattern is what you would modify for different environments. For example, you could use watch('./.*/(.*)\.(h|hpp|c|cpp)$') for a mixed C/C++ system, or watch('./.*/(.*)\.(cs|vb|cpp|h)$') for a .NET project with components built in different languages. (Note the grouped alternation: square brackets would match single characters, not whole extensions.) An important thing to note is the $ at the end of the regex. Because a lot of intermediary files will likely be generated during the build process, we don’t want a file that happens to match this pattern at build time to trigger an infinite loop of build & test (as happened to me). The heavy lifting is done here, but the stuff specific to your project happens in build/buildAndRunTests.bat. Let’s take a look at that:

  pushd ..\
  echo Building tests
  "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.com" Tests.Unit\Tests.Unit.csproj /rebuild Release
  popd
  pushd ..\Tests.Unit\bin\Release
  echo Running tests through nunit-console
  nunit-console.exe Tests.Unit.dll /run=Tests.Unit
  popd


You’ll obviously want to customize this to the specifics of your project, but right now it’s hard-coded to call Visual Studio 2008’s devenv.com (on a 64-bit OS) and build a project called Tests.Unit. For brevity it also assumes that nunit-console.exe is available on the PATH. Not terribly interesting, but that’s the rest of the work.

Now to have all the magic happen. Run the following command in a new console window from your project directory:

C:\Projects\MyProject>watchr autotest.watchr

That’s it! Watchr is now monitoring for changes to files that match your pattern. Simply modify any file matching the pattern and watch the whole process set off. Once it finishes, you can hopefully see the results and it will wait for the next change.
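One last tip: a bad watch() pattern either misses files or, as noted above, can trigger an endless build-and-test loop, so it’s worth sanity-checking your regex before handing it to Watchr. Here’s a quick check in plain Ruby (the pattern constants and the triggers? helper are my own illustrative names, not part of Watchr):

```ruby
# Sanity-check watch() regexes in plain Ruby before giving them to Watchr.
# These mirror the patterns discussed above.
CS_PATTERN  = %r{\./.*/(.*)\.cs$}
CPP_PATTERN = %r{\./.*/(.*)\.(h|hpp|c|cpp)$}

# Returns true if Watchr would fire its block for this path.
def triggers?(pattern, path)
  !(path =~ pattern).nil?
end

puts triggers?(CS_PATTERN,  './src/Parser.cs')      # => true  (source file fires)
puts triggers?(CS_PATTERN,  './obj/Parser.cs.bak')  # => false (the $ anchor ignores build artifacts)
puts triggers?(CPP_PATTERN, './lib/vector.hpp')     # => true  (grouped alternation matches whole extensions)
```

If a generated file slips past the pattern, you’ll find out here instead of in a runaway build loop.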

Now there’s one less thing you have to do during your heavy refactoring sessions, or just with day-to-day development.

Giving Back to the Professional Community

Inspiration


My brain has had a month or so to think about Jean Tabaka’s article, Countdown to Agility: 10 Characteristics of an Agile Organization, in the November/December 2009 issue of Better Software Magazine (I only read it last month).  This, combined with my recent exposure to talk of Coding Dojos, Code Retreats, and Weekend Testing in the TwitterVerse, has inspired me to get my thoughts connecting the two down in writing.  Specifically, I’d like to talk about #7 in the article, “Contributing to the Community and Maintaining a Profitable Company”.

A recurring theme in the article is the marriage of seemingly competing interests: work/life balance and constant delivery, servant and leader, sustainable and successful; the list goes on.  It’s a good article; give it a read and see how your organization stacks up.  At first glance, giving back to the community seems like it will cost you while doing nothing more than generating good PR.  The article describes three communities to contribute to: the local community, the global community, and the professional community.

When you hear “giving back to the community,” you usually think of community service.  Usually this happens locally, and a lot of companies do a fantastic job of it, so I won’t harp on it.  I personally wouldn’t know where to start giving back to the global community.  Perhaps contributing to relief efforts in areas like Haiti, Chile, and Japan, given the recent earthquakes?  But something I don’t hear a lot about is how organizations give back to the professional community.


Types of Events

I’m going to focus mainly on three types of events here: Code Retreats, Coding Dojos, and Weekend Testing, but I feel an organization should treat anything that carries the spirit and intent of these events the same way.  I describe these as:

A deliberate, purposeful (i.e. goal-oriented) gathering of like-minded professionals coming together to better themselves, their craft, and each other.  

I’m going to make a few observations about these events; interpret them as you will.  All data is as of this writing (March 20, 2010).  I’m going to put more focus on the US groups because I’m biased:

Coding Dojos – Map available here

  • Lots of groups in Europe and South America
  • 2 groups in Canada
  • Only 5 groups in the US 
    • Redmond (only open to Microsoft employees…)
    • University of Houston – appears to have gone dormant as of 2007
    • Oklahoma City
    • Pittsburgh Dojo – Last met Sept 2009
    • Albany, New York

Just looking at the map, the US is pretty far behind with this.

Code Retreats

  • Boulder, Colorado
  • Pittsburgh, Pennsylvania
  • Philadelphia, Pennsylvania
  • Detroit, Michigan
  • Floyd, Virginia
These are just the ones listed on their homepage as upcoming.  Notably, Pittsburgh is on both lists, and notably this list is all US.  That’s pretty great, and hopefully the list keeps growing.

Weekend Testing

Four chapters since inception in August 2009:
  • Chennai, India 
  • Hyderabad, India
  • Europe
  • Mumbai, India
The US is completely missing from this list.  I know we have talented testers who would love an opportunity like this to share their knowledge and expand their horizons.  I don’t know what it would take to get chapters started throughout the US, but given the structure the group has taken on, local or regional technology councils seem like a good place to start.

As an Officer, why should I care?

By “officer” I mean someone holding a position of power and decision-making in an organization.  So why should you care?  “Someone is bound to start one of these up in my area sometime; why should it be us?”  Here are some big reasons that come to mind:

  • Your employees will get better (on their own time, no less!)
  • People attending from outside see that you are supportive of your employees and their goals
  • Networks and ideas will develop between the people attending.  And we all know what comes from ideas.
  • Attendees will be more likely to consider your organization when they look for their next opportunity.
  • And, perhaps more altruistically, you’ll be contributing to the advancement of the craft.  Something everyone can always benefit from.

So instead of asking “why should it be us?”, ask “why shouldn’t it be us?”.  It just means that when the next big advancement reveals itself, you and yours will be ahead of the curve.

Suggestions for Implementation

So hopefully I’ve piqued your interest.  Now you ask, “how can I make it easier for everyone to get off the blocks and get one of these started?”  I’m glad you asked.  Here are some suggestions:

Do NOT

  • Force your employees to host one or more of these events.

It may seem like a great recruiting and marketing opportunity, and it can be if approached correctly, but don’t make it your primary goal.  Pursue it with this as your end goal and it’s doomed from the onset.  These movements should be grassroots, initiated by people with a genuine desire to implement continuous learning and just get better.  You get better with practice, which is what these groups provide: an environment for software professionals to practice.

Do

  • Encourage people who wish to host events like these.
  • Remove barriers.  Make it easy for people to reserve spaces and market the events in your organization’s name.  Some lunch money wouldn’t hurt either :).
  • Provide support and resources.

Do these things and you’re sure to see the long-term benefits I’ve described above.

What about your company?  Does it support efforts like this?  Do you have any additional suggestions for employees wanting to start one of these up?

Software Testing: A Tester’s Role, Part 2

During Implementation
A tester’s role during implementation depends on how close the tester wants to be to the actual implementation of the system under test (SUT). If you are meant to be an independent tester, then treat the code as your Kryptonite. Don’t go near it! The less you know about how it’s implemented, the better. Ideally you should know nothing, but depending upon your operating environment that just might not be possible. At the very least, you had better be able to forget about the how when you go to verify that the system does what it’s supposed to.

During the implementation phase of the project, the test team can take a built version of the system and exercise it inside its intended operating conditions. Testing should be performed from the perspective of all human actors in the defined use cases. This protects against the team concentrating too hard on one actor’s perspective. These “stagings” can be performed weekly, bi-weekly, monthly, or even daily if they so choose. How often depends on how quickly we want/need to receive feedback.

Hopefully the developers are performing test-first development (for all our sakes), but if they aren’t, the test team will need to be extra vigilant during these stagings because it is effectively performing the unit and unit-integration testing.

An added benefit to staging is that by the time the system tests roll around the test team should be well versed in the system. They should know what it’s intended to do, how to make it do those things, and what caused the most frequent failures in the past.

A note on communication: test plans and procedures should be “public domain” on your project. Everyone should be able to access them at any time, especially the development team. If the developers know how you intend to exercise the system, they can make sure the system works in those cases. Remember that the point isn’t to catch the developers’ mistakes in order to humiliate them. There is no line drawn in the sand; we’re on the same team! The point is to catch problems across the board to protect the project and the people working on it.

The sooner we catch these problems, the easier they are to fix.
The easier they are to fix, the cheaper they are.
The cheaper they are, the less overhead we incur.
The less overhead we incur, the more work we can perform with that excess.

Or we could just take that money to the bank. Either way, less money spent can be allocated elsewhere.


Copyright © 2017 Life, Teams, and Software Engineering
