Upon Arrival

When I arrived on my contract in September/October 2008, things were very different from how they are today.

  • Projects took on average 3-6 months to complete, with few checkpoints in between. Stakeholders couldn’t get any value out of the systems until the end of the project even if they wanted to.
  • Projects were developed by a single person with very little, if any, collaboration.
  • The test group was largely disconnected from the development team and wasn’t involved with a project until it was thrown over the fence for testing. The testers also had little to no technical background. That isn’t necessarily a bad thing, but in our particular group the need to understand the domain and the systems is important.
  • Requirements (and the rest of the documentation) were written by the developers, usually after the system was implemented. The only reason it was documented at all was because it was required by the client.
  • While we were co-located with our customer, they were rarely involved for validation purposes; contact with users was difficult, and I didn’t see the team press for it.
  • Developer verification wasn’t being done on a regular basis, if at all. There was no unit testing, at least not on my team; another group was using it minimally, but I don’t know whether it was a core part of their practice.
Immediate Feedback

The first thing I do when I start with a new team is watch how they operate. If there are things that aren’t being done well, or aren’t being done at all, I make it a point to find out why. This points me in the best direction for fixing existing processes and introducing new ones while creating the least friction. The reasons can vary: anything from management pressure to general uncertainty to, god forbid, complacency.

The team had no familiarity with most of these practices: developer unit testing, continuous integration, testing throughout the lifecycle, and so on. Other problems, specifically around documentation, were chalked up to a “sad truth” we just had to accept as a cost of doing business. We’ll see about that.

The simplest practices to implement were those I could stand up and use myself, then introduce to the team later: Continuous Integration and developer unit testing. I stood up a CruiseControl instance on a spare server and integrated CUnit/CppUnit with our existing projects and any new ones moving forward. This took some work, especially stripping the core functionality into its own library so CppUnit could link against it, but we got there. There was some cost involved in teaching the team how to write unit tests and to check CI often, but it was minimal.

To be clear, Continuous Integration came before I introduced unit testing to the team. I did this because I feel automated unit testing adds the most value when developers can’t just forget about it (intentionally or not). This is particularly important while everyone is still learning. CI acts as a failsafe, making sure the unit tests pass on an independent machine. The CI server doesn’t care about your feelings: if something is broken, it will tell you with some emails and a nice red block on the build status page (and Emotional Hudson, in Hudson’s case).

Testing Throughout the Lifecycle

Having just received my CTFL from the ASTQB, I had the idea of testing throughout the lifecycle fresh in my mind. It wasn’t the first time I’d heard of the concept (my Software Engineering Principles course in school touched on it briefly), but for the first time I had something actionable I could readily apply. We were using the V-Model, so the transitions between project stages were a perfect place to implement it. Using the stages on the V-Model diagram below (from Wikipedia), we implemented static analysis on documents after the Requirements and Architecture activities (SRS) and the Detailed Design activities (HLD), while building test plans, procedures, and cases concurrently with development efforts. The effect of this was twofold:
  1. It immediately increased the quality of our systems and their documentation.
  2. It got the test team closer to the development team and got both those groups closer to the customer.


Growing Pains


All of this change didn’t come without problems. After a couple of months of leveraging CruiseControl, some of the other groups started to take interest. They wanted in on the party! However, CruiseControl was becoming a pain to maintain: adding a new project required modifying the XML configuration by hand and then restarting the server. And that wasn’t the worst of it. I was the only person who knew how to do this, making me the guru. Max Pool at CodeSqueeze has written about gurus before, and they’re not ideal, especially when the guru is you. Anyway, after inquiring with the development community at my firm, I was introduced to Hudson, and my life has never been the same. Hudson made it dead simple for teams to add their own projects and do whatever they wanted with them, all without having to come to me for help.
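For context, a CruiseControl project entry looked roughly like the fragment below. The element names come from CruiseControl’s config schema, but the project name, paths, and the Ant build step are illustrative, not ours. Every new project meant hand-adding another `<project>` block like this to config.xml and bouncing the server:

```xml
<cruisecontrol>
  <!-- One of these blocks had to be added by hand for every new project -->
  <project name="example-project">
    <!-- Poll Subversion for new commits -->
    <modificationset quietperiod="60">
      <svn localWorkingCopy="checkout/example-project"/>
    </modificationset>
    <!-- Rebuild every 5 minutes when modifications are found -->
    <schedule interval="300">
      <ant buildfile="checkout/example-project/build.xml" target="all"/>
    </schedule>
    <publishers>
      <!-- Nag the team when the build breaks -->
      <email mailhost="mail.example.com" returnaddress="ci@example.com"/>
    </publishers>
  </project>
</cruisecontrol>
```

Hudson replaced all of this with a web form any team could fill out themselves.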

Project Opacity

After introducing better engineering practices, I turned my focus to project visibility. The management team had a hard time getting a grasp on where our projects were. That is, our project visibility was very low. We had weekly status meetings that were always to the tune of “what have you done for me this week?”. The meetings were often unhelpful to everyone and have only recently been done away with (within the last month or so), probably as a result of the increase in project visibility. At least I’d like to think so :). This led me to start rolling out Atlassian’s Jira. It allows management to break down work and see exactly what is being worked on in real time, all without having to bother the delivery team. This didn’t come without purchasing hurdles (it cost more than $0), but we’ve been using it in production for the last two months.

Documentation Management

Our team has some pretty lofty documentation requirements levied upon us by our stakeholders, at least relative to other agile teams. They take our systems and documentation and perform further third-party evaluations to make sure we meet their standards. We are required to provide them with our ConOps, SRSs, HLDs (sometimes), test plans, test procedures, test reports, and user’s guides.

Anyone who has attempted to maintain Office documents in Subversion knows how much of a pain it can be. This got me started on the long path of trying to find a better solution. At first, I tried building documentation inside Trac, our wiki at the time, but I quickly learned that it didn’t support exporting all that well. Wiki2PDF kind of worked, but broke any time someone sneezed. That clearly wasn’t going to work, so I moved on. Then I attended a talk given by another organization on the Atlassian tool suite, where I was introduced to Confluence. I was hesitant to consider changing our wiki system at first, since Trac was an integral part of our day-to-day work, but Confluence’s built-in support for Word, PDF, and HTML export changed my mind. Those built-in features, combined with the Universal Wiki Converter (and some select SQL update statements), made the transition a breeze, allowing me to export everything from Trac and import it into Confluence. Everyone left on Friday using Trac and began using Confluence on Monday.

Since then we have imported all of our old Word documents into Confluence and deleted them from our Subversion repositories (a happy day indeed!). We now build and maintain the documentation for all our projects inside Confluence, where we can get it out for distribution at any time. I like to think of the knowledge inside Confluence as the Configuration Items, and the subsequent document exports as “compiling” that knowledge.

Release Management

Our release management process used to be defined in a heavyweight (70+ page) CM Plan document that was far heavier than it needed to be. Now it’s documented on a single Confluence page that describes how to:
  1. Tag a build using Hudson
  2. Export the documentation from Confluence as PDFs and
  3. Store the Hudson binaries and PDFs in version control as a release tag.
Why does it need to be any more complicated than that?
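Those three steps boil down to something like the command sketch below. The repository URL and local directory names are hypothetical, and the Hudson tagging and Confluence PDF exports happen in each tool’s UI, so only the version-control step appears as actual commands:

```shell
# Step 1 (Hudson UI): tag the build and download the archived binaries
# Step 2 (Confluence UI): export the project's documentation space as PDFs
# Step 3: store both under a release tag in Subversion
svn mkdir  -m "Create 1.4 release tag"       https://svn.example.com/repo/tags/release-1.4
svn import -m "Release 1.4 binaries"      binaries/ https://svn.example.com/repo/tags/release-1.4/bin
svn import -m "Release 1.4 documentation" exports/  https://svn.example.com/repo/tags/release-1.4/docs
```

One page, three steps, no 70-page CM Plan required.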

Implementing Agile

While all paths led here, implementing agile wasn’t absolutely necessary. The reason for choosing an agile methodology, Scrum specifically, was bullet #1 at the beginning of this post. Ultimately we’re responsible for delivering software systems. In the old process, five months into a six-month project we had nothing to speak of, yet five months’ worth of funding had been thrown at it. More specifically, our clients had thrown it. If they have a need to use some features five months into the project, why shouldn’t they be able to? We may have implemented those features in month one, but we couldn’t assure they worked, since testing wouldn’t start for another week or two. The risk was that they would want to do exactly this, and if they came to us in such a situation, our value would likely be questioned. In this economy, who can afford that?

Enter Scrum. We now maintain a product backlog of high-level user stories that are prioritized at the beginning of each sprint. We commit to a small subset of the requirements at the start of each sprint, (ideally) take all the stories to “done” within the sprint (typically 2-3 weeks; we’re still finding our sweet spot), and perform our release process at the end of each sprint. For us, “done” means:
  1. Documentation has been updated (ConOps, SRS, HLD, User’s Guides, Test Plan/Procedures)
  2. Code has been implemented with high (90%+) unit test code coverage (where reasonable) and has been checked in for Hudson to build.
  3. Test procedures that are candidates for automation have been automated in one way or another.
  4. Code and documentation have been reviewed by the team and stakeholders, as appropriate. We use Crucible for code reviews.
See, documentation and Agile can coexist! You just need to include it in your definition of done and be sure your team includes that work in its estimates.

Conclusion

So, to wrap things up, here are my suggestions for anyone wanting to implement agile, especially Scrum, since it doesn’t prescribe engineering practices. Many of these probably hold true for implementing change in general.
  1. Implement engineering practices first. You’ll see the fastest impact there, and you’ll need them when it comes time to implement Scrum.
  2. Lead by example. Start using the new tools/practices on your own and your team will hopefully catch on.
  3. Align project teams (development, QA) and stakeholders prior to implementing Scrum or you’ll have some problems.
  4. If you can, ease your team into it. It’s all the better if they can arrive at the same conclusions as you on their own as to why the change is needed and how your fix will make things better.
  5. Don’t force change if it can be avoided; provide a good reason for why it’s being done. My predecessor meant well, but they forced many heavyweight processes on teams that had no say or warning, leading to processes that went unused. If teams discover a process’s value on their own, or are at least kept in the loop, they’re more likely to accept it and continue using it.
Do you have any additional suggestions on rolling out change on your teams? Please comment below!