In this N-part series I’m going to chronicle the refactoring efforts of my team and me on my current project and pose some questions that fall in the realm of DDD, refactoring, testing, and so on. I don’t have a firm road map for this series, but here is a basic idea of what I’m planning to hit:

  • Part 1 – Project History & Overall Goals
  • Part 2 – Finding Focus – Refactoring, Where To Start?
  • Part 3 – The Big Picture – Utilizing Visualization Tools to Target Refactoring and Find Patterns
  • Part 4 – A Flood of Information – What is NDepend Trying to Tell Me?

Lately there have been a lot of good things happening on my project. TeamCity has been brought in for continuous integration, NUnit has been adopted for automated regression testing (paired with WatiN for web testing), Sandcastle now runs at build time (nightly) to keep the project documentation up to date, and I’ve been trying to sell NDepend as a refactoring/inspection tool as I learn how to use it and, more importantly, understand what it’s trying to tell me.

Background

Before you can really understand where I’m trying to go and what it’s going to take to get there, you need some background on where this application came from and what its current state is. The project was initiated more than a decade ago by a group of people who knew how to program but weren’t really developers. It was originally written in ColdFusion and later ported to VB.NET when the need for a newer technology presented itself. That port was a great first step, and that kind of move is necessary for any long-term project to survive and continue evolving.

The problem was that while the application now existed in an object-oriented language, the scripting mentality came along for the ride. A solid object model was never developed at the time of the port, and over time any architectural changes that were meant to be enforced fell by the wayside whenever a release drew near. Since software is simply a means to an end within our organization (software is not our core business), best practices are sometimes not in place or, if they are, they’re not the highest priority.

Flash forward to today and the team is populated with people who know software. Even the PM has a background in software, specifically with this project. In the short time I’ve been here, I’ve been fortunate to be placed with a group of open-minded people who are willing to see both sides of an argument and accept (and follow) a path to positive change in both process and design. I’ve heard horror stories about people being set in their ways and resisting change, even when the benefits are obvious.

The story I’ve heard over and over again is “we’re aware of the design problems with that [class], [assembly], [subsystem], etc., but we just haven’t found the time to get in and change it.” I imagine this is a problem on the majority of projects that ship a product in iterations (and probably some that don’t). You promise X at the beginning of an iteration, implement it during the iteration, and then ship at the end with little to no slack time.

This is all well and good if your definition of ‘Done’ is implemented, tested, and refactored, because your estimates will have taken those activities into account. If your estimates don’t include testing and refactoring, however, you will more than likely exhaust them before the work is truly done. This is the first thing that has to change when introducing these practices to a group that has never worked with them before: their definition of ‘Done’ needs to change, otherwise we’ll never have the time to really test and refactor.

OK, I’m going to stop right there. In all fairness, we did (and do) have QA on our team, and our iterations end with a two-week system-testing buffer, but that testing effort was purely at the system acceptance level. There was no developer testing going on at all, so even when a problem was discovered, assuming it wasn’t obvious, it would take a while to track down. You could even hit the worst case, running in circles because the error you found is the result of another, more serious error. Now, however, we have regression tests for most of the new functionality and for the pages we’ve managed to refactor, but there’s still a long way to go.

Wading through the source code for the system, I come across things that can only be described as code stenches. Yes, I’m aware of the term code smell, and that’s where I’m coming from; these really do bother me that much. Pages are often entities all their own, with some even doing data access directly. To make matters worse, there are many places where business logic and requirements are implemented implicitly, without comments or any mention in the XML documentation, which is sparse to begin with. Data access is tied heavily to the UI: many of the DataGrids are bound to a DataSet that is the direct result of a database call.

So where to go from here?

We have started refactoring our pages to MVP (Passive View) with IoC on existing forms to increase testability and reusability and to disconnect the UI from the data and business layers. Don’t yell at me; I know the goal of IoC is not testability, but it was a solid means to an end. This will likely remain the bulk of the refactoring throughout this iteration, but we have already added a DomainModel project where all solid domain objects should eventually end up. The appropriate services then need to be refactored so their behavior is encapsulated within these domain objects.
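To make the Passive View shape concrete, here’s a minimal sketch of what we’re aiming for. The real project is VB.NET/ASP.NET, but the pattern is language-agnostic, so this illustration is in Java, and every name in it (CustomerView, CustomerService, CustomerListPresenter) is hypothetical, not from our codebase:

```java
// Hypothetical Passive View sketch: the page implements a "dumb" view
// interface, and the presenter holds all the logic, so the presenter can
// be unit tested with a fake view and a fake service (no web server, no
// database).

import java.util.List;

interface CustomerView {
    // The view only exposes setters; it makes no decisions of its own.
    void setCustomerNames(List<String> names);
    void showError(String message);
}

interface CustomerService {
    // Stands in for whatever data/business layer the page used to call.
    List<String> findActiveCustomerNames();
}

class CustomerListPresenter {
    private final CustomerView view;
    private final CustomerService service;

    // Constructor injection (the IoC part): the presenter never news up
    // its collaborators, so tests can hand it fakes.
    CustomerListPresenter(CustomerView view, CustomerService service) {
        this.view = view;
        this.service = service;
    }

    void onLoad() {
        try {
            // All flow control lives here, not in the page's code-behind.
            view.setCustomerNames(service.findActiveCustomerNames());
        } catch (RuntimeException e) {
            view.showError("Could not load customers.");
        }
    }
}
```

The payoff is that a regression test can drive `onLoad()` with a hand-rolled fake view that simply records what the presenter pushed into it, which is exactly the kind of developer-level testing the old page-does-everything design made impossible.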