How would you like your code? Refining our Definition of Done

Agile has a notion of defining “done.” This might sound funny – how could we not know when something is done? But, as with any creative endeavor, there is “done” and then there’s Done.

Just try it – ask 5 different people on your project how they determine when a development task is “done” and, as the saying goes, you’ll get a lot of different answers. But, while this might be an interesting exercise, if you really want to understand the problem, don’t just ask them what is meant by done. Instead, look and see what they’re actually passing off as done.

On my current project, it seems pretty clear that the management – if not through their words, then through their actions – has defined “done” to mean that code was checked in and a little checkmark was placed in the “Done” column for that task.

This makes all of the management’s schedules and reports up to their management look nice and tidy. Give yourselves a pat on the back, “well done, Bob!”

On the other hand, this makes the code look like a bloody mass of … well, you get the idea.

In management’s defense, the managers aren’t developers and even if they were, there’s way too much code for them to dig through to determine if it’s really production ready or not. Why shouldn’t managers be able to rely on their developers’ word for when something is done?

In the developers’ defense, they are working from outlandishly unrealistic schedules, and whenever anyone tries to explain that more time is needed, they’re rewarded with a management gem such as “well, it’s just got to be done by that date because that’s when we go live.”

So the developers, God bless ’em, tried their best to meet those deadlines – because why shouldn’t developers be able to follow their managers’ direction? At the very first sign that the code seemed to be working, they checked that puppy in and moved on to the next task.

The results are reminiscent of a million monkeys typing. You know the type – amorphous blobs of code that aren’t even stable enough to get us to an initial release of the product. One step forward, two steps back.

Let us tell you where to stick it
And I just keep thinking back to Clean Code’s preface: if we’re to call ourselves professionals, we have a responsibility to act as such. And maybe that means we really do need a 5th element for our Agile Manifesto:
We value Craftsmanship over Crap

meaning that we need to value the craft of building software to last over the mad rush of delivering the first solution we think up that appears to work. Sure, managers and customers are always going to try to squeeze the last dime of productivity out of us – that’s just human nature. But what it really comes down to is that we, as developers, are the ones that ultimately say when our code is done. We are the experts that management ultimately relies on to tell them what is needed. And every time we agree to deadlines that don’t allow us to do our jobs, every time we proclaim our work as “done” using crap rather than craft as our measuring stick, we’re failing to live up to our responsibility.

What do you think? Who defines done on your project? How do you think it should get defined?


19 responses to “How would you like your code? Refining our Definition of Done”

  1. When I posted about Craftsmanship Over Crap on the wiki for the company I work for, there was a little community that felt pretty offended by the idea.

    And while I don’t want to get into a religious argument about the definition of agile, I have trouble believing that a team operating under unrealistic deadlines is operating under an agile umbrella — maybe there are agile practices, but if the team isn’t allowed to make its own estimates and stick to them, I don’t see how that’s honoring people over process.

    Anywho, on the “done done” subject: I believe it isn’t done until it’s deployed, but I’m willing to fall back to at least that it isn’t done until it has been demonstrated to somebody. I think this will also depend on how you decide to break up your tasks. Is verification to you a separate task or another state?

    I had always thought of it as another task, but Mike Cohn adds it as a column to his taskboard.

  2. Another thought: is there a difference between a story being done done and a task being done done?

  3. Abby Fichtner

    I think your point about agile’s value people over process meaning that managers need to honor developer’s estimates is a very good one, and I actually have to say that I think management has started coming around to this realization. Cuz, obviously what we got now ain’t workin’.

    On verification, I’m thinking I like the separate state better than a separate task. If it’s just another task on a story, that implies that only SOME stories need verification, not all. If it’s a separate state for stories, it sends the message that we need to verify all of our stories. And I think that’s a Good Thing.

  4. Abby Fichtner

    is there a difference between a story being done done and a task being done done?

    Well, obviously a task can be done before the story is done, so you must mean something more specific than that… whatcha thinkin’?

  5. So, a story being done means that it has been demonstrated and validated/verified by somebody who has a stake.

    But are the criteria for a task making it to “done” different?

    This relates to the verification thing, too. I never would have thought to not verify a story, but what about each task?

    How granular are your tasks? I imagine they’re more than just ‘do story X’.

  6. Anonymous

    I’ve often found that “done” means “when all the checks have cleared”.

    As a developer gone manager, I can say that it’s critical that a manager be able to rely on their developers’ claim of code being done. That being said, I’ve always taken the time to do at least a drive-by level of functional testing before I put my name on it. Furthermore, this is one of the main reasons I’ve always advocated for developers not being their own testers, a situation which is, sadly, often the case.

    Of course, I also don’t let my developers take the fall for time estimates being off. It’s a manager’s responsibility to sign off on the time estimates they receive. If a manager of a development project doesn’t (a) know to pad the time estimates and (b) know enough about development to detect when said estimates are off, then they shouldn’t be managing development projects in the first place.

  7. Abby Fichtner

    management wins the argument, and loses the war.

    hahaha, Randall – I think that pretty much sums it up perfectly!

  8. Hear hear! Anonymous made some good points.

    Also being a techie gone management, it’s a careful balancing act to keep both sides from attacking each other. (Management thinking developers are sandbagging on time and developers thinking management is composed of morons).

    Different groups have different ideas of success criteria. Unfortunately it usually goes according to the Golden Rule: ‘He who has the gold, makes the rules’. So a lot of times, management wins the argument, and loses the war.

  9. joshilewis

    I believe this is the essence of TDD and behaviour driven development (BDD). It gives you a concrete, testable version of 'when is it done'. In my opinion, 'done' means tested by QA, demonstrated to business stakeholder, merged into 'trunk' in your version control system, and ready to be deployed (ideally automatically, through your continuous integration system).

    I also agree about hard-and-fast deadlines having a detrimental effect on quality.

    I've discussed some of these ideas in these posts of mine:
    TDD – Part 1
    TDD is a pull system
    Pull Promotes Quality
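
    To make the TDD point above concrete, here is a minimal sketch of “done” expressed as an executable test. The `parse_price` function and its tests are purely illustrative – they aren’t from the post – but they show the idea: the task isn’t done until these assertions pass.

```python
import unittest

# Hypothetical example: the "definition of done" for a small task,
# written as tests that must pass before the task moves to "done."

def parse_price(text):
    """Parse a price string like '$1,234.56' into a float."""
    return float(text.replace("$", "").replace(",", ""))

class TestParsePrice(unittest.TestCase):
    def test_parses_dollars_and_cents(self):
        self.assertEqual(parse_price("$1,234.56"), 1234.56)

    def test_parses_plain_number(self):
        self.assertEqual(parse_price("19.99"), 19.99)

if __name__ == "__main__":
    unittest.main()
```

    Under this view, “checked in and working on my machine” isn’t the bar; the passing test suite is, and it can be verified by anyone, including a continuous integration system.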
