Originally published on B2T Training BA Blog.
One of my favorite aspects of adopting an agile mindset is the tendency to think about whether what I am doing actually adds value to the organization. With this, though, comes the knotty problem that value can often be hard to define clearly. Because I generally find myself working on internal IT projects, I’ve resorted to measuring value based on the objectives the project is seeking to accomplish. However, as I develop a better understanding of the agile mindset, an interesting thing happens: I do not always end up using value per se as the deciding factor of whether or not to do something. Sometimes I do things because they will help the team learn.
We’d like to think that we can foresee everything clearly up front when working on projects, especially when it comes to whether or not we fully understand the need we’re trying to satisfy and the solutions that will satisfy that need. The unfortunate thing is, we don’t always bother to figure out the real need (or the real problem) and we’re not always sure about the right solution. This is where learning comes in handy.
When working on a new initiative, it’s common to think we’ve uncovered the problem we’re trying to solve, but we’re not always sure what the right solution is. When we find ourselves in this situation, the best thing to do is note what assumptions we’re making, not simply because it’s a good thing to do, but so we can identify what’s needed to verify that we’re actually delivering the right solution. (This scenario assumes we haven’t been handed a solution by our stakeholders; in that case we should be asking whether it’s really the right solution or just a solution in search of a problem.)
It’s long been held that a good practice is to identify the assumptions a team makes when working on a project, but the “good practice” seems to stop there. No real discussion occurs about what to do with the knowledge about those assumptions. What’s important is to identify the key assumptions that, if proven false, will shoot huge holes in the solution. These are not the “we assume that all the key players will be available” type assumptions, but more often tend to be of the “if we build it, they will come” nature.
These are also particularly insidious assumptions in cases where people have the choice of whether or not to use the solution (and this happens in internal IT situations more frequently than you care to believe). In these cases, you could try to validate those assumptions by asking the stakeholders whether they will use the solution…but, warning, you may not like their answer. Actually you probably will like what they say, but not their corresponding action (or lack thereof). In many cases, the people you ask are going to lie to you. They’re not doing it to be malicious; they are most often doing it because they think they are telling you what you want to hear. Unfortunately, their actions rarely match up to what they say they will do.
When I find myself in these situations, I like to build something that will allow me to find out how a solution could affect people’s behavior. This can be a very simple implementation of functionality, or something as lightweight as a message in an email. Because I’m building it more for the purpose of learning than with a lot of certainty that it’s the solution, I don’t make it too extravagant. I do the minimum necessary to find out how people will actually respond.
For example, last year when working on the Agile Alliance Conference Submission System, we were having a problem with people misunderstanding the purpose of email notifications. In our effort to make the submission process interactive, we provided the capability for submitters and reviewers to carry on conversations via the submission system. A submitter would submit a session proposal, a reviewer would provide feedback and occasionally ask questions, and then the submitter could respond back to the reviewer. The intent was for this conversation to occur entirely through the submission system interface. We provided the ability for everyone involved in these conversations to get a notice when their session proposal had received a review, or their review had received a comment. At first these emails just contained a link back to the submission system where the submitter or reviewer could reply. However, we received a request to have the content of the review or comment included in the email notification. This made it easier for people to see what the review or comment said without having to go to the submission system. We assumed that having a “Please reply to this comment in the submission system” hyperlink would be sufficient to drive that behavior.
It wasn’t.
As soon as emails started going out with the review text included, the general submission system email ID, from which the emails were sent, started getting inundated with people replying to the emails and not bothering to reply via the submission system. Our assumption that people would follow clear but somewhat subtle instructions turned out to be false. Once we realized what was going on, we made a further assumption that clearer, more blatant instructions would drive the right behavior. One alternative was a larger effort to set up the submission system to receive the replies and add them to the conversation in the submission system. This would have been a lot of work, so we wanted to test the simpler method first. As a result, we added a message at the beginning of every notification email in big bold letters:
Messages sent to this email address (submissions@agilealliance.org) do not go to Track Chairs or Reviewers. Please provide any replies to this review via the submission system.
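As a sketch of how little code this kind of experiment takes, here is a minimal version of assembling such a notification email in Python, with the warning banner first, then the review text, then the link back. The function name, subject line, and `reply_url` parameter are my own illustrative assumptions, not the submission system's actual implementation; only the banner wording and the sender address come from the post.

```python
from email.message import EmailMessage

SYSTEM_ADDRESS = "submissions@agilealliance.org"

# Banner wording taken from the post; everything else here is a sketch.
BANNER = (
    "Messages sent to this email address ({addr}) do not go to "
    "Track Chairs or Reviewers. Please provide any replies to this "
    "review via the submission system."
)

def build_notification(to_addr, session_title, review_text, reply_url):
    """Compose a review-notification email (hypothetical helper):
    warning banner first, then the review content, then the reply link."""
    msg = EmailMessage()
    msg["From"] = SYSTEM_ADDRESS
    msg["To"] = to_addr
    msg["Subject"] = f"New review for: {session_title}"
    body = "\n\n".join([
        BANNER.format(addr=SYSTEM_ADDRESS),
        review_text,
        f"Please reply to this comment in the submission system: {reply_url}",
    ])
    msg.set_content(body)
    return msg
```

The whole experiment amounts to deciding what text goes where in the body; putting the banner at the very top is the "blatant instructions" change being tested.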
We found this message helped to reduce the number of simple email replies we got, but did not entirely stop them. We then decided that some gentle chastisement might work: I responded to each reply that came across with a reminder that they needed to actually reply in the submission system. This was a little extra work on my part, but I found that once people got one “friendly reminder,” they tended to stop replying to the notification emails and instead responded in the submission system.
In this case, we explicitly made the change to the notification message to test behavior change. The actual coding effort was extremely small, but it was effective in finding out what would happen, and it gave us enough information to decide what to do going forward.
Ultimately you always build to provide value, but sometimes when heading in that direction, you need to build some things to be sure you understand what value you are providing and that you’re delivering the right solution to provide that value. The willingness to learn gives you a lot of freedom to run intentional experiments that validate your assumptions and confirm that the solution you are providing really is the one you should pursue. Those short diversions are much better in the long run than blindly heading down the wrong path. Trust me, I’ve done both in my day.