So you’ve just finished usability testing in your Sprint and have a pile of changes for the Scrum Team to implement. Some are changes to the Definition of Done for existing User Stories; some, though, are wholly new User Stories. For the most part, the Business Owner and Product Owner agree that the application does require work now to encourage people to drop their old behaviours and use the new system, but the Team’s estimates suggest it’s about two whole Sprints’ worth of work. How do you break down this work into smaller, bite-sized User Stories?
This was one problem I faced not too long ago. After doing some usability testing, there was a significant body of work to do. While many of the estimates for the work were very high, the effort (to use Mike Cohn’s analogy) was more like licking 1,000 stamps than brain surgery. So here are the top patterns I used with the Team to break down the work.
1. Text changes first
Labels, nomenclature, and formatting were the easiest changes to make, particularly where visual flow and visual contrast were concerned, so where stories required these surface UI changes, they were done first.
2. By most frequently used
In many instances, the usability recommendations encompassed many, many pages. One of the easiest methods we employed to break down work estimates was to list the pages affected and then order the pages by their frequency of use. Pages were then grouped into three categories — top 20%, next 20%, bottom 60%. The group that had the highest traffic received attention first. For the most part, those pages that fell into the bottom 60% were not changed.
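The frequency-based split above is simple enough to sketch in a few lines of Python. This is just an illustration, assuming page-view counts are available from analytics; the page names and numbers here are entirely made up.

```python
# Made-up page-view counts for illustration only.
page_views = {
    "dashboard": 5400,
    "case-list": 3100,
    "case-detail": 2800,
    "new-contact": 900,
    "reports": 450,
    "admin-settings": 120,
    "audit-log": 60,
    "help": 25,
    "about": 10,
    "licence": 2,
}

# Order pages by frequency of use, busiest first.
ranked = sorted(page_views, key=page_views.get, reverse=True)

# Split the ranked list into top 20%, next 20%, bottom 60%.
cut1 = round(len(ranked) * 0.2)
cut2 = round(len(ranked) * 0.4)
top_20 = ranked[:cut1]        # changed first
next_20 = ranked[cut1:cut2]   # changed next, if at all
bottom_60 = ranked[cut2:]     # mostly left unchanged
```

The User Stories covering `top_20` go to the top of the Product Backlog; the `bottom_60` group can usually be dropped or parked.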
3. By highest value
While there was agreement that addressing the usability issues on the most frequently used pages was of high value, the Team also recognised that some pages were not used often but represented a very significant step in a process that was critical to get right. The likelihood of human error on these pages was also taken into consideration from a risk perspective. These high-value pages were identified by the business and ranked in much the same way the business had helped the Product Owner rank functionality in the past. In many cases, these were pages of high cognitive complexity — many fields and a lot of text to be entered. Again, these were grouped into a 20%/20%/60% split, and the top 20% were ranked higher in the Product Backlog than the remaining 20%/60%.
4. By supported area
One of the Team’s concerns was making changes to areas within the software solution — Microsoft Dynamics CRM — that Microsoft did not support. If the Team made an unsupported change and a later upgrade ran into problems, the Team doing the maintenance would be left on their own. So we chose to split some of the User Stories by “supported area”, with changes required to supported areas, e.g. the body of a page, ranked higher than changes to unsupported areas, e.g. the left-hand navigation menu.
5. Illogical to the logical
Where workflow was involved, the steps were broken down and ranked to order the pages from the most confusing interaction design to the least confusing. The time users spent on a page during usability testing was one factor that contributed toward it being identified as ‘more confusing’, as was the visual complexity of the page.
6. By most valued user
One of my favourite patterns is to break up stories so they deliver the most value possible to one user before delivering value to another. This is a pattern we also adopted for breaking down large usability stories: we ranked the user types, then identified the pages each type used within the application. Those pages were earmarked for usability improvements before the pages used by other, lower-ranked users.
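The ordering above can also be sketched briefly. This is a hypothetical example, assuming the business has ranked the user types and we know which pages each type uses; all names here are invented.

```python
# User types ranked by value to the business, most valued first (made up).
users_by_value = ["claims-assessor", "team-leader", "auditor"]

# Which pages each user type works in (made up).
pages_used_by = {
    "claims-assessor": ["case-detail", "new-claim", "dashboard"],
    "team-leader": ["dashboard", "reports"],
    "auditor": ["audit-log", "reports"],
}

# Walk the user types in value order; a page shared by several users is
# scheduled at the position of its most valued user and not repeated.
ordered_pages = []
for user in users_by_value:
    for page in pages_used_by[user]:
        if page not in ordered_pages:
            ordered_pages.append(page)
```

The resulting `ordered_pages` list gives a ready-made ranking for the usability User Stories in the Product Backlog.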