Tag Archives: User Story

Learning Agile Requirements with User Story Mapping

The Limited WIP Society Bangalore Chapter held its 2nd Meetup at Pune on Oct 26. The session was hosted by BMC India at their office, and a group of about 25 Lean/Agile enthusiasts met on this Saturday morning.

As in the last Meetup in Bangalore, the focus of this Meetup was also Agile Requirements, and for the same reason: it is hard to establish flow and reduce cycle time when requirements are not independent, small or testable. Our experience shows that with the traditional requirement definition approach, we get a set of highly inter-dependent requirements that get stuck at System Testing, waiting for each other.

I started the Meetup with a detailed presentation on Agile Requirements. We discussed the problems with requirements written the traditional way, how User Stories mitigate those problems, how to decompose User Stories and, finally, how to do User Story Mapping.

After a short tea break, the group was divided into 2 teams and a case study was given to both groups. The groups were asked to first identify the usage sequence of the product in the case study. After about 90 minutes of intense discussions within each of the groups, following the 3-step process, both teams had their first version of the User Story Maps, though not complete.

Once the User Story Mapping was completed, we discussed how to do Release Planning using the User Story Map. A follow-up question was about User Story estimation, so the group was introduced to the Planning Poker approach. The Meetup ended with a quick summary of the session and a retrospective of what worked well and what could be improved in the next Meetup.

 

 


Agile Requirements with User Story Mapping

The Limited WIP Society Bangalore Chapter held its 3rd Meetup at Digite’s Bangalore office. A group of 20+ Lean/Agile enthusiasts met on Saturday morning at 10am.

The focus of this Meetup was Agile Requirements. This topic was chosen because it is hard to establish flow and reduce cycle time when requirements do not follow the INVEST principle (Independent, Negotiable, Valuable, Estimable, Small and Testable). Using the traditional requirement definition approach, we get a bunch of highly inter-dependent requirements that get stuck at System/Integration Testing, waiting for each other.

I started the Meetup with a short introduction to the need for Agile Requirements. After this, Manik Choudhary, Agile Project Manager at SAP Labs, started the main session. He gave a high-level overview of the larger landscape: building the product vision using Lean Canvas, using Design Thinking to validate your concept, and then using a technique like User Story Mapping to build the product backlog.

With this high-level view, he started with the basics of User Story Mapping, which was done in 3 stages. First, identify the vision of the product by defining the usage sequence of the product. Next, identify the Personas who will use the system and how each of those personas will use the product in the above usage sequence. This is the “backbone” of the product. Finally, define the user stories under each of the usage steps (within the usage sequence) for each of the personas.
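For readers who think in code, here is a rough data-structure sketch of that three-level structure: an ordered usage sequence as the backbone, personas, and the user stories hanging off each usage step for each persona. It is purely illustrative; the class and method names are not part of the workshop material.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Purely illustrative sketch of a User Story Map:
// usage sequence (the "backbone") -> persona -> user stories.
public class UserStoryMap {

    private final List<String> usageSequence = new ArrayList<String>();
    // usage step -> persona -> user stories
    private final Map<String, Map<String, List<String>>> stories =
            new LinkedHashMap<String, Map<String, List<String>>>();

    public void addUsageStep(String step) {
        usageSequence.add(step);
        stories.put(step, new LinkedHashMap<String, List<String>>());
    }

    public void addStory(String step, String persona, String story) {
        Map<String, List<String>> byPersona = stories.get(step);
        List<String> personaStories = byPersona.get(persona);
        if (personaStories == null) {
            personaStories = new ArrayList<String>();
            byPersona.put(persona, personaStories);
        }
        personaStories.add(story);
    }

    // The ordered backbone that the workshop teams identified first.
    public List<String> backbone() {
        return Collections.unmodifiableList(usageSequence);
    }
}
```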

With that overview, the team started the workshop. The group was divided into 2 teams and a case study was given to both groups. The groups were asked to first identify the usage sequence. After about 90 minutes of intense discussions within each of the groups, following the 3-step process, both teams had their first version of the User Story Maps (see picture). Our team could finish User Story definition for only 2 of the 3 personas that were in the case study.

At this stage, the teams were asked to vote and identify the Minimum Viable Product (MVP).

Finally, User Story Mapping is not a one-time exercise. The process is repeated, perhaps once a month, complemented by more frequent Backlog Grooming.

The Meetup ended with a quick summary of the session and a final retrospective of what worked well and what could be improved in the next Meetup. As the retrospective showed, it was a great learning experience on a Saturday morning!

Daily Life of a SWIFT-Kanban Developer

Introduction
Within the Swift-Kanban development team, we have evolved our Engineering practices by combining principles of Test Automation, Continuous Integration and Kanban thinking. However, as I have tried to recruit people for such a development environment, it has been difficult to find people who understand this way of working. This blog attempts to help explain our Engineering environment.
Stand Up Meetings
The day starts with a stand-up meeting at around 9am. Given Mumbai and Bangalore traffic, about 10 minutes of flexibility is allowed. Since we are a distributed team across three different locations (3 cities, 2 countries), many of our team members join the call remotely.

The basis for the Standup Call is the Project Kanban Board (shown below) maintained on our own product. So, we do eat our own dog food:


The purpose of this meeting is to get a quick overview of the team’s current situation, find out if any development tasks are blocked, assign the day’s tasks, discuss any customer-identified defects (which are our Expedite cards) and assess any broken builds.

Blocked cards get special attention in the Standup call.

All discussions are documented as comments against the card.

Our target is to complete the call in less than 30 minutes, but sometimes this does not happen. The primary reason is that one or two issues hog the limelight. Sometimes one of the team members will interrupt and ask for the issue to be taken offline, but we do have some “silent” team members who prefer not to break in (a culture thing). So, over a period of time, we have learnt to split the call into two parts: a) the regular stand-up call, and b) a discussion of specific issues, for which only the relevant team members need to stay on.

CI Run Actionables

Once the Standup call finishes, every developer checks the CI run output to see if anything is broken from the previous night’s full automation run. For this, a consolidated failure report from both JUnit (unit testing environment) and Sahi (functional test automation environment) is sent to all team members from the build. The report reflects not only the failures in the last run but also highlights in red the automated test cases that have failed in each of the last 3 runs. We have found that test automation failures are not always linked to a product source code issue or the test automation source code, but sometimes to random system behavior (e.g., the server does not respond back in time). Hence, tracking repeat failures is important to identify true failures.
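To make the repeat-failure idea concrete, here is a minimal sketch of how a nightly report could flag test cases that have failed in each of the last three runs. This is not our actual reporting code; the class and method names are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: separate "true" failures (failed in every recent run)
// from one-off environmental failures.
public class RepeatFailureReport {

    // test name -> pass/fail history, most recent run last
    private final Map<String, List<Boolean>> history = new HashMap<String, List<Boolean>>();

    public void record(String testName, boolean passed) {
        List<Boolean> runs = history.get(testName);
        if (runs == null) {
            runs = new ArrayList<Boolean>();
            history.put(testName, runs);
        }
        runs.add(passed);
    }

    // Tests that failed in each of the last 'runs' runs are highlighted (in red, in our report).
    public List<String> repeatFailures(int runs) {
        List<String> flagged = new ArrayList<String>();
        for (Map.Entry<String, List<Boolean>> entry : history.entrySet()) {
            List<Boolean> h = entry.getValue();
            if (h.size() < runs) {
                continue;
            }
            boolean failedEveryTime = true;
            for (Boolean passed : h.subList(h.size() - runs, h.size())) {
                if (passed) {
                    failedEveryTime = false;
                    break;
                }
            }
            if (failedEveryTime) {
                flagged.add(entry.getKey());
            }
        }
        return flagged;
    }
}
```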

Further, we have an artifacts repository where we store the Sahi HTML reports, which have more information about the failures. Developers use these for further analysis.

If a developer’s name appears against a failure, his/her first task is to fix the issue(s) reported and then move on to the regular card on the board.

Developers use Eclipse for both automation script failure analysis and JUnit failure analysis. JUnits can be corrected and tested on-the-fly in Eclipse.
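As an illustration of the kind of JUnit that gets corrected and re-run on-the-fly, here is a small JUnit 4 test written against the RepeatFailureReport sketch above. Again, this is illustrative and not taken from our actual suite.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Illustrative JUnit 4 test exercising the RepeatFailureReport sketch above.
public class RepeatFailureReportTest {

    @Test
    public void aTestFailingThreeRunsInARowIsFlagged() {
        RepeatFailureReport report = new RepeatFailureReport();
        report.record("LoginPageScript", false);
        report.record("LoginPageScript", false);
        report.record("LoginPageScript", false);
        assertTrue(report.repeatFailures(3).contains("LoginPageScript"));
    }

    @Test
    public void aSingleRandomFailureIsNotFlagged() {
        RepeatFailureReport report = new RepeatFailureReport();
        report.record("BoardFilterScript", true);
        report.record("BoardFilterScript", true);
        report.record("BoardFilterScript", false);
        assertFalse(report.repeatFailures(3).contains("BoardFilterScript"));
    }
}
```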

One of the unique aspects of our development process is the association of each automation script with an individual owner. This was very important because, prior to doing this, it wasn’t clear who was responsible for getting a failed script fixed. It is hard, from a nightly run, to identify which of the check-ins (from a series of check-ins done throughout the day) is responsible for a script failing. Hence, we assigned the script’s original developer the responsibility to fix it. This turns out to be faster too in most cases, because of the owner’s familiarity with the script.



For this reason, we use the Test Management repository of SWIFT-ALM (where the test suite inventory exists). A snapshot of the same is shown above.

Our source code is also integrated with a Sonar dashboard. On every CI run, the dashboard gets updated and provides valuable information about the Java code. We have enabled various plugins on Sonar, like PMD, FindBugs, etc. A developer is expected to look at this dashboard and correct the violations in their module’s source files on a continuous basis. The Sonar dashboard gives good insight into the coding patterns of developers and helps the team figure out better ways to write code.
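As a hypothetical example of the kind of violation PMD or FindBugs would surface on the dashboard, the first method below swallows an exception in an empty catch block; the second shows the corrected form a developer would check in. The class itself is invented purely for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical class, invented only to illustrate a typical static-analysis finding.
public class AttachmentLoader {

    private static final Logger LOG = Logger.getLogger(AttachmentLoader.class.getName());

    // Typical PMD/FindBugs violation: the empty catch block silently swallows the failure.
    public byte[] loadUnsafe(Path file) {
        try {
            return Files.readAllBytes(file);
        } catch (IOException e) {
        }
        return new byte[0];
    }

    // Corrected form: the failure is logged and propagated to the caller.
    public byte[] load(Path file) throws IOException {
        try {
            return Files.readAllBytes(file);
        } catch (IOException e) {
            LOG.log(Level.WARNING, "Could not read " + file, e);
            throw e;
        }
    }
}
```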


Development:

Once the issues from the last CI run are addressed, the developer’s focus shifts to his/her main development card. Customer defects are the cards marked in blue and are our equivalent of the “Expedite” Class of Service. The next focus is on the pink cards that indicate internally identified Defects and, finally, on the User Story assigned to the developer. We also have Tasks, which are equivalent to Engineering Tasks (called Technical User Stories in many places). This priority “policy” becomes the basis for developers to pull the next card when they are done with their present card. Global items are things like training and CI failure rework.
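Here is a minimal sketch of that pull policy: when a developer finishes a card, the next card pulled is the highest-priority one in the ready queue. The class and enum names are assumptions for illustration, not actual product code.

```java
import java.util.List;

// Illustrative sketch of the pull "policy" described above.
public class PullPolicy {

    // Declared in priority order: customer defects (Expedite) first,
    // then internal defects, then User Stories, then Engineering Tasks.
    enum CardType { CUSTOMER_DEFECT, INTERNAL_DEFECT, USER_STORY, ENGINEERING_TASK }

    static class Card {
        final String id;
        final CardType type;

        Card(String id, CardType type) {
            this.id = id;
            this.type = type;
        }
    }

    // Returns the next card a developer should pull, or null if nothing is ready.
    static Card nextCard(List<Card> readyQueue) {
        Card best = null;
        for (Card card : readyQueue) {
            if (best == null || card.type.ordinal() < best.type.ordinal()) {
                best = card;
            }
        }
        return best;
    }
}
```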



A few additional policies that we have defined:

1. User Stories flow through the Design and the Functional Automation lanes.
2. At the end of the Design stage, a T-shirt estimate is converted into an actual estimate.

While code review is done for all checked-in code, automation code review is done only on a sample basis.

Developers are also free to add tasks to the card, and if needed, assign some of the tasks to another developer who is expected to pitch in.

Developers work on a separate branch in SVN created for each User Story. This branch becomes the development workspace for all the developers working on that User Story. This facilitates easy coordination within the development team, and informal code review can also start since the code is already committed. Once development is complete, the developer merges the changes to the main branch (trunk) on SVN and deletes the branch that was created. Cruise Control gets the latest code, does the build, runs the JUnits, deploys the build on the QA server and runs functional automation on all 3 browsers that we certify the product on.
Defect Validation:
Developers are also expected to keep an eye on the validation lane. If they have filed an internal or customer-filed defect, they are expected to validate the fix on the QA environment and, if the fix passes, move the card to the “Ready for Deployment” lane. User Stories are validated by the Product Manager.
Deployment
We are not in a continuous deployment environment, but we do deploy once we have 20+ cards in the Ready for Deployment lane. We do not deploy automatically because we have some test cases that need to be manually validated for technical reasons (third-party product integrations, or test scripts that fail because of issues with our automation tool).
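As a trivial sketch of that trigger, the check below captures the “20+ cards ready” rule; the names and the exact check are illustrative, and the actual decision to deploy remains a manual one.

```java
// Illustrative sketch only; not our actual tooling.
public class DeploymentTrigger {

    private static final int READY_CARD_THRESHOLD = 20;

    // True when enough cards have accumulated in "Ready for Deployment"
    // to make a (manually triggered) deployment worthwhile.
    public static boolean worthDeploying(int readyForDeploymentCards) {
        return readyForDeploymentCards >= READY_CARD_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(worthDeploying(23)); // true
        System.out.println(worthDeploying(12)); // false
    }
}
```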

I hope this helps explain the daily routine of a Swift-Kanban developer. It is exciting, and many times more productive than the way we used to develop software just a couple of years ago.