Applying WCM to the Software Industry

I recently spoke at Symbiosis University on how World Class Manufacturing (WCM) thinking is being applied to the software industry. WCM is the collective term for the most effective methodologies and techniques to realize three objectives: A) products of consistently high quality, B) on-time delivery of the desired quantity, and C) products at the lowest cost. The commonly known WCM methodologies and techniques are TPM, Kaizen, TQM, Six Sigma, JIT, and Lean Manufacturing. This presentation shares how the software industry has been adopting many practices from these techniques over the last decade.

Agile Requirements with User Story Mapping

The Limited WIP Society Bangalore Chapter held its 3rd Meetup at Digite’s Bangalore office. A group of 20+ Lean/Agile enthusiasts met on Saturday morning at 10am.

The focus of this Meetup was Agile Requirements. This topic was chosen because it is hard to establish flow and reduce cycle time when requirements do not follow the INVEST principle (Independent, Negotiable, Valuable, Estimable, Small and Testable). With a traditional requirement definition approach, we get a set of highly interdependent requirements that get stuck waiting for each other in System/Integration Testing.

I started the Meetup with a short introduction to the need for Agile Requirements. After this, Manik Choudhary, Agile Project Manager at SAP Labs, took over the main session. He gave a high-level overview of the larger landscape: building the product vision using a Lean Canvas, using Design Thinking to validate the concept, and then using a technique like User Story Mapping to build the product backlog.

With this high-level view in place, he covered the basics of User Story Mapping in three stages. First, capture the vision of the product by defining its usage sequence. Next, identify the Personas who will use the system and how each persona will use the product within that usage sequence; this forms the "backbone" of the product. Finally, define the user stories under each usage step (within the usage sequence) for each persona.

With that overview, the workshop started. The group was divided into two teams, and both were given a case study. The teams were asked to first identify the usage sequence. After about 90 minutes of intense discussion within each team, following the three-step process above, both teams had their first version of a User Story Map (see picture). Our team could finish the user story definition for only two of the three personas in the case study.

At this stage, the teams were asked to vote and identify the Minimum Viable Product (MVP).

Finally, User Story Mapping is not a one-time exercise. The process is repeated, perhaps once a month, complemented by more frequent Backlog Grooming.

The Meetup ended with a quick summary of the session and a final retrospective of what worked well and what could be improved in the next Meetup. As the retrospective showed, it was a great learning experience on a Saturday morning!

The 1st Limited WIP Society Pune Chapter Meeting

The first meeting of the Limited WIP Society Pune Chapter happened on Jul 20. Over 20 participants from 5 different companies attended this session.

The first session was presented by Sutap Choudhury from the Amdocs Transformation team. Sutap explained the challenges of the traditional development models and the rationale behind the Agile/Lean/Kanban adoption within Amdocs. He explained the Kanban core practices using Henrik Kniberg's animation slides.

At the end of this session, multiple questions were asked about the implementation of Kanban. One question was about the practicality of having cross-functional teams: how practical is it for people from one value stream lane to move to another lane to help out on a blocked card? It was explained that while it may not be possible to get fully cross-functional teams, it is important for the team to keep this objective in mind and look for opportunities to move toward it. Cross-skilling the team will always help tackle such bottlenecks. Another question was about the differences between Kanban and SCRUM. It was discussed that SCRUM/XP are time-boxed execution models, in contrast to the continuous flow approach of a Kanban system.
The second session was presented by Hrishikesh Karekar, also from the Amdocs Transformation team. He explained the challenges of implementing Kanban in a large project: over 800 person-months of effort, with over 150 people, over a timeline of 9 months. The key challenges that this team experienced were around: A) splitting requirements, B) managing execution, and C) understanding whether the project is on track from a budget perspective. To help manage project execution and track budget consumption, Hrishikesh explained the adoption of Earned Value Management. In the initial days, Amdocs adopted Earned Value tracking by giving a card partial credit when it completed part of the value stream (similar to % progress in MS Project). Unfortunately, this approach led teams to keep pulling new work so that the Earned Value % would increase, instead of focusing on completing work. Amdocs is now considering adopting the AgileEVM method, wherein a card gets credit only when it completes the value stream. The expectation is that this will strongly encourage teams to complete work once they start on it.
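To make the difference concrete, here is a small, hypothetical sketch in Java (my own numbers and class names, not Amdocs' actual model): under partial credit, Earned Value grows the moment teams start cards; under completion credit, it grows only when cards exit the value stream.

```java
import java.util.List;

// Hypothetical sketch contrasting the two Earned Value approaches described above.
// Figures are illustrative only; they are not Amdocs' actual numbers or model.
public class EarnedValueComparison {

    // A card worth `plannedValue` that has completed `lanesDone` of `totalLanes` lanes.
    record Card(double plannedValue, int lanesDone, int totalLanes) {
        boolean isComplete() { return lanesDone == totalLanes; }
    }

    // Partial credit: value accrues proportionally to progress through the value stream.
    static double partialCreditEV(List<Card> cards) {
        return cards.stream()
                    .mapToDouble(c -> c.plannedValue() * c.lanesDone() / c.totalLanes())
                    .sum();
    }

    // Completion credit (AgileEVM-style): value accrues only when a card finishes the value stream.
    static double completionCreditEV(List<Card> cards) {
        return cards.stream()
                    .filter(Card::isComplete)
                    .mapToDouble(Card::plannedValue)
                    .sum();
    }

    public static void main(String[] args) {
        // Many cards started, few finished: partial credit looks healthy, completion credit does not.
        List<Card> cards = List.of(
                new Card(10, 4, 4),   // done
                new Card(10, 2, 4),   // halfway
                new Card(10, 1, 4),   // just started
                new Card(10, 1, 4));  // just started
        System.out.println("Partial-credit EV:    " + partialCreditEV(cards));    // 20.0
        System.out.println("Completion-credit EV: " + completionCreditEV(cards)); // 10.0
    }
}
```

The gap between the two numbers is exactly the incentive problem: starting more cards raises the first figure but not the second.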
Hrishikesh summarized the session with three key learnings:
1. Telling people "Just do it" just doesn't do it.
2. There are mindset issues as well as "real issues", especially when the ecosystem is not agile.
3. The coaching strategy needs to be agile as well – know when to persist and when to let go.

Due Dates for Kanban Systems

Many teams adopting Kanban come from an Agile background. Agile thinking has discouraged the use of Due Dates because Due Dates breed undesirable behavior. A focus on Due Dates results in teams working under significant pressure. Quite often, that translates into shortcuts in Design/Testing activities. The net effect is that work quality is compromised and technical debt piles up.

That said, Agile methodologies inherently have a Due Date: the Sprint end date. The team has a clear expectation that the planned scope of the Sprint needs to be completed by the end of the Sprint. Someone has gone through the process of mapping the Sprint capacity to the story points planned in that Sprint. Yes, some requirements may spill over to the next Sprint, but that is generally a small percentage of the overall Sprint scope.

In contrast, Kanban systems, being flow-centric, take away the pressure of the Sprint end date. The question is: should such teams, if they have not been using Due Dates, consider using Due Dates on their cards/work items?

While project teams are expected to be self-organizing and self-driven, the absence of due dates tends to lead to a loss of momentum within the team. Parkinson's Law takes over: five days of work can stretch to seven days when no expectation of a five-day timeline is set with the developer. For projects that work on fixed budgets, such slippage can soon pile up and cause management escalation.

There are other benefits too. Due dates can help team members working on different user stories belonging to the same MMF (Minimum Marketable Feature) align their completion dates. If you want to get something done by an intermediate milestone (like a customer demo date), a Due Date can focus the participating team members on that milestone. I have also found that a mismatch in Due Date expectations between the developer and others in the team surfaces, and helps correct, a requirement/implementation disconnect between team members. User Stories aren't a detailed spec.

Once again, one is not talking about going back to the old ways, wherein the Due Date becomes a deadline cast in stone and quality/technical debt becomes a secondary consideration.

The next question is: where does the Due Date come from? Agile/Kanban systems discourage detailed estimation. Nevertheless, estimates do often exist. In IT services companies, projects are estimated and bid for in the pre-sales lifecycle. Those estimates are inherited by the development team, though often not at the same level of granularity. In cases where estimates don't exist, a simple T-shirt size categorization is adequate to communicate whether a particular card should be completed in 1 week or 2 weeks.
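As a minimal sketch of this idea (the class name and the size-to-duration mapping below are my own assumptions; a real team would calibrate its own), a T-shirt size can be turned mechanically into a guideline due date:

```java
import java.time.LocalDate;
import java.util.Map;

// Hypothetical sketch: derive a guideline due date from a T-shirt size.
// The size-to-days mapping is an assumption, not a prescription.
public class DueDateGuideline {

    private static final Map<String, Integer> DAYS_BY_SIZE = Map.of(
            "S", 3,   // small card: ~3 working days
            "M", 5,   // medium card: ~1 week
            "L", 10   // large card: ~2 weeks
    );

    public static LocalDate guidelineDueDate(String tShirtSize, LocalDate startDate) {
        Integer days = DAYS_BY_SIZE.get(tShirtSize);
        if (days == null) {
            throw new IllegalArgumentException("Unknown T-shirt size: " + tShirtSize);
        }
        return startDate.plusDays(days);
    }

    public static void main(String[] args) {
        // A medium card started today gets a guideline (not a hard deadline) of roughly a week out.
        System.out.println(guidelineDueDate("M", LocalDate.now()));
    }
}
```

The mechanism matters far less than the intent: the resulting date is a guideline for conversation, not a commitment.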

In summary, we need balance! Agile teams advocated against Due Dates because they used to drive the wrong behavior. On the other hand, the complete absence of a Due Date can lead to team throughput coming down. My recommendation to teams is to use Due Dates with Kanban cards, but ONLY as a guideline – not as something that will make the team compromise product quality and add to technical debt.

I would like to hear about your experience with the use of Due Dates in Kanban systems. 

Daily Life of a SWIFT-Kanban Developer

Introduction
Within the Swift-Kanban development team, we have evolved our engineering practices by combining principles of Test Automation, Continuous Integration and Kanban thinking. However, as I have tried to recruit people for such a development environment, it has been difficult to find people who understand this way of working. This blog attempts to explain our engineering environment.
Stand Up Meetings
The day starts with a stand-up meeting at around 9am. Given Mumbai and Bangalore traffic, a 10-minute flexibility is allowed. Since we are a distributed team across three locations (3 cities, 2 countries), many of our team members join the call remotely.

The basis for the Standup call is the Project Kanban Board (shown below), maintained on our own product. So, we do eat our own dog food:


The purpose of this meeting is to get a quick overview of the team's current situation, find out if any development tasks have been blocked, assign the day's tasks, discuss any customer-identified defects (which are our Expedite cards) and assess any broken builds.

Blocked cards get special attention in the Standup call.

All discussions are documented as comments against the card.

Our target is to complete the call in less than 30 minutes, but sometimes this does not happen. The primary reason is that one or two issues hog the limelight. Sometimes, one of the team members will interrupt and ask for the issue to be taken offline, but we do have some "silent" team members who prefer not to break in (a culture thing). So, over a period of time, we have learnt to split the call into two parts: a) the regular stand-up call; b) a discussion of specific issues for which only the relevant team members need to stay on.

CI Run Actionables

Once the Standup call finishes, every developer checks the CI run output to see if anything is broken from the previous night's full automation run. For this, a consolidated failure report from both Junit (unit testing environment) and Sahi (functional test automation environment) is sent to all team members from the build (as in the right column). The report reflects not only the failures in the last run but also highlights in red the automated test cases that have failed in the last 3 runs. We have found that test automation failures are not always linked to a product source code issue or the test automation source code, but to random system behavior (e.g., the server does not respond back in time). Hence, tracking repeat failures is important to identify true failures.
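As a rough sketch of the idea (hypothetical class and test names; this is not our actual report code), flagging a test as a "true" failure only when it has failed in each of the last three runs could look like this:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: separate "true" failures from one-off random system behaviour
// by flagging only tests that have failed in each of the last `window` nightly runs.
public class RepeatFailureReport {

    // `recentResults` holds the pass/fail result of recent runs for one test, newest last.
    public static boolean isRepeatFailure(List<Boolean> recentResults, int window) {
        if (recentResults.size() < window) {
            return false; // not enough history to call it a repeat failure
        }
        // A repeat failure means every one of the last `window` runs failed.
        return recentResults.subList(recentResults.size() - window, recentResults.size())
                            .stream()
                            .noneMatch(passed -> passed);
    }

    public static void main(String[] args) {
        Map<String, List<Boolean>> history = Map.of(
                "LoginTest",       List.of(true, false, false, false), // failed last 3 runs
                "BoardFilterTest", List.of(true, true,  false, true)); // one-off failure
        history.forEach((test, results) ->
                System.out.println(test + " -> repeat failure: " + isRepeatFailure(results, 3)));
    }
}
```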

Further, we have an artifacts repository where we store the Sahi HTML reports, which have more information about the failures. Developers use them for further analysis.

If a developer's name appears against a failure, his/her first task is to fix the issue(s) reported and then move on to the regular card on the board.

Developers use Eclipse for both automation script failure analysis and Junit failure analysis. Junits can be corrected and re-tested on the fly in Eclipse.

One of the unique aspects of our development process is the association of an automation script with an individual owner. This was very important because, prior to doing this, it wasn't clear who was responsible for getting a failed script fixed. It is hard, from a nightly run, to identify which check-in (from the series of check-ins done throughout the day) is responsible for a script failing. Hence, we assign the script's original developer the responsibility of fixing it. In most cases this also turns out to be faster, because the owner is most familiar with the script.



For this, we use the Test Management repository of SWIFT-ALM (where the test suite inventory exists), which records the script ownership. A snapshot of the same is shown above.

Our source code is also integrated with a Sonar dashboard. On every CI run, the dashboard gets updated and provides valuable information about the Java code. We have enabled various Sonar plugins such as PMD, FindBugs, etc. A developer is expected to look at this dashboard and correct the violations in their module's source files on a continuous basis. The Sonar dashboard gives good insight into the coding patterns of developers and helps the team figure out better ways to write code.
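For illustration only (this example is mine, not taken from our codebase), the kind of violation PMD/FindBugs typically flag, and the cleaned-up version a developer would check in, looks like this:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Illustrative example of a typical static-analysis finding and its fix.
public class FirstLineReader {

    // Violation: the reader may never be closed if readLine() throws an exception.
    public static String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine();
        reader.close();
        return line;
    }

    // Fixed: try-with-resources guarantees the reader is closed on all paths.
    public static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}
```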


Development

Once the issues from the last CI run are addressed, the developer's focus shifts to his/her main development card. Customer defects are the cards marked in blue and are our equivalent of the "Expedite" Class of Service. Next in priority are the pink cards, which indicate internally identified defects, and finally the User Stories assigned to the developer. We also have Tasks that are equivalent to Engineering Tasks (called Technical User Stories in many places). This priority "policy" becomes the basis for developers to pull the next card when they are done with their present card. Global items are things like training and CI failure rework.
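As a small, hypothetical sketch of this pull policy (the class-of-service names and card titles below are mine, for illustration): when a developer frees up, the next card is chosen by class of service rather than by age alone.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the pull "policy" described above.
public class PullPolicy {

    // Ordered from highest to lowest priority: blue (expedite) > pink (internal defect) > user story > task.
    enum ClassOfService { EXPEDITE_CUSTOMER_DEFECT, INTERNAL_DEFECT, USER_STORY, ENGINEERING_TASK }

    record Card(String title, ClassOfService cos) { }

    // Lower ordinal = higher priority.
    static final Comparator<Card> BY_PRIORITY = Comparator.comparingInt((Card c) -> c.cos().ordinal());

    public static Card nextCardToPull(List<Card> readyCards) {
        return readyCards.stream().min(BY_PRIORITY).orElse(null);
    }

    public static void main(String[] args) {
        List<Card> ready = List.of(
                new Card("Fix board filter", ClassOfService.INTERNAL_DEFECT),
                new Card("Card aging report", ClassOfService.USER_STORY),
                new Card("Login failure at customer X", ClassOfService.EXPEDITE_CUSTOMER_DEFECT));
        System.out.println(nextCardToPull(ready)); // picks the expedite customer defect first
    }
}
```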



A few additional policies that we have defined:

1. User Stories flow through the Design and the Functional Automation lanes.
2. At the end of the Design stage, the T-shirt estimate is converted into an actual estimate.

While code review is done for all checked-in code, automation code review is done only on a sample basis.

Developers are also free to add tasks to the card and, if needed, assign some of the tasks to another developer, who is expected to pitch in.

Developers work on a separate branch in SVN created for a User Story. This branch becomes the development workspace for all the developers working on that User Story. This facilitates easy coordination within the development team, and informal code review can start early since the code is already committed. Once development is complete, the developer merges the changes to the main branch (trunk) in SVN and deletes the branch. Cruise Control then gets the latest code, does the build, runs the Junits, deploys the build on the QA server and runs functional automation on all three browsers that we certify the product on.
Defect Validation
Developers are also expected to keep an eye on the validation lane. If they have filed an internal defect, or are handling a customer-filed defect, they are expected to validate the fix on the QA environment and, if the fix passes, move the card to the "Ready for Deployment" lane. User Stories are validated by the Product Manager.
Deployment
We are not in a continuous deployment environment, but we do deploy once we have 20+ cards ready to deploy. We do not deploy automatically because we have some test cases that need to be manually validated for technical reasons (third-party product integrations, or test scripts that fail because of issues with our automation tool).

I hope this helps explain the daily routine of a Swift-Kanban developer. It is exciting, and many times more productive than how we used to develop software a couple of years ago.

Agile and Innovation

I was one of the panel speakers at the Agile India 2013 conference, held in Bengaluru from Feb 27th to March 2nd, discussing whether Agile fosters or kills innovation. For us, based as we are in Silicon Valley and in India, the largest outsourced IT services market in the world, this was of special interest. Silicon Valley is a fantastic example of innovation; clearly it has also adopted a variety of Agile and Lean Startup related practices without slowing down on innovation. On the other hand, India's software industry, with its history of outsourced application development and maintenance, a fledgling yet steadily growing ISV industry, and a large pool of software developers, has been slow to adopt Agile methods and is not yet recognized for innovation. So was there a right 'engineering method' that fosters innovation?
The other panelists included senior delivery executives from IT services companies, Lean/Agile consultants and product development experts. As expected from a panel with such diverse backgrounds, the conclusions were also varied. A spirited discussion ensued, and a few approaches emerged from the panel discussion that could definitely be used to foster innovation.
To set the scope correctly, the discussion was around Agile and Innovation, not limited to any specific method or set of practices. So the panel discussed this topic under the broad definition of Agile, which includes Lean and Kanban.
It was clear from the discussion that the Sprint-based approach, where the team is put under constant milestone pressure at frequent intervals, does not foster innovation. The team is given a well-defined backlog of items to be delivered in the Sprint, and in most cases that itself is a challenge.
Three approaches emerged to foster innovation in an Agile team:
1. Plan for some slack time as part of the Sprint plan: While this approach is reasonable, my personal opinion is that the slack will get used to deliver the scope. After all, Parkinson's Law is hard to avoid! As long as this risk is mitigated, this approach can indeed foster innovation within the team.
2. Schedule a Hackathon: Define a problem, define a time limit and challenge the team to figure out an innovative idea/approach. Alternatively, you could just define a time limit and let the team get creative and deliver a bunch of ideas. While this does have an immediate deadline, it is primarily internal to the team (unlike the regular Sprint deadline, where one is expected to make a customer release). The only catch is that in a high-pressure environment, management should not forget to plan for an occasional hackathon.
3. Carve out an Innovation team: Another alternative discussed was to carve out a small team from the delivery rigor and let them work on innovative products or approaches. This team would not be measured like the rest of the delivery team – with a Burndown/Velocity chart for SCRUM-based projects or cycle time for Kanban projects. For a team following Kanban, this work could be tracked in a separate swim-lane, so that it is not mixed with the regular delivery work. One of the questions asked by the audience was whether such a "dedicated" team would cause attrition in other teams. The follow-up discussion clarified that this should not be a permanent team. The team should be formed to find an innovative solution for an identified need/problem. Once the solution emerges, the team should be dissolved and return to regular work. For the next such need/problem, a new team should be formed, depending on who has the appropriate background/experience.
Clearly, there are many ways to foster innovation amidst regular work! Hopefully, the ideas above provide some viable approaches for an Agile team!
What is your experience? I'd love to hear from you about other approaches and ideas that might work even better!
Sudipta Lahiri
Senior Vice President