The 1st Limited WIP Society Meeting Pune Chapter

The first meeting of the Limited WIP Society Pune Chapter took place on July 20. Over 20 participants from 5 different companies attended the session.

The first session was presented by Sutap Choudhury from the Amdocs Transformation team. Sutap explained the challenges of traditional development models and the rationale behind the Agile/Lean/Kanban adoption within Amdocs. He explained the Kanban core practices using Henrik Kniberg's animation slides. At the end of this session, multiple questions were asked around the implementation of Kanban. One question was about the practicality of cross-functional teams: how practical is it for people from one value stream lane to move to another lane to help out on a blocked card? It was explained that while it may not be possible to get fully cross-functional teams, it is important for the team to keep this objective in mind and look for opportunities to move towards it; cross-skilling the team will always help tackle such bottlenecks. Another question was around the differences between Kanban and Scrum. It was discussed that Scrum/XP are time-boxed execution models, in contrast to the continuous flow approach of a Kanban system.
The second session was presented by Hrishikesh Karekar, also from the Amdocs Transformation team. He explained the challenges of implementing Kanban in a large project: over 800 man-months of effort, with over 150 people, over a timeline of 9 months. The key challenges this team experienced were around: A) splitting requirements, B) managing execution, and C) understanding whether the project is on track from a budget perspective. To help manage project execution and track budget consumption, Hrishikesh explained the adoption of Earned Value Management. In the initial days, Amdocs tracked Earned Value by giving a card part credit when it completed part of the value stream (similar to % progress in MS Project). Unfortunately, this approach led teams to keep pulling new work so that the Earned Value % kept increasing, instead of focusing on completing work. Amdocs is now considering the AgileEVM method, wherein a card gets credit only when it completes the entire value stream. The expectation is that this will strongly encourage teams to complete work once they start on it.
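
To make the difference between the two crediting schemes concrete, here is a minimal sketch in Java with hypothetical card data, contrasting partial credit per completed value-stream stage with full credit only on completion:

```java
// Minimal sketch (hypothetical numbers) contrasting the two crediting schemes.
public class EarnedValueDemo {
    // Each row: {budgeted value of card, stages completed, total stages in value stream}
    static double[][] cards = {
        {10, 4, 4},   // complete
        {10, 2, 4},   // half way through the value stream
        {10, 1, 4},   // just started
    };

    public static void main(String[] args) {
        double partialEV = 0, fullEV = 0;
        for (double[] c : cards) {
            partialEV += c[0] * c[1] / c[2];   // credit per completed stage
            if (c[1] == c[2]) fullEV += c[0];  // credit only on completing the value stream
        }
        System.out.println("Partial-credit EV: " + partialEV); // 17.5
        System.out.println("Full-credit EV:    " + fullEV);    // 10.0
    }
}
```

With the same three cards, partial credit reports 17.5 units earned even though only one card is actually done; full credit reports 10, and that number grows only when work completes.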
Hrishikesh summarized the session with three key learnings:

1. Telling people "Just do it" just doesn't do it.
2. There are mindset issues as well as "real issues", especially when the ecosystem is not agile.
3. The coaching strategy needs to be agile as well: know when to persist and when to let go.

Daily Life of a SWIFT-Kanban Developer

Introduction
Within the Swift-Kanban development team, we have evolved our engineering practices by combining the principles of Test Automation, Continuous Integration and Kanban thinking. However, as I have tried to recruit people for such a development environment, it has been difficult to find people who understand this way of working. This blog attempts to explain our engineering environment.
Stand Up Meetings
The day starts with a stand-up meeting at around 9 am. Given Mumbai and Bangalore traffic, 10 minutes of flexibility is allowed. Since we are a distributed team across three different locations (3 cities, 2 countries), many of our team members join the call remotely.

The basis for the Standup call is the Project Kanban Board (shown below), maintained on our own product. So, we do eat our own dog food:


The purpose of this meeting is to get a quick overview of the team's current situation, find out if any development tasks are blocked, assign the day's tasks, discuss any customer-identified defects (which are our Expedite cards) and assess any broken builds.

Blocked cards get special attention in the Standup call.

All discussions are documented as comments against the card.

Our target is to complete the call in less than 30 minutes, but sometimes this does not happen. The primary reason is that one or two issues hog the limelight. Sometimes a team member will interrupt and ask for the issue to be taken offline, but we do have some "silent" team members who prefer not to break in (a culture thing). So, over a period of time, we have learnt to split the call into two parts: a) the regular stand-up call, and b) discussion of specific issues, for which only the relevant team members need to stay on.

CI Run Actionables

Once the Standup call finishes, every developer checks the CI run output to see if anything broke in the previous night's full automation run. For this, a consolidated failure report from both JUnit (our unit testing environment) and Sahi (our functional test automation environment) is sent to all team members from the build. The report reflects not only the failures in the last run but also highlights, in red, automated test cases that have failed in the last 3 runs. We have experienced that test automation failures are not always linked to a product source code issue or to the test automation source code, but sometimes to random system behavior (for example, the server does not respond back in time). Hence, tracking repeat failures is important to identify true failures.
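
The report format and logic below are a hypothetical sketch of how such a repeat-failure filter could work, not our actual build code; a test is flagged as a "true failure" only if it failed in each of the last three runs:

```java
import java.util.*;

// Hypothetical sketch of the repeat-failure highlighting described above.
// A test is flagged (shown in red in the report) only if it failed in each
// of the last 3 nightly runs; a single failure may just be random system
// behavior (e.g., a server timeout).
public class RepeatFailureFilter {
    private static final int REPEAT_THRESHOLD = 3;

    // testName -> pass/fail history, most recent run last (true = passed)
    static List<String> flagTrueFailures(Map<String, List<Boolean>> history) {
        List<String> flagged = new ArrayList<String>();
        for (Map.Entry<String, List<Boolean>> e : history.entrySet()) {
            List<Boolean> runs = e.getValue();
            if (runs.size() < REPEAT_THRESHOLD) continue;
            boolean allFailed = true;
            for (Boolean passed : runs.subList(runs.size() - REPEAT_THRESHOLD, runs.size())) {
                if (passed) { allFailed = false; break; }
            }
            if (allFailed) flagged.add(e.getKey());
        }
        return flagged;
    }

    public static void main(String[] args) {
        Map<String, List<Boolean>> history = new HashMap<String, List<Boolean>>();
        history.put("LoginTest", Arrays.asList(false, false, false));  // true failure
        history.put("BoardTest", Arrays.asList(true, false, true));    // random flake
        System.out.println("Flag in red: " + flagTrueFailures(history)); // [LoginTest]
    }
}
```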

Further, we have an artifacts repository where we store the Sahi HTML reports, which have more information about the failures. Developers use them for further analysis.

If a developer's name appears against a failure, his/her first task is to fix the reported issue(s) and only then move on to the regular card on the board.

Developers use Eclipse for both automation script failure analysis and JUnit failure analysis. JUnit tests can be corrected and re-run on-the-fly in Eclipse.
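
As an illustration (the class and method below are hypothetical, not from our code base), this is the kind of small JUnit 4 test a developer might correct and re-run directly in Eclipse:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical example of a small JUnit that a developer corrects and
// re-runs on-the-fly in Eclipse after a nightly failure.
public class CardTitleTest {

    // The method under test (hypothetical): trims and collapses whitespace.
    static String normalizeTitle(String raw) {
        return raw.trim().replaceAll("\\s+", " ");
    }

    @Test
    public void normalizesWhitespaceInCardTitles() {
        assertEquals("Fix login defect", normalizeTitle("  Fix   login defect "));
    }
}
```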

One of the unique aspects of our development process is the association of each automation script with an individual owner. This was very important because, prior to doing this, it wasn't clear who was responsible for getting a failed script fixed. It is hard, from a nightly run, to identify which of the check-ins (from the series of check-ins done throughout the day) caused a script to fail. Hence, we assigned the script's original developer the responsibility to fix it. This turns out to be faster too in most cases, because of the owner's familiarity with the script.



For this reason, we use the Test Management repository of SWIFT-ALM (where the test suite inventory exists). A snapshot of the same is shown above.

Our source code is also integrated with the Sonar dashboard. On every CI run, the dashboard gets updated and provides valuable information about the Java code. We have enabled various plugins on Sonar, such as PMD and FindBugs. Developers are expected to look at this dashboard and correct the violations in their module's source files on a continuous basis. The Sonar dashboard gives good insight into the coding patterns of developers and helps the team figure out better ways to write code.


Development

Once the issues from the last CI run are addressed, the developer's focus shifts to his/her main development card. Customer defects are the cards marked in blue and are our equivalent of the "Expedite" class of service. Our next focus is the pink cards, which indicate internally identified defects, and finally the User Stories against a developer's name. We also have Tasks, which are equivalent to engineering tasks (called technical user stories in many places). This priority "policy" becomes the basis for developers to pull the next card when they are done with their present card. Global items are things like training and CI failure rework.
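
For illustration only (the enum below is a hypothetical sketch, though the class-of-service names come from our board), the pull policy can be expressed as a simple priority ordering:

```java
import java.util.*;

// Hypothetical sketch of the pull "policy": when a developer finishes a
// card, the next card is chosen by class of service, in this order.
public class PullPolicy {
    enum ClassOfService {
        CUSTOMER_DEFECT,   // blue cards - our "Expedite" class of service
        INTERNAL_DEFECT,   // pink cards - internally identified defects
        USER_STORY,
        ENGINEERING_TASK   // a.k.a. technical user story
    }

    public static void main(String[] args) {
        List<ClassOfService> backlog = new ArrayList<ClassOfService>(Arrays.asList(
            ClassOfService.USER_STORY,
            ClassOfService.INTERNAL_DEFECT,
            ClassOfService.CUSTOMER_DEFECT,
            ClassOfService.ENGINEERING_TASK));

        // Enum declaration order doubles as pull priority (lowest ordinal first).
        Collections.sort(backlog);
        System.out.println("Pull next: " + backlog.get(0)); // CUSTOMER_DEFECT
    }
}
```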



A few additional policies that we have defined:

1. User Stories flow through the Design and the Functional Automation lanes.
2. At the end of the Design stage, the T-shirt estimate is converted into an actual estimate (a sketch of this conversion follows below).
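
As a hypothetical sketch of that conversion (the day ranges below are illustrative only, not our actual calibration):

```java
// Hypothetical sketch of converting a T-shirt estimate into an actual
// estimate at the end of Design; the day ranges are illustrative only.
public class TShirtEstimate {
    enum Size { S, M, L, XL }

    // A rough band per size; the actual estimate replaces this after Design.
    static String indicativeRange(Size size) {
        switch (size) {
            case S:  return "1-3 days";
            case M:  return "4-8 days";
            case L:  return "9-15 days";
            default: return "needs splitting";  // XL stories are candidates for splitting
        }
    }

    public static void main(String[] args) {
        System.out.println("M -> " + indicativeRange(Size.M)); // 4-8 days
    }
}
```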

While code review is done for all checked-in code, automation code review is done only on a sample basis.

Developers are also free to add tasks to the card and, if needed, assign some of the tasks to another developer, who is expected to pitch in.

Developers work on a separate SVN branch created for each User Story. This branch becomes the development workspace for all the developers working on that User Story. It facilitates easy coordination within the development team, and informal code review can also start early since the code is already committed. Once development is complete, the developer merges the changes to the main branch (trunk) on SVN and deletes the branch. Cruise Control then gets the latest code, does the build, runs the JUnits, deploys the build on the QA server and runs functional automation on all 3 browsers that we certify the product on.
Defect Validation
Developers are also expected to keep an eye on the Validation lane. If they have filed an internal defect, or are handling a customer-filed one, they are expected to validate the fix on the QA environment and, if the fix passes, move the card to the "Ready for Deployment" lane. User Stories are validated by the Product Manager.
Deployment
We are not in a continuous deployment environment, but we do deploy every time the Ready for Deployment lane accumulates 20+ cards. We do not deploy automatically because we have some test cases that need to be manually validated for technical reasons (third-party product integrations, or test scripts that fail because of an issue with our automation tool).

Hope this helps explain the daily routine of a Swift-Kanban developer. It is exciting, and many times more productive than how we used to develop software a couple of years ago.