Tag Archives: Test Automation

Applicability of Agile/Lean/Kanban Methods for fixed scope/budget projects (with short duration)

The Pune Chapter Meet of the Limited WIP Society was held on Mar 8, 2014. The session's objective was to address challenges faced by Lean-Kanban practitioners in their projects. Participants from 4 different organizations attended. A couple of problems had been posed by the participants (Meetup Link) ahead of time; we decided to work on the first one for this session.

Problem Statement: Some projects have fixed scope, fixed budget and a relatively short duration of 3-4 months. In this case, would it make more sense to go for:

a) A pure critical-chain or Microsoft Project date-based approach
b) A combination approach: critical-chain-based planning followed by Kanban execution
c) A pure Kanban flow-based execution approach, with no date-based plan for individual stories

The participants started the session by working in teams to identify which aspects of Lean/Kanban facilitate execution of small projects with fixed scope, fixed budget and short duration ("+ives") and which aspects don't ("-ives").

Next, they came to the whiteboard, brought all their points together and grouped them. The result looked something like this!

Result: The teams, working independently, had identified 18 positives vs. 14 negatives in adapting Lean/Kanban/Agile principles to fixed scope, fixed budget projects with a short duration. That was a strong reinforcement of their suitability. Therefore, Option C was taken as the way forward for the rest of the session.

While the session started with a focus on projects with short duration, it was evident that, except for a couple of points, the discussion was valid for any project of fixed scope/budget, independent of duration.

The following Positives were identified when applying Agile/Lean methods to these projects:

Positive Contributors

Planning

  • If using SCRUM, defined planning activity
  • Small Cycles
  • Estimate upfront (high level) and understand the feasibility of delivering
  • Divide into stories; complete logical business flows
  • Can help in sustainable pace

Quality:

  • Higher Quality Delivery
  • Stable Software

Scoping and Prioritization:

  • Early start of mature (well-defined) items
  • Focus on prioritizing – very relevant
  • Principles of Agile help; focus on blockers
  • Power of visualization

Early Feedback

  • Still relevant
  • Demos are confidence building

Team

  • Team empowerment is positive
  • Collaboration
  • Encourage small team size
  • Cross functional teams

After a brief discussion on the positives, and since the positives outweighed the negatives, the focus of the remaining session was on how to mitigate the negative factors in these projects.

The group then turned to the negative contributors. Each team was given one of the negative areas to focus on. They were asked to break the negative contributors down to the next level and identify possible resolutions for each. Then, all teams converged and refined the possible resolutions in each of the areas. The following negative contributors were identified, with a clear path to resolve them.

Negative Contributors
Planning Area:
Negative Factor: MSP is used for Portfolio and Program Planning.
Resolution Approach: Lean methods exist for portfolio-level planning, like Portfolio Boards. We need to educate people that it is a myth that Agile = no planning. Hence, training, education and pilot programs need to be considered as a means to bring about awareness of the importance of planning and the methods available.
Negative Factor: For such small projects, the team might have good visibility of the scope. Budget, scope and timeline for such projects are often well defined. Hence, the team might be tempted to plan heavily upfront and execute based on that plan.
Resolution Approach: High-level planning is good, and if the details are available and everyone feels assured about them, then doing so isn't a negative. Avoid detailed estimation, planning to the man-hour level and then getting into variance tracking. That will lead to timelines being met at the expense of quality and work-life balance.
Negative Factor: In a very dynamic environment, MSP helps in quick impact analysis.
Resolution Approach: This is not accurate. With MSP, the defined path was cumbersome: one would find the critical path, then do all kinds of jugglery like fast tracking, crashing, etc. It was based on estimates, and we all understand that estimates are dead on arrival! So, we need to move all stakeholder discussions to velocity/throughput-centric discussions. To enable this, a lot of training needs to happen for all the stakeholders involved.
Negative Factor: No dates; hence, things pile up. The Project Manager does not get a sense of delays; there is a lack of timeline; the project delivery date is variable.
Resolution Approach: The obvious answer the team came up with was to have time boxes (what SCRUM does). It was also discussed that Kanban does not have anything against due dates; in fact, many Kanban teams use them. The risk is that due dates should not be construed as milestones that put pressure on the team to finish the job, no matter what.
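The velocity/throughput-centric discussions suggested above can lean on Little's Law (average cycle time = average WIP / average throughput). The sketch below is illustrative only; the class name and figures are made up, not from any project discussed here.

```java
// Illustrative sketch only: throughput-centric arithmetic via Little's Law.
// Class name and figures are hypothetical.
public class ThroughputForecast {

    // Little's Law: average cycle time = average WIP / average throughput.
    static double avgCycleTimeDays(double avgWip, double cardsPerDay) {
        return avgWip / cardsPerDay;
    }

    // Rough delivery forecast: remaining backlog divided by observed throughput.
    static double daysToComplete(int remainingCards, double cardsPerDay) {
        return remainingCards / cardsPerDay;
    }

    public static void main(String[] args) {
        // e.g. 12 cards in progress, 3 cards finished per day on average
        System.out.println(avgCycleTimeDays(12, 3)); // 4.0 days per card
        System.out.println(daysToComplete(45, 3));   // 15.0 days for 45 cards
    }
}
```

A forecast like this replaces story-by-story date tracking with a flow-level conversation: if observed throughput drops, the projected finish date moves, and that movement is what gets discussed with stakeholders.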
Resource Management:
Negative Factor: MSP helps in Resource Planning and Capacity Planning.

  • Cards pile up at the end; no flexibility to take care of scope uncertainty.
  • No targets for SWP; ST is not able to plan testing.
  • Sudden resource requirements that come up cause disruption to the whole project.
Resolution Approach:

  • The best way forward is to have stable teams. However, if this is not possible in a given environment, we should be able to visualize this constraint and give stakeholders time to plan resources. The way to do this is documented in this blog.
  • Another approach is to under-allocate critical resources, since they are often called in to help others or take care of some critical work in the pipeline.
Testing:
Negative Factor: One has to test repeatedly, specifically regression. Since the project duration is short, the tendency is to test in one shot, in one batch. (Assumption: testing and development are done by two different people.)
Resolution Approach: Automation.
Where automation is not practical or feasible for whatever reason, the team felt that the right documentation or knowledge transfer to the tester can help reduce repeat testing. If the tester understands the scope of each user story well, then he can focus on the scope of that user story and, in subsequent test cycles, focus on scenario-based testing.

In a specific environment where ST is expected to "certify" the product quality, and hence needs to re-test "all" test cases, including unit-level test cases, the recommended approach is to focus on reviewing the comprehensiveness of the unit test cases and unit test results. That would help Dev teams get better by not taking unit testing lightly; ST doing unit-level testing just lets Dev teams continue with their current practices. In cases where Dev teams are mandated to do JUnit testing, ST teams can seek the JUnit code coverage data to understand how comprehensive the unit testing is.
Env/Infrastructure:
Negative Factor: The team identified Configuration, Sanity, Missing HFs, Connectivity and Overloading of the environment as issues all related to environment and infrastructure.
Resolution Approach:

  • Environment/infrastructure planning cannot be a batch-mode process. Just like capacity planning, it needs to be a continuous process. Whenever a card gets prioritized within the backlog, all its environment and infrastructure needs can be defined on a card and put in a parallel swim lane to track them to closure (see screenshot).
  • Get people from the Infrastructure/ Environment teams involved in the Sprint planning process; if you are following the Kanban method, pull them in whenever Backlog grooming happens.


Applying WCM to the Software Industry

I recently spoke at Symbiosis University on how WCM (World Class Manufacturing) thinking is being applied to the software industry. World Class Manufacturing (WCM) is the collective term for the most effective methodologies and techniques to realize the objectives of: a) products of consistently high quality, b) on-time delivery of the desired quantity, and c) products at the lowest cost. The commonly known WCM methodologies and techniques are TPM, Kaizen, TQM, Six Sigma, JIT and Lean Manufacturing. This presentation shares how the software industry has been adopting many practices from these techniques over the last decade.

Daily Life of a SWIFT-Kanban Developer

Introduction
Within the Swift-Kanban development team, we have evolved our engineering ways by combining principles of Test Automation, Continuous Integration and Kanban thinking. On the other hand, as I have tried to recruit people for such a development environment, it has been difficult to find people who understand this way of working. This blog attempts to explain our engineering environment.
Stand Up Meetings
The day starts with a stand-up meeting at around 9am. Given Mumbai and Bangalore traffic, 10 minutes of flexibility is allowed. Since we are a distributed team across three different locations (3 cities, 2 countries), many of our team members join the call remotely.

The basis for the stand-up call is the Project Kanban Board (shown below), maintained on our own product. So, we do eat our own dog food:


The purpose of this meeting is to get a quick overview of the team's current situation, find out if any development tasks have been blocked, assign the day's tasks, discuss any customer-identified defects (which are our Expedite cards) and assess any broken builds.

Blocked cards receive special attention in the stand-up call.

All discussions are documented as comments against the card.

Our target is to complete the call in less than 30 minutes, but this does not always happen. The primary reason is that one or two issues hog the limelight. Sometimes one of the team members will interrupt and ask for the issue to be taken offline, but we do have some "silent" team members who prefer not to break in (a culture thing). So, over a period of time, we have learnt to split the call into two parts: a) the regular stand-up call, and b) a discussion of specific issues for which only the relevant team members need to stay on.

CI Run actionable

Once the stand-up call finishes, every developer checks the CI run output to see if anything was broken in the previous night's full automation run. For this, a consolidated failure report from both JUnit (our unit testing environment) and Sahi (our functional test automation environment) is sent to all team members from the build (as in the right column). The report reflects not only the failures in the last run, but also highlights in red the automated test cases that have failed in the last 3 runs. We have experienced that test automation failures are not always linked to a product source code issue or to the test automation source code, but to random system behavior (e.g., the server not responding in time). Hence, tracking repeat failures is important to identify true failures.
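The "failed in the last 3 runs" highlighting can be sketched as a simple filter over per-test run history. This is a hypothetical sketch of the idea, not the team's actual report generator; all names are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the "failed in the last N runs" rule: a test is
// flagged as a likely true failure only if it failed in each of the last
// 'window' runs, filtering out one-off random system behavior.
public class RepeatFailureFilter {

    // history maps a test name to its run results, oldest first (true = passed).
    static List<String> likelyTrueFailures(Map<String, List<Boolean>> history, int window) {
        List<String> flagged = new ArrayList<>();
        for (Map.Entry<String, List<Boolean>> entry : history.entrySet()) {
            List<Boolean> runs = entry.getValue();
            if (runs.size() < window) continue; // not enough history yet
            boolean allFailed = true;
            for (boolean passed : runs.subList(runs.size() - window, runs.size())) {
                if (passed) { allFailed = false; break; }
            }
            if (allFailed) flagged.add(entry.getKey());
        }
        return flagged;
    }
}
```

A test that failed last night but passed the night before would not be flagged; only a consistent streak of failures earns the red highlight.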

Further, we have an artifacts repository where we store the Sahi HTML reports, which have more information about the failures. Developers use them for further analysis.

If a developer's name appears against a failure, his/her first task is to fix the issue(s) reported and then move on to the regular card on the board.

Developers use Eclipse for both automation script failure analysis and JUnit failure analysis. JUnit tests can be corrected and tested on the fly in Eclipse.

One of the unique aspects of our development process is the association of each automation script with an individual owner. This was very important because, prior to doing this, it wasn't clear who was responsible for getting a failed script fixed. It is hard from a nightly run to identify which of the check-ins (from a series of check-ins done throughout the day) caused a script to fail. Hence, we assigned the script's original developer the responsibility to fix it. This turns out to be faster too in most cases, because of the owner's familiarity with the script.
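The script-to-owner association boils down to a simple lookup. The sketch below is purely illustrative; the class, method names and the "unassigned-triage" fallback are hypothetical, not taken from the actual SWIFT-ALM repository.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of script-to-owner routing. All names are illustrative.
public class ScriptOwnership {

    private final Map<String, String> ownerByScript = new HashMap<>();

    // Record the original developer of an automation script.
    void register(String scriptName, String owner) {
        ownerByScript.put(scriptName, owner);
    }

    // The owner is the first person asked to fix a failed script, since a
    // nightly run cannot pinpoint which check-in broke it. Unowned scripts
    // fall back to a default triage queue.
    String ownerFor(String scriptName) {
        return ownerByScript.getOrDefault(scriptName, "unassigned-triage");
    }
}
```

The point of the fallback is that a failed script always lands on someone's desk, even if ownership was never recorded.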



For this reason, we use the Test Management repository of SWIFT-ALM (where the test suite inventory exists). A snapshot of the same is shown above.

Our source code is also integrated with a Sonar dashboard. On every CI run, the dashboard gets updated and provides valuable information about the Java code. We have enabled various plugins on Sonar, like PMD, FindBugs, etc. A developer is expected to look at this dashboard and correct the violations in their module's source files on a continuous basis. The Sonar dashboard gives good insight into the coding patterns of developers and helps the team figure out better ways to write code.


Development:

Once the issues from the last CI run are addressed, the developer's focus shifts to his/her main development card. Customer defects are the cards marked in blue and are our equivalent of the "Expedite" class of service. Our next focus is on the pink cards that indicate internally identified defects, and finally, developers focus on the User Story against their name. We also have Tasks that are equivalent to engineering tasks (called Technical User Stories in many places). This priority "policy" becomes the basis for developers to pull the next card when they are done with their present card. Global items are things like training and CI failure rework.
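The pull "policy" above can be sketched as an ordering over classes of service: customer defects (Expedite) first, then internal defects, then User Stories, then engineering Tasks. This is a hypothetical sketch, not the product's actual implementation; the names are made up.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the pull policy described above.
public class PullPolicy {

    // Declaration order encodes priority: lower ordinal pulls first.
    enum CardType { CUSTOMER_DEFECT, INTERNAL_DEFECT, USER_STORY, ENGINEERING_TASK }

    record Card(String title, CardType type) {}

    // Pick the highest-priority card from the ready queue, or null if empty.
    static Card nextCard(List<Card> ready) {
        return ready.stream()
                .min(Comparator.comparingInt((Card c) -> c.type().ordinal()))
                .orElse(null);
    }
}
```

Encoding the policy in one place makes it explicit and easy to change, which mirrors the Kanban idea of making process policies visible.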



A few additional policies that we have defined:

1. User Stories flow through the Design and the Functional Automation lanes.
2. At the end of the Design stage, the T-shirt estimate is converted into an actual estimate.

While code review is done for all checked-in code, automation code review is only done on a sample basis.

Developers are also free to add tasks to the card, and if needed, assign some of the tasks to another developer who is expected to pitch in.

Developers work on a separate SVN branch created for each User Story. This branch becomes the development workspace for all the developers working on that User Story. It facilitates easy coordination within the development team, and informal code review can start early since the code is already committed. Once development is complete, the developer merges the changes to the main branch (trunk) on SVN and deletes the branch that was created. CruiseControl gets the latest code, does the build, runs the JUnits, deploys the build on the QA server and runs functional automation on all 3 browsers that we certify the product on.
Defect Validation:
Developers are also expected to keep an eye on the validation lane. If they filed an internal or customer-reported defect, they are expected to validate the fix on the QA environment and, if it passes, move the card to the "Ready for Deployment" lane. User Stories are validated by the Product Manager.
Deployment
We are not in a continuous deployment environment, but we do deploy every time we have 20+ ready-to-deploy cards. We do not deploy automatically because we have some test cases that need to be manually validated for technical reasons (third-party product integrations, or test scripts that fail because of issues with our automation tool).

I hope this helps explain the daily routine of a Swift-Kanban developer. It is exciting, and many times more productive than how we used to develop software just a couple of years ago.