Agile Scrum Development

Managing the testing for an agile project comes with a unique set of challenges: one- or two-week iterations, overlapping test coverage, understanding what "done" means for the team, and finding the right balance between testing for functionality and testing for all the other quality criteria, including performance, security, and so on. We'll look at some common issues encountered when testing in an agile environment and provide some tips for dealing with them. In this article, I'll predominantly provide examples from the Scrum process. This isn't because Scrum is necessarily better; it's simply because most of my experience is with Scrum, and it's what I'm comfortable with.

Scrum is an iterative process in which a cross-functional team works on a series of stories from one or more product backlogs. Each iteration is called a "sprint." A story contains the requirements, and the product backlog contains all the stories for a product. Scrum is best known for its short daily meetings, known as "scrums." For more on Scrum, there are a couple of references at the end of this article that can help provide context for the examples.
Dealing with the time crunch
Whether you're new to testing or just new to testing on an agile project, the first challenge you'll run up against is the short amount of time you have to test. Most agile projects I've worked on have been structured around two-week iterations. I've also worked on one-week and three-week iterations. I know some people who've worked on longer iterations, but I haven't.
It's precisely because of this time crunch that agile teams rely so heavily on automation when it comes to testing. There are plenty of other reasons to automate tests, but the ability to get rapid feedback on code changes is certainly a big part of it. For example, take a look at the testing stack Expected Behavior, a Ruby on Rails web application development company, put together. The "basic" tools they use for testing include:

  • Cucumber: a behavior-driven development testing tool
  • HtmlUnit: a test tool for simulating the browser
  • Celerity: a wrapper for HtmlUnit
  • Culerity: integrates Cucumber and Celerity
  • Bootstrapper: assists in bootstrapping and seeding databases
  • Factory Girl: provides a framework and DSL for defining and using factories
Before they write a line of code, they're thinking about testing. The tools are only part of it; it's also the process that supports those tools. If you're working in an agile team, your team needs a testing methodology and the right tools to support it. What tests will you create? Unit, integration, acceptance, performance, etc.? How will you create them? Test-driven, behavior-driven, coverage-focused, etc.? What practices and tools will you use to create them? Fixtures, mocks, profilers, style checkers, etc.?
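To make that concrete, here is a minimal sketch of what a behavior-driven acceptance test might look like in a stack like the one above, pairing Cucumber step definitions with a Factory Girl factory. The :user factory, the page path and field labels, and the Capybara-style browser helpers (visit, fill_in, click_button, page) and RSpec-style expectations are assumptions for illustration, not Expected Behavior's actual code.

```ruby
# features/step_definitions/sign_in_steps.rb (hypothetical)
require 'factory_girl'

# A factory the whole team can use to seed test data
FactoryGirl.define do
  factory :user do
    name     { "Test User" }
    email    { "tester@example.com" }
    password { "secret" }
  end
end

# Implements the plain-English step: "Given a registered user"
Given(/^a registered user$/) do
  @user = FactoryGirl.create(:user)
end

# Implements: "When they sign in with valid credentials"
When(/^they sign in with valid credentials$/) do
  visit '/sign_in'                              # assumed route
  fill_in 'Email',    with: @user.email
  fill_in 'Password', with: @user.password
  click_button 'Sign in'
end

# Implements: "Then they should see their dashboard"
Then(/^they should see their dashboard$/) do
  expect(page).to have_content('Dashboard')
end
```

Because scenarios like this read as plain English and run against the application through a simulated browser, they give the whole team rapid feedback on every code change, which is exactly what short iterations demand.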


While you might normally think of all of this as just a "programming" activity, if you're a test manager for an agile team, you need to know about it. A lot of your strategy will be focused on getting test coverage where your developers don't already have it, which means you need to understand what they're doing and how they're doing it. Further, your team can often build on what they're already doing. Not only can you make use of their test assets (populating data for testing, mocking services, etc.), but you can also extend their tests with your own.
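As a hedged sketch of what "extending their tests" might look like in a Ruby stack like the one above, a tester could inherit from a developer-owned factory to cover an edge case, then drive it through a new scenario. The :user parent factory, the subscription_ends_at attribute, and the page text are hypothetical.

```ruby
# features/support/tester_factories.rb (hypothetical)
require 'factory_girl'

FactoryGirl.define do
  # Build on the developers' existing :user factory instead of
  # re-creating the data setup by hand
  factory :expired_user, parent: :user do
    subscription_ends_at { 30.days.ago }   # ActiveSupport helper, assumed available
  end
end

# features/step_definitions/renewal_steps.rb (hypothetical)
Given(/^a user whose subscription has expired$/) do
  @user = FactoryGirl.create(:expired_user)
end

Then(/^they should be prompted to renew$/) do
  expect(page).to have_content('Renew your subscription')
end
```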
Aside from automation, the next thing you have to figure out to be successful is how to build a framework for rapid testing. For me, this often means implementing an exploratory testing approach, but I know test managers who've also been successful with more traditional approaches. The trick is to organize your testing to reduce documentation, setup time and coordination overhead, while maximizing coverage per test and your ability to leverage tacit knowledge.
Figuring out where your testing fits
When you've got a cross-functional team working in two week increments, things get cramped. There's only so much coverage you can get on software that was written the second to last day of the sprint. This creates an interesting problem for agile testing teams. They have to figure out where their testing fits. Below I share some of the ways I've managed testing for agile teams. Recognize that none of these needs to be exclusive of the others – you can mix and match approaches to meet your needs.
Embedded testers: Most agile teams want the testers working in the sprint, side-by-side with developers. I call this embedded testing. The testers are part of the sprint, and their testing takes place in real time with development. A story in the sprint isn't "done" until it's tested and all the bugs are fixed. In this scenario, testers often do as much test design as they can in the first week and start testing software as soon as it becomes available. It's a very high-pressure situation.
This approach has the advantage that it produces a high-quality product. Programmers get rapid feedback on their software, and if they need to fix bugs, the code is still fresh in their minds. It also creates a shared objective: done means tested and all bugs fixed. Everyone knows it, so everyone works toward it. There's little back and forth on bugs; decisions are made swiftly and acted on even more swiftly.
On the other hand, it creates tremendous pressure in the last few days of the sprint. Any story of reasonable size needs to be completed early in the sprint, or there isn't enough time to run tests or fix bugs. That means the team needs to be on the ball when it comes to estimation, and they have to be smart about how they structure the sprint. Everyone makes mistakes, so as you can imagine, this setup produces a lot of sprints with complications in the final days.
Testing offset from development: The alternative is to push the majority of the testing team's work outside of the sprint. A common practice is to have the testing team lag the development team by one sprint. This removes the end-of-sprint pressure, but introduces the classic problems of separation in time and space – the biggest of which is that when a bug is found, you have to figure out how you're going to get it fixed. That's not a trivial problem, and we'll address it in a later section of this article.
When testing is offset from development, test planning and test execution often happen concurrently. If I'm a tester, I'm doing test planning for the sprint that's currently under way while running tests against the software from the previous sprint, using the test planning artifacts I created at that time. This requires the tester to balance focus between sprints. If they aren't good at that, they'll either fall behind in test planning or do a poorer job of test execution.
Testing focused around production releases: An alternative to both of these approaches is to go back to a more traditional model and test only the final releases that go out the door. In this scenario, you ignore sprints entirely, except perhaps as a vehicle for test planning, and all of your test execution takes place against the final release artifacts. This approach isn't bad for small, quick releases – for example, if you have software going to production every two to four weeks – but I think it's a mistake for large releases. It has all the same problems you get when you test using waterfall methods.
That said, testing for certain quality criteria makes sense on the final release. Testing for performance, usability and security, for example, can often only be done effectively against the final release. While that's not always the case, some types of testing naturally favor the final release. Do everything you can to minimize the amount of testing required to get the final software package out the door, and pull as much of it forward as you possibly can. As a general rule, the closer the testing is to when the code was written, the better off the software will be.
Understanding what needs to be tested and when
In my experience, most iterations contain a mix of stories that will potentially go to production at different times. For example, let's say we use a regular release schedule where we release to production every two weeks. The majority of our bi-weekly releases are minor releases, with a major release every three months. If we ever have to do an off-cycle release, we call that a hotfix.
In a schedule like that, it's been my experience that it's not uncommon to have a sprint where you're doing development for multiple releases at once. You might be doing bug fixes for your next minor release while doing new development on small features for the minor release after that, with a couple of larger features in progress for the next major release. If such a sprint were made up of twenty stories, those stories might be divided up as follows:
| Release Number | Stories in Sprint | Time until release to production | What this means for testing | What this means for issue resolution |
|---|---|---|---|---|
| 2.1.1 | 1 | End of sprint | All test planning and execution would need to take place during the sprint. | Any issues found would need to be resolved in the sprint. |
| 2.2.0 | 14 | 1.5 weeks from end of sprint | All test planning and execution would likely need to take place during the sprint. | Any issues found would, at the latest, need to be resolved during the first week of the next sprint. |
| 2.3.0 | 2 | 3.5 weeks from end of sprint | All test planning would need to take place during the sprint. | Execution and issue resolution could be delayed to the next sprint if needed. |
| 3.0.0 | 3 | 9.5 weeks from end of sprint | Test planning should be done during the sprint, but can wait if needed. | Execution and issue resolution can likely wait if needed. |
In this type of scenario, your team needs to test for four different releases simultaneously. To be successful, you need strong configuration management practices, strong release tracking, and a firm understanding of what test coverage you hope to achieve for each release.
Strong configuration management is important because in agile environments there's often little time available for manual regression testing. In my experience, most regression issues are introduced by poor configuration management practices. If you've got a strong CM plan in place and everyone's clear on what needs to be done, the likelihood that you'll introduce a regression-related issue goes down dramatically. On one past team I worked with, when we re-architected our configuration management plan to better support working multiple releases in each sprint, we cut our regression issues, and issues related to code management, by around 90% for each release.
I'd be surprised if you fully followed the release example in the table above the first time you read it; I'd expect you'd need to re-examine that information a couple of times to really follow when code is going to production. When you've got code going to four different releases, you need strong release tracking. Most release management tools are up to the task of tracking the stories, but it's a huge benefit when you can tie your source code commits to the tickets in a release in that tool. To that end, look for release tracking tools that also integrate with your configuration management tools.
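If your tools don't provide that traceability out of the box, even a small script can give you a rough picture. The sketch below assumes a local git checkout, a made-up branch name, and commit messages that carry JIRA-style ticket IDs such as "SHOP-142: fix rounding error in cart totals"; it's a hypothetical illustration, not a feature of any particular release-tracking product.

```ruby
#!/usr/bin/env ruby
# Hypothetical sketch: list which tickets have commits on a release branch.
# Assumes commit messages reference ticket IDs in an "ABC-123" format.

release_branch = ARGV.fetch(0, 'release/2.2.0')   # made-up branch name

# Pull just the commit subjects for the branch
subjects = `git log --format=%s #{release_branch}`.lines

# Collect every ticket ID mentioned in a commit message
tickets = subjects.flat_map { |line| line.scan(/[A-Z]+-\d+/) }.uniq.sort

puts "Tickets with commits on #{release_branch}:"
tickets.each { |ticket| puts "  #{ticket}" }
```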
Finally, all of this feeds into figuring out what you need to test for each release and when you need to test it. Different sprints and releases will put different pressures on your testing team, and balancing those pressures can be a challenge. Find or develop tools that help you track coverage and progress across multiple releases at a time. On the last agile testing team I managed, each tester was a member of a sprint team and also responsible for coordinating the testing for at least one client-facing release. This meant that in addition to their sprint testing responsibilities, they would be tracking coverage for one release while helping execute testing for others.
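For illustration only, here's a minimal sketch of the kind of cross-release coverage summary I mean. In practice the story list would come from your release-tracking tool rather than being hard-coded, and the story IDs, releases and statuses below are made up.

```ruby
# Hypothetical coverage summary across several in-flight releases
Story = Struct.new(:id, :release, :test_status)

stories = [
  Story.new('STORY-101', '2.1.1', :passed),
  Story.new('STORY-102', '2.2.0', :in_progress),
  Story.new('STORY-103', '2.2.0', :not_started),
  Story.new('STORY-104', '2.3.0', :passed),
  Story.new('STORY-105', '3.0.0', :not_started)
]

# One line per release so the team can see at a glance where the coverage gaps are
stories.group_by(&:release).sort.each do |release, release_stories|
  tested = release_stories.count { |s| s.test_status == :passed }
  puts format('Release %s: %d of %d stories tested', release, tested, release_stories.size)
end
```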
Changing the way you think about bugs
Just as in traditional development methodologies, all of these testing pressures end up putting pressure on releases, the biggest being what the team does when bugs are found. In a waterfall project, developers are often waiting around for bugs while testing is going full steam ahead; they know bugs are coming and are ready for them. With agile, however, developers have likely moved on to the next story. In their minds, they're done with the code that was accepted during the sprint, and they have other deliverables they now need to focus on.
When a bug is found during a sprint, the answer is often easy: the bug gets fixed or the sprint isn't done. But what happens after the sprint? What if a bug is found in final release testing, during a transition sprint, or during your sprint testing in a feature that isn't being worked in the current sprint?
Different teams have different approaches to this problem. Some teams just make all their bugs stories. They go to the backlog just like any other story, and they get prioritized and worked using the same process any other story would get. Other teams work them in break-fix cycles, just like traditional teams. If a bug related to a release is found, developers work the bug right away and testers re-test as needed. In this scenario, you need to have capacity available for developers to do that work. Where will you get that capacity if all the developers are working in a sprint?
Transition sprints are a common answer to this problem. A transition sprint is often the sprint before code goes to production. In this sprint, developers take on a very light workload in anticipation of needing to work issues resulting from final testing and deployment activities. That strategy works well for large releases, but what if your teams work multiple small releases? In that scenario it becomes more difficult to set aside blocks of time like that to focus on a single product.
As a leadership team, you'll need to get on the same page quickly about how your organization will handle these issues. Figure out how you'll manage defects: will they be backlog items or tracked separately? Figure out how you'll deal with the issues that come out of final release testing: will you use transition sprints, or set aside defect-fixing capacity for developers some other way? And for all of this, how will you keep track of it so you can see progress, track time, and know when you're done?
Summing it all up
We covered a lot of ground above. If you're new to agile, hopefully you can see that testing can't happen as its own "phase" the way it does in traditional methodologies. With agile, developers do a lot of the testing, testers are integrated into the workflow of what it means to "complete" a story, and testing can be largely a futile exercise if you don't have good configuration management and release planning in place.
If you're already a test manager on agile projects and you're already aware of those dependencies, then you'll want to focus instead on the tactical logistics: tracking coverage across sprints and releases, prioritizing and load-balancing the work of what is most likely an overworked team, and finding ways to leverage the automation the developers are using every day.

Agile Testing is continued here.
