Articles On Testing

Welcome!


Agile terminologies 1

DEFINITION - In agile software development, an iteration is a single development cycle, usually measured as one week or two weeks. An iteration may also be defined as the elapsed time between iteration planning sessions. While the adjective iterative can be used to describe any repetitive process, it is often applied to any heuristic planning and development process where a desired outcome, like a software application, is created in small sections. These sections are iterations. Each iteration is reviewed and critiqued by the software team and potential end-users; insights gained from the critique of an iteration are used to determine the next step in development. Data models or sequence diagrams, which are often used to map out iterations, keep track of what has been tried, approved, or discarded -- and eventually serve as a kind of blueprint for the final product.
In general, an iteration is the act of repeating.

Acceptance Test:

An acceptance test confirms that a story is complete by matching a user action scenario with a desired outcome. Acceptance testing is also called beta testing, application testing, and end user testing.
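As a small sketch of the idea, an acceptance test pairs a user action scenario with a desired outcome. The shopping-cart functions below are hypothetical, invented purely for illustration:

```python
# Sketch: an acceptance test pairs a user action scenario with a
# desired outcome. The cart API here is hypothetical.
def add_to_cart(cart, item, price):
    """Add an item with its price to the cart."""
    cart[item] = cart.get(item, 0) + price
    return cart

def cart_total(cart):
    """Total price of everything in the cart."""
    return sum(cart.values())

# Scenario: "When a shopper adds two items, the cart total reflects both."
cart = {}
add_to_cart(cart, "book", 12)
add_to_cart(cart, "pen", 3)
assert cart_total(cart) == 15  # desired outcome met: the story is complete
```

When the scenario's assertion passes, the team can consider the story done.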

Agile Software Development:

Agile software development is a methodology for the creative process that anticipates the need for flexibility and applies a level of pragmatism to the delivery of the finished product. Agile software development (ASD) focuses on keeping code simple, testing often, and delivering functional bits of the application as soon as they’re ready.


Customer:

In agile software development, a customer is a person with an understanding of both the business needs and operational constraints for a project. The customer provides guidance during development on what priorities should be emphasized.

Domain Model:

A domain model describes the application domain and establishes a shared language between business and IT.


Iteration:

An iteration is a single development cycle, usually measured as one week or two weeks. An iteration may also be defined as the elapsed time between iteration planning sessions.

Planning Board:

A planning board is used to track the progress of an agile development project. After iteration planning, stories are written on cards and pinned up in priority order on a planning board located in the development area. Development progress is marked on story cards during the week and reviewed daily.

Planning Game:

A planning game is a meeting attended by both IT and business teams that is focused on choosing stories for a release or iteration. Stories are selected based on which will provide the most business value, given the development estimates.


Release:

A release is a deployable software package that is the culmination of several iterations of development. Releases can be made before the end of an iteration.

Release Plan:

A release plan is an evolving flowchart that describes which features will be delivered in upcoming releases. Each story in a release plan has a rough size estimate associated with it.


Spike:

A spike is a story that cannot be estimated until a development team runs a time-boxed investigation. The output of a spike story is an estimate for the original story.


Stand-up:

A stand-up is a daily progress meeting, traditionally held within a development area. Business customers may attend for the purpose of gathering information. The term “standup” is derived from the way it is run: all attendees must remain standing, which keeps the meeting short and the team engaged.


Story:

A story is a particular business need assigned to the software development team. Stories must be broken down into small enough components that they may be delivered in a single development iteration.


Timebox:

A timebox is a defined period of time during which a task must be accomplished. Timeboxes are commonly used in agile software development to manage software development risk. Development teams are repeatedly tasked with producing a releasable improvement to software, timeboxed to a specific number of weeks.


Velocity:

Velocity is the budget of story units available for planning the next iteration of a development project. Velocity is based on measurements taken during previous iteration cycles. Velocity is calculated by adding the original estimates of the stories that were successfully delivered in an iteration.
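A minimal sketch of that calculation; the story names and estimates below are made-up illustration data, not from the article:

```python
# Sketch: velocity is the sum of the original estimates of stories
# that were successfully delivered in the last iteration.
def velocity(stories):
    """Sum the original estimates of stories that were fully delivered."""
    return sum(s["estimate"] for s in stories if s["delivered"])

last_iteration = [
    {"name": "login page", "estimate": 3, "delivered": True},
    {"name": "search",     "estimate": 5, "delivered": True},
    {"name": "reporting",  "estimate": 8, "delivered": False},  # carried over
]

# Budget of story units available when planning the next iteration:
next_iteration_budget = velocity(last_iteration)  # 3 + 5 = 8
```

Note that the undelivered story contributes nothing, even if it was partially done.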


Wiki:

A wiki is a server program that allows users to collaborate in forming the content of a Web site. With a wiki, any user can edit the site content, including other users’ contributions, using a regular Web browser.

Agile Testing is continued here.

Agile terminologies

DEFINITION - A sprint, in Scrum software development, is a set period of time during which specific work has to be completed and made ready for review.
Each sprint begins with a planning meeting. During the meeting, the product owner (the person requesting the work) and the software development team agree upon exactly what work will be accomplished during the sprint. The development team has the final say when it comes to determining how much work can realistically be accomplished during the sprint, and the product owner has the final say on what criteria need to be met for the work to be approved and accepted.
The duration of a sprint is determined by the scrum master (group facilitator). To help with scheduling and planning, once the team agrees with the scrum master on how many days a sprint should last, all future sprints should be the same length. Traditionally, a sprint lasts 30 days.

Once a sprint begins, the product owner must step back and let the team do their work. During the sprint, the team meets daily to discuss progress and brainstorm solutions to challenges. The product owner may attend these meetings as an observer but is not allowed to participate unless it is to answer questions. (See pigs and chickens). The product owner may not make requests for changes, and only the scrum master (team facilitator) has the power to interrupt or stop the sprint.
At the end of the sprint, the team presents its completed work to the product owner, and the product owner uses the criteria established at the sprint planning meeting to either accept or reject the work.

DEFINITION - A burn down chart is a visual representation of the amount of work that still needs to be completed before the end of a project. A burn down chart has a Y axis (work) and an X axis (time). Ideally, the chart illustrates a downward trend as the amount of work still left to do over time "burns down" to zero.
A burn down chart provides both project team members and business owners with a common view of how work is proceeding. The term is often used in agile programming.
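The numbers behind such a chart can be sketched as follows; the totals and daily figures are illustrative only:

```python
# Sketch: the data behind a burn down chart. The Y axis is remaining
# work; the X axis is time. All numbers here are invented examples.
def remaining_work(total, completed_per_day):
    """Return the remaining-work series, one data point per day."""
    series = [total]
    for done in completed_per_day:
        series.append(max(series[-1] - done, 0))
    return series

# A 10-day sprint starting with 40 units of work:
burn_down = remaining_work(40, [4, 5, 3, 0, 6, 5, 4, 5, 4, 4])
# Ideally the series trends steadily down to zero by the last day.
```

Plotting this series against the day number gives the downward-trending line the definition describes.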


What is Agile Testing

The principle of "Agile Development" that comes to the fore is simply:
"Reduce the gap in time between doing some activity and gaining feedback."
Though typically couched in terms of discovering errors sooner, there is a positive side to this concept. You also want to discover you are finished (with a feature) as soon as possible! One of the big parts of being in an agile state of mind is trying to always do the best you can with the time and resources you have. Not discovering a bug until late in the development cycle is obviously expensive. However, going overboard on building code to meet more than the feature demands is potentially no less wasteful. After all, if you add "polish" and "gleam" above and beyond what was needed, you have added more cost to development. In short, you wasted time and money - neither of which can be recovered - and you added more cost to downstream maintenance.
In agile development we want to streamline the process to be as economical as possible and to get feedback as early and as often as possible.
Why test? Tests are a great way to get feedback. Tests are useful to ensure the code is still doing what it is supposed to, that no unintended consequences of a code change adversely affect the functionality. Tests are also very important in that they help add "crisp" definition to the feature description such that a developer will know when the feature is complete. To this end, I have two simple rules for knowing when a feature has been described in enough detail:
  1. The developer can supply a pretty accurate estimate
  2. The tester can write an acceptance test
Not only are there different types of tests, but there are also different ways to conduct tests, as discussed next.
How do we test? Typically testing is done manually and through automation. Okay, that is kind of a "duh," but the real issue is to not confuse the role that each technique can play during development. Though you should automate as much of the testing as possible, this does not mean that all manual testing is no longer needed. Indeed, there are many cases when a system may have passed all the automated tests, only to immediately fail when a user starts to exercise some part of the functionality.
In keeping with agile principles to do smart things, be sure you are not using manual tests to do that which could be automated. And, be sure you are not trying to do expensive automation of tests that are better left for human testers. Additionally, be sure you are not avoiding doing certain tests just because you cannot automate them!
What kinds of tests should you use? There is no "one-size-fits-all" strategy. Like everything in agile development, testing also requires you to use your brain! I know, shocking, eh? Here are an assortment of test types and the purpose/role they can play during development.
  • Unit tests are good for the developers to build up as they are coding their assigned features. The purpose is to ensure that the basic functionality of the underlying code continues to work as changes are made. Plus unit tests are a useful way to document how the "API" is to be used.
  • Acceptance tests are used to ensure specific functionality works. Essentially, the acceptance test is a client-specified scenario that indicates the desired functionality is properly implemented. By having an acceptance test, the developer can know when "good enough" is met, and the feature is complete.
  • UI tests - these often involve some means to step through the page flow, provide known inputs and check actual results against expected results. Some UI tests can get fancy enough to use bitmap comparisons; for example, for graphics or CAD packages.
  • Usability testing - this is a whole other area of testing that often involves humans! I won't go into details here, but usability can often be "make-or-break" acceptance criteria. Some projects can also benefit from automated tests that ensure UI standards are being followed (where they may not be easily enforced through UI frameworks).
  • Performance tests - running a suite of tests to ensure various non-functional metrics are met is critical for many apps. If your app requires stringent performance metrics, you need to tackle this up front during the initial architecture and design phases. Before building the entire app, you need to ensure the performance benchmarks are being met. Typically, you build a thin slice of the application and conduct stress testing. You may run simulated sweeps across system-critical parameters. For example, simulate 1 to 100 users simultaneously accessing 100 to 10,000 records with varying payloads of 100K to 1GB. The benchmark criteria may be the need for response times of 1 second or less. The performance benchmarks are usually automatically run at least on every "major" build - maybe you do this once a week on Friday, or at the end of each iteration. I like to keep running tables and graphs of these benchmark results. You can spot trends.
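A sweep like the one described can be sketched as follows; `run_scenario` is a hypothetical stand-in for code that would drive the real system, and its timing formula is invented for illustration:

```python
# Sketch of a benchmark sweep across system-critical parameters.
# run_scenario is a placeholder: a real harness would hit the system
# under test and measure the actual response time.
import itertools

RESPONSE_TIME_LIMIT = 1.0  # benchmark criterion: 1 second or less

def run_scenario(users, records):
    # Simulated response time in seconds (invented formula).
    return 0.005 * users + 0.00001 * records

def sweep(user_counts, record_counts):
    """Run every combination of parameters and collect response times."""
    results = {}
    for users, records in itertools.product(user_counts, record_counts):
        results[(users, records)] = run_scenario(users, records)
    return results

results = sweep([1, 10, 100], [100, 10_000])
failures = {k: v for k, v in results.items() if v > RESPONSE_TIME_LIMIT}
# Tabulating `results` per build lets you spot performance trends over time.
```

Running the sweep on every major build, as suggested above, turns `results` into exactly the kind of running table you can graph for trends.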
If you are practicing test-first development, you can start with an acceptance test for the feature you are implementing. But you will quickly get involved with writing lower-level tests to deal at simpler, more granular levels of the system.
Personally, I don't use tests to derive the architecture and major classes involved in the design. But I can see where some people do. Much depends on your habits, and more likely how your brain is wired. Mine is pretty loose :-), and I tend to visualize objects.
What test techniques are useful? Though automation is key, it doesn't have to be the only kind of testing you run! I generally like to have a mix of automated and manual tests. And for the automated tests you can refine the frequency of when they are run. Here are various types of testing and the roles they can play in an agile project:

  • "Smoke" Tests - essentially a handful of critical tests that ensure the basic build functionality works. Some can be automated, but others can be manual. If it is easy to run the smoke tests, it can help the development team know that the daily build is probably useful. When manual tests are involved (that is, a bit more expensive to conduct), this technique is often reserved for "major" builds that might be ready for undergoing more formal QA testing (to prevent you from doing expensive and exhaustive testing on a "bad" build).
  • Test Harness - frequently a good way to "institutionalize" exposing functionality of the system (e.g., via Web Services) for coarse-grain functionality typical of acceptance tests and major system scenarios (and even external invocation). Wiring up a test harness enables ease of extending the breadth of the tests as new functionality is added. You can build test harnesses that are very automated in terms of capturing the user input (using a record mode), capturing the output, and getting a user's (in this case typically a tester's) acknowledgement that the output is correct. Each captured test case is added to a test database for later playback orchestration.
  • Automated stress test "bots" - if you do a good job of designing a layered architecture with clean separation of concerns, you may be able to build in some testing robots. This is often especially easy when dealing with service-based systems. You can use XML-style config files to control the tests being run. You can build tests to exercise each layer (e.g., persistence, business, messaging, presentation) of the system, building up to tests that replicate actual user usages of the system (except for the UI part). We have built such bots that can then be distributed on dozens of systems and can even be spawned in multiples on each system. This allows for very simple concurrent testing to "hammer" the server from all over the world. The config files control the intensity, breadth, and duration of the tests.
  • Manual tests - these are reserved for those aspects of the system best left for human testing. That is, to use a tester for boring, mundane, tedious, and monotonous testing is to be very wasteful of human potential. Not only is the likelihood of having errors in the testing high, but you will probably not have time to catch the more elusive bugs. Instead, allow the testers to focus on being more exhaustive with using the application in complex ways that may not be easy to automate.
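The config-driven bot idea above can be sketched roughly as follows; the XML element names and attributes are invented for illustration, not taken from any real tool:

```python
# Sketch: an XML-style config controlling a test bot's intensity,
# breadth, and duration, as described above. The schema is hypothetical.
import xml.etree.ElementTree as ET

CONFIG = """
<bot>
  <intensity workers="8"/>
  <breadth layers="persistence,business,messaging"/>
  <duration minutes="30"/>
</bot>
"""

def parse_bot_config(text):
    """Read the knobs a distributed stress bot would honor."""
    root = ET.fromstring(text)
    return {
        "workers": int(root.find("intensity").get("workers")),
        "layers": root.find("breadth").get("layers").split(","),
        "minutes": int(root.find("duration").get("minutes")),
    }

config = parse_bot_config(CONFIG)
# A driver would spawn config["workers"] bots per machine and exercise
# each named layer for config["minutes"] minutes.
```

Distributing the same config to dozens of machines is what lets the bots "hammer" the server concurrently from many locations.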
All of the test results run against the build machine should be visible in summary form, with drill-down as required. A good way to do this is with a special page on the wiki (or other common place) to show current build results. We also always send out emails when new (successful) builds are available. And we send emails to the folks who may have errors in their code, and create defect issues as required!
There are many tools to help in this publishing process.
Putting it together
So you should have a mix of manual and automated tests that go something like this during development:

  • Grab a well-defined feature
  • Write the acceptance test, write some code, write some unit tests and more code as needed, get the tests to pass, check in the code
  • Is this feature performance critical?
    • Write a performance test to measure a critical benchmark
    • Ensure the test passes as you are doing development
    • Add to the benchmark suite
  • Does the feature have unique UI aspects?
    • May need to use manual testing to ensure usability and complex functionality that is too difficult to automate
  • Add the acceptance test to the functional test suite (use an automated tool for this)
  • If the feature passes the acceptance test and the (optional) performance tests, you can declare victory on the feature, close it and move to the next feature!
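The write-the-test-first loop above might look like this in miniature; the discount feature and its numbers are hypothetical:

```python
# Sketch: acceptance test first, then unit tests and code until
# everything passes. The discount feature is an invented example.
def unit_price_after_discount(price, percent):
    """Apply a percentage discount to a single price."""
    return round(price * (100 - percent) / 100, 2)

def order_total(items, discount_percent):
    """Total an order with the discount applied to each item."""
    return sum(unit_price_after_discount(p, discount_percent) for p in items)

# Unit test: the low-level helper behaves as expected.
assert unit_price_after_discount(100.0, 10) == 90.0

# Acceptance test: the client-specified scenario.
# "A 10% discount on a 100 + 50 order comes to 135."
assert order_total([100.0, 50.0], 10) == 135.0
```

Once both levels of test pass, the feature can be checked in and the acceptance test added to the functional suite.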
If you are using other testing techniques and tools (test harnesses, test bots, helpful tools such as my company's functional testing, load testing, code coverage (DevPartner), and system-level analysis (Vantage Analyzer) tools, or ObjectMentor's FitNesse), you will need to keep those tools in sync.
Constantly improve
Start small. Do not overdo the process from the get-go. The tests and techniques should grow over time to meet your needs. If you get some nasty bug reported that could be prevented in the future by a test, add a new acceptance or unit test(s)! If users complain about some performance aspect of the system (like it takes 2 minutes to open up a large project file), add a benchmark test to your performance suite. This will be used to precisely document the poor performance, and then to show the improvement once the developer makes the fixes. If some tests are no longer worth executing, comment them out (and eventually delete them).
The point is to use your brain!
Agile is a state of mind! Testing in an agile project can vary in depth and style, but not in its intent. Build a mix of tools, techniques, process, and automation to make the right tests work for you.


Agile Scrum Development

Managing the testing for an agile project can have a unique set of challenges: one or two week iterations, overlapping test coverage, understanding what "done" means for the team, and finding the right balance of testing for functionality vs. all the other quality criteria including performance, security, etc. We'll look at some common issues encountered when testing in an agile environment, and provide some tips for dealing with them.

In this article, I'll predominantly provide examples of the Scrum process. This isn't because Scrum is necessarily better; it's simply because most of my experience is using Scrum. It's what I'm comfortable with.

Scrum is an iterative process, where a cross-functional team works on a series of stories from one or more product backlogs. Each iteration is called a "sprint." Your story contains your requirements, and the product backlog contains all the stories for a product. Scrum is best known for its short daily meetings, known as "scrums." For more on Scrum, there are a couple of references provided at the end of this article which can help provide context for the examples.
Dealing with the time crunch
Whether you're new to testing, or just new to testing on an agile project, the first challenge you'll run up against is the short amount of time you have to test. Most agile projects I've worked on have been structured for completion in two week iterations. I've also worked on one week iterations, and three week iterations. I know some people who've worked on longer iterations, but I haven't.
It's precisely due to this time crunch that agile teams rely so heavily on automation when it comes to testing. There are a lot of other reasons why you might want to automate tests, but certainly the ability to get rapid feedback on code changes is a big part of it. For example, take a look at the testing stack Expected Behavior put together. They're a Ruby on Rails Web application development company. The "basic" tools they use for testing include:

  • Cucumber: a behavior-driven development testing tool
  • HtmlUnit: a test tool for simulating the browser
  • Celerity: a wrapper for HtmlUnit
  • Culerity: integrates Cucumber and Celerity
  • Bootstrapper: assists in bootstrapping and seeding databases
  • Factory Girl: provides a framework and DSL for defining and using factories
Before they write a line of code, they're thinking of testing. The tools are only part of it. It's also the process that supports those tools. If you're working in an agile team, your team needs a testing methodology and the right tools to support it. What tests will you create? Unit, integration, acceptance, performance, etc.? How will you create them? Test-driven, behavior-driven, coverage-focused, etc.? What practices and tools will you use to create them? Fixture, mocks, profilers, checkstyles, etc.?
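As one small example of the mock-based practices listed above, a mock can stand in for a real service so tests run without external dependencies; the `checkout` function and gateway below are hypothetical:

```python
# Sketch: a mock stands in for a real payment service, so the test
# needs no network. The gateway API here is invented for illustration.
from unittest.mock import Mock

def checkout(gateway, amount):
    """Charge the gateway and report success or failure."""
    return "paid" if gateway.charge(amount) else "declined"

# The mock replaces the real service and records how it was called.
gateway = Mock()
gateway.charge.return_value = True
assert checkout(gateway, 25) == "paid"
gateway.charge.assert_called_once_with(25)
```

Because the mock records calls, the test verifies both the outcome and that the service was invoked correctly.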

While you might normally think of all of this as just a "programming" activity, if you're a test manager for an agile team, you need to know. A lot of your strategy will be focused on getting test coverage where your developers don't already have it. That means you need to understand what they're doing and how they're doing it. Further, many times your team can build on what they're already doing. Not only can you make use of their test assets (populating data for testing, mocking services, etc.), but you can also extend their tests with your own.
Aside from automation, the next thing you have to figure out to be successful is how to build a framework for rapid testing. For me, this often means implementing an exploratory testing approach. But I know of some test managers who've also been successful with more traditional approaches. The trick is to organize your testing to reduce documentation, setup time and coordination overhead, while maximizing coverage per test and how much you can leverage tacit knowledge.
Figuring out where your testing fits
When you've got a cross-functional team working in two week increments, things get cramped. There's only so much coverage you can get on software that was written the second to last day of the sprint. This creates an interesting problem for agile testing teams. They have to figure out where their testing fits. Below I share some of the ways I've managed testing for agile teams. Recognize that none of these needs to be exclusive of the others – you can mix and match approaches to meet your needs.
Embedded testers: Most agile teams want the testers working in the sprint, side-by-side with developers. I call this embedded testing. The testers are part of the sprint, and their testing takes place real time with development. A story in the sprint isn't "done," until it's tested and all the bugs are fixed. In this scenario, testers often do as much test design as they can the first week and start testing software as soon as it becomes available. It's a very high pressure situation.
This approach has the advantage in that it produces a high-quality product. Programmers get rapid feedback on their software, and if they need to fix bugs, the code is still fresh in their minds. It also creates a shared objective – done means tested and all bugs fixed. Everyone knows it, so everyone works towards it. There's little back and forth on bugs, decisions are made swiftly and acted on more swiftly.
On the other hand, it creates tremendous pressure in the last few days of the sprint. Any story of reasonable size needs to be completed early in the sprint, else there isn't enough time to run tests or fix bugs. That means the team needs to be on the ball when it comes to estimation, and they have to be smart in the way they structure the sprint. Everyone makes mistakes, so as you can imagine, in this setup, there are a lot of sprints that have complications late in the sprint.
Testing offset from development: The alternative is to push the majority of the testing-team's testing to outside of the sprint. A common practice is to have the testing team lag the development team by one sprint. This removes the end-of-sprint pressures, but introduces the classic problems of separation of time and space - the biggest of which is that when a bug is found, you have to figure out how you're going to fix it. That's not a trivial problem and we'll address it in a later section of this article.
When testing is offset from development, often test planning and test execution happen concurrently. If I'm a tester, I'm likely doing test planning for the sprint that's currently taking place and I'm running tests against the software from the previous sprint using the test planning artifacts I made at that time. This requires the tester to balance focus between sprints. If they aren't good at that, they'll either fall behind in test planning, or not do as good of a job with their test execution.
Testing focused around production releases: An alternative to both of these approaches is to go back to a more traditional approach to testing and to just test the final releases that go out the door. In this scenario, you ignore sprints entirely, except perhaps as a vehicle for test planning. And all of your test execution takes place against the final release artifacts. This approach isn't bad for small quick releases – for example if you had software going to production every two to four weeks – but I think it's a mistake for large releases. It has all the same problems you get when you test using waterfall methods.
That said, testing for certain quality criteria makes sense on the final release. For example, testing for performance, usability and security might only be able to be done effectively on the final release. While that's not always the case, certainly there are certain types of testing that are going to favor your final release. Do everything you can to minimize the amount of testing required to get the final software package out the door. Pull as much of it forward as you possibly can. As a general rule, the closer the testing is to when the code was written, the better off the software will be.
Understanding what needs to be tested and when
In my experience, most iterations contain a mix of stories that will potentially be going to production at different times. For example, let's say we use a regular release schedule where we release to production every two weeks. The majority of our bi-weekly regular releases are minor releases, with a major release every three months. If we ever have to do an off release, we'll call that a hotfix. It might look like the following:
In that scenario, it's been my experience that it's not uncommon to have a sprint where you're doing development for multiple releases at a time. For example, you might be doing bugfixes for your next minor release, while doing new development on small features for the minor release after that, with a couple of larger features being worked on for the next major release. In that scenario, if you were working Sprint 1 in the example above, and the sprint was made up of twenty stories, those stories might be divided up as follows:
For each release being worked in the sprint, the time until release to production determines what it means for testing and for issue resolution:

  • Release at the end of the sprint: all test planning and execution would need to take place during the sprint. Any issues found would need to be resolved in the sprint.
  • Release 1.5 weeks from the end of the sprint: all test planning and execution would likely need to take place during the sprint. Any issues found would, at the latest, need to be resolved during the first week of the next sprint.
  • Release 3.5 weeks from the end of the sprint: all test planning would need to take place during the sprint. Execution and issue resolution could be delayed to the next sprint if needed.
  • Release 9.5 weeks from the end of the sprint: test planning should be done during the sprint, but can wait if needed. Execution and issue resolution can likely wait if needed.
In this type of scenario, your team needs to test for four different releases simultaneously. To be successful, you need strong configuration management practices, strong release tracking, and a firm understanding of what testing coverage you hope to achieve for each release.
Strong configuration management is important since in agile environments there's often little time available for manual regression testing. In my experience, most regression issues introduced are due to poor configuration management practices. If you've got a strong CM plan in place and everyone's clear on what needs to be done, the likelihood that you'll introduce a regression-related issue goes down dramatically. On one past team I worked with, when we re-architected our configuration management plan to better allow us to work multiple releases each sprint, we were able to cut our regression issues, and issues related to code management, by around 90% for each release.
I'd be surprised if you followed the example I outlined above in both the diagram and the table the first time you read them. I would expect you'd need to re-examine that information a couple of times to really follow when code is going to production. When you've got code going to four different releases, you need strong release tracking. Most release management tools are up to the task of tracking the stories, but it's a huge benefit when you can tie your source code commits to the tickets in a release in that tool. To that end, look for release tracking tools that also integrate with your configuration management tools.
Finally, all of this feeds into figuring out what you need to test for each release and when you need to test it. Different sprints and releases will apply different pressures to your testing team. Balancing those pressures can be a challenge. Find or develop tools that help you track coverage and progress across multiple releases at a time. On the last agile testing team I managed, each tester was a member of a sprint team and also responsible for coordinating the testing for at least one client-facing release. This meant that in addition to their sprint testing responsibilities, they would be tracking coverage for one release while helping execute testing for others.
Changing the way you think about bugs
Just like in traditional development methodologies, all of these testing pressures end up putting pressure on releases. The biggest of which is what the team does when bugs are found. In a waterfall project, often developers are waiting around for bugs when testing is going full-steam ahead. They know bugs are coming and are waiting for them. With agile however, developers have likely moved on to the next story. In their mind, they're done with the code that's been accepted during the sprint. They have other deliverables now they need to focus on.
When a bug is found during a sprint, the answer is often easy – the bugs get fixed or the sprint isn't done. However, what happens after the sprint? What if a bug is found in final release testing, during a transition sprint, or you find a bug during your sprint testing in a feature not being worked on in your current sprint?
Different teams have different approaches to this problem. Some teams just make all their bugs stories. They go to the backlog just like any other story, and they get prioritized and worked using the same process any other story would get. Other teams work them in break-fix cycles, just like traditional teams. If a bug related to a release is found, developers work the bug right away and testers re-test as needed. In this scenario, you need to have capacity available for developers to do that work. Where will you get that capacity if all the developers are working in a sprint?
Transition sprints are a common answer to this problem. A transition sprint is often the sprint before code goes to production. In this sprint developers take a very light workload for the sprint in anticipation of the need to work issues resulting from final testing and/or deployment activities. That strategy works well for large releases, but what if your teams work multiple small releases? It becomes more difficult in that scenario to set aside blocks of time like that to focus on a single product.
As a leadership team, you'll need to get on the same page quickly with how your organization will handle these issues. Figure out how you'll manage defects: will they be backlog items or tracked separately? Figure out how you'll deal with the issues that come from final release testing activities: will you use transition sprints, or set aside defect-work capacity for developers in some other way? And for all of this, how will you keep track of it all so you can see progress, track time, and know when you're done?
Summing it all up
Above, we covered a lot of ground. If you're new to agile, hopefully you'll see that testing can't happen as its own "phase" like it does in traditional methodologies. With agile, developers do a lot of the testing, testers are integrated in the workflow of what it means to "complete" a story, and testing can be largely a futile exercise if you don't have good configuration management and release planning in place.
If you're already a test manager on agile projects and you're already aware of those dependencies, then you'll want to instead focus on the tactical logistics of tracking coverage across sprints and releases, prioritizing and load balancing the work on what is most likely an overworked team, and finding ways to leverage the automation the developers are using every day.
