Agile Terminologies

DEFINITION - In agile software development, an iteration is a single development cycle, usually lasting one or two weeks. An iteration may also be defined as the elapsed time between iteration planning sessions. While the adjective iterative can describe any repetitive process, it is often applied to heuristic planning and development processes in which a desired outcome, such as a software application, is created in small sections. These sections are iterations. Each iteration is reviewed and critiqued by the software team and potential end users; insights gained from the critique of an iteration are used to determine the next step in development. Data models or sequence diagrams, which are often used to map out iterations, keep track of what has been tried, approved, or discarded, and eventually serve as a kind of blueprint for the final product.
In general, an iteration is the act of repeating.

Acceptance Test:

An acceptance test confirms that a story is complete by matching a user action scenario with a desired outcome. Acceptance testing is also called beta testing, application testing, and end-user testing.
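As a minimal sketch of the idea (the `Cart` class and the checkout story are invented for illustration), an acceptance test pairs a user action scenario with the desired outcome that marks the story complete:

```python
# Hypothetical story: "a user can add an item to the cart and check out."
# The acceptance test below encodes the scenario and the desired outcome.

class Cart:
    """Toy stand-in for the system under test."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def checkout(self):
        if not self.items:
            raise ValueError("cannot check out an empty cart")
        return {"status": "order_created", "items": list(self.items)}

def test_checkout_story_is_complete():
    # Scenario: the user adds an item, then checks out
    cart = Cart()
    cart.add("book")
    order = cart.checkout()
    # Desired outcome: an order is created containing the item
    assert order["status"] == "order_created"
    assert "book" in order["items"]

test_checkout_story_is_complete()
```

When a test like this passes, the team has objective evidence that the story's agreed-upon scenario behaves as the customer expects.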

Agile Software Development:

Agile software development is a methodology for the creative process that anticipates the need for flexibility and applies a level of pragmatism to the delivery of the finished product. Agile software development (ASD) focuses on keeping code simple, testing often, and delivering functional bits of the application as soon as they’re ready.


Customer:

In agile software development, a customer is a person with an understanding of both the business needs and operational constraints for a project. The customer provides guidance during development on what priorities should be emphasized.

Domain Model:

A domain model describes the application domain and is responsible for creating a shared language between business and IT.


Iteration:

An iteration is a single development cycle, usually measured as one week or two weeks. An iteration may also be defined as the elapsed time between iteration planning sessions.

Planning Board:

A planning board is used to track the progress of an agile development project. After iteration planning, stories are written on cards and pinned up in priority order on a planning board located in the development area. Development progress is marked on story cards during the week and reviewed daily.

Planning Game:

A planning game is a meeting attended by both IT and business teams that is focused on choosing stories for a release or iteration. Story selection is based on which stories will provide the most business value given development estimates.


Release:

A release is a deployable software package that is the culmination of several iterations of development. Releases can be made before the end of an iteration.

Release Plan:

A release plan is an evolving flowchart that describes which features will be delivered in upcoming releases. Each story in a release plan has a rough size estimate associated with it.


Spike:

A spike is a story that cannot be estimated until a development team runs a time-boxed investigation. The output of a spike story is an estimate for the original story.


Stand-Up:

A stand-up is a daily progress meeting, traditionally held within a development area. Business customers may attend for the purpose of gathering information. The term "stand-up" is derived from the way it is run: all attendees must remain standing, which keeps the meeting short and the team engaged.


Story:

A story is a particular business need assigned to the software development team. Stories must be broken down into small enough components that they may be delivered in a single development iteration.


Timebox:

A timebox is a defined period of time during which a task must be accomplished. Timeboxes are commonly used in agile software development to manage software development risk. Development teams are repeatedly tasked with producing a releasable improvement to software, timeboxed to a specific number of weeks.


Velocity:

Velocity is the budget of story units available for planning the next iteration of a development project. Velocity is based on measurements taken during previous iteration cycles. Velocity is calculated by adding the original estimates of the stories that were successfully delivered in an iteration.
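The velocity calculation is simple arithmetic, sketched below with invented story data: sum the original estimates of only the stories that were actually delivered.

```python
# Hypothetical iteration results: each story carries its ORIGINAL estimate
# and whether it was successfully delivered in the iteration.
stories = [
    {"name": "login page", "estimate": 3, "delivered": True},
    {"name": "report export", "estimate": 5, "delivered": True},
    {"name": "search filter", "estimate": 2, "delivered": False},  # carried over
]

# Velocity: the sum of original estimates for delivered stories only.
velocity = sum(s["estimate"] for s in stories if s["delivered"])
print(velocity)  # 8 -> the budget of story units for planning the next iteration
```

Note that the undelivered story contributes nothing, even if it was partially done; that is what makes velocity a conservative planning budget.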


Wiki:

A wiki is a server program that allows users to collaborate in forming the content of a Web site. With a wiki, any user can edit the site content, including other users' contributions, using a regular Web browser.


Agile Terminologies Contd.

DEFINITION - A sprint, in Scrum software development, is a set period of time during which specific work has to be completed and made ready for review.
Each sprint begins with a planning meeting. During the meeting, the product owner (the person requesting the work) and the software development team agree upon exactly what work will be accomplished during the sprint. The development team has the final say when it comes to determining how much work can realistically be accomplished during the sprint, and the product owner has the final say on what criteria need to be met for the work to be approved and accepted.
The duration of a sprint is determined by the scrum master (group facilitator). To help with scheduling and planning, once the team agrees with the scrum master on how many days a sprint should last, all future sprints should be the same length. Traditionally, a sprint lasts 30 days.

Once a sprint begins, the product owner must step back and let the team do their work. During the sprint, the team meets daily to discuss progress and brainstorm solutions to challenges. The product owner may attend these meetings as an observer but is not allowed to participate unless it is to answer questions. (See pigs and chickens.) The product owner may not make requests for changes, and only the scrum master (team facilitator) has the power to interrupt or stop the sprint.
At the end of the sprint, the team presents its completed work to the product owner, and the product owner uses the criteria established at the sprint planning meeting to either accept or reject the work.

DEFINITION - A burn down chart is a visual representation of the amount of work that still needs to be completed before the end of a project. A burn down chart has a Y axis (work) and an X axis (time). Ideally, the chart illustrates a downward trend as the amount of work still left to do over time "burns down" to zero.
A burn down chart provides both project team members and business owners with a common view of how work is proceeding. The term is often used in agile programming.
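The burn down data itself is just a running subtraction, sketched here with made-up numbers: start from the total work and subtract what the team completes each day.

```python
# Burn down sketch (invented numbers): Y axis is remaining work, X axis is time.
total_work = 40  # story units committed at the start of the iteration
completed_per_day = [0, 5, 3, 8, 6, 4, 7, 7]  # units finished each day

remaining = []
left = total_work
for done in completed_per_day:
    left -= done
    remaining.append(left)

print(remaining)  # [40, 35, 32, 24, 18, 14, 7, 0] -> "burns down" to zero
```

Plotting `remaining` against the day index gives the downward-trending line the chart is named for; a line that flattens out signals work that is not getting done.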


What is Agile Testing

If there is one principle of "Agile Development" that comes to the fore, it is simply: "Reduce the gap in time between doing some activity and gaining feedback."
Though typically couched in terms of discovering errors sooner, there is a positive side to this concept. You also want to discover you are finished (with a feature) as soon as possible! One of the big parts of being in an agile state of mind is trying to always do the best you can with the time and resources you have. Not discovering a bug until late in the development cycle is obviously expensive. However, going overboard on building code to meet more than the feature demands is potentially no less wasteful. After all, if you add "polish" and "gleam" above and beyond what was needed, you have added more cost to development. In short, you wasted time and money - neither of which can be recovered - and you added more cost to downstream maintenance.
In agile development we want to streamline the process to be as economical as possible and to get feedback as early and as often as possible.
Why test? Tests are a great way to get feedback. Tests are useful to ensure the code is still doing what it is supposed to, that no unintended consequences of a code change adversely affect the functionality. Tests are also very important in that they help add "crisp" definition to the feature description such that a developer will know when the feature is complete. To this end, I have two simple rules for knowing when a feature has been described in enough detail:
  1. The developer can supply a pretty accurate estimate
  2. The tester can write an acceptance test
Not only are there different types of tests, but there are also different ways to conduct tests, as discussed next.
How do we test? Typically testing is done manually and through automation. Okay, that is kind of a "duh," but the real issue is to not confuse the role that each technique can play during development. Though you should automate as much of the testing as possible, this does not mean that all manual testing is no longer needed. Indeed, there are many cases when a system may have passed all the automated tests, only to immediately fail when a user starts to exercise some part of the functionality.
In keeping with agile principles to do smart things, be sure you are not using manual tests to do that which could be automated. And, be sure you are not trying to do expensive automation of tests that are better left for human testers. Additionally, be sure you are not avoiding doing certain tests just because you cannot automate them!
What kinds of tests should you use? There is no "one-size-fits-all" strategy. Like everything in agile development, testing also requires you to use your brain! I know, shocking, eh? Here are an assortment of test types and the purpose/role they can play during development.
  • Unit tests are good for the developers to build up as they are coding their assigned features. The purpose is to ensure that the basic functionality of the underlying code continues to work as changes are made. Plus unit tests are a useful way to document how the "API" is to be used.
  • Acceptance tests are used to ensure specific functionality works. Essentially, the acceptance test is a client-specified scenario that indicates the desired functionality is properly implemented. By having an acceptance test, the developer can know when "good enough" is met, and the feature is complete.
  • UI tests - these often involve some means to step through the page flow, provide known inputs and check actual results against expected results. Some UI tests can get fancy enough to use bitmap comparisons; for example, for graphics or CAD packages.
  • Usability testing - this is a whole other area of testing that often involves humans! I won't go into details here, but usability can often be "make-or-break" acceptance criteria. Some projects can also benefit from automated tests that ensure UI standards are being followed (where they may not be easily enforced through UI frameworks).
  • Performance tests - running a suite of tests to ensure various non-functional metrics are met is critical for many apps. If your app requires stringent performance metrics, you need to tackle this up front during the initial architecture and design phases. Before building the entire app, you need to ensure the performance benchmarks are being met. Typically, you build a thin slice of the application and conduct stress testing. You may run simulated sweeps across system-critical parameters. For example, simulate 1 to 100 users simultaneously accessing 100 to 10,000 records with varying payloads of 100K to 1GB. The benchmark criteria may be the need for response times of 1 second or less. The performance benchmarks are usually automatically run at least on every "major" build - maybe you do this once a week on Friday, or at the end of each iteration. I like to keep running tables and graphs of these benchmark results. You can spot trends.
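To make the first bullet concrete, here is a minimal unit test sketch (`apply_discount` is a hypothetical feature): developers build tests like this up as they code, so the basic functionality keeps working as changes are made, and the test doubles as documentation of how the API is meant to be used.

```python
def apply_discount(price, percent):
    """Return price reduced by the given percentage (hypothetical feature)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Basic functionality: the happy path keeps working across code changes
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(50.0, 0) == 50.0
    # The test also documents the contract: out-of-range input is rejected
    try:
        apply_discount(10.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
```

An acceptance test for the same area would instead phrase the check in the client's terms (for example, "a 10% coupon reduces the order total"), one level above this API.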
If you are practicing test-first development, you can start with an acceptance test for the feature you are implementing. But you will quickly get involved with writing lower-level tests to deal at simpler, more granular levels of the system.
Personally, I don't use tests to derive the architecture and major classes involved in the design. But I can see where some people do. Much depends on your habits, and more likely how your brain is wired. Mine is pretty loose :-) and I tend to visualize objects.
What test techniques are useful? Though automation is key, it doesn't have to be the only kind of testing you run! I generally like to have a mix of automated and manual tests. And for the automated tests you can refine the frequency of when they are run. Here are the various types of testing and the role each can play in an agile project:

  • "Smoke" Tests - essentially a handful of critical tests that ensure the basic build functionality works. Some can be automated, but others can be manual. If it is easy to run the smoke tests, it can help the development team know that the daily build is probably useful. When manual tests are involved (that is, a bit more expensive to conduct), this technique is often reserved for "major" builds that might be ready for undergoing more formal QA testing (to prevent you from doing expensive and exhaustive testing on a "bad" build).
  • Test Harness - frequently a good way to "institutionalize" exposing functionality of the system (e.g., via Web Services) for coarse-grain functionality typical for acceptance tests and major system scenarios (and even external invocation). Wiring up a test harness enables ease of extending the breadth of the tests as new functionality is added. You can build test harnesses that are very automated in terms of capturing the user input (using a record mode), capturing the output, and getting a user's (in this case typically a tester's) acknowledgement that the output is correct. Each captured test case is added to a test database for later playback orchestration.
  • Automated stress test "bots" - if you do a good job of designing a layered architecture with clean separation of concerns, you may be able to build in some testing robots. This is often especially easy when dealing with service-based systems. You can use XML-style config files to control the tests being run. You can build tests to exercise each layer (e.g., persistence, business, messaging, presentation) of the system, building up to tests that replicate actual user usages of the system (except for the UI part). We have built such bots that can then be distributed on dozens of systems and can even be spawned in multiples on each system. This allows for very simple concurrent testing to "hammer" the server from all over the world. The config files control the intensity, breadth, and duration of the tests.
  • Manual tests - these are reserved for those aspects of the system best left for human testing. That is, to use a tester for boring, mundane, tedious, and monotonous testing is to be very wasteful of human potential. Not only is the likelihood of having errors in the testing high, but you will probably not have time to catch the more elusive bugs. Instead, allow the testers to focus on being more exhaustive with using the application in complex ways that may not be easy to automate.
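The "stress test bot" idea above hinges on config-driven control. A rough sketch of the config side (the XML layout, layer names, and attributes are all invented): parse an XML-style config that sets the intensity, breadth, and duration of a run, then hand those values to the bot.

```python
# Sketch of the config-driven part of a test "bot". The schema is hypothetical:
# intensity = workers, breadth = target layers/operations, duration in seconds.
import xml.etree.ElementTree as ET

CONFIG = """
<stress-test duration-seconds="60" workers="8">
    <target layer="persistence" operation="save_record"/>
    <target layer="business" operation="price_order"/>
</stress-test>
"""

root = ET.fromstring(CONFIG)
duration = int(root.get("duration-seconds"))
workers = int(root.get("workers"))
targets = [(t.get("layer"), t.get("operation")) for t in root.findall("target")]

# A real bot would now spawn `workers` concurrent clients hammering each
# target operation for `duration` seconds and record the results.
print(workers, duration, targets)
```

Because the behavior lives entirely in the config file, the same bot binary can be distributed across many machines and retargeted without a rebuild, which is what makes the "hammer the server from all over the world" setup cheap to run.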
All of the test results run against the build machine should be visible in summary form, with drill-down as required. A good way to do this is with a special page on the wiki (or other common place) to show current build results. We also always send out emails when new (successful) builds are available. And we send emails to the folks who may have errors in their code, and create defect issues as required!
There are many tools to help in this publishing process.
Putting it together

So you should have a mix of manual and automated tests that go something like this during development:

  • Grab a well-defined feature
  • Write the acceptance test, write some code, write some unit tests and more code as needed, get the tests to pass, check in the code
  • Is this feature performance critical?
    • Write a performance test to measure a critical benchmark
    • Ensure the test passes as you are doing development
    • Add to the benchmark suite
  • Does the feature have unique UI aspects?
    • Use manual testing as needed to verify usability and complex functionality that is too difficult to automate
  • Add the acceptance test to the functional test suite (use an automated tool for this)
  • If the feature passes the acceptance test and the (optional) performance tests, you can declare victory on the feature, close it and move to the next feature!
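The performance branch of the workflow above can be sketched as a tiny timed benchmark (`export_report` and the threshold are stand-ins; the 1-second criterion echoes the example benchmark mentioned earlier): time the critical operation and fail the test if it exceeds the agreed criterion.

```python
# Minimal benchmark sketch for a performance-critical feature (assumed names).
import time

def export_report(rows):
    # Stand-in for the feature under test: serialize rows to CSV-style text.
    return "\n".join(",".join(map(str, r)) for r in rows)

rows = [(i, i * 2) for i in range(10_000)]

start = time.perf_counter()
export_report(rows)
elapsed = time.perf_counter() - start

# The benchmark criterion: the operation must finish within 1 second.
assert elapsed < 1.0, f"benchmark failed: {elapsed:.3f}s exceeds 1s criterion"
```

Adding this to the benchmark suite and running it on every major build gives you the running tables and trend graphs described above, so a performance regression shows up as a failed build rather than a user complaint.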
If you are using other testing techniques or tools (a test harness, test bots, my company's functional testing, load testing, code coverage (DevPartner) and system-level analysis (Vantage Analyzer) tools, or ObjectMentor's FitNesse), you will need to keep those tools in sync.
Constantly improve

Start small. Do not overdo the process from the get-go. The tests and techniques should grow over time to meet your needs. If you get some nasty bug reported that could be prevented in the future by a test, add a new acceptance or unit test(s)! If users complain about some performance aspect of the system (like it takes 2 minutes to open up a large project file), add a benchmark test to your performance suite. This will be used to precisely document the poor performance, and then to show the improvement once the developer makes the fixes. If some tests are no longer worth executing, comment them out (and eventually delete them).
The point is to use your brain!
Agile is a state of mind! Testing in an agile project can vary in depth and style, but not in its intent. Build a mix of tools, techniques, process, and automation to make the right tests work for you.


Agile Scrum Development

Managing the testing for an agile project can have a unique set of challenges: one or two week iterations, overlapping test coverage, understanding what "done" means for the team, and finding the right balance of testing for functionality vs. all the other quality criteria including performance, security, etc. We'll look at some common issues encountered when testing in an agile environment, and provide some tips for dealing with them.

In this article, I'll predominantly provide examples of the Scrum process. This isn't because Scrum is necessarily better; it's simply because most of my experience is using Scrum. It's what I'm comfortable with.

Scrum is an iterative process, where a cross-functional team works on a series of stories from one or more product backlogs. Each iteration is called a "sprint." Your story contains your requirements, and the product backlog contains all the stories for a product. Scrum is best known for its short daily meetings, known as "scrums." For more on Scrum, there are a couple of references provided at the end of this article which can help provide context for the examples.
Dealing with the time crunch
Whether you're new to testing, or just new to testing on an agile project, the first challenge you'll run up against is the short amount of time you have to test. Most agile projects I've worked on have been structured for completion in two week iterations. I've also worked on one week iterations, and three week iterations. I know some people who've worked on longer iterations, but I haven't.
It's precisely due to this time crunch that agile teams rely so heavily on automation when it comes to testing. There are a lot of other reasons why you might want to automate tests, but certainly the ability to get rapid feedback on code changes is a big part of it. For example, take a look at the testing stack Expected Behavior put together. They're a Ruby on Rails Web application development company. The "basic" tools they use for testing include:

  • Cucumber: a behavior-driven development testing tool
  • HtmlUnit: a test tool for simulating the browser
  • Celerity: a wrapper for HtmlUnit
  • Culerity: integrates Cucumber and Celerity
  • Bootstrapper: assists in bootstrapping and seeding databases
  • Factory Girl: provides a framework and DSL for defining and using factories
Before they write a line of code, they're thinking of testing. The tools are only part of it. It's also the process that supports those tools. If you're working in an agile team, your team needs a testing methodology and the right tools to support it. What tests will you create? Unit, integration, acceptance, performance, etc.? How will you create them? Test-driven, behavior-driven, coverage-focused, etc.? What practices and tools will you use to create them? Fixtures, mocks, profilers, checkstyles, etc.?


While you might normally think of all of this as just a "programming" activity, if you're a test manager for an agile team, you need to know. A lot of your strategy will be focused on getting test coverage where your developers don't already have it. That means you need to understand what they're doing and how they're doing it. Further, many times your team can build on what they're already doing. Not only can you make use of their test assets (populating data for testing, mocking services, etc.), but you can also extend their tests with your own.
Aside from automation, the next thing you have to figure out to be successful is how to build a framework for rapid testing. For me, this often means implementing an exploratory testing approach. But I know of some test managers who've also been successful with more traditional approaches. The trick is to organize your testing to reduce documentation, setup time and coordination overhead, while maximizing coverage per test and how much you can leverage tacit knowledge.
Figuring out where your testing fits
When you've got a cross-functional team working in two week increments, things get cramped. There's only so much coverage you can get on software that was written the second to last day of the sprint. This creates an interesting problem for agile testing teams. They have to figure out where their testing fits. Below I share some of the ways I've managed testing for agile teams. Recognize that none of these needs to be exclusive of the others – you can mix and match approaches to meet your needs.
Embedded testers: Most agile teams want the testers working in the sprint, side-by-side with developers. I call this embedded testing. The testers are part of the sprint, and their testing takes place real time with development. A story in the sprint isn't "done," until it's tested and all the bugs are fixed. In this scenario, testers often do as much test design as they can the first week and start testing software as soon as it becomes available. It's a very high pressure situation.
This approach has the advantage that it produces a high-quality product. Programmers get rapid feedback on their software, and if they need to fix bugs, the code is still fresh in their minds. It also creates a shared objective – done means tested and all bugs fixed. Everyone knows it, so everyone works towards it. There's little back and forth on bugs, decisions are made swiftly and acted on more swiftly.
On the other hand, it creates tremendous pressure in the last few days of the sprint. Any story of reasonable size needs to be completed early in the sprint, else there isn't enough time to run tests or fix bugs. That means the team needs to be on the ball when it comes to estimation, and they have to be smart in the way they structure the sprint. Everyone makes mistakes, so as you can imagine, in this setup, there are a lot of sprints that have complications late in the sprint.
Testing offset from development: The alternative is to push the majority of the testing-team's testing to outside of the sprint. A common practice is to have the testing team lag the development team by one sprint. This removes the end-of-sprint pressures, but introduces the classic problems of separation of time and space - the biggest of which is that when a bug is found, you have to figure out how you're going to fix it. That's not a trivial problem and we'll address it in a later section of this article.
When testing is offset from development, often test planning and test execution happen concurrently. If I'm a tester, I'm likely doing test planning for the sprint that's currently taking place and I'm running tests against the software from the previous sprint using the test planning artifacts I made at that time. This requires the tester to balance focus between sprints. If they aren't good at that, they'll either fall behind in test planning, or not do as good of a job with their test execution.
Testing focused around production releases: An alternative to both of these approaches is to go back to a more traditional approach to testing and to just test the final releases that go out the door. In this scenario, you ignore sprints entirely, except perhaps as a vehicle for test planning. And all of your test execution takes place against the final release artifacts. This approach isn't bad for small quick releases – for example if you had software going to production every two to four weeks – but I think it's a mistake for large releases. It has all the same problems you get when you test using waterfall methods.
That said, testing for certain quality criteria makes sense on the final release. For example, testing for performance, usability and security might only be able to be done effectively on the final release. While that's not always the case, certainly there are certain types of testing that are going to favor your final release. Do everything you can to minimize the amount of testing required to get the final software package out the door. Pull as much of it forward as you possibly can. As a general rule, the closer the testing is to when the code was written, the better off the software will be.
Understanding what needs to be tested and when
In my experience, most iterations contain a mix of stories that will potentially be going to production at different times. For example, let's say we use a regular release schedule where we release to production every two weeks. The majority of our bi-weekly regular releases are minor releases, with a major release every three months. If we ever have to do an off release, we'll call that a hotfix. It might look like the following:
In that scenario, it's been my experience that it's not uncommon to have a sprint where you're doing development for multiple releases at a time. For example, you might be doing bugfixes for your next minor release, while doing new development on small features for the minor release after that, with a couple of larger features being worked on for the next major release. In that scenario, if you were working Sprint 1 in the example above, and the sprint was made up of twenty stories, those stories might be divided up as follows:
For each release in the sprint, the time until release to production determines both what it means for testing and what it means for issue resolution:

  • End of sprint: All test planning and execution would need to take place during the sprint. Any issues found would need to be resolved in the sprint.
  • 1.5 weeks from end of sprint: All test planning and execution would likely need to take place during the sprint. Any issues found would, at the latest, need to be resolved during the first week of the next sprint.
  • 3.5 weeks from end of sprint: All test planning would need to take place during the sprint. Execution and issue resolution could be delayed to the next sprint if needed.
  • 9.5 weeks from end of sprint: Test planning should be done during the sprint, but can wait if needed. Execution and issue resolution can likely wait if needed.
In this type of scenario, your team needs to test for four different releases simultaneously. To be successful, you need strong configuration management practices, strong release tracking, and a firm understanding of what testing coverage you hope to achieve for each release.
Strong configuration management is important since in agile environments there's often little time available for manual regression testing. In my experience, most regression issues introduced are due to poor configuration management practices. If you've got a strong CM plan in place and everyone's clear on what needs to be done, the likelihood that you'll introduce a regression-related issue goes down dramatically. On one past team I worked with, when we re-architected our configuration management plan to better allow us to work multiple releases each sprint, we were able to cut our regression issues, and issues related to code management, by around 90% for each release.
I'd be surprised if you followed the example I outlined above in both the diagram and the table the first time you read them. I would expect you'd need to re-examine that information a couple of times to really follow when code is going to production. When you've got code going to four different releases, you need strong release tracking. Most release management tools are up to the task of tracking the stories, but it's a huge benefit when you can tie your source code commits to the tickets in a release in that tool. To that end, look for release tracking tools that also integrate with your configuration management tools.
Finally, all of this feeds into figuring out what you need to test for each release and when you need to test it. Different sprints and releases will apply different pressures to your testing team. Balancing those pressures can be a challenge. Find or develop tools that help you track coverage and progress across multiple releases at a time. On the last agile testing team I managed, each tester was a member of a sprint team and also responsible for coordinating the testing for at least one client-facing release. This meant that in addition to their sprint testing responsibilities, they would be tracking coverage for one release while helping execute testing for others.
Changing the way you think about bugs
Just like in traditional development methodologies, all of these testing pressures end up putting pressure on releases. The biggest of which is what the team does when bugs are found. In a waterfall project, often developers are waiting around for bugs when testing is going full-steam ahead. They know bugs are coming and are waiting for them. With agile however, developers have likely moved on to the next story. In their mind, they're done with the code that's been accepted during the sprint. They have other deliverables now they need to focus on.
When a bug is found during a sprint, the answer to that is often easy – the bugs get fixed or the sprint isn't done. However, what happens after the sprint? What if a bug is found in final release testing, during a transition sprint, or you find a bug during your sprint testing in a feature not being worked in your current sprint?
Different teams have different approaches to this problem. Some teams just make all their bugs stories. They go to the backlog just like any other story, and they get prioritized and worked using the same process any other story would get. Other teams work them in break-fix cycles, just like traditional teams. If a bug related to a release is found, developers work the bug right away and testers re-test as needed. In this scenario, you need to have capacity available for developers to do that work. Where will you get that capacity if all the developers are working in a sprint?
Transition sprints are a common answer to this problem. A transition sprint is often the sprint before code goes to production. In this sprint developers take a very light workload for the sprint in anticipation of the need to work issues resulting from final testing and/or deployment activities. That strategy works well for large releases, but what if your teams work multiple small releases? It becomes more difficult in that scenario to set aside blocks of time like that to focus on a single product.
As a leadership team, you'll need to get on the same page quickly with how your organization will handle these issues. Figure out how you'll manage defects: will they be backlog items or tracked separately? Figure out how you'll deal with the issues that come from final release testing activities: will you use transition sprints, or set aside defect-work capacity for developers some other way? And for all of this, how will you keep track of it all so you can see progress, track time, and know when you're done?
Summing it all up
Above, we covered a lot of ground. If you're new to agile, hopefully you'll see that testing can't happen as its own "phase" like it does in traditional methodologies. With agile, developers do a lot of the testing, testers are integrated in the workflow of what it means to "complete" a story, and testing can be largely a futile exercise if you don't have good configuration management and release planning in place.
If you're already a test manager on agile projects and you're already aware of those dependencies, then you'll want to instead focus on the tactical logistics of tracking coverage across sprints and releases, prioritizing and load-balancing the work on what is most likely an overworked team, and finding ways to leverage the automation the developers are using every day.


Test Complete 8

Beverly, MA, July 28, 2010 – SmartBear Software announces the new major release of AutomatedQA TestComplete™ 8.0. This comes on the heels of last week’s news that SmartBear is the new name for AutomatedQA, Smart Bear Software and Pragmatic Software. (See announcement)
TestComplete 8.0 adds major new capabilities that boost productivity of individual users and further streamline enterprise-level automated testing. Overall, more than 120 new features and enhancements have been added, driven by TestComplete’s global community of thousands of loyal users.
Ian McLeod, EVP Products at SmartBear Software said, “Listening to our community of users and adding features that are most important to them is key for our product development strategy. We continue to advance our state-of-the-art tools to help novice users get up and running quickly and at the same time add the advanced features our more experienced users continue to get jazzed about, at a price that is unmatched in the industry.”
AutomatedQA TestComplete 8.0 adds more automation for all categories of users. We at SmartBear understand the daily pressures of test teams and engineers, and challenges that new users face with unfamiliar tools. With this in mind, the TestComplete 8.0 interface has been enhanced specifically for first-time testers to get up and running more quickly. At the same time, new advanced features play into the hands of power users helping them create sophisticated automated tests with point-and-click wizards. The new Test Visualizer and the Data-driven Test Wizard make the set up of complicated tests accessible to everyone.
TestComplete 8.0 adds major new features that enhance user experience and enterprise-class automated testing:
  • Test Visualizer – TestComplete 8.0 automatically captures screenshots during test recording and playback. This enables quick and intuitive comparisons between expected and actual screens during test.
  • New Automation for Data-Driven Tests – Creating data-driven tests is now highly automated using a new wizard to access data sources in databases, Excel and CSV files. Users can create Data-driven tests without scripting, which significantly improves test productivity.
  • Support for new platforms and technologies – Users can now perform automated testing of native and managed applications built with Microsoft Visual Studio 2010 and the .NET Framework 4. TestComplete 8.0 also supports new Web technologies and platforms such as Silverlight 4 and Firefox 3.6.
  • SoftwarePlanner and other defect tracking tools – TestComplete 8.0 is better integrated with SoftwarePlanner, SmartBear’s development and test management software, featuring new capabilities to generate reports based on automated test runs, and automatically log defects. This version also introduces new integration with other defect-tracking tools, Atlassian JIRA and Axosoft OnTime, enabling users to post defects directly from the TestComplete user interface.
Corey Anderson, Development Manager at Schweitzer Engineering Laboratories (SEL) said, “We have been using TestComplete for years and have seen it bring many values to our testing teams. TestComplete 8.0 makes it easier to troubleshoot our issues and find necessary information for scripting with its new configuration options.”
Visit for a full list and details of enhancements.
AutomatedQA TestComplete 8.0 is available immediately. To purchase or download a free 30-day trial, visit

Release Summary

The new release includes over 120 new features, functional and usability enhancements, driven by feedback and requests from our global community. We pack yet more test automation for the users with the following key new features and updates:

  • Test Visualizer enables quick and intuitive comparisons between expected and actual screens by automatically capturing screenshots during test recording and playback
  • New wizards for easier creation of data-driven tests using Excel, DB, and CSV data sources
  • Support for Visual Studio 2010 and .NET Framework 4 applications
  • Support for Silverlight 4 and Firefox 3.6
  • Integration bridges for SmartBear's own Software Planner, Atlassian JIRA, and Axosoft OnTime defect-tracking systems
  • Automatic importing of manual test instructions from Word, Excel and text files
  • Improved support for dynamic web controls
  • Access to methods and properties of Java application objects during tests
  • Conditional criteria for name mapping
  • Smarter image comparison
To learn more, attend our "What's New in TestComplete 8" Webinar.
See the information below for training, download and upgrade options. We thank you for your continuing support and the confidence that you have placed in TestComplete.


Online training is now available for keyword-driven testing in TestComplete 8. This 3-day online course (4-hour sessions each day) prepares both new and experienced users to run fully automated keyword-driven tests against a full range of software products.
SmartBear will host a "What's New in TestComplete 8" Webinar on Thursday, August 5th at 2pm ET. We highly recommend that you attend this Webinar to get the latest and greatest on the new release of our award-winning TestComplete test automation suite.

Test Script Maintenance

Automation testing and script maintenance:

Debug Scripts Incrementally:
Recorded test scripts, like other software development efforts, can become quite large. Hundreds of lines of code, which might contain several sets of data for parameterized data-driven test scripts, will need debugging for successful playback. The common approach to debugging a test script is to first record all the business processes and requirements, then have the tester play back the test script to identify and correct any problems. The tester continues to debug the test script until it successfully plays back with a single set of data and/or multiple sets of data.

Debugging and troubleshooting test scripts becomes extremely tedious and intractable when the test script has hundreds of lines of code, verification points, branching logic, error handling, parameters, and data correlation among various recorded business processes. A much more manageable approach to debugging complex and lengthy test scripts is to record portions of the script and debug these portions individually before recording other parts of the test script. After testing individual portions, you can determine how one portion of the test script works with another portion and how data flows from one recorded process to the other. After all sections for a test script have been recorded, one can playback the entire test script and ensure that it properly plays back from the beginning to the end with one or more sets of data.

As an example, I recorded and automated a complex test script that performed the following business processes:
1. Check the inventory in the warehouse,
2. carry out an MRP run,
3. replenish inventory,
4. pick items for a delivery and process the delivery,
5. confirm the orders were transferred for the delivery, and
6. verify the delivery items arrived at their destination.

This test script had many lines of code, parameters, verification points, and data correlation that needed to work as a cohesive unit. First I recorded each individual process and verified it could successfully play back on its own. Then I integrated all the recorded processes into a large test script and verified it could play back successfully with multiple sets of data. As previously stated, a key objective is to ensure that each recorded process plays back successfully before proceeding to record the remaining portions of the test script. I did not record all the processes mentioned (1 through 6) and string them together for playback without first verifying that each process could play back successfully individually.

The lesson here is to avoid waiting to debug the script until the entire test script is recorded.

Test Script Synchronization
Test tools can play back recorded test scripts at rates much faster than an end-user's manual keystrokes. This can overwhelm the application under test, since the application might not display data or retrieve values from the database fast enough to allow proper test script playback. When the test application cannot respond to the test script, the script execution can terminate abruptly, requiring user intervention. In order to synchronize applications under test and test scripts during playback, testing teams introduce artificial wait times within the recorded test scripts. Wait times embedded in the test script to slow down its execution are at best arbitrary, estimated through trial and error. The main problem with wait times is that they either wait too long or not long enough.

For instance, the tester might notice the test script is playing back too fast for the application under test. He might decide to slow it down several times until the test script execution is synchronized with the test application. This technique can backfire--even fail--if the application is running slower than the newly introduced wait times during test execution due to external factors such as the LAN having delays or system maintenance. In this scenario, the tester would have to continually guess a new reasonable wait time--each time. Slowing down a script with wait times is not very scientific and does not contribute to the creation of a robust automated test script that playbacks successfully without user intervention.

If possible, testers should avoid introducing artificial wait times or arbitrary sleep variables to synchronize test scripts with the application.

"While" statements or nested loops are appropriate techniques for synchronizing test scripts: they provide synchronization points and allow the script to play back successfully regardless of the response times of the application under test. Inserting nested loops or "while" statements within a test script also reduces user intervention during playback. For example, I insert "while" statements in recorded test scripts that continually press the Enter key until a scheduling agreement is created, no matter how long the application under test takes to generate the agreement. The test script then works independently of the response times of the application under test.
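As a rough sketch, that pattern might look like the following in a QTP script. The window and status-bar names here are hypothetical placeholders, not objects from any real repository:

```
' Sketch: press Enter repeatedly until the application reports that the
' scheduling agreement was created, however long the server takes.
' "SAP" and "Status Bar" are hypothetical object names.
Do While InStr(Window("SAP").WinStatusBar("Status Bar").GetROProperty("text"), "created") = 0
Window("SAP").Type micReturn ' press Enter again
Wait 1 ' brief pause before re-checking the status bar
Loop
```

The loop exits only when the application itself signals success, so playback no longer depends on guessing response times.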

Signed-off, Peer Reviewed
As part of the test readiness review criteria, test scripts should be formally accepted and approved prior to starting the test cycle. SMEs, business analysts, and developers should be involved in approving recorded test scripts. The tester writing the automated test script should demonstrate that the test script successfully plays back in the QA environment and, if possible, with various sets of data.

Recording, Playing Back Against Hidden Objects
Scripts might be recorded to populate or double-click values for a field within a table grid or an array where the location of this field is not fixed. If the field's location within a table grid or array changes from the time it was recorded, the script might fail during playback. Test scripts often fail during playback because the location of objects that are not displayed or visible within the screen has changed.

In order to play back scripts that are location sensitive, or where the location is subject to change, it might be necessary to enhance the script with functionality such as "scroll down," "next page," or "find." Including such utilities ensures that hidden objects required during playback will be identified, populated, and/or double-clicked regardless of their location within an array, table grid, or the displayed screen.

As an example, I once recorded a script where I scrolled down twice during the initial recording to find an empty field where data could be entered within a table grid. When I played back the script a few weeks later, I had to scroll down four times to find an empty field instead of twice as previously recorded. Consequently the script failed, so I embedded logic in the script that instructs the script to scroll down as many times as necessary to find an empty field. I did this by placing the "next page" function in a "while" loop, which caused the script to "page down" until an empty field was found.
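The "next page inside a while loop" idea can be sketched like this; the object names ("App", "Next Page", "Target Field") are hypothetical examples:

```
' Sketch: keep paging down through the grid until the target field is
' visible, regardless of how many pages it takes this time around.
Do While Window("App").WinEdit("Target Field").Exist(0) = False
Window("App").WinButton("Next Page").Click ' scroll to the next page
Loop
Window("App").WinEdit("Target Field").Set "new data"
```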

Schedule Recurring Scripts/Store Execution Logs
To circumvent the limitation of test tools that cannot schedule test scripts on a recurring basis, one can schedule test scripts via the Windows (NT) scheduler, which supports various command-line options. Test scripts should have execution logs stored on a shared drive or within test management tools when the test results are subject to audits.
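On Windows, for example, the built-in schtasks command can launch a test tool's command-line runner on a recurring schedule. The task name and batch file path below are hypothetical:

```
rem Sketch: run a regression batch file every night at 2:00 AM
schtasks /create /tn "NightlyRegression" /sc daily /st 02:00 /tr "C:\Tests\run_regression.bat"
```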

Create Automatic Notification for Critical Scripts
Test scripts can be enhanced with error-handling programming logic that instantly sends error messages to a wireless device or an email address when problems occur. Some test scripts are business critical and might run as batch jobs in the middle of the night. The proper and successful execution of these business critical test scripts can serve as a dependency or pre-condition for other automated tasks.

Always include logic in business critical test scripts that automatically sends notification in the event of a failure.
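In a VBScript-based tool, one common way to send such a notification is CDO, which ships with Windows. This is only a sketch; the SMTP server, addresses, and failure message are hypothetical assumptions:

```
' Sketch: e-mail a failure notification via CDO.
' The server and addresses below are made-up examples.
Dim objMail
Set objMail = CreateObject("CDO.Message")
objMail.From = "qa-robot@example.com"
objMail.To = "oncall-tester@example.com"
objMail.Subject = "FAILED: Nightly order-processing script"
objMail.TextBody = "The business critical script failed at " & Now
objMail.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "smtp.example.com"
objMail.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2 ' send over the network
objMail.Configuration.Fields.Update
objMail.Send
```

Call a routine like this from the script's error-handling branch so a failed overnight run pages someone instead of silently blocking downstream jobs.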

To make test scripts reusable and easier to maintain, document all relevant information for executing the test script in a test script header, along with any special conditions for execution. For example:
1. Adjust dates within the application under test for closing of the books,
2. update any fields that require unique data,
3. display settings for context-sensitive/analog/bitmap recording,
4. list other test scripts that are dependencies,
5. specify necessary authorization levels or user roles for executing the script,
6. note conditions under which the script can fail and workarounds for re-launching the script,
7. list applications that need to be either opened or closed during script execution, and
8. specify data formats, for instance, European date format versus US date format.

Furthermore, scripts should contain a header with a description (i.e. what it is used for) and its particular purpose (i.e. regression testing). The script header should also include the script author and owner, creation and modification date, requirement identifiers that the script traces back to, the business area that the script supports, and number of variables and parameters for the script. Providing this information in the test script header facilitates the execution, modification, and maintenance of the script for future testing efforts.
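For illustration, a header along those lines might look like the following; every value shown is a made-up example:

```
'====================================================================
' Script Name   : GSI_R2_RT_017_ProcurementExternalVendors (example)
' Description   : Regression test for procurement from external vendors
' Author/Owner  : J. Tester / QA team
' Created       : 2010-07-01      Modified : 2010-07-28
' Requirements  : REQ-1042, REQ-1043
' Business Area : Supply chain
' Parameters    : 2 (vendor ID, purchase order quantity)
' Dependencies  : Login script must run first; European date format
'====================================================================
```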

Perform Version Control on Test Scripts
Many corporations spend tens of thousands of dollars for test tools, but ignore the by-product of the test tool -- namely the recorded test script. For companies building libraries and repositories of automated test scripts, it is highly advisable to perform version control on automated test scripts. Version control helps to track changes made to the test scripts, and to maintain multiple versions of the same test script.

Adhere to Test Script Naming Standards and Storage
Test scripts should follow the project's accepted naming standards and should be stored in the designated repository such as a shared drive or test management tool.

The test manager should designate naming standards for the test scripts that include information for these areas:
1. Project name (i.e. GSI which stands for Global SAP Implementation),
2. release number (i.e. the version or release number that will be released/deployed),
3. subject area or test category (i.e. SC for Security Testing, LT for Load Testing),
4. sequential test case number, and
5. title or function that will be tested (i.e. procurement from external vendors).
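Under such a standard, a full script name might look like the following (a purely illustrative, made-up example):

```
GSI_R2_SC_017_ProcurementFromExternalVendors
```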

Following these tips enables testers to build more robust test scripts for their organizations, and developing maintainable test scripts maximizes the benefits of automated test tools. Companies realize ROI from automated test tools when automated test scripts are reused in future testing efforts, reducing the time needed to complete a testing cycle. The techniques above will help companies build test scripts that meet these objectives.

VSTS2010 Nexgen Automation Tool

VSTS 2010, or Visual Studio Team System 2010, contains a lot more than what Microsoft has delivered before. It's not just the marketing behind this product; it also ships, for the very first time, with some remarkable features that can give the best of the automation testing tools a tough run. When I say a tough run, it's not just the quality of the Test Projects that can be developed with this tool, but also the low price at which Microsoft has made it available. You get the complete VSTS 2010 Ultimate edition pack for a meager $25 monthly charge. Compared with the license charges of other tools, such as QTP (a name with more than just a mark in the automation testing world) at $1,500 per concurrent license, it's a great deal from Microsoft, as has always been the case in its product marketing.

There are some other quality tools in the market, notably IBM Rational, but the highly specialized technical staff they require will not attract the best of the industry's ventures. As a result, the two left in the race are QTP and VSTS. Mind you, QTP had no challengers in the automation testing world prior to the launch of the enhanced Test Projects feature set in VSTS 2010. Also, there is currently not much learning material available on this technology, so this may just be the last honeymoon period in the offing for HP QTP.

Going by my experience with VSTS 2010 as an automation testing tool, it does possess a lot of feature-rich controls and plenty of object recognition methodologies, but the capability that is going to awe the whole industry is its grey-box testing support. The tool's WebTest feature works at this level and uses just the POST and GET methods of the web application. As a result, some client-side validations suffer problems, but they can all be addressed in one way or another.

I think it's high time for me to close my laptop for now.
I will definitely bring more transparency to this feature-rich and hierarchy-independent test automation tool.

QTP Excel Word

III) Working with Word Docs

a) Create a word document and enter some data & save

Dim objWD
Set objWD = CreateObject("Word.Application")
objWD.Documents.Add ' create a new document before typing into it

objWD.Selection.TypeText "This is some text." & Chr(13) & "This is some more text"
objWD.ActiveDocument.SaveAs "e:\gcreddy.doc"
objWD.Quit

IV) Working with Excel Sheets

a) Create an excel sheet and enter a value into first cell

Dim objexcel
Set objExcel = createobject("Excel.application")
objexcel.Visible = True
objexcel.Cells(1, 1).Value = "Testing"

b) Compare two excel files

Set objExcel = CreateObject("Excel.Application")
objExcel.Visible = True
Set objWorkbook1= objExcel.Workbooks.Open("E:\gcr1.xls")
Set objWorkbook2= objExcel.Workbooks.Open("E:\gcr2.xls")

Set objWorksheet1= objWorkbook1.Worksheets(1)

Set objWorksheet2= objWorkbook2.Worksheets(1)

For Each cell In objWorksheet1.UsedRange
If cell.Value <> objWorksheet2.Range(cell.Address).Value Then
msgbox "value is different"
Else
msgbox "value is same"
End If
Next
objWorkbook1.Close
objWorkbook2.Close
objExcel.Quit
Set objExcel = Nothing

QTP File System Operations

I) Working with Drives and Folders

a) Creating a Folder

Option Explicit
Dim objFSO, objFolder, strDirectory
strDirectory = "D:\logs"
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFolder = objFSO.CreateFolder(strDirectory)

b) Deleting a Folder

Set oFSO = CreateObject("Scripting.FileSystemObject")
oFSO.DeleteFolder "D:\logs"

c) Copying Folders

Set oFSO=createobject("Scripting.Filesystemobject")
oFSO.CopyFolder "E:\gcr6", "C:\jvr", True
d) Checking whether the folder is available or not; if not, creating the folder
Option Explicit
Dim objFSO, objFolder, strDirectory
strDirectory = "D:\logs"
Set objFSO = CreateObject("Scripting.FileSystemObject")
If objFSO.FolderExists(strDirectory) Then
msgbox strDirectory & " already created"
Else
Set objFolder = objFSO.CreateFolder(strDirectory)
End If

e) Returning a collection of Disk Drives

Set oFSO = CreateObject("Scripting.FileSystemObject")
Set colDrives = oFSO.Drives
For Each oDrive in colDrives
MsgBox "Drive letter: " & oDrive.DriveLetter
Next

f) Getting available space on a Disk Drive

Set oFSO = CreateObject("Scripting.FileSystemObject")
Set oDrive = oFSO.GetDrive("C:")
MsgBox "Available space: " & oDrive.AvailableSpace

II) Working with Flat Files
a) Creating a Flat File

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.CreateTextFile("E:\ScriptLog.txt")

b) Checking whether the File is available or not; if not, creating the File

Set objFSO = CreateObject("Scripting.FileSystemObject")
If Not objFSO.FileExists("E:\ScriptLog.txt") Then
Set objFile = objFSO.CreateTextFile("E:\ScriptLog.txt")
End If

c) Reading Data character by character from a Flat File
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile("E:\gcr.txt", 1)
Do Until objFile.AtEndOfStream
strCharacters = objFile.Read(1)
msgbox strCharacters
Loop

d) Reading Data line by line from a Flat File

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile("E:\gcr.txt", 1)
Do Until objFile.AtEndOfStream
strCharacters = objFile.ReadLine
msgbox strCharacters
Loop

e) Reading data from a flat file and using it in data-driven testing

Dim fso, myfile, x, s
Set fso = createobject("scripting.filesystemobject")
Set myfile = fso.opentextfile ("F:\gcr.txt",1)
While myfile.AtEndOfStream <> True
x = myfile.ReadLine
s = split (x, ",")
SystemUtil.Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4a.exe","","C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\","open"
Dialog("Login").WinEdit("Agent Name:").Set s(0)
Dialog("Login").WinEdit("Password:").SetSecure s(1)
Window("Flight Reservation").Close
Wend
myfile.Close

f) Writing data to a text file

Dim Stuff, myFSO, WriteStuff, dateStamp
dateStamp = Date()
Stuff = "I am Preparing this script: " & dateStamp

Set myFSO = CreateObject("Scripting.FileSystemObject")
Set WriteStuff = myFSO.OpenTextFile("e:\gcr.txt", 8, True)
WriteStuff.WriteLine(Stuff)
WriteStuff.Close
Set WriteStuff = Nothing

g) Delete a text file

Set objFSO = createobject("Scripting.FileSystemObject")
Set txtFilepath = objFSO.GetFile("E:\gcr.txt")
txtFilepath.Delete

h) Checking whether the File is available or not; if available, deleting the File

strDirectory = "E:\"
strFile = "gcr.txt"
Set objFSO = CreateObject("Scripting.FileSystemObject")
If objFSO.FileExists(strDirectory & strFile) Then
Set objFile = objFSO.GetFile(strDirectory & strFile)
objFile.Delete
End If

i) Comparing two text files

Dim f1, f2
f1 = "E:\gcr1.txt" ' example paths
f2 = "E:\gcr2.txt"

Public Function CompareFiles (FilePath1, FilePath2)
Dim FS, File1, File2, Str1, Str2
Set FS = CreateObject("Scripting.FileSystemObject")

If FS.GetFile(FilePath1).Size <> FS.GetFile(FilePath2).Size Then
CompareFiles = True
Exit Function
End If

Set File1 = FS.GetFile(FilePath1).OpenAsTextStream(1, 0)
Set File2 = FS.GetFile(FilePath2).OpenAsTextStream(1, 0)

CompareFiles = False
Do While File1.AtEndOfStream = False
Str1 = File1.Read(1000)
Str2 = File2.Read(1000)
If StrComp(Str1, Str2, 0) <> 0 Then
CompareFiles = True
Exit Do
End If
Loop

File1.Close
File2.Close
End Function

If CompareFiles(f1, f2) = False Then
MsgBox "Files are identical."
Else
MsgBox "Files are different."
End If

j) Counting the number of times a word appears in a file

Const FOR_READING = 1
Dim oFso, oTxtFile, sReadTxt, oRegEx, oMatches, sFileName, sString, bIgnoreCase, MatchesFound
sFileName = "E:\gcr.txt" ' file to search (example)
sString = "the" ' word to count (example)
bIgnoreCase = True
Set oFso = CreateObject("Scripting.FileSystemObject")
Set oTxtFile = oFso.OpenTextFile(sFileName, FOR_READING)
sReadTxt = oTxtFile.ReadAll
Set oRegEx = New RegExp
oRegEx.Pattern = sString
oRegEx.IgnoreCase = bIgnoreCase
oRegEx.Global = True
Set oMatches = oRegEx.Execute(sReadTxt)
MatchesFound = oMatches.Count
Set oTxtFile = Nothing : Set oFso = Nothing : Set oRegEx = Nothing
msgbox MatchesFound

QTP VB Script Built in Functions

Types of Vbscript Functions
o Conversions (25)
o Dates/Times (19)
o Formatting Strings (4)
o Input/Output (3)
o Math (9)
o Miscellaneous (3)
o Rounding (5)
o Strings (30)
o Variants (8)

Important Functions

1) Abs Function
Returns the absolute value of a number.
Dim num
num = Abs(-50.3)
msgbox num

2) Array Function
Returns a variant containing an Array
Dim A
A = Array(10, 20, 30)
msgbox A(0) ' returns 10
ReDim A(5) ' without Preserve, the old contents are cleared
msgbox A(4) ' returns Empty

3) Asc Function
Returns the ANSI character code corresponding to the first letter in a string.
Dim num
num = Asc("A")
msgbox num

* It returns the value 65 *

4) Chr Function
Returns the character associated with the specified ANSI character code.
Dim char
char = Chr(65)
msgbox char

* It returns A *

5) CInt Function
Returns an expression that has been converted to a Variant of subtype Integer.
Dim MyDouble, MyInt
MyDouble = 2345.5678
MyInt = CInt(MyDouble)
msgbox MyInt

* It returns 2346 *

6) Date Function

Returns the Current System Date.

Dim mydate
mydate = Date
msgbox mydate

7) Day Function

Returns a whole number between 1 and 31, inclusive, representing the day of the month.

Ex1) Dim myday
myday = Day(Date)
msgbox myday

Ex2) Dim myday
myday = Day("10/19/2010")
msgbox myday

* Ex2 returns 19 *

8) DateDiff Function
Returns the number of intervals between two dates.
Dim x
x = DateDiff("d", "10/19/2010", "10/25/2010")
msgbox x

* It returns 6 *

9) Hour Function

Returns a whole number between 0 and 23, inclusive, representing the hour of the day.
Dim mytime, Myhour
mytime = Now
myhour = Hour(mytime)
msgbox myhour

10) Join Function
Returns a string created by joining a number of substrings contained in an array.
Dim mystring, myarray(3)
myarray(0)="Chandra "
myarray(1)="Mohan "
myarray(2)="Reddy"
mystring = Join(myarray)
msgbox mystring

11) Eval Function

Evaluates an expression and returns the result.

Dim x
x = Eval("10 * 2")
msgbox x

* It returns 20 *

12) Time Function
Returns a Variant of subtype Date indicating the current system time.
Dim mytime
mytime = Time
msgbox mytime

13) VarType Function
Returns a value indicating the subtype of a variable.

Dim MyCheck
MyCheck = VarType(300) ' Returns 2.
Msgbox Mycheck

MyCheck = VarType(#10/19/62#) ' Returns 7.
Msgbox Mycheck

MyCheck = VarType("VBScript") ' Returns 8.
Msgbox Mycheck

14) Left Function

Dim MyString, LeftString
MyString = "VBSCript"
LeftString = Left(MyString, 3) ' LeftString contains "VBS".
msgbox LeftString

14) Right Function

Dim AnyString, MyStr
AnyString = "Hello World" ' Define string.
MyStr = Right(AnyString, 1) ' Returns "d".
MyStr = Right(AnyString, 6) ' Returns " World".
MyStr = Right(AnyString, 20) ' Returns "Hello World".

15) Len Function

Returns the number of characters in a string or the number of bytes required to store a variable.

Ex 1): Dim Mystring
Mystring = Len("Hello World")
msgbox mystring

Ex 2): Dim Mystring
Mystring = Inputbox("Enter a Value")
Msgbox Len(Mystring)

16) Mid Function
Returns a specified number of characters from a string.
Dim MyVar
MyVar = Mid("VB Script is fun!", 4, 6)
Msgbox MyVar

* It Returns ‘Script’ *

17) Timer Function
Returns the number of seconds that have elapsed since 12:00 AM (midnight).

Function myTime(N)
Dim StartTime, EndTime, I
StartTime = Timer
For I = 1 To N
Next
EndTime = Timer
myTime = EndTime - StartTime
msgbox myTime
End Function
Call myTime(2000)

17) isNumeric Function

Dim MyVar, MyCheck
MyVar = 53
MyCheck = IsNumeric(MyVar)
msgbox MyCheck

MyVar = "459.95"
MyCheck = IsNumeric(MyVar)
msgbox MyCheck

MyVar = "45 Help"
MyCheck = IsNumeric(MyVar)
msgbox MyCheck

* It Returns True/False like Result *

18) Inputbox Function
Displays a prompt in a dialog box, waits for the user to input text or click a button, and returns the contents of the text box.

Dim Input
Input = InputBox("Enter your name")
MsgBox ("You entered: " & Input)

19) Msgbox Function

Displays a message in a dialog box, waits for the user to click a button, and returns a value indicating which button the user clicked.
Dim MyVar
MyVar = MsgBox ("Hello World!", 65, "MsgBox Example")
VBScript syntax rules and guidelines

21.1 Case-sensitivity:

By default, VBScript is not case sensitive and does not differentiate between upper case and lower-case spelling of words, for example, in variables, object and method names, or constants.

For example, the two statements below are identical in VBScript:

Browser("Mercury").Page("Find a Flight:").WebList("toDay").Select "31"
browser("mercury").page("find a flight:").weblist("today").select "31"

21.2 Text strings:

When we enter a value as a text string, we must add quotation marks before and after the string. For example, in the above segment of script, the names of the Web site, Web page, and edit box are all text strings surrounded by quotation marks.

Note that the value 31 is also surrounded by quotation marks, because it is a text string that represents a number and not a numeric value.

In the following example, only the property name (first argument) is a text string and is in quotation marks. The second argument (the value of the property) is a variable and therefore does not have quotation marks. The third argument (specifying the timeout) is a numeric value, which also does not need quotation marks.

Browser("Mercury").Page("Find a Flight:").WaitProperty("items count", Total_Items, 2000)

21.3 Variables:

We can specify variables to store strings, integers, arrays and objects. Using variables helps to make our script more readable and flexible.

21.4 Parentheses:

To achieve the desired result and to avoid errors, it is important that we use parentheses () correctly in our statements.

21.5 Indentation:

We can indent or outdent our script to reflect the logical structure and nesting of the statements.


21.6 Comments:

We can add comments to our statements using an apostrophe ('), either at the beginning of a separate line, or at the end of a statement. It is recommended that we add comments wherever possible, to make our scripts easier to understand and maintain.

21.7 Spaces:

We can add extra blank spaces to our script to improve clarity. These spaces are ignored by VBScript.


We have two types of errors in VBScript: VBScript run-time errors and VBScript syntax errors.

13.1 VBScript Run-time Errors

VBScript run-time errors are errors that result when our VBScript script attempts to perform an action that the system cannot execute. VBScript run-time errors occur while our script is being executed, when variable expressions are being evaluated and memory is being dynamically allocated.

13.2 VBScript Syntax Errors

VBScript syntax errors are errors that result when the structure of one of our VBScript statements violates one or more of the grammatical rules of the VBScript scripting language. VBScript syntax errors occur during the program compilation stage, before the program has begun to be executed.
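For example, an If statement missing its Then is reported before any statement executes (sketch):

```vbscript
' The commented statement below is a syntax error: the If has no Then.
' The script engine reports "Expected 'Then'" at compile time.
' If x = 5
'     Msgbox "five"
' End If

Dim x
x = 5
If x = 5 Then       ' corrected form
    Msgbox "five"
End If
```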

QTP Control Flow Examples

11.1 Read a number and verify whether it falls in the range 1 to 100 or 101 to 1000?

Option Explicit
Dim a, x
a = CDbl(Inputbox("Enter a Value")) 'convert the input string to a number before comparing
If a <= 100 Then
    For x = 1 to 100
        If a = x Then
            msgbox "a is in between 1 to 100 range"
        End If
    Next
Else
    For x = 101 to 1000
        If a = x Then
            msgbox "a is in between 101 to 1000 range"
        End If
    Next
End If
11.2 Read data and find its size. If the size <> 4 then display an invalid data message; if the size = 4 then verify whether "a" is there or not in that data?

Dim x, x1, x2, x3, x4
x = Inputbox("Enter 4 digit value")
If Len(x) = 4 Then
    x1 = Left(x, 1)
    x2 = Mid(x, 2, 1)
    x3 = Mid(x, 3, 1)
    x4 = Right(x, 1)
    If x1 = "a" Or x2 = "a" Or x3 = "a" Or x4 = "a" Then
        msgbox "a is there"
    Else
        msgbox "a is Not there"
    End If
Else
    msgbox "Invalid Data"
End If

VB Script Procedures

In VBScript, there are two kinds of procedures available; the Sub procedure and the Function procedure.

11.1 Sub Procedures

A Sub procedure is a series of VBScript statements (enclosed by Sub and End Sub statements) that perform actions but don't return a value.

A Sub procedure can take arguments (constants, variables, or expressions that are passed by a calling procedure).

If a Sub procedure has no arguments, its Sub statement must include an empty set of parentheses ().

Sub ProcedureName()
    Statements
End Sub

Sub ProcedureName(argument1, argument2)
    Statements
End Sub

Example: 1

Sub ConvertTemp()
temp = InputBox("Please enter the temperature in degrees F.", 1)
MsgBox "The temperature is " & Celsius(temp) & " degrees C."
End Sub


11.2 Function Procedures

A Function procedure is a series of VBScript statements enclosed by the Function and End Function statements.

A Function procedure is similar to a Sub procedure, but can also return a value.

A Function procedure can take arguments (constants, variables, or expressions that are passed to it by a calling procedure).

If a Function procedure has no arguments, its Function statement must include an empty set of parentheses.

A Function returns a value by assigning a value to its name in one or more statements of the procedure. The return type of a Function is always a Variant.

Function ProcedureName()
    Statements
End Function

Function ProcedureName(argument1, argument2)
    Statements
End Function

Example: 1

Function Celsius(fDegrees)
Celsius = (fDegrees - 32) * 5 / 9
End Function

Example: 2

Function cal(a,b,c)
cal = (a+b+c)
End Function

11.3 Getting Data into and out of Procedures

o Each piece of data is passed into our procedures using an argument.
o Arguments serve as placeholders for the data we want to pass into our procedure. We can name our arguments any valid variable name.
o When we create a procedure using either the Sub statement or the Function statement, parentheses must be included after the name of the procedure.
o Any arguments are placed inside these parentheses, separated by commas.

11.4 Using Sub and Function Procedures in Code

A Function in our code must always be used on the right side of a variable assignment or in an expression.

For example:
Temp = Celsius(fDegrees)
MsgBox "The Celsius temperature is " & Celsius(fDegrees) & " degrees."

To call a Sub procedure from another procedure, type the name of the procedure along with values for any required arguments, each separated by a comma.

The Call statement is not required, but if you do use it, you must enclose any arguments in parentheses.

The following example shows two calls to the MyProc procedure. One uses the Call statement in the code; the other doesn't. Both do exactly the same thing.

Call MyProc(firstarg, secondarg)

MyProc firstarg, secondarg

Notice that the parentheses are omitted in the call when the Call statement isn't used.

QTP VBScript Looping Through Code

o Looping allows us to run a group of statements repeatedly.
o Some loops repeat statements until a condition is False;
o Others repeat statements until a condition is True.
o There are also loops that repeat statements a specific number of times.

The following looping statements are available in VBScript:

o Do...Loop: Loops while or until a condition is True.
o While...Wend: Loops while a condition is True.
o For...Next: Uses a counter to run statements a specified number of times.
o For Each...Next: Repeats a group of statements for each item in a collection or each element of an array.

9.1 Using Do Loops

We can use Do...Loop statements to run a block of statements an indefinite number of times.

The statements are repeated either while a condition is True or until a condition becomes True.

9.1.1 Repeating Statements While a Condition is True

Repeats a block of statements while a condition is True or until a condition becomes True

a) Do While condition
       Statements
   Loop

Example:

Dim x
Do While x < 5
    x = x + 1
    Msgbox "Hello G.C.Reddy"
    Msgbox "Hello QTP"
Loop

b) Do
       Statements
   Loop While condition

Example:

Dim x
x = 1
Do
    Msgbox "Hello G.C.Reddy"
    Msgbox "Hello QTP"
    x = x + 1
Loop While x < 5
9.1.2 Repeating a Statement Until a Condition Becomes True

c) Do Until condition
       Statements
   Loop

Example:

Dim x
Do Until x = 5
    x = x + 1
    Msgbox "G.C.Reddy"
    Msgbox "Hello QTP"
Loop

d) Do
       Statements
   Loop Until condition

Example:

Dim x
Do
    x = x + 1
    Msgbox "Hello G.C.Reddy"
    Msgbox "Hello QTP"
Loop Until x = 5

9.2 While...Wend Statement

Executes a series of statements as long as a given condition is True.

While condition
    Statements
Wend

Example:

Dim x
While x < 5
    x = x + 1
    msgbox "Hello G.C.Reddy"
    msgbox "Hello QTP"
Wend

9.3 For...Next Statement

Repeats a group of statements a specified number of times.
For counter = start to end [Step step]
    Statements
Next

Example:

Dim x
For x = 1 to 5 Step 1
    Msgbox "Hello G.C.Reddy"
Next

9.4 For Each...Next Statement

Repeats a group of statements for each element in an array or collection.


For Each item In array
    Statements
Next

Example: 1

Dim a, b, x(3)
a = 10 'sample values; without them the division below fails at run time
b = 5
x(0) = "Addition is " & (a + b)
x(1) = "Subtraction is " & (a - b)
x(2) = "Multiplication is " & (a * b)
x(3) = "Division is " & (a / b)

For Each element In x
    msgbox element
Next

Example: 2

MyArray = Array("one", "two", "three", "four", "five")
For Each element In MyArray
    msgbox element
Next

QTP VBScript Decisions with Select Case

The Select Case structure provides an alternative to If...Then...ElseIf for selectively executing one block of statements from among multiple blocks of statements. A Select Case statement provides capability similar to the If...Then...Else statement, but it makes code more efficient and readable.

Option explicit
Dim x,y, Operation, Result
x= Inputbox (" Enter x value")
y= Inputbox ("Enter y value")
Operation= Inputbox ("Enter an Operation")

Select Case Operation

Case "add"
Result= cdbl (x)+cdbl (y)
Msgbox "Hello "
Msgbox "Addition of x,y values is "&Result

Case "sub"
Result= x-y
Msgbox "Hello "
Msgbox "Subtraction of x,y values is " & Result

Case "mul"
Result= x*y
Msgbox "Hello "
Msgbox "Multiplication of x,y values is "&Result

Case "div"
Result= x/y
Msgbox "Hello "
Msgbox "Division of x,y values is "&Result

Case "mod"
Result= x mod y
Msgbox "Hello "
Msgbox "Mod of x,y values is "&Result

Case "expo"
Result= x^y
Msgbox "Hello"
Msgbox "Exponentiation of x,y values is " & Result

Case Else
Msgbox "Hello "
msgbox "Wrong Operation"

End Select

8.3 Other Examples

8.3.1 Write a program to find out whether the given year is a leap year or not?

Dim xyear
xyear = CInt(inputbox("Enter Year"))

'century years must also be divisible by 400 to be leap years
If (xyear mod 4 = 0 And xyear mod 100 <> 0) Or xyear mod 400 = 0 Then
    msgbox "This is a Leap year"
Else
    msgbox "This is NOT a Leap year"
End If

8.3.2 Write a program to find out whether the given number is an even number or an odd number?

Dim num
num = inputbox("Enter a number")

If num mod 2 = 0 Then
    msgbox "This is an Even Number"
Else
    msgbox "This is an Odd Number"
End If

8.3.3 Read two numbers and display the sum?

Dim num1,num2, sum
num1=inputbox ("Enter num1")
num2=inputbox ("Enter num2")

sum= Cdbl (num1) + Cdbl (num2) 'if we want add two strings conversion require
msgbox ("Sum is " & sum)

8.3.4 Read P,T,R values and Calculate the Simple Interest?

Dim p,t, r, si
p=inputbox ("Enter Principle")
t=inputbox ("Enter Time")
r=inputbox ("Enter Rate of Interest")
si= (p*t*r)/100 ' p= principle amount, t=time in years, r= rate of interest
msgbox ("Simple Interest is " &si)

8.3.5 Read a Four digit number, calculate and display the sum of its digits, or display an Error message if the number is not a four digit number?

Dim num, sum
num=inputbox ("Enter a Four digit number")
If Len(num) = 4 Then
    sum = sum + num mod 10
    num = Left(num, 3)
    sum = sum + num mod 10
    num = Left(num, 2)
    sum = sum + num mod 10
    num = Left(num, 1)
    sum = sum + num mod 10
    msgbox ("Sum is " & sum)
Else
    msgbox "Number, you entered is not a 4 digit number"
End If

8.3.6 Read any Four-digit number and display the number in reverse order?

Dim num,rev
num= inputbox("Enter a number")
If Len(num) = 4 Then
    rev = rev * 10 + num mod 10
    num = Left(num, 3)
    rev = rev * 10 + num mod 10
    num = Left(num, 2)
    rev = rev * 10 + num mod 10
    num = Left(num, 1)
    rev = rev * 10 + num mod 10
    msgbox "Reverse Order of the number is " & rev
Else
    msgbox "Number, you entered is not a 4 digit number"
End If

8.3.7 Read 4 subjects marks; calculate the Total marks and grade?

a) If average marks are greater than or equal to 75, the grade is Distinction
b) If average marks are greater than or equal to 60 and less than 75, the grade is First
c) If average marks are greater than or equal to 50 and less than 60, the grade is Second
d) If average marks are greater than or equal to 40 and less than 50, the grade is Third
e) Minimum 35 marks are required in every subject; otherwise no grade (Fail)

Dim e,m,p,c, tot
e=inputbox ("Enter english Marks")
m=inputbox ("Enter maths Marks")
p=inputbox ("Enter physics Marks")
c=inputbox ("Enter chemistry Marks")

tot= cdbl(e) + cdbl(m) + cdbl(p) + cdbl(c)
msgbox tot

If cdbl(e) >=35 and cdbl(m) >=35 and cdbl(p) >=35 and cdbl(c) >=35 and tot >=300 Then
msgbox "Grade is Distinction"

ElseIf cdbl(e) >=35 and cdbl(m) >=35 and cdbl(p) >=35 and cdbl(c) >=35 and tot >=240 and tot<300 Then
    msgbox "Grade is First"
ElseIf cdbl(e) >=35 and cdbl(m) >=35 and cdbl(p) >=35 and cdbl(c) >=35 and tot >=200 and tot<240 Then
    msgbox "Grade is Second"
ElseIf cdbl(e) >=35 and cdbl(m) >=35 and cdbl(p) >=35 and cdbl(c) >=35 and tot >=160 and tot<200 Then
    msgbox "Grade is Third"
Else
    msgbox "No Grade, Fail"
End If
8.3.8 Display Odd numbers up to n?

Dim num,n
n=Inputbox ("Enter a Value")
For num= 1 to n step 2
msgbox num
Next

8.3.9 Display Even numbers up to n?

Dim num,n
n=Inputbox ("Enter a Value")
For num= 2 to n step 2
msgbox num
Next

8.3.10 Display natural numbers up to n and write them to a text file?

Dim num, n, fso, myfile
n= inputbox ("Enter any Value")
Set fso= createobject ("scripting.filesystemobject")
Set myfile= fso.opentextfile ("E:\gcr.txt", 8, true) '8 = ForAppending
For num= 1 to n step 1
myfile.writeline num
Next
myfile.close

8.3.11 Display Natural numbers in reverse order up to n?

Dim num,n
n=Inputbox ("Enter a Value")
For num=n to 1 step -1
msgbox num
Next

8.3.12 Display Natural numbers sum up to n? (Using For...Next Loop)

Dim num, n, sum
n= inputbox ("Enter a Value")
For num= 1 to n step 1
sum= sum+num
Next
msgbox sum

8.3.13 Display Natural numbers sum up to n? (Using While...Wend Loop)

Dim num, n, sum
n= inputbox ("Enter a Value")
num= 1
While num <= cdbl(n)
sum= sum+num
num= num+1
Wend
msgbox sum

8.3.14 Display Natural numbers sum up to n? (Using Do...Loop Until)

Dim num, n, sum
n= inputbox ("Enter a Value")
num= 1
Do
sum= sum+num
num= num+1
Loop Until num = cdbl(n)+1
msgbox sum

8.3.15 Write a Function for Natural Numbers sum up to n?

Function NNumCou (n)
Dim num, sum
For num= 1 to n step 1
sum= sum+num
Next
msgbox sum
End Function

8.3.16 Verify whether the entered 10 digit value is a numeric value or not?

Dim num, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10
num=Inputbox ("Enter a Phone Number")

d1= Left (num, 1)
d2= Mid (num, 2, 1)
d3= Mid (num, 3, 1)
d4= Mid (num, 4, 1)
d5= Mid (num, 5, 1)
d6= Mid (num, 6, 1)
d7= Mid (num, 7, 1)
d8= Mid (num, 8, 1)
d9= Mid (num, 9, 1)
d10= Right (num, 1)

If isnumeric (d1) and isnumeric (d2) and isnumeric (d3) and isnumeric (d4) and isnumeric (d5) and isnumeric (d6) and isnumeric (d7) and isnumeric (d8) and isnumeric (d9) and isnumeric (d10) Then
msgbox "It is a Numeric Value"
Else
Msgbox "It is NOT Numeric"
End If

8.3.17 Verify whether the entered value is a 10 digit value or not, and a numeric value or not? (Using multiple If conditions)

Dim num, d1, d2, d3, d4, d5, d6, d7, d8, d9, d10
num=Inputbox ("Enter a Phone Number")

If Len (num) = 10 Then

d1= Left (num, 1)
d2= Mid (num, 2, 1)
d3= Mid (num, 3, 1)
d4= Mid (num, 4, 1)
d5= Mid (num, 5, 1)
d6= Mid (num, 6, 1)
d7= Mid (num, 7, 1)
d8= Mid (num, 8, 1)
d9= Mid (num, 9, 1)
d10= Right (num, 1)

If isnumeric (d1) and isnumeric (d2) and isnumeric (d3) and isnumeric (d4) and isnumeric (d5) and isnumeric (d6) and isnumeric (d7) and isnumeric (d8) and isnumeric (d9) and isnumeric (d10) Then
msgbox "It is a Numeric Value"
Else
msgbox "It is NOT Numeric"
End If

Else
Msgbox "It is NOT a valid Number"
End If

For more insights into automation using QTP, refer to the resource below:

Automation Testing Using QTP