Articles On Testing

Welcome to http://www.articlesontesting.com !!!


Test Execution Model in Software Testing Lifecycle

  Manual testing has always been the mainstay of any STLC. It starts with the initial stages of the software development life cycle. Have a look at any SDLC, be it the recently introduced Agile or the most orthodox of them all, the waterfall model: the STLC starts from the very beginning. The specific reason for this trend lies in the fact that finding a bug early in the SDLC reduces the cost of fixing it.
That is definitely not the topic of discussion in this post, but I am sure we will come back to those lines very often whenever the term test execution comes to mind.
   So, coming to the points I want to discuss and highlight in this post: our test execution cycle normally starts with a buddy build, released as a pre-test environment to help the test team get adjusted to the way the functionality is going to be implemented. Does that mean that, as a test resource, I am not going to log bugs against this build? That is an interesting riddle, one that gets solved through experience rather than orthodox quality assurance technique. A buddy build is released to the test team not for logging bugs but to give a look and feel of the product, and there is one more, hidden advantage to this phase: it helps the test team figure out the areas that need a change request from the client or product owner. If the product needs a change request to be implemented within the functionality, and the development team finds it feasible, confirmation is sought from the client and the change gets implemented as per the decided stack rank of the change request. That is the whole idea of a buddy build.

A question that comes to mind here is: do we really bother to implement things that are not part of the business requirement? And do we really gain any extra mileage from this extra test effort, which does nothing to optimize life-cycle timelines on the test execution front? That sounds a bit harsh, but there is a set of activities that gets triggered by scoping this activity into the life cycle. First, the test team gains the bandwidth to understand how the features and functionality are being implemented, without being held accountable for not finding bugs prevalent in the system under test. Second, product shortfalls are detected early; left undetected, they might have a huge impact on future releases. Third, the customer gains confidence in the test team's ability to understand the functionality and deliver a quality product, not from the functional aspect alone but also from the domain aspect, simply because on most occasions customers are not very adept at documenting requirements and a huge number of change requests keep coming in later phases of the release.
However, let us move on to the topic we set out to discuss. I have the bad habit of building a huge background before the main story makes it into the blog.
So let us start with the test execution plan we have in place. Suppose we have three test execution cycles, one regression cycle, and finally one sanity cycle. These are the main phases within which the complete execution needs to be performed so as to add quality and release a defect-free product.
For our case, let us consider that we have a matrix of information as per the documented activity. We have test case counts, broken down by the priority with which the test cases need to be executed, as under:
Priority 1 - 350
Priority 2 - 650
Priority 3 - 200
That looks like an ideal count for a small to mid-sized project that goes through three rounds of test execution. Here we assume that all 1200-odd cases are part of the regression cycle, which is performed after the three rounds of iteration. However, as in a typical test execution plan, the first round of testing carried out in System Test cycle 1 will not have all 1200-odd test cases in its execution scope. Let us assume that for the first round we have close to 400-odd test cases in scope, and their matrix is as under.

Priority 1 - 150
Priority 2 - 200
Priority 3 - 050
This is how the test execution cycle needs to be carried out so as to get the maximum functionality tested. In this execution cycle we will typically be able to add some 20% more test cases, as some functionality gets implemented through the change requests raised during the buddy test execution. Let us not include those in the discussion here, though they will definitely be part of the execution plan, which needs to be redesigned accordingly.


With this execution plan in place, System Test cycle 1 gets executed. Now let us move into the next phase, namely System Test cycle 2. Suppose here we have close to 550-odd test cases in the execution scope, and their matrix is as under.

Priority 1 - 150
Priority 2 - 300
Priority 3 - 100
Now we will execute the second test execution cycle for these test cases. But is that all we need to plan for, or is something missing from our end here? Yes, there is something more to it. We must have logged some bugs in the earlier cycle, and those need to be re-tested. But does our job as test execution cycle owner end there? No. What else do we need to look out for? We need a matrix in place that helps us assess the areas of impact due to the bug fixes introduced into the system. How do we track that? Here comes the bi-directional traceability matrix, with the additional feature of vertical traceability added and tracked. Is that something you have not heard of? OK, let me throw some light on this black hole; let us see how far you can pierce through it.
Bi-directional traceability is simple: it gives you the ability to track which business requirement fails because of a logged bug. Now imagine what vertical traceability would refer to. It is simple as well: it refers to a linkage from one business requirement to another, that is, which requirement might impact which other requirement. A minimal sketch of how such links could be kept in a database follows below.
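As a rough illustration only, such a matrix could be kept in a couple of SQL Server tables like the ones below; the table and column names (RequirementLink, BugRequirement and so on) are hypothetical, not part of any standard tool.

-- Hypothetical schema for a traceability matrix kept in SQL Server.
-- Bi-directional traceability: bug -> requirement; vertical traceability: requirement -> requirement.
CREATE TABLE RequirementLink (
    SourceReqId   INT NOT NULL,  -- requirement that, when changed, impacts...
    ImpactedReqId INT NOT NULL   -- ...this dependent requirement
);

CREATE TABLE BugRequirement (
    BugId INT NOT NULL,          -- bug logged during a test cycle
    ReqId INT NOT NULL           -- business requirement the bug fails
);

-- For a given bug fix, list the directly failed requirements plus the
-- requirements they impact, i.e. the areas to re-test in the next cycle.
SELECT br.BugId, br.ReqId AS FailedReq, rl.ImpactedReqId AS ImpactedReq
FROM BugRequirement AS br
LEFT JOIN RequirementLink AS rl
       ON rl.SourceReqId = br.ReqId
WHERE br.BugId = 101;            -- example bug id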
That makes test execution interesting, or rather more complex than many people feel it is !!! Do you now agree that testing a product is far more difficult than developing it?
So you end up testing all the impacted areas as part of this test execution cycle.
I have seen many projects wherein the test lead does not break his or her head over defining the impacted areas and maintaining a traceability matrix, but simply re-executes all the test cases from execution cycle 1 in execution cycle 2. That is a clear indication of a DULL work methodology and a lack of SMART work, and it makes the job of the entire test team monotonous.
Something that typically comes as a stumbling block in a service-industry project is insufficient clarity in the documented business requirements. In such cases I keep my clarity at the level of test cases and maintain vertical traceability on the test cases themselves, so I get a clear view of which test case, when it fails and is fixed, might impact which other test cases. I hope that sounds like an effective solution model to the problem we set up in the current context. A small sketch of such a test-case-level lookup follows below.
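Continuing the hypothetical schema from the earlier sketch, a test-case-level variant could look like this; again, the names (TestCaseLink, ImpactedTestCaseId) are illustrative assumptions only.

-- Hypothetical vertical traceability kept at the test case level.
CREATE TABLE TestCaseLink (
    TestCaseId         INT NOT NULL,  -- test case that failed and was fixed
    ImpactedTestCaseId INT NOT NULL   -- test case that must be re-executed
);

-- Given a test case that failed in cycle 1 and was fixed,
-- list the test cases to pull into the cycle 2 execution scope.
SELECT ImpactedTestCaseId
FROM TestCaseLink
WHERE TestCaseId = 2045;             -- example failed/fixed test case id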

By now you might have understood how test execution becomes more challenging as the product matures in subsequent cycles. Let us move into the next phase, namely System Test cycle 3. This is the last phase of the major development cycle, and with it we come to an end of the functionality development. Here we are left with close to 200-odd test cases in the execution scope, and their matrix is as under.


Priority 1 - 050
Priority 2 - 100
Priority 3 - 050
Now we execute the final test execution cycle for these test cases and also plan for bug retesting and impacted-area testing, just the way we did in the previous cycles.
Once that is done, regression testing needs to be performed. One thing that comes to the back of my mind here is whether regression really needs to be performed, or whether testing the areas impacted by the bugs logged in test execution cycle 3 would be sufficient. Two major points come to mind from my experience of how things get handled in the industry. The first point is, of course, that the bugs logged in cycle 3 need to be retested. The second point is the major one to watch out for and to plan around effectively, so as to reduce the risk it creates: on almost all occasions the development team ends up not being able to fix, in subsequent cycles, the bugs logged in previous cycles. How do they handle this miss? They basically push the stack rank and keep those bugs low on priority, and the test team in turn needs to confirm whether delaying the fix is a risk to product quality or whether it can be fixed later. Here the test team brings in the vertical traceability matrix and clarifies how many requirements are impacted by the logged bug and to what extent delaying the fix is feasible. All such bugs sitting lower in the stack rank become part of the fixing done prior to the regression cycle. There might also be some misses in the test execution cycles themselves, which can be re-addressed in the regression cycle, so all the test cases become part of the regression cycle execution. Based on the joint decision made by the entire project delivery team and the product owner, the project is then released into the pre-production environment, where the UAT is performed. Since the test execution has been modeled well enough, no major issue should come up, and the project is deployed into the production environment, which can then be released after a small round of sanity testing with some live data.
I will very soon come up with some screenshots on how this modelling can be achieved in a test management tool. I have MTM with me, and I will show how organising things in it helps us keep track of things in a nice manner. For the time being, keep browsing for more on this blog; it loves the quality-assured nature in you. :)

MS Sql Server installation testing

Installation testing is perhaps one of the most time-consuming testing categories I have come across in my professional career. An installation keeps running for anywhere between 20 and 40 minutes, and during this time frame the only thing you can do is bird watching. And if you do not have enough birds in your locality, passing the time becomes tough.
However, a workaround does exist: during the course of an installation you can proceed with some negative testing activities. Negative testing here means parameter validations performed by passing wrong parameters, such as an invalid port, an invalid DB instance name, an invalid collation, and so on.
Types of installations:
There are numerous database installation types that can be tested; the major ones are listed as under:

1. Standalone installation :
  This installation type is the one we generally perform when installing MS SQL Server on our personal machines. In an enterprise installation, however, it needs to be validated in terms of the port range within which the installation must be able to commence. Other basic settings, such as the database instance name, will generally go fine, as they are validated directly from SQL Server's setup file. However, we need to take utmost care with the administrator group account configured for the database instance being created; apart from that, the service account used for running the database instance also needs to be validated. All these validations can be performed directly from the configuration manager, where the details of the installed database instance are available. Several other important validations are to verify that the security settings are as expected, such as which types of events are captured in the error logs: failed logins only, or both successful and failed logins. Do not forget to validate connectivity to the database instance using the port number specified at the time of installation. A few post-install checks that can be run from a query window are sketched below.
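As a rough aid (not an official checklist), a few of these properties can be read back from the installed instance with queries like the following; note that sys.dm_server_services requires SQL Server 2008 R2 SP1 or later, and local_tcp_port is NULL for shared-memory connections.

-- Instance name and collation chosen during installation
SELECT SERVERPROPERTY('InstanceName') AS InstanceName,
       SERVERPROPERTY('Collation')    AS ServerCollation,
       SERVERPROPERTY('Edition')      AS Edition;

-- Service account actually running the database engine service
SELECT servicename, service_account, status_desc
FROM   sys.dm_server_services;

-- TCP port the current connection is using (compare with the port given at install time)
SELECT local_tcp_port
FROM   sys.dm_exec_connections
WHERE  session_id = @@SPID;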


2. Cluster Installation
 This installation type has some complicated aspects associated with it. There are restrictions on the IP range that can be used for the installation. Generally the best approach is to use a static IP to consume the service; get a static IP configured by the lab team on the domain you are testing on. In my case I used some six virtual machines to perform my installation activities. The way I approached it was to block three machines just for testing the cluster installation and the add-node-to-cluster installation. Have the Windows failover cluster manager installed on any one of the three machines, and do remember to have the DTC running on the machine. Again, from a validation perspective, ensure that the properties of the installed database instance are as expected, with the service being run by the account it is expected to use. For a cluster installation you also need a cluster network name created, which you can do with perfect ease, and you pass this as a parameter during the course of the cluster installation on the machine. A couple of quick checks are sketched below.
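As a minimal sketch of the post-install check, the clustered state and the participating nodes can be read back from the instance itself:

-- 1 if the instance is part of a failover cluster, 0 otherwise
SELECT SERVERPROPERTY('IsClustered') AS IsClustered;

-- Nodes of the Windows failover cluster visible to this instance
SELECT NodeName
FROM   sys.dm_os_cluster_nodes;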


3. Add Node to a cluster installation :
 Adding a node to a cluster is nothing but creating an MS SQL Server database instance with the same name as one already existing within the Windows failover cluster manager. So I simply consumed the database instance that was created as part of the cluster installation. In other words, when you do the add node, a prerequisite is that a cluster installation has been performed beforehand: first to meet the optimized approach to testing, and second to have the test bed created for the add-node functionality testing. The various things we need to keep at the back of our mind: use the same cluster network name as used during the cluster installation, and the database instance will be the same as the one used during the cluster installation. The add node has to be done on the same network on which the cluster network has been configured, that is, the machine must be part of the same network. As already mentioned, I have three machines; on one machine I have the cluster installation done, and on the remaining two machines I perform the add-node installation. Once I have performed the add-node installation on the second machine, it is time for the validation, which is nothing but going to the main cluster machine, where the Windows failover cluster manager has been installed, and moving the instance service to a different node of the cluster. In my case I would move the service from node one to node two, because on node one I have my cluster installed and a node has been added on node two; moving the service gets the node two database instance running and the node one database instance stopped. Do verify that, before this move-node activity and just after completing the add-node installation, the database service on the added node is in the Stopped state. When we move the service from node one to node two, it is the database service on node one that gets stopped and the database service on node two that gets started. Similarly, add a node on the third machine as well, and keep moving the service from one node to another among the three cluster machines, namely 1, 2 and 3. A quick query to confirm which node currently owns the instance is sketched below.
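As a minimal sketch, after each failover the node actually hosting the instance can also be confirmed from a query window rather than only from the failover cluster manager UI:

-- Physical node currently running this clustered instance;
-- re-run after moving the service to confirm the failover worked.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS ActiveNode;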


4. Enrolment of a Standalone instance to a CMS/MDW server :
5. Security groups enrolment :
6. Access provisioning :
7. Memory range provided to the database instance installed




 To Be Contd.

How to create test data using MS Sql Server

What is test data creation in testing a Business Intelligence application ?
What is the relevance of test data creation ?
How is test effectiveness measured in terms of the amount of test data we create to do BI testing ?

There are many other questions that test enthusiasts can religiously and profusely ask, but most of us may not be in a position to answer them all without touching on the basic point that no testing can be performed on a BI-based application without a proper test data generator in place. When I say proper, it means a lot: even with so many tools available, we still need the ability to code and get this test data preparation done as per our needs.

Manual creation of test data may not be a good way of performing testing activities in the BI domain. The simple reason behind this is application performance.
The performance of an application deteriorates when we have a huge chunk of data in place and even a simple job is run to load data from one source to another as per the business model's needs.
For example, we might need to load data from some SAP systems into a simple flat file. There can be numerous such occasions when we need to load data from typical line-of-business applications into separate systems, in this case just a flat file.

So, coming to the main point: how do we create test data?
Suppose we need to create a thousand records within an Excel sheet. We may use the usual Excel tricks like dragging or Control + D, but to what extent?

It has always been a known fact that coding is the only way engineering excellence can be achieved, especially in the software world; there is absolutely no other choice if you are really interested in bringing out the best-quality product.

Very recently I came across a small challenge in one of my activities: I had to create a couple of million test records in a specific format to suit the testing of the application under test. Let us take a sample case to make things easy to understand.
            I have a simple job which, when run, transforms record sets from a flat file into a table. So basically the source file is my test data in the current context, and when I run the job, the records from the source file are loaded into a table within a particular database. By the way, I have been using the term 'job' here.
Does it have anything to do with the daily office-going job? Jokes apart, I know people from the Business Intelligence domain are pretty accustomed to the term job, but for others I would just like to add some details. A job is basically a sequence of coded steps which can be configured using MS SQL Server or many other tools. This combination of steps results in a set of activities as needed by the development architecture; in this article I shall be discussing jobs in the context of ETL activities.
Just to give some insight on the business requirement front: I have MS SQL Server on my machine, on which the job has been configured, and I have a source file with some record sets in it, placed in a folder on the same machine. Now what I do is simply run the job. After some time the job completes with a success message, implying that it has successfully extracted data from the source file and loaded the valid data into the table within the database.
That is as simple as moving out of one home into another, just to make the office commute easy in day-to-day life. I hope I did not break your rhythm. A rough sketch of bulk-generating such a source data set follows below.
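As a minimal sketch (with hypothetical table and column names and an arbitrary row count, since the real file layout is not shown here), a couple of million rows can be generated in seconds in SQL Server with a cross-joined numbers CTE; the result can then be pushed to the flat file with bcp queryout or an SSIS flat-file destination.

-- Generate 2,000,000 synthetic employee-like rows into a staging table.
-- Table and column names here are illustrative only.
;WITH N10 AS (SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS t(n)),
Numbers AS (
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RowId
    FROM N10 a CROSS JOIN N10 b CROSS JOIN N10 c
         CROSS JOIN N10 d CROSS JOIN N10 e
         CROSS JOIN N10 f CROSS JOIN N10 g          -- up to 10,000,000 rows available
)
SELECT TOP (2000000)
       RowId                                           AS EmployeeId,
       'Employee_' + CAST(RowId AS VARCHAR(10))        AS EmployeeName,
       DATEADD(DAY, RowId % 3650, '2000-01-01')        AS JoiningDate,
       CAST(10000 + (RowId % 90000) AS DECIMAL(10,2))  AS Salary
INTO   dbo.SourceTestData
FROM   Numbers;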

By the way, I was just looking at the title of this blog post, "How to create test data using MS Sql Server", and wondering whether my post so far has done any justice to the topic. Yes, of course: it creates a base on which we can now understand the relevance of test data creation and the importance of coding skills in creating huge chunks of data in a matter of seconds.

But I will continue some time later; for now I am feeling sleepy !!!






ETL testing in a Business Intelligence application

What is ETL testing all about ?
                   ETL is the extract, transform and load operation performed on source data in order to build a strong base for developing an application that serves the decision-making teams in large enterprise systems.

Such is the importance of a Business Intelligence (BI) application that at times big enterprises end up developing a complete application for just a single user to access and analyze the reports available in it. Developing it involves huge effort and cost, not only due to the huge chunks of data and their testing but also the degree of complexity associated with developing the logic to achieve it. Reports, however, are not solely dependent on the ETL but also on several other logically inter-related objects and the access and authorization rules implemented at the cube level as well as the UI level. Someone who is an expert at BI application development might still not be sufficient to develop such an application, because the end user's requirements need to be documented, loads of analysis on the nature of the data needs to be done to frame a volley of questions for the end user, and clarification documents need to be tracked right throughout the project development life cycle. This is because in these systems a bug discovered at a very late stage generally has a very high cost associated with fixing it.

Let us not get out of context, and try to understand the approach and priority of testing just the ETL logic within the application under test. For Business Intelligence application software developed using Microsoft technologies we have Microsoft SQL Server in place, and the ETL can be developed using the SSIS (SQL Server Integration Services) feature available therein.
Using SSIS, packages can be developed that contain the ETL logic, based on which the source data is filtered as per the requirements traced out for the application.

Once the ETL packages are developed, testing them becomes an uphill task due to the huge amount of data in the source environment that gets extracted, transformed and loaded into the destination, or rather a sort of pre-destination, environment. Just imagine verifying each and every record set from the source against the target environment. On first principles we might conclude that the data load is based on SQL queries which will in any case go fine, but the main target area is verifying the logic that decides which data must be ignored when loading into the target environment. There might be cases where we have duplicate records in the source, and we may not be able to load both records, simply because that would create a high level of discrepancy when we browse through the reports in the end product. Running the ETL packages loads the data from the source into the staging environment and, depending on the context and nature of the application, the same gets loaded into the data mart as well, based on the transformation logic applied and also on the nature of the load, which may be a full load (that is, truncate and load) or an incremental load (only the additional data gets loaded into the environment). The difference between the two load styles is sketched below.
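As a rough illustration only (the table names Source.dbo.Employee and Staging.dbo.Employee and the LastModifiedDate watermark column are assumptions, not part of any particular package), the two load styles boil down to something like this:

-- Full load: wipe the target and reload everything from the source.
TRUNCATE TABLE Staging.dbo.Employee;

INSERT INTO Staging.dbo.Employee (EmployeeId, EmployeeName, LastModifiedDate)
SELECT EmployeeId, EmployeeName, LastModifiedDate
FROM   Source.dbo.Employee;

-- Incremental load: bring over only rows changed since the last load,
-- using a watermark column assumed to exist on the source.
DECLARE @LastLoad DATETIME =
    ISNULL((SELECT MAX(LastModifiedDate) FROM Staging.dbo.Employee), '19000101');

INSERT INTO Staging.dbo.Employee (EmployeeId, EmployeeName, LastModifiedDate)
SELECT s.EmployeeId, s.EmployeeName, s.LastModifiedDate
FROM   Source.dbo.Employee AS s
WHERE  s.LastModifiedDate > @LastLoad;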

When we have the uphill task of validating such huge chunks of data, we take the help of database automation tools that help us verify each and every record. There are many tools available in the industry, and at times we can create tools ourselves using Excel and macro programming, but what I prefer to do is utilize the DB unit test project feature available within the Visual Studio IDE. Now it is time to build up the logic that will help us validate each record set in the source against the target. The general approach, considered sufficient for this, is a two-staged SQL query verification: first, the counts in the source and target data environments must match on applying the filters documented by the client, and second, the data itself must match. We genuinely find the option of an empty return value sufficient for this. All we need to apply is the EXCEPT keyword between the two query execution blocks, plus the corresponding test condition added from within the DB unit test file.
Just browse through some screenshots of the working setup below; they should definitely make things easy for SSIS testers.


 



Just hit Cancel on the above dialog box. This is actually the database configuration file creation, which we can do directly by adding an app.config file as under. To get it done, we can add a new item to the project by right-clicking the project, clicking Add New Item, and then selecting an application configuration file as shown:




The content of the same will be something as under :
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="DatabaseUnitTesting"
             type="Microsoft.Data.Schema.UnitTesting.Configuration.DatabaseUnitTestingSection, Microsoft.Data.Schema.UnitTesting, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </configSections>
  <DatabaseUnitTesting>
    <DataGeneration ClearDatabase="true" />
    <ExecutionContext Provider="System.Data.SqlClient"
                      ConnectionString="Data Source=DB_Server_Name;Initial Catalog=Master;Integrated Security=True;Pooling=False"
                      CommandTimeout="220" />
  </DatabaseUnitTesting>
</configuration>

Once this has been set up, we can go ahead with the validation logic, which we build as under.
Here we have just renamed the method from DatabaseTest to a more relevant one, Employee_RowCount. Similarly, we add another test method by clicking the plus icon, in the same DB unit class file (Employee), to validate the data content, as under.




So what have I done in this first level of verification, as in the above image? It is simple: I have just written the query to fetch the row count and used the EXCEPT keyword. Now, if the counts of the two queries match, the EXCEPT in place means we expect an "Empty ResultSet" as the return value of the complete query execution. Hence, in the lower section of the image you can see that I have removed the default condition that got added and added a new condition, namely Empty ResultSet. We are thus ready with one validation, the one on row count.
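The query behind that screenshot would look roughly like this (Employee and the Source/Destination database names are placeholders from this example, not fixed names):

-- Row count check: EXCEPT returns no rows when the two counts are equal,
-- so the test condition on the method is "Empty ResultSet".
SELECT COUNT(*) AS RowCnt FROM Source.dbo.Employee
EXCEPT
SELECT COUNT(*) AS RowCnt FROM Destination.dbo.Employee;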

The second level of verification is for the data match. We add a new method using the plus icon at the top, rename it to Employee_DataCheck, provide the query for it, and add the EXCEPT keyword between the two queries; the rest is as was done above, expecting Empty ResultSet as the return value of the query execution. This will look as under:
As an experience tip: we at times have issues with data validation, especially for string datatype attributes. Check for a collation conflict when such issues appear, and provide an explicit collation to sort them out.
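A rough sketch of such a data check, again with placeholder names, and with an explicit COLLATE on the string column to avoid the collation conflict just mentioned:

-- Data check: every source row must exist unchanged in the destination.
-- The explicit COLLATE avoids collation-conflict errors across databases.
SELECT EmployeeId,
       EmployeeName COLLATE Latin1_General_CI_AI AS EmployeeName
FROM   Source.dbo.Employee
EXCEPT
SELECT EmployeeId,
       EmployeeName COLLATE Latin1_General_CI_AI AS EmployeeName
FROM   Destination.dbo.Employee;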

A third level of verification that adds quality to the testing of the SSIS packages and ETL execution is verification of the schema of the database objects in the source against the destination environment. This helps us verify whether the data will be loaded with the same precision values or not.
The general query that will fetch the schema details of any database table is as under :

SELECT COLUMN_NAME COLLATE Latin1_General_CI_AI,
       DATA_TYPE   COLLATE Latin1_General_CI_AI,
       CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION,
       DATETIME_PRECISION
FROM   Source.INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'Employee'
EXCEPT
SELECT COLUMN_NAME COLLATE Latin1_General_CI_AI,
       DATA_TYPE   COLLATE Latin1_General_CI_AI,
       CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION,
       DATETIME_PRECISION
FROM   Destination.INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'Employee'
  AND  COLUMN_NAME NOT IN ('column not to be validated, especially an ID-related column that gets auto generated')

The code above validates the column names in the two environments. Do keep in mind that certain columns get auto-generated, especially in the staging environment, where we have a staging ID and so on, so exclude them from verification, as has been addressed above. The columns are verified on the attributes we provide above, namely "data type", "size" (that is, character maximum length), "numeric precision" and "datetime precision". We could easily have validated another aspect such as nullability, but the general approach in Business Intelligence (BI) is that if we have some column to validate the ID of the complete record set, we tend to ignore the IS NULL check, and the same is reflected in the code above.


Thus we have successfully automated the ETL testing using three levels of verification, namely row count, data check and schema check. This gives the team a higher level of confidence in the quality of testing of the ETL packages, as data verification has been done rigorously rather than in a sampling manner.

So we have explored ETL testing and ways to achieve a high degree of quality for the same.