Articles On Testing

Welcome to http://www.articlesontesting.com !!!


Showing posts with label sql server.

How do you Test a database split

What is a database split ?
When do we really come across such a scenario in enterprise business ?
What holds paramount importance when planning and strategizing the testing activities for such an application ?
How do the test-related timelines impact the overall estimation in such a project ?
Which areas offer scope for optimization in the test estimation of such a project ?

I hope this post meets your expectations if these are the areas of interest that brought you here.

A database split is nothing but the separation of a single database into two child databases, much the same way amoeba fission occurs. The risk involved can be extremely high, especially when legal aspects are involved.
So the best approach is to validate the data that has been loaded into the child databases: only the relevant data should get loaded into each of the respective child databases.
Bring in database test automation to validate the data content using the rowcount and datacheck techniques we discussed in a previous article, linked below.

http://www.articlesontesting.com/2012/06/etl-testing-in-business-intelligence.html
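
For instance, here is a minimal sketch of a rowcount check for one split. It assumes a parent database Parent and a child database ChildA that should only receive rows for a given region; the table and column names are purely hypothetical.

-- Rowcount check: an empty resultset means ChildA received exactly the
-- rows that satisfy its split criterion in the parent database.
SELECT COUNT(*) AS RecordCount
FROM Parent.dbo.Customer
WHERE Region = 'ChildA'
EXCEPT
SELECT COUNT(*) AS RecordCount
FROM ChildA.dbo.Customer;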

By following the approach mentioned in that article you can heavily reduce test execution timelines. Scripting the queries is a one-time activity and you can reuse them in all subsequent cycles. Script preparation is also a parallel activity and can be done during the development phase itself. There is no need to maintain a test case document either, as the DbUnit test scripts are themselves self-explanatory. However, in some cases, depending on the customer's expectations, some comments can be provided in the DbUnit test script itself.
Do you know how long it takes for the scripts to run ?
One script runs in a matter of milliseconds.
I ran split-operation testing for close to ten databases in five minutes, and the results were logged into TFS.
This covered close to 450 tables, validated with roughly 1,275 queries !!!
And all of it was done in minutes. Executing these queries manually in SQL Server Management Studio would have taken close to four man-days for the same set of tables.

Enjoy this smart test execution technique by implementing the DbUnit test of Visual Studio 2010; in the 2012 version it has been separately labelled as a SQL Server unit test, and you need to install SSDT along with the Visual Studio 2012 IDE.

You will really have an easy time handling such a project; without this approach the time taken for verification is just way too high and not feasible in the current industry.

Do post your queries and I shall help you get them answered herein asap.

MS Sql Server installation testing

Installation testing is perhaps one of the most time-consuming testing categories I have ever come across in my professional career. An installation keeps running for anything between 20 and 40 minutes, and during this time frame the only thing you can do is bird watching. And if you do not have enough birds in your locality, passing the time becomes tough.
However, a workaround does exist: during the course of an installation you can proceed with some negative testing activities. Negative testing here means parameter validations performed by passing wrong parameters, such as an invalid port, an invalid database instance name, an invalid collation, and so on.
Types of installations :
There are numerous database installation types that can be tested; the major ones are outlined below:

1. Standalone installation :
  This installation type refers to the one we generally do during an MS SQL Server installation on our personal machines. In an enterprise installation, however, it needs to be assessed against the port range within which the installation must be able to commence. Basic settings such as the database instance name will generally go fine, as they can be validated directly from the SQL Server setup file. However, we need to take utmost care with the administrator group account configured for the database instance being created, and the service account used to run the database instance also needs to be validated. All these validations can be performed directly from the Configuration Manager, where the details of the installed database instance are available. Several other validations of importance are to verify that the security settings are as expected, for example whether the error logs capture failed logins only or both successful and failed logins. Do not forget to validate connectivity to the database instance using the port number specified at the time of installation.
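
As an illustration, here is a minimal post-installation sketch, assuming you can open a query window against the new instance over TCP; which properties you compare will depend on the parameters supplied during installation.

-- Basic properties of the installed instance.
SELECT SERVERPROPERTY('ServerName')     AS ServerName,
       SERVERPROPERTY('InstanceName')   AS InstanceName,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Collation')      AS Collation,
       SERVERPROPERTY('IsClustered')    AS IsClustered;

-- TCP port actually used by the current connection; compare it with the
-- port supplied at installation time.
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;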


2. Cluster Installation
 This installation type has some complicated aspects associated with it. There is a restriction in terms of the IP range that can be used for this installation. Generally the best approach is to use a static IP for consuming the service, so get a static IP configured by the lab team on the domain where you are performing the testing. In my case I used some six virtual machines to perform my installation activities. The way I approached it was to block three machines just for testing the cluster installation and the add-node-to-cluster installation. Have the Windows failover cluster manager installed on any one of the three machines, and do remember to have DTC running on the machine. Again, from a validation perspective, assure that the properties of the installed database instance are as expected and that the service runs under the account it is expected to. For a cluster installation you need a cluster network name created, which you can always do with perfect ease, and you pass this as a parameter during the cluster installation on the machine.
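
A quick way to confirm that the instance is clustered, and which nodes it can run on, is a sketch like the following (the DMV returns rows only on clustered instances):

-- 1 indicates the instance is clustered.
SELECT SERVERPROPERTY('IsClustered') AS IsClustered;

-- Nodes configured for this failover cluster instance.
SELECT NodeName
FROM sys.dm_os_cluster_nodes;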


3. Add Node to a cluster installation :
 Adding a node to a cluster is nothing but creating an MS SQL Server database instance with the same name as the one already existing within the Windows failover cluster manager. So I just consumed the database instance that was created as part of the cluster installation. What I mean is that when you do the add node, a prerequisite is that a cluster installation has been performed beforehand, first to follow the optimized approach to testing and second to have the test bed created for the add-node functionality testing to proceed. What are the various things we need to keep at the back of our mind? Use the cluster network name used during the cluster installation, and the database instance will be the same as the one used during the cluster installation. The add node also has to be done on the same network on which the cluster network has been configured, that is, the machine must be a part of the same network. As already mentioned, I have three machines: on one machine I have the cluster installation done, and on the remaining two machines I perform the add-node installation. Once I have performed the add-node installation on the second machine, it is time for validation, which is nothing but going to the main cluster machine where the Windows failover cluster manager is installed and moving the instance service to a different node of the cluster. So in my case I would move the service from node one to node two, because node one has the cluster installed and a node has been added as node two; I now move the service to node two so that the node two database instance starts running and the node one database instance gets stopped. Do verify that, after completing the add-node installation and before doing this move-node activity, the database service on the added node is installed in the Stopped state. However, when we move the service from node one to node two, it is the database service on node one that gets stopped and the database service on node two that gets started. Similarly, add the third node as well and keep moving the service from one node to another among the three cluster machines, namely 1, 2 and 3.
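
To confirm which node is currently hosting the instance after each move, a small sketch like this can be run against the clustered instance:

-- Physical node currently hosting the clustered instance; rerun this after
-- each move to confirm the service really failed over.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode;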


4. Enrolment of a Standalone instance to a CMS/MDW server :
5. Security groups enrolment :
6. Access provisioning :
7. Memory range provided to the database instance installed




 To Be Contd.

How to create test data using MS Sql Server

What is test data creation in testing a Business Intelligence application ?
What is the relevance of test data creation ?
How is test effectiveness measured in terms of the amount of test data we create to do BI testing ?

There are many other questions which can be religiously and profusely asked by test enthusiasts, but most of us may not be in a position to answer all of them without touching on the basic point that no testing can be performed on a BI-based application without a proper test data generator being in place. When I say proper, it means a lot: even with so many tools available, we still need the ability to code and get this test data preparation done as per our needs.

Manual creation of test data may not be a good way of performing the testing activities in the BI domain. The simple reason behind this conjecture is application performance.
Performance of an application deteriorates when we have a huge chunk of data in place and a simple job is run to load data from one source to another as per the business model's needs.
For example, we might need to load data from an SAP system into a simple flat file. There can be numerous such occasions when we need to load data from some typical line-of-business applications into separate systems, in this case just a flat file.

So, coming to the main point, how do we create test data ?
Suppose we need to create a thousand records within an Excel sheet; we may use Excel tricks like dragging, or Ctrl + D, etc., but to what extent ?

It has always been a known fact that coding is the only way engineering excellence can be achieved, especially in the software world; there is absolutely no other choice if you are really interested in bringing the best quality product in place.

Very recently I came across a small challenge in one of my activities. I had to create a couple of million test records in a specific format to suit the testing of the application under test. Let us take a sample case to make things easy to understand.
            I have a simple job which, when run, transforms record sets from a flat file into a table. So basically the source file is my test data in the current context, and when I run the job, the records from the source file are loaded into a table within a particular database. By the way, I have been using the term 'job' here.
Has it got anything to do with the daily office-going job ? Jokes apart, I know people from the Business Intelligence domain are pretty accustomed to the term, but for others I would just like to add some details. A job is basically a sequence of coded steps which can be configured using MS SQL Server or many other tools. This combination of steps results in a set of activities as needed by the development architecture. It is ETL activities, in general, that I shall be discussing in this article.
So, just to give some insight on the business requirement front: I have MS SQL Server on my machine, on which the job has been configured; I have a source file with some record sets in it, placed in a folder on the same machine. Now all I do is run the job. After some time the job completes with a success message, implying that it has successfully extracted data from the source file and loaded the valid data into the table within the database.
That is as simple as moving out of one home into another one, just to make office commuting easy in day-to-day life.. I hope I did not break your rhythm.

By the way, I was just looking at the title of this blog post, "How to create test data using MS Sql Server", and was wondering whether my post till now did any justice to the topic. Yeah, of course: creating a base on which we can now understand the relevance of test data creation and the importance of coding skills in creating huge chunks of data in a matter of seconds.
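
As a taste of what is coming, here is a minimal sketch that generates two million rows of test data in a single statement. The table and column names are purely hypothetical, and the row shape would be adapted to whatever the job's source file expects; the generated table can then be exported to the flat file the job consumes, for example with the bcp utility.

-- Build a two-million-row tally by cross joining a system catalog view with
-- itself, then shape each number into a test record.
;WITH Tally AS
(
    SELECT TOP (2000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
SELECT n                                    AS EmployeeId,
       'Employee_' + CAST(n AS varchar(10)) AS EmployeeName,
       DATEADD(DAY, n % 3650, '2000-01-01') AS JoiningDate,
       CAST(20000 + (n % 80000) AS money)   AS Salary
INTO dbo.EmployeeTestData   -- hypothetical staging table for the source file
FROM Tally;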

But will continue some time later , for now feeling sleepy !!!






ETL testing in a Business Intelligence application

What is ETL testing all about ?
                   ETL is the extract, transform and load operation performed on the source data in order to build a strong base for developing an application that serves the decision-making teams in large enterprise systems.

Such is the importance of a Business Intelligence (BI) application that at times big enterprises end up developing a complete application for just a single user to access and analyze the reports available in it. Developing it involves huge effort and cost, not only because of the huge chunk of data and its testing but also because of the degree of complexity associated with developing the logic to achieve it. Reports, however, do not depend solely on the ETL but on several other logically inter-related objects, and on the access and authorization rules implemented at the cube level as well as the UI level. Someone who is an expert at BI application development might just not be sufficient to develop such an application, as the end user's requirements need to be documented, loads of analysis on the nature of the data needs to be done to frame a volley of questions for the end user, and clarification documents need to be tracked right throughout the project development life cycle. This is because in these systems a bug that gets discovered at a very late stage generally has a very high cost associated with fixing it.

Let's not get out of context, and try to understand the approach and priority of testing just the ETL logic within the application under test. For a Business Intelligence application developed using Microsoft technologies we have Microsoft SQL Server in place, and the ETL can be developed using the SSIS - SQL Server Integration Services - feature available therein.
Using the SSIS feature, packages can be developed which contain the ETL logic, based on which the source data is filtered as per the requirements traced out for the application.

Once the ETL packages get developed, testing them becomes an uphill task due to the huge amount of data in the source environment which gets extracted, transformed and loaded into the destination, or rather a sort of pre-destination environment. Just imagine verifying each and every record set from the source against the target environment. From common sense we might conclude that the data load is based on SQL queries which will in any case go fine, but the main target area is verification of the logic that decides which data should be ignored when loading into the target environment. There might be cases where we have duplicate records in the source and we cannot load both records, simply because that would create a high level of discrepancy when we browse through the reports in the end product. Running the ETL packages gets the data loaded from the source into the staging environment, and depending on the context and nature of the application the same gets loaded into the data mart as well, based on the transformation logic applied and on the nature of the load, which may be a full load (truncate and load) or an incremental load (only the additional data gets loaded into the environment).

When we have the uphill task of validating such huge chunks of data, we take the help of database automation tools that help us verify each and every record. There are many tools available in the industry, and at times we can create tools ourselves using Excel and macro programming, but what I prefer to do is utilize the database unit test project feature available within the Visual Studio IDE. Now the time is to build up the logic that will help us validate each record set in the source against the target. The general approach considered sufficient to do this is a two-staged SQL query verification: first, the row counts of the source and target data environments must match after applying the filters documented by the client, and second, the data itself must match. We genuinely find the option of an empty resultset return value sufficient for doing this. All we need to apply is the EXCEPT keyword between the two queries and add the test condition from within the added database unit test file.
Just browse through some screenshots of the working analogy for doing this; they should definitely make things easy for SSIS testers.


 



Just hit Cancel for the above dialog box. This is actually the database configuration file creation, which we can do directly by adding an app.config file as shown below. To get it done, we can add a new item to the project by right-clicking the project, clicking Add New Item, and then selecting an application configuration file as shown :




The content of the same will be something as under :
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="DatabaseUnitTesting"
             type="Microsoft.Data.Schema.UnitTesting.Configuration.DatabaseUnitTestingSection,
                   Microsoft.Data.Schema.UnitTesting, Version=10.0.0.0, Culture=neutral,
                   PublicKeyToken=b03f5f7f11d50a3a" />
  </configSections>
  <DatabaseUnitTesting>
    <DataGeneration ClearDatabase="true" />
    <ExecutionContext Provider="System.Data.SqlClient"
                      ConnectionString="Data Source=DB_Server_Name;Initial Catalog=Master;Integrated Security=True;Pooling=False"
                      CommandTimeout="220" />
  </DatabaseUnitTesting>
</configuration>

Once this has been set up, we can go ahead with the logic to validate the data, which we do as under.
Here we have just renamed the method from DatabaseTest to a more relevant one, Employee_RowCount. Similarly, we add another test method, by clicking the plus icon, to the same DbUnit class file (Employee) to validate the data content as under.




So what is it that I have done in this first level of verification, as in the above image? It is simple: I have just written the query to fetch the row count and utilized the EXCEPT keyword. So now, if the counts from the two queries match, then because of the EXCEPT in place we expect the "Empty resultset" as the return value of the complete query execution. Hence, in the lower section of the image you can see that I have removed the default condition that got added and added a new condition, namely the empty resultset. We are hence ready with one validation, the row count.

The second level of verification is for the data match as well. We add a new method by the upper plus icon and rename it Employee_DataCheck, provide the query for it, add the EXCEPT keyword in between the two queries, and the rest is as was done above to get the empty resultset as the return value of the query execution. This will look as under :
As an experience tip, we at times have issues with data validation, especially for string datatype attributes. Check for a collation conflict that creates such issues, and provide the collation explicitly to sort them out.
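
For reference, here is a minimal sketch of the two test queries, assuming the source and destination databases each contain a dbo.Employee table (the table and column names are illustrative):

-- Employee_RowCount: an empty resultset means the counts match.
SELECT COUNT(*) AS RecordCount FROM Source.dbo.Employee
EXCEPT
SELECT COUNT(*) AS RecordCount FROM Destination.dbo.Employee;

-- Employee_DataCheck: an empty resultset means every source row exists in
-- the destination; COLLATE avoids collation-conflict errors on strings.
SELECT EmployeeId, FirstName COLLATE Latin1_General_CI_AI AS FirstName
FROM Source.dbo.Employee
EXCEPT
SELECT EmployeeId, FirstName COLLATE Latin1_General_CI_AI AS FirstName
FROM Destination.dbo.Employee;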

A third level of verification that adds quality to the testing of the SSIS packages and ETL execution is verification of the schema of the database objects in the source against the destination environment. This adds quality because it helps us verify whether the data will be loaded with the same precision values or not.
The general query that will fetch the schema details of any database table is as under :

SELECT COLUMN_NAME COLLATE Latin1_General_CI_AI,
       DATA_TYPE   COLLATE Latin1_General_CI_AI,
       CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION,
       DATETIME_PRECISION
FROM Source.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Employee'
EXCEPT
SELECT COLUMN_NAME COLLATE Latin1_General_CI_AI,
       DATA_TYPE   COLLATE Latin1_General_CI_AI,
       CHARACTER_MAXIMUM_LENGTH,
       NUMERIC_PRECISION,
       DATETIME_PRECISION
FROM Destination.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Employee'
AND COLUMN_NAME NOT IN ('columns not to be validated, especially auto-generated ID columns')

The code above validates the column names across the two environments. Do keep in mind that certain columns get auto-generated, especially in the staging environment where we have a staging ID and so on, so do exclude them from verification as has been addressed above. The remaining columns get verified on the attributes we select, namely data type, size (CHARACTER_MAXIMUM_LENGTH), NUMERIC_PRECISION and DATETIME_PRECISION. We could easily have validated another aspect such as IS_NULLABLE, but the general approach in Business Intelligence (BI) is that if we have some column to validate the ID of the complete record set, we tend to ignore the nullability check, and that is reflected in the code above.


Thus we have successfully automated the ETL testing using three levels of verification, namely rowcount, datacheck and schemacheck. This provides the team with a greater level of confidence in the quality of testing of the ETL packages, as data verification has been done at a rigorous level rather than by sampling.

So we have explored ETL testing and ways to achieve a high degree of quality for the same..

Sql server 2008 Enhancements

SQL Server 2008 Top New Features

Enterprise Data Platform

Policy-Based Management

Policy-Based Management enables the efficient management of multiple SQL Server instances from a single location. Easily create policies that control security, database options, object naming conventions, and other settings at a highly granular level. Policies can evaluate servers for compliance with a set of predefined conditions and prevent undesirable changes being made to servers.

Performance Data Collection (Data Collector)

The Data Collector provides a convenient way to collect, store, and view performance data automatically. It collects disk usage, server activity, and query statistics data, which it loads into a management data warehouse; the performance data can then be reviewed in SQL Server Management Studio or by using third-party tools.

Data Compression

Data compression reduces the amount of storage space needed to store tables and indexes, which enables more efficient storage of data. Data Compression does not require changes be made to applications in order to be enabled.

Resource Governor

The Resource Governor enables administrators to control and allocate CPU and memory resources to high priority applications. This enables predictable performance to be maintained and helps prevent performance from being negatively affected by resource-intensive applications or processes.

Transparent Data Encryption

Transparent Data Encryption enables data to be stored securely by encrypting the database files. If the disks that contain database files become compromised, data in those files is protected because that data can only be decrypted by an authorized agent. SQL Server performs the encryption and decryption directly, so the process is entirely transparent to connecting applications. Applications can continue to read and write data to and from the database as they normally would. Backup copies of encrypted database files are also automatically encrypted.

External Key Management / Extensible Key Management

External Key Management enables certificates and encryption keys to be stored using third-party hardware security modules that are designed specifically for this purpose. Storing the keys separately from the data enables a more extensible and robust security architecture.

Data Auditing

Data Auditing provides a simple way to track and log events relating to your databases and servers. You can audit logons, password changes, data access and modification, and many other events. Tracking these events helps maintain security and can also provide valuable troubleshooting information. The results of audits can be saved to file or to the Windows Security or Application logs for later analysis or archiving.

Hot-Add CPUs and Hot-Add Memory

Hot-add CPUs, a feature available with the 64-bit edition SQL Server Enterprise, allows CPUs to be dynamically added to servers as needed, without the need to shut down the server or limit client connections. Hot-add memory enables memory to be added in the same way.

Streamlined Installation

The SQL Server 2008 installation process has been streamlined to be easier and more efficient. Individual SQL Server components, such as Database Services, Analysis Services, and Integration Services, can be optionally selected for installation. Failover cluster support configuration has also been added to the installation.

Server Group Management

Server Group management enables T-SQL queries to be issued against multiple servers from a single Central Management Server, which simplifies administration. Results of multi-server queries can be streamed into a single result set or into multiple result sets, and policies can be evaluated against an entire server group.

Upgrade Advisor

The Upgrade Advisor generates a report that highlights any issues that might hinder an upgrade. This provides administrators detailed information that can be used to prepare for upgrades.

Partition Aligned Indexed Views

Indexed Views let SQL Server persist the results of a view, instead of having to dynamically combine the results from the individual queries in the view definition. Indexed Views can now be created to follow the partitioning scheme of the table that they reference. Indexed views that are aligned in this manner do not need to be dropped before a partition is switched out of the partitioned table, as was the case with SQL Server 2005 indexed views.

Backup Compression

Backup compression enables the backup of a database to be compressed without having to compress the database itself. All backup types, including log backups, are supported and data is automatically uncompressed upon restore.

Extended Events

The extended events infrastructure provides an in-depth troubleshooting tool that enables administrators to address difficult-to-solve problems more efficiently. Administrators can investigate excessive CPU usage, deadlocks, and application time outs as well as many other issues. Extended events data can be correlated with Windows events data to obtain a more complete picture that will aid in problem resolution.

Dynamic Development


Grouping Sets

Use GROUPING SETS to obtain results similar to those generated by using CUBE and ROLLUP, however GROUPING SETS is more flexible, offers better performance, and is ANSI SQL 2006 compliant. GROUPING SETS enables the GROUP BY clause to generate multiple grouped aggregations in a single result set. It is equivalent to using UNION ALL to return a result set from multiple SELECT statements, each of which has a GROUP BY clause.
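
A brief illustration, assuming a dbo.Sales table with Region, Product and Amount columns (hypothetical names):

-- One result set containing totals per (Region, Product), per Region,
-- and a grand total.
SELECT Region, Product, SUM(Amount) AS TotalAmount
FROM dbo.Sales
GROUP BY GROUPING SETS ((Region, Product), (Region), ());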

MERGE Operator

The new MERGE operator streamlines the process of populating a data warehouse from a source database. For example, rows that get updated in the source database will probably already exist in the data warehouse but rows that are inserted into the source database will not already exist in the data warehouse. The MERGE statement distinguishes between the new and updated rows from the source database so that the appropriate action (insert or update) can be performed against the data warehouse in one single call.
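
A condensed sketch of the idea, assuming a dw.DimCustomer target table and a staging src.Customer source (illustrative names):

-- Insert new customers and update existing ones in a single statement.
MERGE dw.DimCustomer AS target
USING src.Customer   AS source
    ON target.CustomerId = source.CustomerId
WHEN MATCHED THEN
    UPDATE SET target.CustomerName = source.CustomerName,
               target.City         = source.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, CustomerName, City)
    VALUES (source.CustomerId, source.CustomerName, source.City);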

LINQ

Language Integrated Query (LINQ) is a .NET Framework version 3.5 feature that provides developers with a common syntax to query any data source from client applications. Using LINQ to SQL or LINQ to Entities, developers can select, insert, update, and delete data that is stored in SQL Server 2008 databases using any .NET programming language such as C# and VB.NET.

ChangeDataCapture


Use Change Data Capture (CDC) to track changes to the data in your tables. CDC uses a SQL Server Agent job to capture insert, update and delete activity. This information is stored in a relational table, from where it can be accessed by data consumers such as SQL Server 2008 Integration Services. Use CDC in conjunction with Integration Services to incrementally populate data warehouses, enabling you to produce more frequent reports that contain up-to-date information. It also allows sync-enabled mobile and desktop applications to perform efficient data synchronization between client and server, without requiring changes to the database.
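
Enabling CDC is a two-step configuration; here is a minimal sketch for a hypothetical dbo.Orders table:

-- Enable CDC at the database level, then for the table to be tracked.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;   -- NULL means no gating role is required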

Table-Valued Parameters

Table-Valued Parameters (TVPs) allow stored procedures and functions to accept whole sets of rows as a single parameter. Developers can write applications that pass sets of data into stored procedures rather than just one value at a time. Table-valued parameters make the development of stored procedures that manipulate data more straightforward and can improve performance by reducing the number of times a procedure needs to call a database.
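
A small sketch of the idea, using hypothetical names:

-- A table type describing the shape of the rows to be passed in.
CREATE TYPE dbo.OrderLineList AS TABLE
(
    ProductId int NOT NULL,
    Quantity  int NOT NULL
);
GO

-- A procedure that accepts a whole set of rows in one call.
CREATE PROCEDURE dbo.InsertOrderLines
    @OrderId int,
    @Lines   dbo.OrderLineList READONLY   -- table-valued parameters must be READONLY
AS
BEGIN
    INSERT INTO dbo.OrderLine (OrderId, ProductId, Quantity)
    SELECT @OrderId, ProductId, Quantity
    FROM @Lines;
END;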

ADO.NET Entity Framework and the Entity Data Model

SQL Server 2008 databases store data in a relational format, but developers typically access the data they contain by using an application that was developed in an object-oriented programming language. Creating such applications can be made more complex if you need to build knowledge of the underlying database schema into the applications.


The ADO.NET Entity Framework allows a database to be abstracted and modeled into business objects, or entities, which can be more efficiently used by object-oriented programming languages such as C# and VB.NET. Applications can then use LINQ to query these entities without having to understand the underlying physical database schema.

Synchronization Services for ADO.NET

Synchronization Services for ADO.NET enables developers to build occasionally connected systems (OCSs), allowing devices such as personal digital assistants (PDAs), laptop computers, and mobile phones to synchronize with server-based databases. Users can work with a copy of the data that is cached on their local device and then synchronize changes with a server when a connection becomes available.

CLR Improvements

Common Language Runtime functionality in SQL Server 2008 has been improved in several areas. User-defined aggregates (UDAs) now support up to 2 GB of data and can accept multiple inputs. User-defined types (UDTs), like UDAs, also support up to 2 GB of data. CLR table-valued functions now feature an optional ORDER clause in the CREATE FUNCTION statement, which helps the optimizer to run the query more efficiently.

Conflict Detection in Peer-to-Peer Replication

In a peer-to-peer replication scenario, all nodes in the replication topology contain the same data and any node can replicate to any other node, leading to the possibility of data conflicts. Use conflict detection to make sure that no such errors go undetected and that data remains consistent.

Service Broker Priorities and Diagnostics

Service Broker provides an asynchronous communication mechanism that allows servers to communicate by exchanging queued messages. Service Broker can be configured to prioritize certain messages so that they are sent and processed before other lower priority messages. Use the Service Broker Diagnostic Utility to investigate communication problems between participating Service Broker services.

ADO.NET Data Services

Microsoft ADO.NET Data Services provides a data access infrastructure for Internet applications by enabling Web applications to expose SQL Server data as a service that can be consumed by client applications in corporate networks and across the Internet.

Beyond Relational

Spatial data with GEOGRAPHY and GEOMETRY data types

New GEOGRAPHY and GEOMETRY data types allow spatial data to be stored directly in a SQL Server 2008 database. Use these spatial data types to work with location-based data that describes physical locations, such as longitude and latitude.


GEOGRAPHY enables you to represent three-dimensional geodetic data such as GPS applications use. GEOMETRY enables you to represent two-dimensional planar data such as points on maps. Spatial data types help you to answer questions like ‘How many of our stores are located within 20 miles of Seattle?’

Virtual Earth Integration

Use the new spatial data types in SQL Server 2008 with Microsoft Virtual Earth to deliver rich graphical representations of the physical locations stored in a database. Use Virtual Earth support to create applications that display data about locations in desktop maps or web pages. For example, SQL Server 2008 makes it easy to show the locations of all company sites that are less than 50 kilometers from Denver.

Sparse Columns

Sparse columns provide an efficient way to store NULL data in tables by not requiring NULL values to take up space. Applications that reference sparse columns can access them in the same way as they access regular columns. Multiple sparse columns in a table are supported by using a column set.
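
For example, a brief sketch with made-up column names:

-- Rarely populated attributes are declared SPARSE so that NULLs occupy no space;
-- the optional column set exposes them together as untyped XML.
CREATE TABLE dbo.Product
(
    ProductId         int PRIMARY KEY,
    Name              nvarchar(100) NOT NULL,
    Voltage           int SPARSE NULL,
    ScreenSize        decimal(4,1) SPARSE NULL,
    SpecialAttributes xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);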

Filtered Indexes

A filtered index is essentially an index that supports a WHERE condition and includes only matching rows. It is a non-clustered index that is created on a subset of rows. Because filtered indexes generally do not contain all rows in the table, they are smaller and deliver faster performance for queries that reference the rows it contains.


Use filtered indexes to optimize performance for specific queries by ensuring that they contain only the rows referenced by the queries.
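
For instance (assumed table and column names):

-- Index only the open orders, which are the rows most queries touch.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerId, OrderDate)
WHERE Status = 'Open';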

Integrated Full-Text Search

Full text indexes enable queries to be performed for words and phrases on text stored in your databases. The Full-Text Engine in SQL Server 2008 is fully integrated into the database and full-text indexes are stored within database files rather than externally in the file system. This allows Full text indexes to be fully backed up and restored along with the rest of the database. Full-text indexes are also integrated with the Query Processor, so they are used more efficiently.

FILESTREAMData
FILESTREAM enables binary large object (BLOB) data to be stored in the Microsoft Windows NTFS file system instead of in a database file. Data that is stored using FILESTREAM behaves like any other data type and can be manipulated using T-SQL select, insert, update and delete statements.


Unlike traditional BLOB storage, FILESTREAM data remains logically part of the database while being stored efficiently outside the database in the NTFS file system. FILESTREAM data participates in all SQL Server transactions and backup operations, along with the rest of the database.

Large User-Defined Types (UDTs)

Create user-defined types (UDTs) that go beyond the traditional data types supported to describe custom data types. UDTs in SQL Server 2008 are more extensible than previous versions since the 8KB size limit has been increased to 2GB. Note that the powerful new spatial data types GEOMETRY and GEOGRAPHY in SQL Server 2008 were developed using this new UDT architecture.

Large User-Defined Aggregates (UDAs)

SQL Server 2008 features a set of built-in aggregate functions that can be used to perform common aggregations such as summing or averaging data. Create custom, user-defined aggregates (UDAs) to manage custom aggregations. UDAs in SQL Server 2008 are more extensible than previous versions since the 8KB size limit has been increased to 2GB.

DATE / TIME Data Types

SQL Server 2008 introduces several new date and time based data types. DATETIME2 extends the existing DATETIME type with a larger date range and greater accuracy, storing date and time data to a precision of 100 nanoseconds. The new DATE and TIME data types enable you to store date and time data separately. The new DATETIMEOFFSET data type introduces time zone support by storing date, time and an offset such as 'plus 5 hours'.
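
A quick illustration of the new types:

DECLARE @d  date           = '2012-06-15';
DECLARE @t  time(7)        = '13:45:30.1234567';
DECLARE @d2 datetime2(7)   = '2012-06-15 13:45:30.1234567';
DECLARE @o  datetimeoffset = '2012-06-15 13:45:30.1234567 +05:30';

SELECT @d AS DateOnly, @t AS TimeOnly, @d2 AS DateTime2, @o AS WithOffset;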

Improved XML Support

SQL Server 2008 features several XML enhancements, including lax validation, the xs:dateTime data type, and union functionality for list types, all of which provide greater flexibility for defining XML schemas. XQuery includes support for the let clause, and the modify method of the xml data type now accepts xml variables as input for an insert expression.

ORDPATH

Hierarchical data is organized differently to relational data, typically in the form of a tree. An example of hierarchical data is a typical organization chart that outlines the relationships between managers and the employees they manage. A column in a table that uses the HierarchyID data type contains data that describes the hierarchical relationships between rows explicitly in the form of a path. ORDPATH makes it efficient to program hierarchical data by using the HierarchyID data type.

Pervasive Insight


Fixed Query Plan Guides (Plan Freezing)

Freezing query plans enables you to influence how the SQL Server query optimizer executes queries. SQL Server 2008 allows an existing query execution plan to be imported into a plan guide, forcing the query optimizer to always use that execution plan for a specific query. Using fixed query plans ensures that queries will be executed in the same way every time they run.

Star Join Query Optimization

Data warehouses are often implemented as star schemas. A star schema has a fact table at its centre, which typically contains a very large number of rows. Star join query optimization can provide improvements in performance for queries that select a subset of those rows. When SQL Server processes queries using star join query optimization, bitmap filters eliminate rows that do not qualify for inclusion in the result set very early on, so that the rest of the query is processed more efficiently.

Enterprise Reporting Engine

The reporting engine in SQL Server 2008 Reporting Services enables the pulling together of data from multiple heterogeneous sources from across an Enterprise. Large and complex reports can be produced in various formats, including list, chart, table, matrix, and tablix (a table/matrix hybrid).


Access and manage reports through a Microsoft SharePoint Services site, simplifying administration, security, and collaboration, and making reports more easily available.

Report Builder Enhancements

Report Builder is an end-user tool for the creation and editing of reports. Report Builder in SQL Server 2008 has an interface that is consistent with Microsoft Office 2007 products, and because it masks the underlying complexity of report building, nontechnical users can create sophisticated reports with relative ease.

Improving Rendering for Microsoft Office® Word and Excel

Reports generated by SQL Server 2008 Reporting Services can be viewed and edited by using Microsoft Office Excel and Microsoft Office Word. The Excel rendering extension produces .xls files that are compatible with versions of Microsoft Office Excel from version 97 upwards.


It offers improved options over previous versions, such as the rendering of subreports. The Word rendering extension, which is new in SQL Server 2008 Reporting Services, produces .doc files that are compatible with versions of Microsoft Office Word from version 2000 upwards.

Partitioned Table Parallelism

Parallelism refers to using multiple processors in parallel to process a query, which improves query response time. On a multiprocessor system, SQL Server 2008 uses parallel processing when you run queries that reference partitioned tables.


When SQL Server 2008 processes such a query, rather than allocating just one processor for each partition referenced by the query, it can allocate all available processors, regardless of the number of partitions referenced.

IIS Agnostic Report Deployments

Reporting Services in SQL Server 2008 does not depend on IIS to provide core functionality as it did in SQL Server 2005. Reporting Services can directly generate and deliver reports by accessing the HTTP.SYS driver directly. This has the effect of simplifying the deployment and management of Reporting Services in addition to offering better performance when generating larger reports.

Persistent Lookups

SQL Server Integration Services packages use lookups to reference external data rows in the data flow. Lookup data flow transformations load the external data into cache to improve the performance of this operation. SQL Server 2008 Integration Services uses persistent lookups so that data loaded into the lookup cache is available to other packages, or to multiple pipelines within the same package, without the need to reload the cache.

Analysis Services Query and Writeback Performance

Cell writeback in SQL Server Analysis Services enables users to perform speculative analysis on data. Users can modify specific data values and then issue queries to see the effect of the changes. This can be useful for forecasting, for example.


In SQL Server 2008 Analysis Services, the values that a user changes are stored in a MOLAP format writeback partition, which results in better query and writeback performance than the ROLAP format that was used in SQL Server 2005 Analysis Services.

Best Practice Design Alerts

Good design is fundamental to creating optimal Analysis Services solutions. SQL Server 2008 Analysis Services uses Analysis Management Objects (AMO) warnings to alert you when the choices you make in your design deviate from best practice.


Design problems are underlined in blue, similar to the way spelling mistakes are underlined in red in Microsoft Office Word. You can see the full text of a warning by hovering the pointer over the underlined object. You can disable AMO warnings if you choose.

Analysis Services Dimension Design

Various new features in SQL Server 2008 Analysis Services contribute to improving and simplifying the dimension design process. Analysis Management Objects (AMO) warnings help ensure designs comply with best practice, the Attribute Relationship Designer is a visual tool for defining attribute relationships, and key column management is easier with the key columns dialog box.

Analysis Services Time Series

Microsoft Time Series enables trends over time to be forecasted. For example, you can use it to predict product sales over the coming 12 month period. SQL Server 2008 Analysis Services includes the same algorithm for short term analysis that SQL Server 2005 Analysis Services used, and additionally introduces an algorithm for long term trend analysis. Both algorithms are used by default and you can also choose to use just one or the other.

Data Profiling

SQL Server 2008 Integration Services includes the Data Profiling task, which enables the quality of data to be inspected before adding it to your databases. The task creates a profile that includes information such as the number of rows, NULL values, and distinct values that are present. Read profiles created by the Data Profiling task by using the Data Profile Viewer, and then clean and standardize the data as appropriate.

IIS Information Retrieval 2

The following program helps you to retrieve basic IIS information.



' Requires references/imports for System.Management (ManagementObjectSearcher)
' and System.Data (DataTable).
Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click
Try
Dim dt As DataTable = getPropertyStruct()
Dim dr As DataRow
Dim searcher As New ManagementObjectSearcher("root\MicrosoftIISv2", "SELECT * FROM IIsWebInfo")
For Each queryObj As ManagementObject In searcher.Get()

dr = dt.NewRow
dr("Name") = "Caption"
dr("value") = queryObj("Caption")
dt.Rows.Add(dr)

dr = dt.NewRow
dr("Name") = "Description"
dr("value") = queryObj("Description")
dt.Rows.Add(dr)

dr = dt.NewRow
dr("Name") = "InstallDate"
dr("value") = queryObj("InstallDate")
dt.Rows.Add(dr)

dr = dt.NewRow
dr("Name") = "MajorIIsVersionNumber"
dr("value") = queryObj("MajorIIsVersionNumber")
dt.Rows.Add(dr)


dr = dt.NewRow
dr("Name") = "MinorIIsVersionNumber"
dr("value") = queryObj("MinorIIsVersionNumber")
dt.Rows.Add(dr)

dr = dt.NewRow
dr("Name") = "Name"
dr("value") = queryObj("Name")
dt.Rows.Add(dr)

dr = dt.NewRow
dr("Name") = "Status"
dr("value") = queryObj("Status")
dt.Rows.Add(dr)
Next
Me.GridView1.DataSource = dt
Me.GridView1.DataBind()
Catch err As ManagementException
Response.Write(err.Message)
End Try
End Sub

Details about the program :

Even though the above program is a bit lengthy, it is very simple to understand. In the above program I am using “getPropertyStruct,” which is used to create and return a data table which can hold any number of “key,value” pairs. You can also work with “collections” instead of working with the “data table.” “getPropertyStruct” is defined as follows:

Private Function getPropertyStruct() As DataTable
Dim dt As New DataTable
dt.Columns.Add(New DataColumn("Name"))
dt.Columns.Add(New DataColumn("Value"))
Return dt
End Function


The main statements within the above program (in the previous section) are as follows:
Dim searcher As New ManagementObjectSearcher("root\MicrosoftIISv2", "SELECT * FROM IIsWebInfo")
For Each queryObj As ManagementObject In searcher.Get()

The first statement connects to the “root\MicrosoftIISv2” namespace. We are trying to issue a SELECT statement based on a built-in class, “IIsWebInfo.” It would return all the objects associated with the class “IIsWebInfo” and I handle them (each) with “queryObj” which is of type “ManagementObject.”

With every object I receive from “queryObj,” I extract the information into a new “data row” which would finally be added to the “data table” (as follows).
dr = dt.NewRow
dr("Name") = "Name"
dr("value") = queryObj("Name")
dt.Rows.Add(dr)

Finally, I assign the data table information to the grid by issuing the following statements:
Me.GridView1.DataSource = dt
Me.GridView1.DataBind()

Programming Languages: Managed versus Native


While there are a plethora of languages to choose from, the vast majority approach coding from one of two opposed philosophies: managed or native.


Coding languages are divided into two or three different groups depending on the level of their complexity. "Level of complexity" in this case indicates how abstract they are, and at which level they communicate with the hardware.

First there are the low-level programming languages. These have only a minimal level of abstraction, and no concept of Object Oriented Programming (OOP). They interact directly with almost every level of hardware you possess. Assembly belongs at this level, as does any native code that does not require a compiler or interpreter to run.

At the highest level you will not find anything like pointers, nor will you work with memory directly. You just create the objects and let the garbage collector take care of that task. Languages that work like this include C#, Java, Python and Perl; they are called managed languages.

The third road is in the middle between the two: middle-level coding, or more exactly the most low-level of the high-level languages; here we are talking about the C/C++ group. Although this group has a minimal notion of management, it is still firmly native code, with everything that implies (i.e. leaving many tasks to the user).

Consider the costs of managed code. It is achieved by creating additional classes and objects that operate behind the scenes and offer security, garbage collection and other improvements. These additional objects, however, take up additional memory, and their creation takes time, further slowing down an application and increasing its memory usage. You have to pay the price for everything somewhere.

Managed code is superior to native code in the development time for a stable application: it takes less time and generates a more secure application. Vendors can no longer wait years to develop a program; it needs to be done now, in the shortest period of time. C# will increase the efficiency of your company and, with it, provide the productivity that results in extra cash, because attention can be paid to design and configuration rather than plumbing.

The portability issue is a little more complicated. C# earns the tongue-in-cheek phrase "code once, debug everywhere" by promising to run on any machine where a CLR virtual machine can be started. This is a double-edged portability, as it works best on devices and platforms supported by Microsoft; the situation on Linux illustrates the limits.

C++ offers portability, but for every operating system you need to recompile the application with new libraries and distribute the program differently for different operating systems and devices. You also need to be aware, in C++, of divergences in the allocation and de-allocation of objects between libraries, which at times keep multiple copies of an object. In managed code the runtime handles this in a reasonable way so you do not have to worry about it; the price paid is that additional objects are created.

But device drivers can be written only at this low level, because you need to directly access the hardware. Also, it is much easier to debug a project of this type, as you get to see what happens; it isn't just done by some libraries where you can’t examine what's going on.

In addition, managed code is not suitable for every single situation. Medical devices are one of the prime examples. You need to access all of the data now; you cannot allow any time lags. Besides, you want to work with the hardware directly.


When to use each :


C# was built for speed of development and minimizing the amount of bugs that can occur. C++, on the other hand, gives you control over the hardware but makes you responsible for doing that efficiently; also, it is up to you to write as bug-free as you can. If you pull all of this off gracefully, you will be rewarded with extra speed and greater control over what is happening behind the scenes.

In general, it is a good idea to use managed code for business applications, as these require stability, efficiency, and security with a short development time, unless you already have a big company that can support that financially.

Managed code should also be used for web sites, unless you write one with a real time system at its center, like a consistently updated sales site. You can choose ASP.Net or Java as managed codes, or even interpreted ones like PHP and PERL.

When you're creating real time, mission-critical applications, the perfect solution looks to be native code. This is the case for operating systems, RDBMS engines, kernels, drivers, games, real time simulations, and so forth. If you generally write applications that require speed, you want to stick with these. This would be true of applications that play music, edit videos, or process photos.

The portability issue can also be a major deciding factor. If you use the application only on devices that have the .Net Framework already installed, you may want to reduce the development time and stick with C#.

Just like you can skip to the Assembly language when you want to have absolute control in C/C++, in the same manner you may integrate the C++ code inside C# where it is required.

Adobe tried to move Photoshop over to managed code, but as it has many, many objects inside, it turned out to be a bad decision and the company abandoned that path. It is also worth mentioning that 3ds Max, Office and Windows are written in something very close to C++.


History is repeating itself. This war was already fought on a different level, between Assembly code and C. As those results showed, there is no such thing as an absolute winner; each one has its place, and both of them can co-exist. You still need C++ even to have managed code, as we should not forget that the C# toolchain itself is ultimately built on C++. Ha ha ha... Enjoy reading.

SQL Server - Aggregate Functions

Aggregate functions
perform a calculation on a set of values and return a single value as the result.

All aggregate functions ignore NULL values except COUNT(*). Aggregate functions are frequently used with the GROUP BY clause of the SELECT statement; a short example follows the function list below.

SQL Server provides the following aggregate functions:


AVG - Returns the average of the values in a group. Null values are ignored. May be followed by the OVER clause.

CHECKSUM_AGG - Returns the checksum of the values in a group. Null values are ignored. Can be followed by the OVER clause.

COUNT - Returns the number of items in a group. COUNT works like the COUNT_BIG function. The only difference between the two functions is their return values. COUNT always returns an int data type value. COUNT_BIG always returns a bigint data type value. May be followed by the OVER clause.

COUNT_BIG - Returns the number of items in a group. COUNT_BIG works like the COUNT function. The only difference between the two functions is their return values. COUNT_BIG always returns a bigint data type value. COUNT always returns an int data type value. May be followed by the OVER Clause (Transact-SQL).

GROUPING - Indicates whether a specified column expression in a GROUP BY list is aggregated or not. GROUPING returns 1 for aggregated or 0 for not aggregated in the result set. GROUPING can be used only in the SELECT list, HAVING, and ORDER BY clauses when GROUP BY is specified.

MAX - Returns the maximum value in the expression. May be followed by the OVER clause.

MIN - Returns the minimum value in the expression. May be followed by the OVER clause.

SUM - Returns the sum of all the values, or only the DISTINCT values, in the expression. SUM can be used with numeric columns only. Null values are ignored. May be followed by the OVER Clause (Transact-SQL).

STDEV - Returns the statistical standard deviation of all values in the specified expression. May be followed by the OVER clause.

STDEVP - Returns the statistical standard deviation for the population for all values in the specified expression. May be followed by the OVER clause.

VAR - Returns the statistical variance of all values in the specified expression. May be followed by the OVER clause.

VARP - Returns the statistical variance for the population for all values in the specified expression. May be followed by the OVER clause.
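
A small illustration combining a few of these with GROUP BY, assuming a dbo.Employee table with Department and Salary columns (hypothetical names):

SELECT Department,
       COUNT(*)    AS Headcount,
       AVG(Salary) AS AverageSalary,
       MIN(Salary) AS MinimumSalary,
       MAX(Salary) AS MaximumSalary
FROM dbo.Employee
GROUP BY Department
ORDER BY Department;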

Stored Procedure

What is Stored procedure?


A stored procedure is a set of Structured Query Language (SQL) statements that you assign a name to and store in a database in compiled form so that you can share it between a number of programs.

* They allow modular programming.
* They allow faster execution.
* They can reduce network traffic.
* They can be used as a security mechanism.
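
For completeness, here is a minimal sketch of creating and calling one; the table and logic are illustrative.

CREATE PROCEDURE dbo.GetEmployeesByDepartment
    @Department varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT EmployeeId, EmployeeName, Salary
    FROM dbo.Employee
    WHERE Department = @Department;
END;
GO

-- Reused by any number of programs with a single call:
EXEC dbo.GetEmployeesByDepartment @Department = 'Finance';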




Query to find 5th Highest Salary from Employee Table:



SELECT TOP 1 salary
FROM
(
    SELECT DISTINCT TOP 5 salary
    FROM Employee
    ORDER BY salary DESC
) a
ORDER BY salary

Cookies Handling

Write Cookie

[VB]




Dim cookie As HttpCookie = New HttpCookie("UID")
cookie.Value = "myid"
cookie.Expires = #10/12/2010#
Response.Cookies.Add(cookie)

cookie = New HttpCookie("PASS")
cookie.Value = "mypass"
cookie.Expires = #10/12/2010#
Response.Cookies.Add(cookie)





Read & display all the cookies and its values in Listbox1 control

[VB.NET]



Dim cookieCols As HttpCookieCollection = Request.Cookies
Dim str As String

For Each str In cookieCols
    ListBox1.Items.Add("Cookie: " + str)
    ListBox1.Items.Add("Value: " & Request.Cookies(str).Value)
Next





Delete Cookies

[VB.NET]


' Request.Cookies.Remove only affects the current request; to delete a
' cookie on the client, send it back with an expiry date in the past.
Dim expired As New HttpCookie("PASS")
expired.Expires = DateTime.Now.AddDays(-1)
Response.Cookies.Add(expired)

expired = New HttpCookie("UID")
expired.Expires = DateTime.Now.AddDays(-1)
Response.Cookies.Add(expired)

Determining your browser :

Here’s a more advanced example that determines which browser the user has and lets you execute code depending on browser type to display the browser version. This example puts to use the if and else statements as well as several built-in JavaScript functions that handle strings. In JavaScript, text strings are considered objects, and they have some built-in properties and methods that make life easier.





<span style="font-weight:bold;">Determining your browser :</span>

<script language="javascript">
var versionBegin, versionEnd;

function checkBrowser()
{
    if (navigator.appName == "Netscape") {
        if (navigator.userAgent.indexOf("Firefox") > 0) {
            versionBegin = navigator.userAgent.indexOf("Firefox") +
                "Firefox".length + 1;
            versionEnd = navigator.userAgent.length;
            document.getElementById("targetDiv").innerHTML =
                "You have Firefox " +
                navigator.userAgent.substring(versionBegin, versionEnd);
        }
    }

    if (navigator.appName == "Microsoft Internet Explorer") {
        versionBegin = navigator.userAgent.indexOf("MSIE ") +
            "MSIE ".length;
        if (navigator.userAgent.indexOf(";", versionBegin) > 0) {
            versionEnd = navigator.userAgent.indexOf(";", versionBegin);
        } else {
            versionEnd = navigator.userAgent.indexOf(")", versionBegin) + 2;
        }
        document.getElementById("targetDiv").innerHTML =
            "You have Internet Explorer " +
            navigator.userAgent.substring(versionBegin, versionEnd);
    }
}
</script>

<body onload="checkBrowser()">
    <div id="targetDiv"></div>  <!-- the script writes the result into this element -->
</body>

CODES : Beginners delight

[E] : Some more codes


Imports System.Data
Imports System.Data.SqlClient
Imports System.Text ' StringBuilder lives in System.Text

Partial Class Adminfinance_Loaninterest
Inherits System.Web.UI.Page

Protected Sub calinterest_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles calinterest.Click
Dim rdr As SqlDataReader
Dim param As SqlParameter
Dim sb As New StringBuilder
sb = sb.Append("SELECT rate_of_interest,amount_received ")
sb = sb.Append("FROM new_loans_db ")
sb = sb.Append("WHERE loan_id=@loan_id")

Dim objconn As New SqlConnection
objconn.ConnectionString = ConfigurationManager.ConnectionStrings("hcms").ConnectionString
Try
objconn.Open()
Dim objcmd As New SqlCommand
objcmd.Connection = objconn
objcmd.CommandText = sb.ToString()
param = objcmd.Parameters.Add("@loan_id", SqlDbType.VarChar)
param.Value = Me.ddnloanid.SelectedItem.Text
rdr = objcmd.ExecuteReader()
If rdr.Read Then
Me.txtroi.Text = rdr(0).ToString
Me.amountreceived.Text = rdr(1).ToString

End If
rdr.Close()
Catch ex As SqlException
message.Text = "Exception " + ex.Message.ToString()

objconn.Dispose()
Finally

objconn.Close()
End Try

'filling up amount of loan left

Dim rdr1 As Object
Dim param1 As SqlParameter
Dim sb1 As New StringBuilder
sb1 = sb1.Append("SELECT SUM(amount) as rdr1 ")
sb1 = sb1.Append("FROM loan_repayment_db ")
sb1 = sb1.Append("WHERE loan_id=@loan_id")
Dim objconn1 As New SqlConnection
objconn1.ConnectionString = ConfigurationManager.ConnectionStrings("hcms").ConnectionString
Try
objconn1.Open()
Dim objcmd1 As New SqlCommand
objcmd1.Connection = objconn1
objcmd1.CommandText = sb1.ToString()
param1 = objcmd1.Parameters.Add("@loan_id", SqlDbType.VarChar)
param1.Value = Me.ddnloanid.SelectedItem.Text
rdr1 = objcmd1.ExecuteScalar()
Me.txtamountpaid.Text = rdr1.ToString()

Catch ex1 As SqlException
message.Text = "Exception " + ex1.Message.ToString()

objconn1.Dispose()
Finally

objconn1.Close()
End Try

End Sub
End Class

CODES : DB entry codes

[D] : Some other codes

Admin section codes :
'vacancies db
Partial Class Adminrse_Setvacancies
Inherits System.Web.UI.Page
Protected Sub calldoa_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles calldoa.SelectionChanged
Me.txtldoa.Text = Me.calldoa.SelectedDate

End Sub

Protected Sub caled_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles caledrd.SelectionChanged
Me.txted.Text = Me.caledrd.SelectedDate
End Sub

Protected Sub calid_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles caliddoj.SelectionChanged
Me.txtid.Text = Me.caliddoj.SelectedDate
End Sub

Protected Sub calrd_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles caledrd.SelectionChanged
Me.txtrd.Text = Me.caledrd.SelectedDate
End Sub

Protected Sub caldoj_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles caliddoj.SelectionChanged
Me.txtdoj.Text = Me.caliddoj.SelectedDate
End Sub

Protected Sub setvacancies_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles setvacancies.Click
Dim sb As New StringBuilder()
sb = sb.Append("insert into vacancies_db")
sb = sb.Append("(job_code,job_specification,no_of_vacancies,")
sb = sb.Append("eligibility,salary_perks,ldoa,")
sb = sb.Append("ed,id,rd,doj)")
sb = sb.Append("values")
sb = sb.Append("('" + Me.jobcode.Text.ToString + "','" + Me.txtjobs.Text.ToString + "','" + Me.txtnoofv.Text.ToString + "',")
sb = sb.Append("'" + Me.txtelig.Text.ToString + "','" + Me.txtsp.Text.ToString + "','" + Me.txtldoa.Text.ToString + "' ,")
sb = sb.Append("'" + Me.txted.Text.ToString + "','" + Me.txtid.Text + "','" + Me.txtrd.Text + "','" + Me.txtdoj.Text.ToString + "')")
Dim setvacancies As New hcms
If (setvacancies.makeentry(sb.ToString)) Then
Me.message.Text = "Vacancies Set Successfully"
Else
Me.message.Text = "Vacancies Set Failed"
End If

End Sub
End Class



'salary db
Partial Class Adminrse_Setsalary
Inherits System.Web.UI.Page
Protected Sub setsalary_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles setsalary.Click
Dim sb As New StringBuilder()
sb = sb.Append("insert into salary_db")
sb = sb.Append("(designation,salary)")
sb = sb.Append("values")
sb = sb.Append("('" + Me.txtdesig.Text.ToString + " ','" + Me.txtsal.Text.ToString + "')")
Dim setsalary As New hcms
If (setsalary.makeentry(sb.ToString)) Then
Me.message.Text = "Salary Set Successfully"
Else
Me.message.Text = "Salary Set Failed"
End If
End Sub
End Class



Partial Class Adminrse_Setovertimes
Inherits System.Web.UI.Page
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load


End Sub

Protected Sub caldatalltd_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles caldatalltd.SelectionChanged
Me.txtdatalltd.Text = Me.caldatalltd.SelectedDate
End Sub
End Class



' bonusdb
Partial Class Adminrse_Setbonus
Inherits System.Web.UI.Page
Protected Sub setbonus_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles setbonus.Click
Dim sb As New StringBuilder()
sb = sb.Append("insert into bonus_db")
sb = sb.Append("(month_year,designation,bonus_amount,")
sb = sb.Append("overtime_per_hour)")
sb = sb.Append("values")
sb = sb.Append("('" + Me.ddnmon.SelectedItem.Text.ToString + " " + Me.txtyear.Text.ToString + "',")
sb = sb.Append("'" + Me.txtdesig.Text.ToString + "','" + Me.txtbon.Text.ToString + "','" + Me.txtoph.Text.ToString + "')")
Dim setbonus As New hcms
If (setbonus.makeentry(sb.ToString)) Then
Me.message.Text = "Bonus Set Successfully"
Else
Me.message.Text = "Bonus Set Failed"
End If

End Sub
End Class

Here authorised access has been granted to certain users so that this specific data can be set up.

CODES : hcms class file

[C] : Very simple, I hope there won't be any doubt. Now you can relate the other class files to this class.

Imports System.Data
Imports System.Data.SqlClient

Public Class hcms
' try to incorporate the concept of session here as well
Public Function authenticateuser(ByVal login As String, ByVal pwd As String) _
As Boolean
Dim objconn As New SqlConnection
objconn.ConnectionString = ConfigurationManager.ConnectionStrings("hcms").ConnectionString
Try
objconn.Open()
' parameterised to avoid quoting and SQL injection issues
Dim cmd As New SqlCommand("select * from member_professional where loginid = @login", objconn)
cmd.Parameters.AddWithValue("@login", login)
Dim dr As SqlDataReader = cmd.ExecuteReader
If dr.Read AndAlso dr("password") = pwd Then
authenticateuser = True
Else
authenticateuser = False
End If
Catch ex As SqlException
authenticateuser = False
objconn.Dispose()
Finally
objconn.Close()

End Try

End Function

Public Function authenticateadmin(ByVal login As String, ByVal pwd As String) _
As Boolean


Dim objconn As New SqlConnection
objconn.ConnectionString = ConfigurationManager.ConnectionStrings("hcms").ConnectionString

Try
objconn.Open()
' parameterised to avoid quoting and SQL injection issues
Dim cmd As New SqlCommand("select * from admin_db where loginid = @login", objconn)
cmd.Parameters.AddWithValue("@login", login)
Dim dr As SqlDataReader = cmd.ExecuteReader
If dr.Read AndAlso dr("password") = pwd Then
authenticateadmin = True
Else
authenticateadmin = False
End If
Catch ex As Exception
authenticateadmin = False
objconn.Dispose()
Finally
objconn.Close()
End Try

End Function


Public Function makeentry(ByVal insertelement As String) As Boolean
Dim objconn As New SqlConnection
objconn.ConnectionString = ConfigurationManager.ConnectionStrings("hcms").ConnectionString
Try
objconn.Open()
Dim strsql As String = insertelement
Dim objcmd As New SqlCommand
objcmd.Connection = objconn
objcmd.CommandText = strsql
objcmd.ExecuteNonQuery()
makeentry = True
Catch ex As Exception
makeentry = False
objconn.Dispose()
Finally
objconn.Close()
End Try

End Function




End Class

CODES : Application form

[A] : Apply for a job by filling up the recruitment form .
Partial Class recruitment_form
Inherits System.Web.UI.Page
Protected Sub caldob_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles caldob.SelectionChanged
Me.txtdob.Text = Me.caldob.SelectedDate
End Sub

Protected Sub btnsub_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnsub.Click

Try
Me.txtmarks10.Text = Format(CType(Me.txtmarks10.Text, Decimal), "##0.00")
Catch ex As Exception
Me.txtmarks10.Text = "0"
Me.txtmarks10.Text = Format(CType(Me.txtmarks10.Text, Decimal), "##0.00")
End Try
Try
Me.txtmarks12.Text = Format(CType(Me.txtmarks12.Text, Decimal), "##0.00")
Catch ex As Exception
Me.txtmarks12.Text = "0"
Me.txtmarks12.Text = Format(CType(Me.txtmarks12.Text, Decimal), "##0.00")
End Try
Try
Me.txtmarksg.Text = Format(CType(Me.txtmarksg.Text, Decimal), "##0.00")
Catch ex As Exception
Me.txtmarksg.Text = "0"
Me.txtmarksg.Text = Format(CType(Me.txtmarksg.Text, Decimal), "##0.00")
End Try
Try
Me.txtmarkspg.Text = Format(CType(Me.txtmarkspg.Text, Decimal), "##0.00")
Catch ex As Exception
Me.txtmarkspg.Text = "0"
Me.txtmarkspg.Text = Format(CType(Me.txtmarkspg.Text, Decimal), "##0.00")
End Try
Dim sb As New StringBuilder()
sb = sb.Append("insert into recruitment_db")
sb = sb.Append("(name,address,date_of_apply,dob,sex,")
sb = sb.Append("designation,specialization,")
sb = sb.Append("marks_10,marks_12,marks_g,marks_pg,subject_g,subject_pg,")
sb = sb.Append("extra_qualification,login_id,password,pwdrq,pwdra,")
sb = sb.Append("quarter_desired,experience,code,interview_marks,exam_marks)")
sb = sb.Append("values")
sb = sb.Append("('" + Me.txtfname.Text.ToString + Me.txtmname.Text.ToString + Me.txtlname.Text.ToString + "',")
sb = sb.Append("'" + Me.txtaddress1.Text.ToString + Me.txtaddress2.Text.ToString + Me.txtpin.Text.ToString + Me.txtcountry.Text.ToString + "',")
sb = sb.Append("'" + Me.txtdoa.Text.ToString + "','" + Me.txtdob.Text.ToString + "','" + Me.radiosex.SelectedItem.Text.ToString + "',")
sb = sb.Append("'" + Me.ddndesig.SelectedItem.Text.ToString + "',")
sb = sb.Append("'" + Me.ddnspecialize.SelectedItem.Text.ToString + "',")
sb = sb.Append("'" + Me.txtmarks10.Text + "','" + Me.txtmarks12.Text + "','" + Me.txtmarksg.Text + "','" + Me.txtmarkspg.Text + "',")
sb = sb.Append("'" + Me.txtsubg.Text.ToString + "','" + Me.txtsubpg.Text.ToString + "','" + Me.txtextrq.Text.ToString + "',")
sb = sb.Append("'" + Me.txtloginid.Text.ToString + "','" + Me.txtpwd.Text.ToString + "',")
sb = sb.Append("'" + Me.ddnprq.SelectedItem.Text.ToString + "','" + Me.txtpra.Text.ToString + "',")
sb = sb.Append("'" + Me.radioquarter.SelectedValue.ToString + "',")
sb = sb.Append("'" + Me.ddnexpe.SelectedItem.Text.ToString + "','" + Me.ddncode.SelectedItem.Text.ToString + "',")
sb = sb.Append("'" + Format(CType("0", Decimal), "##0.00") + "','" + Format(CType("0", Decimal), "##0.00") + "')")
Dim recruit As New hcms
If (recruit.makeentry(sb.ToString)) Then
Me.message.Text = "member registration successful"
Else
Me.message.Text = "member registration unsuccessful"
End If



End Sub

Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
Me.txtdoa.Text = Date.UtcNow
End Sub
End Class


' hcms is a class file which has some reusable code in it
' (see CODES : hcms class file above)

CODES : Forgot pwd

[A] : Here we can retrieve the login id and password of any user via three parameters. If you have any doubts, post them in the comments:
Imports System.Data
Imports System.Data.SqlClient
Imports System.Text ' StringBuilder lives in System.Text
Partial Class forgot_password
Inherits System.Web.UI.Page

Protected Sub btnpwd_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnpwd.Click
Dim rdr As SqlDataReader
Dim param1, param2, param3 As SqlParameter
Dim sb As New StringBuilder
sb = sb.Append("SELECT login_id,password ")
sb = sb.Append("FROM recruitment_db ")
sb = sb.Append("WHERE pwdrq=@pwdrq and pwdra=@pwdra and dob=@dob")
Dim objconn As New SqlConnection
objconn.ConnectionString = ConfigurationManager.ConnectionStrings("hcms").ConnectionString
Try
objconn.Open()
Dim objcmd As New SqlCommand
objcmd.Connection = objconn
objcmd.CommandText = sb.ToString()
param1 = objcmd.Parameters.Add("@pwdrq", SqlDbType.VarChar)
param1.Value = Me.ddnprq.SelectedItem.Text
param2 = objcmd.Parameters.Add("@pwdra", SqlDbType.VarChar)
param2.Value = Me.txtpra.Text.ToString
param3 = objcmd.Parameters.Add("@dob", SqlDbType.DateTime)
param3.Value = Me.txtdob.Text
rdr = objcmd.ExecuteReader()
If rdr.Read Then
Me.message.Text = "Login Id : " + rdr(0).ToString + " " + "Password :" + rdr(1).ToString
End If
rdr.Close()
Catch ex As SqlException
message.Text = "Exception " + ex.Message.ToString()
objconn.Dispose()
Finally
objconn.Close()
End Try
End Sub

Protected Sub caldob_SelectionChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles caldob.SelectionChanged
Me.txtdob.Text = Me.caldob.SelectedDate

End Sub
End Class

SQL Server Endpoints :

A] Overview of Endpoints :

Endpoints control the ability to connect to an instance of SQL Server, as well as dictate which communication methods are acceptable.

1. Endpoint types and payloads
An endpoint has two basic parts: a transport and a payload.
Transport    Payload
TCP          TSQL
TCP          SERVICE BROKER
TCP          DATABASE MIRRORING
HTTP         SOAP
By combining an endpoint transport and payload, SQL Server can filter acceptable traffic before a command even reaches the SQL Server instance. (It first validates the transport and payload, then authenticates.)

2. Endpoint access
(1) Even if traffic going to the endpoint matches the correct transport and payload, a connection is still not allowed unless access has been granted on the endpoint.
(2) The first layer of access security is determined by the endpoint state. An endpoint can have one of three states: STARTED, STOPPED, and DISABLED.
* STARTED: The endpoint is actively listening for connections and will reply to an application
* STOPPED: The endpoint is actively listening, but returns a connection error to an application
* DISABLED: The endpoint does not listen and does not respond to any connection attempt
(3) The second layer of security is permission to connect to the endpoint. An application must have a login created in SQL Server that has the CONNECT permission granted on the endpoint before the connection is allowed through the endpoint.
(4) SQL Server 2005 ensures that only valid requests can be submitted by a valid user before a request is scheduled within the engine. Administrators also have a master switch to immediately shut off access if they feel someone is attempting to compromise their SQL Server, by setting the state of the endpoint being used to DISABLED.
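A small sketch of both layers; the endpoint name and login below are placeholders, not objects defined earlier in this article:

-- First layer: control the endpoint state
ALTER ENDPOINT MirroringEndpoint STATE = STARTED   -- or STOPPED / DISABLED

-- Second layer: grant a specific login permission to connect through it
GRANT CONNECT ON ENDPOINT::MirroringEndpoint TO [DOMAIN\SqlServiceAccount]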

3. Practice: Inspecting existing endpoints
select * from sys.endpoints
select * from sys.tcp_endpoints
select * from sys.http_endpoints
select * from sys.database_mirroring_endpoints
select * from sys.service_broker_endpoints

B] TCP Endpoints

1. TCP protocol arguments
(1) TCP endpoints are configured to listen on specific IP addresses and port numbers. The two arguments that can be specified that are universal for all TCP endpoints are the following. LISENER_PORT and LISENER_IP.
(2) LISENER_PORT argument is required. The TCP or TSQL endpoint that is created for each instance during installation is already configured for port 1433 o the alternative port number for the instance.
(3) LISENER_IP argument is an optional argument that can provide a powerful security layer for some types of applications. You can specify a specific IP address for the endpoint to listen on. The default setting is ALL.
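For example, a dedicated TSQL endpoint bound to one port and one IP address might look like this (the endpoint name, port, and IP address are arbitrary examples):

CREATE ENDPOINT DedicatedAppEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 1533, LISTENER_IP = (192.168.1.50))
    FOR TSQL()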

2. Database mirroring and service broker common arguments
(1) Database mirroring and service broker endpoints provide options to specify the authentication method and the encryption setting. You can use either Microsoft Windows-based authentication or certificates.
(2) Windows-based authentication: NTLM, KERBEROS, or NEGOTIATE (NEGOTIATE means the authentication method is selected dynamically.)
(3) Best practices
* If the instances are in the same domain or across trusted domains, use Windows-based authentication
* If they are in different, non-trusted domains, use certificates
(4) All communication between endpoints can be encrypted, and you can specify which algorithm to use for the communications. The default algorithm is RC4, but you can specify the much stronger Advanced Encryption Standard (AES) algorithm.

3. Database mirroring specific arguments
(1) Database mirroring endpoints include a third argument related to the role within the database mirroring session (see the example endpoint below).
(2) Database mirroring endpoint roles
Role       Description
PARTNER    The endpoint can act only as the principal or the mirror
WITNESS    The endpoint can act only as the witness
ALL        The endpoint can act as either a partner or a witness
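The example mentioned above, combining the common and mirroring-specific arguments (endpoint name and port number are examples only):

CREATE ENDPOINT MirroringEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING
    (
        AUTHENTICATION = WINDOWS NEGOTIATE,
        ENCRYPTION = REQUIRED ALGORITHM AES,
        ROLE = PARTNER
    )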
4. Database mirroring practice
(1) Structure
(2) Preparatory steps
* Set the recovery model of the principal database to FULL.
* Back up the database on the principal
* Restore the database on the mirror WITH NORECOVERY
* Back up the transaction log on the principal, and restore the transaction log on the mirror WITH NORECOVERY
* Transfer to the instance hosting the mirror all logins, jobs, linked servers, and other objects external to the database.
--on the principal server
use master
go
backup database DB_Mirror_Sample
to disk = 'c:\test\DB_Mirror_Sample.bak'
with format
go

backup log DB_Mirror_Sample
to disk = 'c:\test\DB_Mirror_Sample_Log.bak'
-- plain log backup; it is restored on the mirror WITH NORECOVERY below
go

--on the mirroring server
use master
go

restore database DB_Mirror_Sample
from disk='c:\test\DB_Mirror_Sample.bak'
with file=1, norecovery
go

restore log DB_Mirror_Sample
from disk='c:\test\DB_Mirror_Sample_Log.bak'
with file=1, norecovery
go
(3) Establishing endpoints
* Enable database mirroring on both instances
* Configure security
* Change the SQL Server service account for the principal, mirror, and witness if required
A minimal T-SQL sketch of joining the partners follows.
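Here is the sketch (server names and the port are placeholders; it assumes mirroring endpoints like the one shown earlier already exist on each instance):

-- On the mirror instance first: point it at the principal
ALTER DATABASE DB_Mirror_Sample
    SET PARTNER = 'TCP://principal_server.domain.local:5022'

-- Then on the principal instance: point it at the mirror
ALTER DATABASE DB_Mirror_Sample
    SET PARTNER = 'TCP://mirror_server.domain.local:5022'

-- Optionally, on the principal, add a witness to enable automatic failover
ALTER DATABASE DB_Mirror_Sample
    SET WITNESS = 'TCP://witness_server.domain.local:5022'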

(4) You can change the operating mode if required

High-performance (asynchronous) mode - witness: N/A
To maximize performance, the mirror database always lags somewhat behind the principal database, never quite catching up. However, the gap between the databases is typically small. The loss of a partner has the following effect:
* If the mirror server instance becomes unavailable, the principal continues.
* If the principal server instance becomes unavailable, the mirror stops; but if the session has no witness (as recommended) or the witness is connected to the mirror server, the mirror server is accessible as a warm standby; the database owner can force service to the mirror server instance (with possible data loss).

High-safety mode without automatic failover (synchronous) - witness: No
All committed transactions are guaranteed to be written to disk on the mirror server.
Manual failover is possible when the partners are connected to each other and the database is synchronized.
The loss of a partner has the following effect:
* If the mirror server instance becomes unavailable, the principal continues.
* If the principal server instance becomes unavailable, the mirror stops but is accessible as a warm standby; the database owner can force service to the mirror server instance (with possible data loss).

High-safety mode with automatic failover (synchronous) - witness: Yes
All committed transactions are guaranteed to be written to disk on the mirror server.
Availability is maximized by including a witness server instance to support automatic failover. Note that you can select the high-safety with automatic failover (synchronous) option only if you have first specified a witness server address.
Manual failover is possible when the partners are connected to each other and the database is synchronized.
Important: If the witness becomes disconnected, the partners must be connected to each other for the database to be available.
In the presence of a witness, the loss of a partner has the following effect:
* If the principal server instance becomes unavailable, automatic failover occurs. The mirror server instance switches to the role of principal, and it offers its database as the principal database.
* If the mirror server instance becomes unavailable, the principal continues.
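The transaction safety setting that distinguishes these modes can also be switched in T-SQL (a sketch, run on the principal):

-- Synchronous (high safety) versus asynchronous (high performance)
ALTER DATABASE DB_Mirror_Sample SET PARTNER SAFETY FULL   -- high safety
ALTER DATABASE DB_Mirror_Sample SET PARTNER SAFETY OFF    -- high performance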

(5) Failover
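The commands involved are a one-liner each (shown as a sketch; run them on the appropriate partner):

-- Manual failover: run on the principal, requires a synchronized mirror (synchronous mode)
ALTER DATABASE DB_Mirror_Sample SET PARTNER FAILOVER

-- Forced service with possible data loss: run on the mirror when the principal is lost
ALTER DATABASE DB_Mirror_Sample SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS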

(6) Removing the mirror
alter database DB_Mirror_Sample set partner OFF

5. Service broker-specific arguments
(1) In addition to authentication modes and encryption, service broker endpoints implement arguments related to message forwarding.
(2) The MESSAGE_FORWARDING (DISABLED | ENABLED) option enables messages destined for a different broker instance to be forwarded to a specified forwarding address.
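A sketch of a service broker endpoint with forwarding enabled (endpoint name, port, and forward size are examples):

CREATE ENDPOINT BrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER
    (
        AUTHENTICATION = WINDOWS NEGOTIATE,
        ENCRYPTION = REQUIRED ALGORITHM AES,
        MESSAGE_FORWARDING = ENABLED,
        MESSAGE_FORWARD_SIZE = 10   -- maximum megabytes to hold for forwarding
    )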
6. Service broker practice


use master
go
alter database DB_SB
set enable_broker
go

use DB_SB
go

create master key
encryption by password = 'Pa$$w0rd'
go


--message type
create message type SubmitBOMProduct
validation = well_formed_xml
create message type ReceiveBOM
validation = well_formed_xml
-- create contract
create contract BOMContract
(SubmitBOMProduct sent by initiator,
ReceiveBOM sent by target)


[ create queue ]
1. create queue
create queue BOMProductQueue
create queue BOMResultQueue

2. create a service

create service BOMRequestService
on queue BOMProductQueue(BOMContract)
create service BOMResultService
on queue BOMResultQueue(BOMContract)

[ create a conversation ]
declare @dialoghandle uniqueidentifier

begin dialog conversation @dialoghandle
from service BOMRequestService
to service 'BOMResultService'
on contract BOMContract

select @dialoghandle

[ send and receive message ]
select * from BOMProductQueue
select * from BOMResultQueue

1. send msg
send on conversation 'AC0996FF-1C16-DE11-AA62-0003FF1D2E78'
message type SubmitBOMProduct
(N'123')

select * from BOMProductQueue
select * from BOMResultQueue


2. receive msg
receive top(1) *
from BOMResultQueue

select * from BOMProductQueue
select * from BOMResultQueue


C] HTTP Endpoints :
1. HTTP endpoint security
(1) In addition to specifying the HTTP protocol with a SOAP payload that restricts the endpoints to accepting only a well-formed SOAP Request, HTTP endpoints provide additional layers of security.
(2) Authentication method
Type         Details
Windows      NTLM, KERBEROS, or NEGOTIATE (selected dynamically)
Certificate  Use a certificate from a trusted authority or generate your own Windows certificate
(3) Encryption
Clear text or SSL
(4) Login type
Windows or Mixed
(5) Specifying web methods

2. Creating an endpoint
CREATE ENDPOINT sample_endpoint
STATE = STARTED
AS HTTP
(
PATH='/hp',
AUTHENTICATION=(INTEGRATED),
PORTS=(SSL),
SSL_PORT = 1233,
SITE='http://csesupport.blogspot.com/'
)

FOR SOAP
(
WEBMETHOD 'ListCourse' (NAME='DB_Mirror_Sample.dbo.Course', SCHEMA=DEFAULT, FORMAT=ALL_RESULTS),
WSDL = DEFAULT,
DATABASE = 'DB_Mirror_Sample',
NAMESPACE='http://temUri.org'
)