Tips for Database Query Optimization

  1. Complex read queries are frequently inefficient. Consider encapsulating them in a stored procedure, which can be tuned in one place and reuses a cached plan.
  2. Ensure that the SQL Server instance has ‘optimize for ad hoc workloads’ enabled. With this option, the first time a query is executed only a small plan stub is cached rather than a full plan, which reduces plan-cache memory pressure from single-use queries.
  3. SELECT * is rarely a good idea. Move only the data you actually need, and only when you need it, to avoid network, disk, and memory contention on your server.
  4. Keep transactions as short as possible and never use them unnecessarily. The longer a lock is held, the more likely it is that another user will be blocked. Never hold a transaction open after control is passed back to the application; use optimistic locking instead.
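The ‘optimize for ad hoc workloads’ setting above is instance-level. A minimal sketch of enabling it via sp_configure (it is an advanced option, so ‘show advanced options’ must be on first; requires ALTER SETTINGS permission):

```sql
-- Make advanced options visible to sp_configure.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cache only a plan stub for first-time ad hoc queries.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

The change takes effect immediately and does not require a restart.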
  5. For small sets of data that are infrequently updated, such as lookup values, cache them in memory on your application server rather than repeatedly querying them from the database.
  6. When processing within a transaction, defer updates until last where possible, to minimize the time exclusive locks are held.
  7. Cursors within SQL Server can cause severe performance bottlenecks; prefer set-based operations.
  8. A WHILE loop within SQL Server is just as bad as a cursor: it still processes one row at a time.
  9. Ensure your variables and parameters have the same data types as the columns they are compared against. An implicit or explicit conversion can prevent index use and lead to table scans and slow performance.
  10. A function applied to a column in the WHERE clause or JOIN criteria prevents SQL Server from using indexes on that column (the predicate is no longer sargable) and will lead to table scans and slow performance.
  11. Use DISTINCT, ORDER BY, and UNION carefully: each can force an expensive sort or aggregation. Use UNION ALL instead of UNION when duplicates do not need to be removed.
  12. Table variables have no statistics within SQL Server. This makes them useful where a statement-level recompile would hurt performance, but the lack of statistics makes them very inefficient where you need to do searches or joins. Use table variables only where appropriate.
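To illustrate the point about functions in WHERE clauses, here is a sketch of rewriting a non-sargable predicate as a sargable range (the Orders table and OrderDate column are hypothetical):

```sql
-- Non-sargable: the function on the column hides it from any index,
-- so every row must be read to evaluate YEAR().
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2023;

-- Sargable: a half-open range on the bare column lets SQL Server
-- seek an index on OrderDate instead of scanning the table.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20230101'
  AND OrderDate <  '20240101';
```

Both queries return the same rows; only the second can use an index seek.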
  13. Multi-statement user-defined functions are built on table variables, which perform poorly wherever statistics are required. Avoid them if a join or filtering is required.
  14. One of the most abused query hints is NOLOCK. It can produce extra or missing rows in result sets. Instead of using NOLOCK, consider a row-versioning option such as READ_COMMITTED_SNAPSHOT.
  15. Avoid creating stored procedures that accept a wide range of parameter values. They are compiled to a single query plan, which may suit some of those values badly (the classic parameter-sniffing problem).
  16. Try not to interleave data definition language (DDL) with your data manipulation language (DML) queries within SQL Server. This can trigger recompiles, which hurts performance.
  17. Temporary tables have statistics that are updated as data is inserted into them, and those updates can cause recompiles. Where that is a problem, substitute table variables to avoid the issue.
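The READ_COMMITTED_SNAPSHOT alternative to NOLOCK mentioned above is a database-level option. A sketch of enabling it (the database name is hypothetical, and the statement needs exclusive access to the database to complete):

```sql
-- Switch the database to row versioning: readers see the last committed
-- version of a row instead of blocking on, or dirty-reading, writers.
ALTER DATABASE SalesDB
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- roll back open transactions so the change can apply
```

Row versions are kept in tempdb, so monitor tempdb usage after enabling it.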
  18. If possible, avoid NULL values in your database design. Where they are unavoidable, use the appropriate IS NULL and IS NOT NULL comparisons.
  19. A view is meant to mask or modify how tables are presented to the end user, and on its own it is a fine construct. But when you start joining views to views or nesting views within views, performance suffers. Refer only to tables within a view.
  20. Use extended events to monitor the queries in your system and identify slow-running queries.
  21. You get exactly one clustered index per table, so ensure it is in the right place. First choice is the most frequently accessed column, which may or may not be the primary key. Second choice is a column that structures the storage in a way that helps performance; this is a must for partitioning data.
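A minimal extended events session along the lines of the monitoring tip above, capturing statements that run longer than one second (the session and file names are hypothetical; duration is reported in microseconds):

```sql
-- Capture completed statements slower than 1 second, with their text.
CREATE EVENT SESSION LongRunningQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_id)
    WHERE duration > 1000000              -- 1,000,000 microseconds = 1 second
)
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries.xel');

ALTER EVENT SESSION LongRunningQueries ON SERVER STATE = START;
```

The resulting .xel file can be opened in SQL Server Management Studio or read with sys.fn_xe_file_target_read_file.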
  22. Clustered indexes work well on columns used heavily in ‘range’ WHERE clauses such as BETWEEN and LIKE, and on columns frequently used in ORDER BY or GROUP BY clauses.
  23. If the clustered index is narrow (involves few columns), less storage is needed for the non-clustered indexes on that table, since each of them carries the clustering key.
  24. Avoid using a column whose values are frequently updated in a clustered index.
  25. Keep your indexes as narrow as possible by reducing the number and size of the columns in the index key. This makes the index more efficient.
  26. Always index your foreign key columns if you are likely to delete rows from the referenced table; this avoids a table scan of the referencing table when the delete is checked.
  27. A clustered index on a GUID can lead to serious fragmentation of the index because GUID values are random. The NEWSEQUENTIALID() function generates GUIDs that cause far less fragmentation.
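A sketch combining the last two tips above, indexing the foreign key and using sequential GUIDs for the clustering key (table and constraint names are hypothetical, and a dbo.Customers table is assumed to exist; note NEWSEQUENTIALID() may only appear in a DEFAULT constraint):

```sql
CREATE TABLE dbo.Orders (
    OrderID    uniqueidentifier NOT NULL
               CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID(),
    CustomerID int NOT NULL
               CONSTRAINT FK_Orders_Customers
               REFERENCES dbo.Customers (CustomerID),
    -- Sequential GUIDs keep inserts near the end of the index,
    -- avoiding the page splits random GUIDs cause.
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);

-- Index the foreign key so deletes against dbo.Customers can verify
-- referencing rows with a seek instead of a table scan.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);
```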

  28. Performance improves when indexes are placed on columns used in WHERE, JOIN, ORDER BY, GROUP BY, and TOP clauses.
  29. A unique index generally performs better than an equivalent non-unique index, because the optimizer knows each key matches at most one row.
  30. Data normalization is a performance-tuning technique as well as a storage mechanism.
  31. Referential integrity constraints such as foreign keys actually help performance, because the optimizer can recognize these enforced constraints and make better choices for joins and other data access.
  32. Make sure your database doesn’t hold ‘historic’ data that is no longer used. Archive it, either into a dedicated ‘archive’ database, a reporting OLAP database, or to file. Large tables mean longer table scans and deeper indexes, which in turn means locks are held for longer. Admin tasks such as statistics updates, DBCC checks, and index builds take longer, as do backups.
  33. Separate the reporting functions from the OLTP production functions. OLTP databases usually have short transactions with many updates, whereas reporting databases, such as OLAP and data-warehouse systems, have longer, data-heavy queries. If possible, put them on different servers.
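One caveat on the point about constraints helping the optimizer: a foreign key only helps if it is trusted. A constraint enabled WITH NOCHECK is ignored for plan optimization. A sketch of finding and re-validating untrusted keys (object names are hypothetical):

```sql
-- List foreign keys the optimizer will not use for join simplification.
SELECT name
FROM sys.foreign_keys
WHERE is_not_trusted = 1;

-- Re-check existing rows so the constraint becomes trusted again.
-- The double CHECK is deliberate: WITH CHECK validates existing data,
-- CHECK CONSTRAINT re-enables the constraint.
ALTER TABLE dbo.Orders
WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;
```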
