Index Performance Tips
• Create indexes the query optimizer can use. Generally speaking, clustered indexes are best for range selections and ordered queries. Clustered indexes are also appropriate for keys with a high density (those with many duplicate values). Since rows are physically sorted, queries that search using these nonunique values will find them with a minimum number of I/O operations. Nonclustered indexes are better for singleton selects and individual row lookups.
• Make nonclustered indexes as highly selective (i.e., with as low a density) as possible. Index selectivity can be calculated using the formula: Selectivity = # of unique keys / # of rows. Nonclustered indexes with a selectivity less than 0.1 are not efficient, and the optimizer will refuse to use them. Nonclustered indexes are best used to find single rows; duplicate keys force the server to work harder to locate a particular row.
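The selectivity formula can be computed directly against a table. A minimal sketch, assuming a hypothetical Orders table whose CustomerID column is a candidate nonclustered key:

```sql
-- Selectivity = # of unique keys / # of rows.
-- Values near 1.0 mark a highly selective key; values below 0.1
-- suggest the optimizer will ignore a nonclustered index on it.
SELECT CAST(COUNT(DISTINCT CustomerID) AS float) / COUNT(*) AS Selectivity
FROM Orders
```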
• Along the lines of making indexes highly selective, order the key columns in a multicolumn index by selectivity, placing the more selective columns first. As the server traverses the index tree to find a given key column value, the use of highly selective key columns means that it will have to perform fewer I/Os to reach the leaf level of the index, resulting in a faster query.
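As a sketch of this ordering rule, assume a hypothetical Employees table where LastName has many distinct values but Region has only a handful:

```sql
-- The more selective column (LastName) goes first so each level of the
-- index tree eliminates as many rows as possible before the leaf level.
CREATE INDEX IX_Employees_LastName_Region
ON Employees (LastName, Region)
```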
• Keep key database operations and transactions in mind as you construct indexes. Build indexes that the query optimizer can use to service your most crucial transactions.
• Consider creating indexes to service popular join conditions. If you frequently join two tables on a set of columns, consider building an index to speed the join.
• Drop indexes that aren't being used. If you inspect the execution plans for the queries that should be using an index and find that the index can't be used as is, consider getting rid of it. Redesign it if that makes sense, or simply omit it, whichever works best in your particular situation.
• Consider creating indexes on foreign key references. Foreign keys require a unique key index on the referenced table but make no index stipulations on the table making the reference. Creating an index on the dependent table can speed up foreign key integrity checks that result from modifications to the referenced table and can improve join performance between the two tables.
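Northwind's Orders table, for example, references Customers through its CustomerID column. The foreign key puts a unique index on Customers but nothing on Orders, so an index like the following (the index name is ours) can help both the integrity checks and the join:

```sql
-- Without an index on the referencing column, deleting or updating a
-- Customers row forces a scan of Orders to verify no dependent rows exist.
CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)
```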
• Create temporary indexes to service infrequent reports and user queries. A report that's run only annually or semiannually may not merit an index that has to be maintained year-round. Consider creating the index just before you run the report and dropping it afterward if that's faster than running the report without the index.
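A sketch of the pattern, using a hypothetical year-end report over an Orders table (the index and procedure names are ours):

```sql
-- Build the index only for the duration of the report.
CREATE INDEX IX_Orders_ShippedDate ON Orders (ShippedDate)

EXEC usp_AnnualShippingReport  -- hypothetical reporting procedure

-- Drop it afterward so it isn't maintained the rest of the year.
DROP INDEX Orders.IX_Orders_ShippedDate
```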
• It may be advantageous to drop and recreate indexes during BULK INSERT operations. BULK INSERT operations, especially those involving multiple clients, will generally be faster when indexes aren't present. This is no longer the maxim it once was, but common sense tells us that the less work that has to occur during a bulk load, the faster it should be.
• If the optimizer can retrieve all the data it needs from a nonclustered index without having to reference the underlying table, it will do so. This is called index covering, and indexes that facilitate it are known as covered indexes. If adding a small column or columns to an existing nonclustered index would give it all the data a popular query needs, you may find that doing so speeds up the query significantly. Covered indexes are the closest you'll get to having multiple clustered indexes on the same table.
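For instance, suppose a popular query needs only CustomerID and OrderDate from Orders (a hypothetical example). Widening a CustomerID index by one small column covers the query:

```sql
-- The query below can now be answered entirely from the index's
-- leaf level, with no lookups into the underlying table:
--   SELECT OrderDate FROM Orders WHERE CustomerID = 'ALFKI'
CREATE INDEX IX_Orders_CustomerID_OrderDate
ON Orders (CustomerID, OrderDate)
```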
• Allow SQL Server to maintain index statistics for your databases automatically. This helps ensure that they're kept reasonably up to date and alleviates the need for most apps to rebuild index statistics manually.
• Because SQL Server's automatic statistics facility uses sampling to generate statistical info as quickly as possible, it may not be as representative of your data as it could be. If the query optimizer elects not to use indexes that you think it should be using, try updating the statistics for the index manually using UPDATE STATISTICS...WITH FULLSCAN.
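A sketch of the syntax, assuming an Orders table with an index named IX_Orders_CustomerID (our hypothetical name):

```sql
-- WITH FULLSCAN reads every row rather than a sample, trading a longer
-- rebuild for statistics that fully represent the data distribution.
UPDATE STATISTICS Orders IX_Orders_CustomerID WITH FULLSCAN
```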
• You can use DBCC DBREINDEX() to rebuild the indexes on a table. This is one way of removing dead space from a table or changing the FILLFACTOR of one of its indexes. Here's an example:

DBCC DBREINDEX('Customers','PK_Customers')
DBCC DBREINDEX('Customers','',100)

Both of these examples cause all indexes on the Northwind Customers table to be rebuilt. In the first example, we pass the name of the clustered index into DBREINDEX; rebuilding a table's clustered index rebuilds its nonclustered indexes as well. In the second example, we pass an empty string for the index name, which also causes all indexes on the table to be rebuilt.
The nice thing about DBREINDEX is that it's atomic: either the specified indexes are all dropped and recreated or none of them are. This includes indexes set up by the server to maintain constraints, such as primary and unique keys. In fact, DBREINDEX is the only way to rebuild primary and unique key indexes without first dropping their associated constraints. Since other tables may depend upon a table's primary or unique key, dropping those constraints can get quite complicated. Fortunately, DBREINDEX takes care of it automatically: it can drop and recreate any of a table's indexes regardless of dependent tables and constraints.
• You can use DBCC SHOWCONTIG to list fragmentation information for a table and its indexes. You can use this info to decide whether to reorganize the table by rebuilding its clustered index.
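For example, against the Northwind Orders table (SQL Server 2000 accepts a table name here; earlier releases require object IDs):

```sql
-- Scan Density near 100% and Logical Scan Fragmentation near 0%
-- indicate a well-organized table; large deviations suggest it's
-- time to rebuild the clustered index.
DBCC SHOWCONTIG ('Orders')
```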
• As mentioned in the section "Database Design Performance Tips," if an index regularly experiences a significant level of contention during inserts by multiple users, page locking may be the culprit. Consider using the sp_indexoption system procedure to disable page locks for the index. Disabling page locks forces the server to use row locks and table locks. As long as row locks do not escalate to table locks inordinately often, this should result in improved concurrency.
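A sketch of the call, assuming an Orders table with a contended index we've named IX_Orders_CustomerID (option names vary slightly across releases):

```sql
-- With page locks disallowed, the server falls back to row and
-- table locks only for this index.
EXEC sp_indexoption 'Orders.IX_Orders_CustomerID', 'AllowPageLocks', 'FALSE'
```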
• Thanks to the query optimizer's ability to use multiple indexes on a single table, multiple single-key indexes can yield better overall performance than a compound-key index. This is because the optimizer can query the indexes separately and then merge them to return a result set. This approach is more flexible than using a compound-key index because the single-column index keys can be specified in any combination. That's not true with a compound key: you must use compound-key columns in a left-to-right order.
• Use the Index Tuning Wizard to suggest the optimal indexes for queries. This is a sophisticated tool that can scan SQL Profiler trace files to recommend indexes that may improve performance. You can access it via the Management|Index Tuning Wizard option on the Tools|Wizards menu in Enterprise Manager or via the Perform Index Analysis option on the Query menu in Query Analyzer.