1.) Table row and key lengths should be as short as sensible. Be efficient, but don't be a miser. Trimming one byte per row isn't much of a savings if you have only a few rows, or, worse yet, if you end up needing that byte. The reason for narrow rows is obvious—the less work the server has to do to satisfy a query, the quicker it finishes. Shorter rows allow more rows per page and more data in the same amount of cache space. This is also true for index pages—narrow keys allow more rows per page than wider ones.
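For instance, here's a minimal sketch of what trimming a row can look like; the table and column names are hypothetical, not from any particular schema:

    -- Hypothetical example: choose narrower types where the domain allows.
    -- tinyint (1 byte) instead of int (4 bytes) for a small status code,
    -- smalldatetime (4 bytes) instead of datetime (8 bytes) when minute
    -- precision suffices.
    CREATE TABLE Orders (
        OrderID    int           NOT NULL,
        StatusCode tinyint       NOT NULL,  -- vs. int: saves 3 bytes per row
        OrderDate  smalldatetime NOT NULL   -- vs. datetime: saves 4 bytes per row
    )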
2.) Keeping clustered index keys as narrow as possible will help reduce the size of nonclustered indexes, since they reference the clustered index key (if one exists) rather than referencing the table directly.
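As a rough sketch (again with hypothetical names): every nonclustered index row carries a copy of the clustered key, so a 4-byte int key keeps the nonclustered indexes far smaller than a wide character key would:

    CREATE TABLE Customers (
        CustomerID int         NOT NULL,
        LastName   varchar(40) NOT NULL
    )
    -- Narrow 4-byte clustered key:
    CREATE UNIQUE CLUSTERED INDEX CI_Customers ON Customers (CustomerID)
    -- Each row of this nonclustered index implicitly stores CustomerID:
    CREATE INDEX IX_Customers_LastName ON Customers (LastName)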
3.) Begin by normalizing every database you build to at least third normal form. You can denormalize the design later if the need arises.
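A minimal sketch of what that separation looks like, using made-up tables: customer attributes repeated in every order row (a transitive dependency) are factored into their own table and referenced by key:

    CREATE TABLE Customers (
        CustomerID   int         NOT NULL PRIMARY KEY,
        CustomerName varchar(40) NOT NULL
    )
    CREATE TABLE Orders (
        OrderID    int NOT NULL PRIMARY KEY,
        -- Reference the customer by key instead of repeating its name:
        CustomerID int NOT NULL REFERENCES Customers (CustomerID)
    )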
4.) Use Declarative Referential Integrity (DRI) constraints to ensure relational integrity when possible, because they're generally faster than triggers and stored procedures. DRI constraints cause highly optimized native machine code internal to SQL Server to run. Triggers and stored procedures, by contrast, consist of pseudo-compiled Transact-SQL code. All other things being equal, native machine code is clearly the better performer of the two.
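A short sketch of DRI in practice, on a hypothetical schema; the FOREIGN KEY and CHECK constraints below enforce integrity declaratively, with no trigger code involved:

    CREATE TABLE Departments (
        DeptID int NOT NULL PRIMARY KEY
    )
    CREATE TABLE Employees (
        EmpID  int   NOT NULL PRIMARY KEY,
        -- Declarative referential integrity instead of a trigger:
        DeptID int   NOT NULL
            CONSTRAINT FK_Employees_Departments
            FOREIGN KEY REFERENCES Departments (DeptID),
        -- Declarative domain integrity instead of procedural checks:
        Salary money NOT NULL
            CONSTRAINT CK_Employees_Salary CHECK (Salary >= 0)
    )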
5.) Use fixed-length character data types when the length of a column's data doesn't vary significantly throughout a table. Processing variable-length columns requires more resources than processing fixed-length columns.
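For example (hypothetical columns): a state code is always two characters, so fixed-length char avoids the extra bookkeeping that variable-length storage entails:

    CREATE TABLE Addresses (
        AddressID int         NOT NULL PRIMARY KEY,
        StateCode char(2)     NOT NULL,  -- always 2 chars: fixed-length
        Street    varchar(60) NOT NULL   -- genuinely variable: varchar
    )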
6.) Disallow NULLs when possible—handling NULLs adds extra overhead to storage and query processing. It's not unheard of for developers to avoid NULLs altogether, using placeholders to signify missing values as necessary.
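A sketch of the placeholder approach on a hypothetical table, where a NOT NULL column uses a default to stand in for a missing value:

    CREATE TABLE Contacts (
        ContactID int         NOT NULL PRIMARY KEY,
        -- Disallow NULLs; 'N/A' is the placeholder for a missing phone:
        Phone     varchar(20) NOT NULL
            CONSTRAINT DF_Contacts_Phone DEFAULT 'N/A'
    )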
7.) Consider using filegroups to distribute large tables over multiple drives and to separate indexes from data. If possible, locate the transaction log on a separate drive or drives from the filegroups that compose the database, and separate key tables from one another. This is especially appropriate for very large database (VLDB) implementations.
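A condensed sketch (the file names, sizes, and drive paths are all hypothetical) of a database whose data, indexes, and log sit on separate drives:

    -- Data on D:, indexes on E:, transaction log on F:
    CREATE DATABASE Sales
    ON PRIMARY
        (NAME = SalesData,  FILENAME = 'D:\data\SalesData.mdf',  SIZE = 100MB),
    FILEGROUP FG_Indexes
        (NAME = SalesIndex, FILENAME = 'E:\data\SalesIndex.ndf', SIZE = 100MB)
    LOG ON
        (NAME = SalesLog,   FILENAME = 'F:\log\SalesLog.ldf',    SIZE = 50MB)
    GO
    USE Sales
    GO
    CREATE TABLE Orders (OrderID int NOT NULL) ON [PRIMARY]
    CREATE INDEX IX_Orders ON Orders (OrderID) ON FG_Indexes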
8.) If the primary key for a given table is sequential (e.g., an identity column), consider making it a nonclustered primary key. A clustered index on a monotonically increasing key is less than optimal, since you probably won't ever query the table for a range of key values or use the primary key column(s) with ORDER BY. A clustered sequential primary key can also cause users to contend for the same area of the database as they add rows to the table, creating what's known as a "hotspot." Avoid this if you can by using clustered keys that sort the data more evenly across the table.
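A sketch on a hypothetical table: the identity column gets a nonclustered primary key, freeing the clustered index for a column that spreads inserts across the table and actually serves range queries:

    CREATE TABLE Orders (
        -- Sequential key: nonclustered PK avoids an insert hotspot
        OrderID      int         IDENTITY NOT NULL
            CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED,
        CustomerName varchar(40) NOT NULL
    )
    -- Cluster on a column that distributes inserts more evenly:
    CREATE CLUSTERED INDEX CI_Orders ON Orders (CustomerName)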
9.) If a table frequently experiences severe contention, especially when multiple users are attempting to insert new rows, page locks may be at fault. Consider using the sp_indexoption system stored procedure to disable page locks on the suspect table. Disabling page locks forces the server to use row locks and table locks, and prevents the automatic escalation of row locks to page locks, which would reduce concurrency.
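For example, assuming sp_indexoption's documented arguments and the hypothetical Orders table from the previous sketch:

    -- Disallow page locks on a contended index; the server will then
    -- choose between row locks and table locks only.
    EXEC sp_indexoption 'Orders.PK_Orders', 'DisAllowPageLocks', TRUE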
10.) Use computed columns to render common column calculations rather than deriving them via SQL each time you query a table. This is syntactically more compact, reduces the work required to generate an execution plan, and cuts down on the SQL that must traverse the network for routine queries.
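A brief sketch with made-up columns: the extended price is defined once in the table rather than being recomputed in every query's SELECT list:

    CREATE TABLE OrderItems (
        OrderID   int   NOT NULL,
        Quantity  int   NOT NULL,
        UnitPrice money NOT NULL,
        -- Computed column renders the calculation once, in the schema:
        ExtendedPrice AS (Quantity * UnitPrice)
    )
    -- Queries reference it like any other column:
    SELECT OrderID, ExtendedPrice FROM OrderItems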
11.) Test your database with different row volumes in order to get a feel for the amount of data the design will support. This will let you know early on what the capacity of your model is, possibly pointing out serious problems in the design. A database that works fine for a few thousand rows may collapse miserably under the weight of a few million.
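One way to sketch such a test (hypothetical table, arbitrary row count) is to bulk-load rows and watch how representative queries scale:

    -- Load a million test rows into the hypothetical Orders table,
    -- then time the queries the application will actually run.
    SET NOCOUNT ON
    DECLARE @i int
    SET @i = 0
    WHILE @i < 1000000
    BEGIN
        INSERT INTO Orders (CustomerName) VALUES ('Test customer')
        SET @i = @i + 1
    END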
12.) When all else fails, consider limited database denormalization to improve performance.