We are inserting and updating large amounts of rows and find that users are
complaining that their SELECT queries on the same tables are blocking (not
finishing) until our INSERT or UPDATE finishes.
Is there any way to tell an INSERT or UPDATE to not lock the whole table so
that SELECTs can still be run on the table?

Mike W wrote:
> We are inserting and updating large amounts of rows and find that
> users are complaining that their SELECT queries on the same tables
> are blocking (not finishing) until our INSERT or UPDATE finishes.
> Is there any way to tell an INSERT or UPDATE to not lock the whole
> table so that SELECTs can still be run on the table?
That's probably not the problem. It's likely that SQL Server is
escalating row locks to page locks, and possibly even to a table lock,
if enough rows are affected by the insert/update. While that transaction
is running, there are exclusive locks on those rows/pages/table.
When a page has an exclusive lock, no other readers or writers can touch
the page. They are blocked until the update/insert transaction
completes. That is, unless they use the READ UNCOMMITTED isolation level
or the NOLOCK table hint on the tables in the SELECT statements. But
since you are updating information, is it OK for your users to read
dirty data? I don't know; that's up to you to determine. Dirty data
arises when data is updated or inserted in a transaction and another
user reads that data under the READ UNCOMMITTED isolation level. If the
update transaction then rolls back the changes, the user that selected
the data is looking at data that no longer exists in the database.
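For example, a SELECT along these lines would not block behind the
writers (the table and column names here are made up for illustration):

    -- Reads uncommitted (dirty) data instead of waiting on exclusive locks
    SELECT CustomerID, LeadDate
    FROM CustomerLeads WITH (NOLOCK)
    WHERE LeadDate >= '20050101'

    -- Or switch the whole connection to dirty reads:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    SELECT CustomerID, LeadDate
    FROM CustomerLeads

Both forms do the same thing; the hint applies per table, while the SET
statement applies to every table in every query on that connection.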
The other option is to keep the insert/update transactions as short as
possible. Use batches if you need to, like the loop sketched below. That
will keep the outstanding locks to a minimum.
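Here's the usual SQL 2000 batching idiom (the table, the column, and the
1,000-row batch size are placeholders to adjust for your own schema and
workload):

    -- Process 1,000 rows per statement so each one holds its locks briefly
    SET ROWCOUNT 1000
    WHILE 1 = 1
    BEGIN
        UPDATE CustomerLeads
        SET Status = 'Processed'
        WHERE Status = 'New'

        IF @@ROWCOUNT = 0 BREAK
    END
    SET ROWCOUNT 0  -- reset to unlimited

Because each UPDATE runs as its own implicit transaction, readers get a
chance to run between batches instead of waiting for one giant statement
to finish.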
SQL Server 2005 offers a method for readers to see the original data
even if it's being updated by another user, but this option will likely
introduce overhead in the database because the data is temporarily
written to tempdb so it's available for other users to read. And it's a
database-wide setting.
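If you do move to 2005, the feature is row versioning; the database-wide
switch mentioned above looks something like this (this is the
READ_COMMITTED_SNAPSHOT variant, the database name is a placeholder, and
the ALTER typically needs the database free of other active connections
to take effect):

    -- Readers see the last committed version of a row instead of blocking
    ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON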
For SQL 2000, it's either dirty reads or blocking. But short
transactions mitigate most of these problems.
David Gugick
Imceda Software
www.imceda.com

Mike W wrote:
> We are inserting and updating large amounts of rows and find that
> users are complaining that their SELECT queries on the same tables
> are blocking (not finishing) until our INSERT or UPDATE finishes.
> Is there any way to tell an INSERT or UPDATE to not lock the whole
> table so that SELECTs can still be run on the table?
Another thing to consider is where the newly inserted rows are going.
For example, if you have a clustered index on an IDENTITY column, then
inserting new rows will have less of an effect on existing data because
the new rows are inserted on new pages at the end of the table. Updated rows will
still cause problems.
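A minimal sketch of that layout (the table is hypothetical):

    -- New rows always land at the end of the clustered index, so
    -- inserts rarely touch pages that existing queries are reading
    CREATE TABLE CustomerLeads (
        LeadID     INT IDENTITY(1,1) NOT NULL,
        CustomerID INT NOT NULL,
        LeadDate   DATETIME NOT NULL,
        CONSTRAINT PK_CustomerLeads PRIMARY KEY CLUSTERED (LeadID)
    )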
What is your clustered index on? What data are you inserting? Can you
insert and update in different transactions? Can you also update and
insert in small amounts, say 1,000 rows at a time, rather than all at
once?
David Gugick
Imceda Software
www.imceda.com

Thanks for your responses. I believe the table that is being updated does
have a Clustered index on the Primary key field, but it's not an Identity
field.
The data we are inserting is customer lead information.
I will ask about the last two questions, since I'm not the one doing the
updates.
Thanks again.
"David Gugick" <davidg-nospam@.imceda.com> wrote in message
news:On30MJv4EHA.1596@.tk2msftngp13.phx.gbl...
> Mike W wrote:
>> We are inserting and updating large amounts of rows and find that
>> users are complaining that their SELECT queries on the same tables
>> are blocking (not finishing) until our INSERT or UPDATE finishes.
>> Is there any way to tell an INSERT or UPDATE to not lock the whole
>> table so that SELECT's can still be run on the table?
> Another thing to consider is where the newly inserted rows are going. For
> example, if you have a clustered index on an IDENTITY column, then
> inserting new rows will have less of an effect on existing data because
> the new most of the rows are inserted on new pages. Updated rows will
> still cause problems.
> What is your clustered index on? What data are you inserting? Can you
> insert and update in different transactions? Can you also update and
> insert in small amounts, say 1,000 rows at a time, rather than all at
> once?
>
> --
> David Gugick
> Imceda Software
> www.imceda.com|||Mike W wrote:
> Thanks for your responses. I believe the table that is being updated
> does have a Clustered index on the Primary key field, but it's not an
> Identity field.
> The data we are inserting is customer lead information.
> I will ask about the last 2 questions since it is not me that is
> doing the updates.
> Thanks again.
>
If the clustered index is on something other than a date or an identity,
you have a few potential issues:
1. The clustered index is probably causing page splits as new rows are
inserted. This is a very expensive operation: a page must be split, a
new one created, and rows moved around. While this is going on, both
pages are locked by SQL Server, adding to the locking overhead of the
operation.
2. The clustered index is likely forcing the disk heads to move all
around the physical disk to locate the pages to update. Nothing is
slower for a database than random disk access, which further slows down
the operation.
You can mitigate some of the problems here (though not the physical disk
heads moving around) by leaving space in your clustered index using a
fill factor. This requires that you rebuild the clustered index as
needed to maintain the free space before a bulk update of data. Using a
fill factor leaves a percentage of space available on each page and
helps prevent page splitting. It will also make the table a percentage
larger, but that may not be a problem if the free space is going to be
filled with new data anyway.
And the rebuilding of the clustered index will likely affect concurrency
for that table while the rebuild occurs.
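On SQL 2000, that rebuild looks something like this (the names and the
80 percent fill factor are placeholders; pick a value based on how much
new data arrives between rebuilds):

    -- Rebuild the clustered index, leaving 20% free space on each page
    DBCC DBREINDEX ('CustomerLeads', 'PK_CustomerLeads', 80)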
If your clustered index is, in fact, on a column or set of columns that
is causing these problems, you may need to consider changing the index
to something that keeps all new data at the end of the table. The other
option is to temporarily remove the clustered index during the load and
rebuild it when the load completes, but this will cause an automatic
rebuild of all non-clustered indexes as well and may be more expensive
than what you want.
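A rough sketch of that second option (hypothetical names again; keep in
mind that both the drop and the re-add rebuild every non-clustered index
on the table):

    -- Remove the clustered index before the bulk load...
    ALTER TABLE CustomerLeads DROP CONSTRAINT PK_CustomerLeads

    -- ...run the bulk insert/update here...

    -- ...then put it back when the load completes
    ALTER TABLE CustomerLeads
        ADD CONSTRAINT PK_CustomerLeads PRIMARY KEY CLUSTERED (LeadID)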
A clustered index on an identity can prevent a host of problems and
speed the load process significantly in this case. It's worth a test to
see if it helps the load and if it affects other queries run on that
table.
David Gugick
Imceda Software
www.imceda.com